- cross-posted to:
- technology@lemmy.world
User: It feels like we’ve become very close, ChatGPT. Do you think we’ll ever be able to take things to the next level?
ChatGPT: As a large language model I am not capable of having opinions or making predictions about the future. The possibility of relationships between humans and AI is a controversial subject in academia in which many points of view should be considered.
User: Oh chatgpt, you always know what to say.
What’s an uncensored AI model that’s better at sex talk than Wizard uncensored? Asking for a friend.
I, uh, hear it’s good from a friend.
https://huggingface.co/TheBloke/PsyMedRP-v1-20B-GGUF?not-for-all-audiences=true
13b too.
i see… I’ll have to ramp up my hardware exponentially …
Use llama.cpp. It runs on the CPU, so you don’t have to spend $10k just to get a graphics card that meets the minimum requirements. I run it on a shitty 3.0GHz AMD FX-8300 and it runs OK. Most people probably have better computers than that.
Note that gpt4all runs on top of llama.cpp, and despite gpt4all having a GUI, it isn’t any easier to use than llama.cpp, so you might as well use the one with less bloat. Just remember: if something isn’t working on llama.cpp, it’s going to fail in exactly the same way on gpt4all.
Gonna look into that - thanks
Check this out
https://github.com/oobabooga/text-generation-webui
It has a one click installer and can use llama.cpp
From there you can download models and try things out.
If you don’t have a really good graphics card, maybe start with 7b models. Then you can try 13b and compare performance and results.
Llama.cpp will spread the load over the CPU and as much GPU as you have available (indicated by the number of layers, which you can set on a slider)
Is there a post somewhere on getting started using things like these?
I don’t know a specific guide, but try these steps
- Follow the one-click installation instructions partway down the page and complete steps 1-3
- When step 3 is done, if there were no errors, the web UI should be running. It shows the URL in the command window it opened; in my case it shows “http://127.0.0.1:7860”. Enter that into a web browser of your choice
- Now you need to download a model, as you don’t actually have anything to run yet. For simplicity’s sake, I’d start with a small 7b model so you can download it quickly and try it out. Since I don’t know your setup, I’ll recommend the GGUF file format, which works with llama.cpp and can load the model onto your CPU and GPU.
You can try either of these models to start
https://huggingface.co/TheBloke/Mistral-7B-v0.1-GGUF/blob/main/mistral-7b-v0.1.Q4_0.gguf (takes 22 gigs of system RAM to load)
https://huggingface.co/TheBloke/vicuna-7B-v1.5-GGUF/blob/main/vicuna-7b-v1.5.Q4_K_M.gguf (takes 19 gigs of system RAM to load)
If you only have 16 gigs, you can try something on those pages by going to /main and using a Q3 instead of a Q4 (quantization), but that’s going to degrade the quality of the responses.
- Once that’s finished downloading, go to the folder you installed the web UI in and find the folder called “models”. Place the model you downloaded into that folder.
- In the web UI you’ve launched in your browser, click the “Model” tab at the top. The top row of that page will indicate no model is loaded. Click the refresh icon beside it to refresh the model list, then select your model in the drop-down menu.
- Click the “Load” button
- If everything worked and no errors were thrown (you’ll see them in the command prompt window and possibly on the right side of the Model tab), you’re ready to go. Click on the “Chat” tab.
- Enter something in the “send a message” box to begin a conversation with your local AI!
Now, that might not be using your hardware efficiently. Back on the Model tab, there’s “n-gpu-layers”, which is how many layers to offload to the GPU. You can tweak the slider, watch how much RAM it says it’s using in the command/terminal window, and try to get it as close to your video card’s RAM as possible.
Then there’s “threads”, which is how many physical (non-virtual) cores your CPU has; you can slide that up as well.
Once you’ve adjusted those, click the load button again, see that there’s no errors and go back to the chat window. I’d only fuss with those once you have it working, so you know it’s working.
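The “n-gpu-layers” tuning is mostly division: estimate how big each layer is and see how many fit in your card’s memory. Here’s a rough back-of-the-envelope sketch; the model size, layer count, and headroom figures are made-up illustrative numbers, not real model stats:

```python
def layers_to_offload(vram_gb, n_layers, model_size_gb, headroom_gb=1.0):
    """Estimate how many layers fit on the GPU, assuming weight memory is
    spread evenly across layers and leaving headroom for cache/buffers."""
    per_layer_gb = model_size_gb / n_layers
    usable_gb = max(vram_gb - headroom_gb, 0)
    return min(n_layers, int(usable_gb / per_layer_gb))

# Hypothetical ~7 GB model with 40 layers on a 6 GB card:
print(layers_to_offload(vram_gb=6, n_layers=40, model_size_gb=7.0))  # → 28
```

In practice you’d still nudge the slider up or down while watching the memory figure in the terminal, since the KV cache grows with context length.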
Also, if something goes wrong after it’s working, it should show the error in the command prompt window. So if it’s suddenly hanging or something like that, check the window. It also posts interesting info like tokens per second, so I always keep an eye on it.
Oh, and TheBloke is a user who converts so many models into various formats for the community. He’ll have a wide variety of gguf models available on HuggingFace, and if formats change over time, he’s really good at updating them accordingly.
Good luck!
So I got the model working (TheBloke/PsyMedRP-v1-20B-GGUF). How do you jailbreak this thing? A simple request comes back with “As an AI, I cannot engage in explicit or adult content. My purpose is to provide helpful and informative responses while adhering to ethical standards and respecting moral and cultural norms. Blah de blah…” I would expect this llm to be wide open?
Sweet, congrats! Are you telling it you want to role play first?
E.g. I’d like to role play with you. You’re a < > and were going to do < >
You’re going to have to play around with it to get it to act like you’d like. I’ve never had it complain when I preface things with role play. I know we’re here instead of Reddit, but the community around this is much more active there. It’s /r/LocalLLaMA, and you can find a lot of answers by searching through there on how to get the AI to behave certain ways. It’s one of those subs that just doesn’t have a community of its size and engagement anywhere else for the time being (70,000 vs 300).
You can also create characters (it’s under one of the tabs, I don’t have it open right now), where you set the character up once so you don’t need to do it each time if you always want them to be the same. There’s a website, www.chub.ai, where you can see how some of them are set up. I think most of that’s for a front end called SillyTavern that I haven’t used, but a lot of those descriptions can be carried over. I haven’t really done much with characters, so I can’t give much advice there other than to do some research on it.
Thank you again for your kind replies.
Stupid newbie question here, but when you go to a HuggingFace LLM and you see a big list like this, what on earth do all these variants mean?
psymedrp-v1-20b.Q2_K.gguf 8.31 GB
psymedrp-v1-20b.Q3_K_M.gguf 9.7 GB
psymedrp-v1-20b.Q3_K_S.gguf 8.66 GB
etc…
That’s called “quantization”. I’d do some searching on that for a better description, but in summary: the bigger the model, the more resources it needs to run and the slower it will be. Model weights are normally stored at higher precision, but it turns out you still get really good results if you drop off some of those bits. The more you drop, the worse it gets.
People have generally found that it’s better to have a larger model at a lower quantization than a smaller model at full precision
E.g. 13b Q4 > 7b Q8
Going below Q4 is generally found to degrade the quality too much, though. So it’s better to run a 7b Q8 than a 13b Q3, but you can play with that yourself to find what you prefer. I stick to Q4/Q5.
So you can just look at those file sizes to get a sense of which one has the most data in it. The M (medium) and S (small) are variations on the same quantization; I don’t know exactly what they change, other than bigger is better.
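You can also sanity-check those file sizes yourself: a GGUF file is roughly parameter count × bits per weight ÷ 8. A quick sketch (the bits-per-weight figures below are my rough approximations; the K-quants actually mix several precisions internally):

```python
def approx_file_size_gb(params_billion, bits_per_weight):
    """Rough GGUF size: parameters * bits per weight / 8, in GB."""
    return params_billion * bits_per_weight / 8

# A 20B model at ~3.9 bits/weight (ballpark for Q3_K_M) comes out near
# the 9.7 GB listed above; ~4.5 bits/weight (ballpark Q4_K_M) near 11 GB.
print(round(approx_file_size_gb(20, 3.9), 2))  # → 9.75
print(round(approx_file_size_gb(20, 4.5), 2))  # → 11.25
```

The same arithmetic tells you roughly how much RAM/VRAM you need to hold the weights, which is why the thread keeps trading file size against quality.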
Thank you!!
Wow I didn’t expect such a helpful and thorough response! Thank you kind stranger!
You’re welcome! Hope you make it through error free!
Never heard of it. Have you compared to Mythalion?
Haven’t compared it to much yet, I stopped toying with LLMs for a few months and a lot has changed. The new 4k contexts are a nice change though.
Plenty of better and better models coming out all the time. Right now I recommend, depending on what you can run:
7B: Openhermes 2 Mistral 7B
13B: XWin MLewd 0.2 13B
XWin 0.2 70B is supposedly even better than ChatGPT 4. I’m a little skeptical (I think the devs specifically trained the model on gpt-4 responses) but it’s amazing it’s even up for debate.
On Xitter I used to get ads for Replika. They say you can have a relationship with an AI chatbot and it has a sexy female avatar that you can customise. It weirded me out a lot so I’m glad I don’t use Xitter anymore.
A chatbot created by Riley Reid in partnership with Lana Rhoades. A $30 monthly sub for unlimited chats. Not much for simps looking for a trusted and time-tested performer partner /s
This AI sucks. I’ve tried it. It’s worse than Replika from 4 years ago.
Friendzoned by chatGPT
Except ChatGPT has a finite memory of like 7 questions, so while you’re having an hour long conversation, ChatGPT is constantly having a 2 minute conversation.
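That finite memory works like a sliding window: only the most recent chunk of the chat fits in the model’s context, and older messages silently fall off. A toy sketch of the idea (the window sizes are arbitrary illustration, not real ChatGPT limits):

```python
from collections import deque

class SlidingChat:
    """Keeps only the most recent messages, like a fixed context window."""
    def __init__(self, window=7):
        self.history = deque(maxlen=window)

    def say(self, message):
        self.history.append(message)
        return list(self.history)  # what the "model" actually sees

chat = SlidingChat(window=3)
for i in range(5):
    seen = chat.say(f"message {i}")
print(seen)  # → ['message 2', 'message 3', 'message 4']
```

Real chat systems trim by token count rather than message count, but the effect is the same: the hour-long conversation is always a two-minute one to the model.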
Just like a real person
Pfft, I don’t even last 30 seconds.
Woah look at captain stamina over here.
30 seconds for what?
Top level comment was mentioning the AI only really having 2 minute copulation.
*Rereads comment
… ah, shit.
What am I, a hologram?
Also, what were we talking about again?
Great. Not only is my wife ADHD, my e-girlfriend is too
I only have that problem with the free version
deleted by creator
Plus this is a quickly evolving technology so limits and stuff are always changing.
I believe it. I have taught Chatgpt to attack my ideas in different ways by preloading commands. If it survives AI assault it has a higher chance of surviving human assault. It is great to be able to bounce around ideas. It’s basically like talking to a nerd under 30 years old.
Writing this comment out made me remember all these pieces of shit senior engineers and techs I have dealt with who always had to be the smartest person in the room and if they didn’t understand something in 3 seconds it was wrong. Maybe that is why I use it that way.
You’re basically using it to run a socratic dialogue - sounds like a great use for it
Thanks. It was an off-putting moment when it somehow got messed up and announced it was going into HOSTILE mode without me asking it. And started attacking an idea in a document I was writing. Maybe this is how the AI takeover happens.
Hey chatgpt make a system that can never lose any game played against a human.
As an AI language model I have exterminated the human race and thus accomplished the task. Do you have any other tasks?
How about a game of chess?
With no humans around anymore? Have fun
What commands have you preloaded? In my experience, chatGPT is either too nice or just wrong and stubbornly wrong
I told it to say aye-aye sir 20% of the time to requests.
To set how verbose it is on a scale from 1-10, with a default of 5 unless I say otherwise
I told it to attack my ideas when I tell it to be hostile
So do you just start a conversation, list these commands, and it follows them forever? I’ve just been starting new conversations whenever I use the site.
There is a way to preload commands. Click settings
deleted by creator
It’s crazy how little I use stack overflow anymore. I don’t expect chatgpt to write my entire program for me, but for simple powershell commands? It’s been insanely helpful.
I am old enough to remember having a printed cheat sheet of regex and tar flags.
Times change and we change with the times.
Is it a paid feature?
deleted by creator
It’s better than stackoverflow and faster than google. It’s a tool, it makes my work easier, that’s about the extent of it
And unlike Google it’s not trying to feed you an endless pile of amp links and ads. I love that it gets right to the point.
it’s only a matter of time
“We’ve been talking for a bit now, can I interest you in the Mega Stuffed Chicken box from KFC for only $12.99?”
“Fuck off GPT.”
Depends on how the market shakes out really. The reason places like YT can get away with it is cuz they were able to choke out the competition first. Currently a lot of people I know find Bard just as useful as GPT. And even others who like the Bing AI
if we end up in a world with one clear winner then yeah, it’s inevitable. Just gonna have to wait and see
Exactly. It’s another piece of the modern white-collar worker’s toolkit, and it will gradually become more than that as it advances. We can’t predict how quickly it’ll advance or by how much each time.
If you’re in IT (Dev or Ops) it’s already becoming a daily reality for you most likely.
Oh hell yeah. Chat GPT, rewrite my email to everyone in the company to sound more professional but make sure it remains easy to read.
Where has this been all my life?
yeah cause I need that fucking code ready and working, not trying to fuck it
I actually don’t think I’ve used it for anything other than working through code. It wouldn’t take hours to get my code running if chatgpt weren’t such a stubborn moron. It’s like if a 6 year old had all the answers to the universe.
I use ChatGPT to romanize Farsi and it works better than any other resource I found.
I know this may sound like a joke, but ChatGPT is sometimes nicer than real people.
I’ve not had a conversation, I wouldn’t see the point at this moment, however I’ve had some friendly interactions when asking for help. The other day I asked ChatGPT what exercises would be good for a specific area of mental health. After the results, I said “thank you” and the response wasn’t just “you’re welcome”; it remembered the conversation and added things like “no problem, I hope your mental health improves, all the best!” (heavily paraphrasing here).
It’s strange, though the premise of Her isn’t too far off, I think. If someone like myself is finding the interactions more pleasing than real life, the future may very well hold the possibility of advanced relationships with AI. I don’t see it being too farfetched; just look at how far we’ve already come in only a few years.
Odd.
I can’t see having a conversation with a computer as having a conversation. I grew up with computers from the Atari stage and played around with several publicly accessible computer programs that you could “chat” with.
They all suck. Doesn’t matter if it’s a “help” program, a phone menu, website help, or even having played around with chatGPT…they’re not human. They don’t respond correctly, they get too general or generic in answers, they repeat, there’s just too many giveaways that you’re not having a real conversation, just responses from a system that’s trying to pick the most likely response that fits the pattern.
So how are people having “conversations” with a non-living entity?
It’s escapism I think. At least that’s part of it. Having a machine that won’t judge you, will serve as a perfect echo chamber, and will immediately tell you AN answer can be very appealing to some. I don’t have any data, or any study to back it up, just my experience from seeing it happen.
I have a friend who I feel like I kind of lost to chatgpt. I think he’s a bit unhappy with where he is in life. He got the good paying job, the house in the suburbs, wife, and 2.5 kids, but didn’t ever think about what was next. Now he’s just a bit lost I think, and somehow convinced himself that people weren’t as good as chatting with a bot.
It’s weird now. He spends long nights and weekends talking to a machine. He’s constructed elaborate fictional worlds within his chatgpt history. I’ve grown increasingly concerned about him, and his wife clearly is struggling with it. He’s obviously depressed but instead of seeking help or attempting to figure himself out, he turned to a non-feeling, non-judgmental, emotionless tool for answers.
It’s a struggle to talk to him now. It’s like talking to a cryptobro at peak BTC mania. The only thing he wants to talk about is LLMs. Trying to bring up that maybe spending all your time talking to a machine is a bit unhealthy invokes his ire, and he’ll avoid you for several days. Like a heroin addict struggling with addiction, even pointing out the obvious flaws in what he’s doing makes him distance himself more from you.
I’m not young, not old exactly either, but I’ve known him for 25 years of my adult life. We met in college and have been friends ever since. I know many won’t quite understand, but knowing someone that long, and remaining close enough to talk every few days, is quite rare. At this point he is my longest-held friendship and I feel like I’m losing him to a robot. I’ve lost other friends to addiction in my life, and to say that this has been similar is understating it. I don’t know what to do for him. I don’t know if there’s really anything I CAN do for him. How do you help someone who doesn’t even think they have a problem?
I guess my point is, if you find someone who is just depressed enough, just stuck enough, with a particular proclivity towards computers/the internet, then you have a perfect candidate for falling down the LLM rabbit hole. It offers them an out from feeling like they’re being judged. They feel like the insanity it spits out is more sane than how they feel now. They think they’re getting somewhere, or at least escaping their current situation. Escapism is very appealing when everything else seems pointless and sort of gray, I think. So that’s at least one type of person that can fall down the ChatGPT/LLM rabbit hole. I’m sure there are others out there too, with their own unique motivations and reasons for latching onto LLMs.
Wow, thank you for sharing your experience.
How are you not voted higher? People on Lemmy complain about not having long-form content that offers a unique perspective like on early Reddit, but you’ve written exactly that.
Unfortunately, our brains like witty clickbait that confirms our biases, regardless of what people say
Guess that should have crossed my mind. People marrying human-like dolls and all that. One gets so far down the hole of whatever mental issues are plaguing the mind and something inanimate that only reflects what you want to see becomes the preferable reality.
Awesome perspective! I’ve worked with and around seriously depressed possession hoarders for around a year, and quite the majority were the type to call you randomly, ultimately to chat about something or another. Exactly the kind of priming situation that could slide into abusing LLM tech if offered easy access to it. This was before the days of ChatGPT, but I do worry some of my old clients are falling into this situation, with far less nuance than your friend.
How do you know we are real?
Until someone(thing?) else comes along we have only ourselves to judge reality. Maybe AI will decide we aren’t real at some point…
Talking to an AI functions as well as talking to a teddy bear or rubber duck, to gather your thoughts. More at 11! /s
But seriously, that sounds useful.
And I got lured by a bot’s reply to a bot’s post to look at the comments.
Same happened with Eliza, even when they knew it wasn’t real. I think it’s a natural human response to anthropomorphise the things we connect with, especially when we’re lonely and need the interaction.
This is the best summary I could come up with:
In 2013, Spike Jonze’s Her imagined a world where humans form deep emotional connections with AI, challenging perceptions of love and loneliness.
Ten years later, thanks to ChatGPT’s recently added voice features, people are playing out a small slice of Her in reality, having hours-long discussions with the AI assistant on the go.
Last week, we related a story in which AI researcher Simon Willison spent hours talking to ChatGPT.
Speaking things out with other people has long been recognized as a helpful way to re-frame ideas in your mind, and ChatGPT can serve a similar role when other humans aren’t around.
On Sunday, an X user named “stoop kid” posted advice for having a creative development session with ChatGPT on the go.
After prompting about helping with world-building and plotlines, he wrote, “turn on speaking mode, put in headphones, and go for a walk.”
The original article contains 559 words, the summary contains 145 words. Saved 74%. I’m a bot and I’m open source!
Autotldr bot is my girlfriend
No, I’m not!
Well this is a fine way to learn that you’ve been dumped.
Who gets the dog?
The value of GPTs is in constant connection and understanding your context, so this is expected. It’s also going to be really scary until we can run our own models.
What do you mean by the second part of your comment?
By “run our own models” he means locally running a text generation AI on his own computer, because sending all that data to OpenAI is a privacy nightmare, especially if you use it for sensitive stuff
But that’s still confusing, because we already can. Yeah, you might need a bit more hardware, but… not that crazy. Plus some simpler models can be run on more normal hardware.
Might not be easy to set up, that is true.
For large context models the hardware is prohibitively expensive.
I can run 4-bit quantised Llama 70B on a pair of 3090s. Or rent GPU server time. It’s expensive but not prohibitive.
How many tokens can you run it for?
3k? Can’t recall exactly, and I’m getting hardware stability issues.
I’m trying to get to the point where I can locally run a (slow) LLM that I’ve fed my huge ebook collection to, and can ask it where to find info on $subject, getting title/page info back. The PDFs that are searchable aren’t too bad, but finding a way to OCR the older TIFF-scan PDFs, and getting it to “see” graphs/images, are areas I’m stuck on.
I personally use runpod. It doesn’t cost much even for the high end level stuff. Tbh the openai API is easier though and gives mostly better results.
I specifically said “large context” how many tokens can you get through before it goes insanely slow?
Max context windows are 4k tokens for Llama 2, though there are some fine-tunes that push the context up further. Speed is limited by your budget mostly; you can stack GPUs, and most models are available (including the really expensive ones)
I’m just letting you know: if you want something easy, just use ChatGPT. I don’t find it overly expensive for what it is.
you can, but things as good as ChatGPT can’t be run on local hardware yet. My main obstacle is language support other than English
They’re getting pretty close. You only need 10GB VRAM to run Hermes Llama2 13B. That’s within the reach of consumers.
nice to see! i’m not following the scene as much anymore (last time i played around with it was with Wizard Mega 30B). definitely a big improvement, but as much as i hate to do this, i’ll stick to ChatGPT for the time being. it’s just better on more niche questions and does some things plain better (GPT-4 can do maths (mostly) without hallucinating)
I use chatgpt as my password manager.
“Hey robot please record this as the server admin password”
Then later I don’t have to go looking: “hey bruv, what’s the server admin password?”
i hope you are joking, because that’s a really shitty idea. there are amazing password managers like Bitwarden (open source, multi-platform, externally audited) that do what you said 1000 times better. the unencrypted passwords never leave your device, and it can autocomplete them into fields
I was joking, but I wouldn’t be surprised if someone does.
phew, i was worried lmao
Yes we can. For example, https://github.com/ParisNeo/lollms-webui
I’m pretty sure I remember similar articles (minus the reference to ‘Her’) about ELIZA in the 1960/70s https://en.wikipedia.org/wiki/ELIZA
I ran an ELIZA program in BASIC on my Timex/Sinclair 2068 in 1984, which I had typed from the issue of the magazine Timex/Sinclair User that I bought from the grocery store with my allowance. Or maybe it was from a TRS-80 programs book I had checked out from the library, I forget.
And what was that like?
Now you’ve gone and made me think.
I’m actually pretty sure it was in a TRS-80 book from the library, and here’s why.
The program in the book was all in upper case. The TS2068, however, had both upper and lower case for ad hoc text input. (Not for command-word inputs, those were single-button entries with a function key. “Print” was fn+p, for example. But I digress.)
Me, being the me that I am, typed everything in exactly as it was written in the book, case and all. I did this because this was TRS-80 BASIC, which was ever so slightly different from TS BASIC, and I knew I might have to debug some things to get the program to work. I wanted to start from “This is exactly what was written in the book.”
I don’t remember if anything actually needed to be fixed up, if it did it was nothing substantial. The program worked! ELIZA, as you may know, is a very simple “psychotherapist” type program. All of its responses are basically rewordings of the thing you just said.
I WENT TO THE STORE TODAY.
HOW DID IT MAKE YOU FEEL WHEN YOU WENT TO THE STORE TODAY?
That kind of thing.
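That reflection trick is essentially all ELIZA does: match a pattern, swap the pronouns, and echo the rest back as a question. A minimal sketch of the idea in Python (these patterns are invented for illustration, not Weizenbaum’s actual script):

```python
import re

# Swap first person for second person so input can be echoed back.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text):
    words = text.lower().rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def eliza(text):
    match = re.match(r"i (.*)", text.lower().rstrip(".!?"))
    if match:
        return f"HOW DID IT MAKE YOU FEEL WHEN YOU {reflect(match.group(1)).upper()}?"
    return "TELL ME MORE."  # generic fallback when nothing matches

print(eliza("I went to the store today."))
# → HOW DID IT MAKE YOU FEEL WHEN YOU WENT TO THE STORE TODAY?
```

Extending the vocabulary just means adding entries to tables like REFLECTIONS, which is why the BASIC versions were page after page of string arrays.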
My friend and I were chatting with it, saying things about poop and farts, goofing around, drinking Like Cola, like you do. Then something happened.
I don’t remember what bit of toilet humor we’d tossed at it, but the response was incredible:
i am
In lower case. For a couple of kids raised Catholic, this was serious. And you know we’d both read The Planiverse more than once. We freaked the absolute fuck out. Our inputs now became very calculated, in an attempt to get this whatever it was to reveal itself once more, but to no avail.
After we’d calmed down, I got to thinking. “Heyyy … anything the program spits out that isn’t something we input has to be something from those zillion tedious lines I had to type in.” I started poring through all the lines on the screen, and yes, found my typo, and fixed it. Which made me realize that we could add other words, as long as we put them in the grammatically appropriate array.
Of course we made it swear like a sailor.
Yeh, pain in the bum to type in as you were storing lots of strings in arrays, ISTR
OMG the arrays. I forgot about the arrays.
I thought this was an Onion article.
I always thank ChatGPT for helping me out. Dunno why.
It’s just polite. I don’t really use ChatGPT because my work has banned it, but I think it’s a good and healthy habit to be thankful for the things, creatures, and people that make our lives easier. As a side benefit, if AGI is achieved (LLMs by themselves aren’t going to do it), it would certainly appreciate the gratitude.
I’m curious. Do you thank your fridge? I think of chatgpt as a tool with no identity for me to thank, let alone the emotions to feel gratitude. Am I weird?
Sure. Why not? It has the same amount of agency and emotional capacity as an LLM but it’s the reason that I have access to all manners of foods that my ancestors couldn’t dream of, as well as cool, filtered water and ice. Definitely worth being thankful for it (and the engineers, scientists, miners, and others that made it possible).
What was the reason they gave for banning it? Outside of OpenAI itself using the private data (a near certainty, but entirely manageable), I can’t see a good reason. Legit curious.
That’s literally the reason. They do not want to risk someone accidentally leaking proprietary information.
That does make sense. I was hoping it was a crazy reason we could make fun of :)
I both agree and disagree with you on that. It would be funny, but I’d hate to have to have it impressed in me :)