Reading this comment section is so strange. Skepticism about generative AI seems to have become some kind of professional sport on the internet.
Consensus in our group is that generative AI is a great tool. Maybe not perfect, but the comparison to the metaverse is absurd: no one asked for the metaverse or needed it for anything, whereas GPT has literally bailed us out of several difficult situations. For example, a proof of concept needed to be written in a programming language that no one in the group had much experience with. Without GPT, that could easily have cost someone a week; with GPT assistance, the proof of concept was ready in less than a day.
Generative AI does suffer from a host of problems: hallucinations, jailbreaks, injections, reality-101 failures. Believe me, I've encountered all of these intimately, since I've had to use GPT for some of my day-job tasks, often against its own better judgment and despite its woefully lacking capacity for the task. What I think is interesting is a candid discussion: why do these issues persist? What have we tried? What techniques can we try next? Are these issues intractable in some profound sense, constituting a hard ceiling for where generative AI can go? Is there an "impossibility theorem for putting AI on autopilot"? Or are these limitations just artifacts we can engineer away and route around?
It seems like instead of having this discussion, it's become in vogue to wave the issues around triumphantly and implicitly declare the field successfully dunked on and the discussion over. That's, to be blunt, reductive. Smartphones had issues; the early internet had issues. Sure, "they also laughed at Bozo the Clown" and all that, but without a serious discussion of the landscape right now, of how far away we are from mitigating these issues and why, a lot of this "ha ha, suck it, AI" discourse strikes me as deeply performative. Suppose a year from now OpenAI solves hallucinations and the issue is just gone. Do all the cool kids who sneered at the invented legal precedents, who crafted an image of knowing better than the OpenAI dweebs, who elegantly implied that hallucinations prove the entire field is a stupid, useless dead end, lose any face? I think they don't. And I think that's why this sneering has become such a lucrative online professional sport.
Some of the skepticism is just a reaction to the excessive hype with which generative AI has been pushed over the past few months; if you've seen tech hype cycles before, the hype itself breeds skepticism. Plus there are many dubious cases where companies are shoving ChatGPT or something similar into their products just so they can advertise them as "AI powered", and these poorly thought-out, marketing-driven moves deserve criticism.
It's anecdotal, but I have found that the people who are "skeptical" (to use your word) about generative AI often turn out to be financially dependent on something that generative AI can do.
That is to say, they're worried it will replace them at their job, and so they very much want it to fail.
You have to have some skin in the game for that kind of cognitive dissonance. I think some are even resentful they can’t understand it. A 21st century cotton gin.
It's amazing how critical Lemmy is of ChatGPT. It has become fashionable to pretend it's a trash technology. The reality is that it is changing the world and will continue to do so.
It is weird. I love this stuff. It can be so useful, and I would love a game with interactive NPCs. It's also a really fun tool to brainstorm with.
What a sane, boring, lukewarm take. Git out, we don’t like you.