One of the use cases I thought it was reasonable to expect from ChatGPT and Friends (LLMs) was summarising. It turns out I was wrong. ChatGPT isn’t summarising at all, it only looks like it…
Human summarization of the above story:
LLMs do not understand the text, so they cannot pick out the important sentences. Because of this, they are unable to summarize the text; they can only shorten it. Unless the text is very rambly, important meaning is lost in the shortening.
Also the LLMs lie.
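For contrast, here’s what “picking out the important sentences” means in the classical, pre-LLM sense: score each sentence by how frequent its words are in the text overall and keep the top scorers (a Luhn-style heuristic). A minimal illustrative sketch, not a description of how any LLM works:

```python
# Minimal Luhn-style extractive summarizer: keep the sentences whose
# words are most frequent in the text overall. Illustrative only.
import re
from collections import Counter

def extractive_summary(text: str, n_sentences: int = 2) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence: str) -> float:
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    # Pick the n highest-scoring sentences, kept in original order.
    top = sorted(sentences, key=score, reverse=True)[:n_sentences]
    return " ".join(s for s in sentences if s in top)

print(extractive_summary(
    "LLMs are hyped. The sky is blue. LLMs shorten text. "
    "LLMs do not pick important sentences."))
# -> "LLMs are hyped. LLMs shorten text."
```

The point isn’t that this heuristic is good - it’s that selecting the important sentences is a different operation from shortening all of them.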
Good human.
But having an AI do it is cheaper, so that’s where we’re going.
Cheaper for now, since venture capital cash is paying to keep those extremely expensive servers running. The AI experiments at my work (automatically generating documentation) have about an 80% reject rate - sometimes the output isn’t right, sometimes it’s not even wrong - and reviewing it all takes about as long as just doing the work, so it’s not really an improvement.
No doubt there are places where AI makes sense; a lot of those places seem to be in enhancing the output of someone who is already very skilled. So let’s see how “cheaper” works out.
I use AI often as a glorified search engine these days. It’s actually kinda convenient for turning up ideas to look into further when I hit a problem to solve. But would I just take some AI output without reviewing it? Hell no 😄
People always assume that the current state of generative AI is the end point. Five years ago nobody would have believed what we have today. In five years it’ll all be a different story again.
People always assume that generative AI (and technology in general) will keep improving at the pace it has so far. They assume there are no limits on the number of parameters, that there’s always more useful data to train on, and that physical limits on electricity infrastructure, compute resources, etc. don’t exist. In five years generative AI will have roughly the same capability it has today, barring massive breakthroughs that force a wholesale pivot away from LLMs. (More likely, in five years it’ll be regarded much the way cryptocurrency is today, because once the hype dies down and the VC money runs out, the AI companies will have to jack up prices to a level where it’s economically unviable in most commercial environments.)
To add to this, we’re going to run into the problem of garbage in, garbage out.
LLMs are trained on text from the internet.
Currently, a massive amount of text on the internet is coming from LLMs.
This creates a cycle of models being trained on data sets that increasingly consist of text generated by older models.
The most likely outlook is that LLMs will get worse as the years go by, not better.
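To make the loop concrete, here’s a toy simulation of my own (assuming a crude Gaussian “model” and the observation that generated output under-samples the tails - it illustrates the dynamic, not any real training pipeline):

```python
# Toy model-collapse loop: fit a "model" (a Gaussian) to the data,
# then train the next generation only on that model's own output.
# Tail-clipping stands in for models favouring likely outputs.
import random
import statistics

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(1000)]  # "human" text

for generation in range(1, 11):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    print(f"gen {generation:2d}: std={sigma:.3f}")
    # Sample the next training set from the fitted model, dropping
    # the low-probability tails that generated text rarely contains.
    data = [x for x in (random.gauss(mu, sigma) for _ in range(1000))
            if abs(x - mu) < 1.5 * sigma]
```

The fitted spread shrinks every generation (clipping at 1.5 standard deviations cuts the spread by roughly a quarter each round), so within a handful of generations the “model” only reproduces a narrow, bland slice of the original distribution.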
You don’t know that. Maybe it will take 124 years to make the next major breakthrough, and until then all that will happen is people tinkering around and finding that improving one thing makes another thing worse.
Good human.
AI is BS
This feels a bit similar to the USSR of the ’60s, whose propaganda promised communism and space travel tomorrow, humans on new planets, and so on.
Not comparable at all - the social and economic systems of the developed nations are more functional than the USSR’s were at any stage, and cryptocurrencies and LLMs are just two kinds of temporary frustration that will be overshadowed by some real breakthrough we don’t yet know about.
But with LLMs, unlike blockchain-based toys, it’s funny how all the conformist, normie, big, establishment-related organizations and social strata are so enthusiastic about their adoption.
I don’t know any managers at that level, so I can’t ask what exactly they’re optimistic about or what exactly they see in the technology.
I suspect it means something that the algorithms involved are not so complex, and that the important part is the datasets.
Maybe they really, honestly, want to believe that they’ll be able to replace intelligent humans with AIs, ownership of which will be determined by power. So it’s people with power thinking that this way they can get even more power and make the alternative path of decentralization, democratization, and such impossible. If they think that, they are wrong.
But so many cunning people can’t be so stupid, so there is something we don’t see or don’t realize we see.
It’s because they use LLMs for their work, and for their work LLMs perform mind-blowingly well (writing lies to get what you want). *sarcasm
I don’t know. Maybe the endorsement of LLMs and “AIs” is a way to encourage people to create datasets, which can then be used for other things.
Also this technology is good for one thing - flagging people for political sympathies, or for the likelihood of behaving a certain way, based on their other behavior.
That is, a technology for making kill lists for fascists, if you’ll excuse my alarmism. Maybe nobody will come at night in black leather to take you away, but you won’t get anywhere near posts affecting serious decisions. An almost bloodless worldwide fascist revolution.