- cross-posted to:
- technology@beehaw.org
Will AI soon surpass the human brain? If you ask employees at OpenAI, Google DeepMind and other large tech companies, it is inevitable. However, researchers at Radboud University and other institutes show new proof that those claims are overblown and unlikely to ever come to fruition. Their findings are published in Computational Brain & Behavior today.
It’s one thing to claim that the current machine learning approach won’t lead to AGI, which I can get behind. But this article claims AGI is impossible simply because there are not enough physical resources in the world? That’s a stretch.
I haven’t read the article seriously yet, unfortunately (deadline tomorrow), but if there is one thing I believe is reliable, it’s computational complexity. It’s one thing to be creative and ingenious, to find new algorithms and build very efficient processors and datacenters, letting us compute increasingly complex things. It’s another thing entirely to “break” free of complexity itself. That, as far as we currently know, is impossible. What is counterintuitive is that seemingly “simple” behaviors scale terribly: one can compute a few iterations by hand, or with a computer, or with a very powerful cluster of computers… or with every computer in existence… only to realize that the next iteration of that well-understood problem would still NOT be solvable with every computer (even quantum ones) ever made, or that could ever be made with the resources available in, say, our solar system.
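To make that scaling point concrete, here is a toy sketch (my own example, not from the paper): brute-forcing subset sum checks every one of the 2**n subsets in the worst case, so each single extra element doubles the work, no matter how fast the hardware is.

```python
# Toy illustration of exponential scaling: brute-force subset sum.
# Each added element doubles the number of subsets to check (2**n),
# which is why throwing more hardware at it only buys a few extra n.
from itertools import combinations

def subset_sum_bruteforce(nums, target):
    """Check every subset of nums; returns (found, subsets_checked)."""
    checked = 0
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            checked += 1
            if sum(combo) == target:
                return True, checked
    return False, checked

# With an unreachable target, the work is exactly 2**n checks,
# doubling every time n grows by one:
for n in (10, 11, 12):
    _, checked = subset_sum_bruteforce(list(range(1, n + 1)), -1)
    print(n, checked)
```

This doesn’t prove anything about AGI by itself, of course; it just shows how a perfectly “simple”, well-understood problem can outrun any conceivable amount of hardware as the input grows.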
So… yes, it is a “stretch”, maybe even counterintuitive, to go as far as saying that AGI is not and NEVER will be possible, but that’s what their paper claims. It’s at least interesting precisely because it goes against the trend we hear CONSTANTLY pretty much everywhere else.
PS: full disclosure, I still believe self-hosting AI is interesting, cf. my notes on it at https://fabien.benetou.fr/Content/SelfHostingArtificialIntelligence, but that doesn’t mean AGI can be reached, let alone that it’d be “soon”. IMHO AI as a research field is interesting enough that it doesn’t need grandiose claims, especially not ones leading to learned helplessness.
Maybe if they keep using digital computers. What they need is an analogue system. It’s much more efficient for this kind of work.
Saw a great video about this (project is still ongoing).