Dynamically Typed

GPT-3 AGI?

Does GPT-3, OpenAI’s latest iteration of their gargantuan language model (DT #42, #44), mean we’re imminently close to artificial general intelligence (AGI), as some of the Twitter hype has been suggesting? Reinforcement learning researcher Julian Togelius says no: in A very short history of some times we solved AI, he argues that we’ve been moving the goalposts for AGI for decades. “Algorithms for search, optimization, and learning that were once causing headlines about how humanity was about to be overtaken by machines are now powering our productivity software. And games, phone apps, and cars. Now that the technology works reliably, it’s no longer AI (it’s also a bit boring).” Forgive the long quotes, but I share Togelius’ views on AGI almost exactly, and he communicates them very succinctly: “So when will we get to real general artificial intelligence? Probably never. Because we’re chasing a cloud, which looks solid from a distance but scatters in all directions as we drive into it.” For his more optimistic conclusion, read the full blog post.