Yes. Would love to hear your thoughts.
Some observations:
1. I’m pretty sure no one has any idea how this will shake out, and, as with past innovations, it’s probably safe to assume that everyone is wrong.
2. It’s very reminiscent of the early Internet dotcom hysteria, where nearly all expert and industry opinions ended up being wrong; at best some were slightly less wrong, but they relied on what are today clearly anachronistic mental models (read how people talked about the Internet and e-commerce in the 1990s). Things became clear only after the dotcom crash, once e-commerce and search engines got legs and changed the landscape. There were a ton of lawsuits from investors afterwards; the general pattern is that most companies tried to build walled gardens and “portals”, raised insane amounts of capital, and were wrong more than they were right (some more wrong than others).
3. Real limitations are emerging in LLMs. There’s a reason agent orchestration is all the rage, and it doesn’t take much experimenting to realize that unsupervised agent orchestration completely sucks. The highly productive uses involve human orchestration of highly focused expert agents. Larger context windows don’t resolve the issues; they only lead to more subtle and harder-to-resolve hallucinations. This is a very real limit of LLMs. Some of us get 10x, and others produce 100x the slop, which slows everyone down significantly. Good developers now act like engineering (micro)managers to teams of highly productive autists. It’s weird, and definitely not unsupervised, but when it works, it works.
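The human-in-the-loop pattern described here can be sketched roughly like this — a minimal illustration, not a real framework; the agent roles, `run_agent` stub, and `human_approves` step are all hypothetical stand-ins:

```python
# Hypothetical stand-in for a call to a narrowly scoped expert agent.
# In practice this would wrap an LLM API call with a specialized prompt.
def run_agent(role: str, task: str) -> str:
    return f"[{role}] draft for: {task}"

# A small roster of focused experts, not one do-everything agent.
EXPERT_AGENTS = ["test-writer", "refactorer", "doc-writer"]

def human_approves(draft: str) -> bool:
    # Placeholder for an actual human review; always accepts in this sketch.
    return True

def orchestrate(task: str) -> list[str]:
    """The human stays in the loop: each expert agent produces a focused
    draft, and a person accepts or rejects it before it lands anywhere."""
    accepted = []
    for role in EXPERT_AGENTS:
        draft = run_agent(role, task)
        # This review gate is the point: it's human orchestration,
        # not unsupervised agents handing work to each other.
        if human_approves(draft):
            accepted.append(draft)
    return accepted

print(orchestrate("add retry logic to the upload client"))
```

The key design choice is that the loop routes every draft through a human gate rather than chaining agent output into agent input.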
4. The most obvious limit is that, under the hood, an LLM is a stochastic token predictor, and while it’s tempting to pretend that consciousness and a theory of mind can be built from stochastic token prediction, this is absolutely not AGI, and any use cases outside of token prediction are going to be very sad.
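“Stochastic token prediction” just means: turn scores over a vocabulary into probabilities and sample the next token. A toy sketch (the vocabulary and logit values are made up for illustration):

```python
import math
import random

def softmax(logits):
    # Subtract the max for numerical stability, then normalize.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def sample_next_token(vocab, logits, temperature=1.0, rng=random):
    """Sample one token from the distribution implied by the logits.
    Lower temperature sharpens the distribution; it never adds knowledge."""
    probs = softmax([l / temperature for l in logits])
    return rng.choices(vocab, weights=probs, k=1)[0]

# Made-up scores a model might assign after "The cat sat on the"
vocab  = ["mat", "roof", "keyboard", "moon"]
logits = [3.2,   1.1,    0.7,        -2.0]

random.seed(0)
print(sample_next_token(vocab, logits, temperature=0.7))
```

That sampling loop is the entire mechanism; everything else is scale and training.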
5. Despite these and other limits, the entire market and “expert” class is acting as if we have AGI, in the most over-the-top magical-thinking way possible. Everyone is simply assuming AGI, or that we will soon have it. They’re acting as if it’s a genie or a god, and they’re (mal)investing accordingly. We very clearly have not reached AGI, and the hysteria around this is obvious.
6. Local LLMs are only months behind the big cloud services. We’ve seen this story before (IBM mainframes, film processing labs, music studios, etc.), but in those examples the disruption came after decades of profits. The giant data centers may be disrupted long before anyone sees a profit. The true innovation in AI is unlikely to come from centralized cloud providers (with armies of bureaucrats and blatant attempts to form regulatory moats) when decentralized AI is right around the corner. It won’t kill the data centers, just like PCs didn’t kill mainframes, but there’s a reason Microsoft became bigger than IBM.