Predictions for 2026
Let’s commit to a few thoughts and see where we are in 12 months’ time.
I’ve never done this before, but it seems fitting to start the blog with some predictions for the next year. I can’t really imagine that in 12 months I’ll think these were very astute or insightful, but that’s part of the point.
(1) The AI bubble will deflate but not pop
and Nvidia will end the year with a share price higher than today’s.
It’s almost accepted wisdom now: we’re in a bubble, and it can’t last long. And while quite a lot of companies are raising funding rounds at inflated valuations, too much hype doesn’t mean the underlying tech is going away any time soon. We might have reached peak LLM, but AI is here to stay.
(2) There will be a new open-source SOTA LLM from a US company.
There is a gap in the market for a US-based, US-developed SOTA model. For a while this was the Llama family, but with Meta changing course and a disappointing showing from their last model, researchers and developers are turning to Chinese alternatives. And having heard rumours of academics at national labs being told in no uncertain terms to stop what they are doing, even mid-download of weights for models like DeepSeek or Qwen, the opening here is significant. (Given that I’ve been slow getting this out, and DeepSeek keeps releasing papers, this feels a little shaky, but I’ll stand by it.)
(3) It will be a good year for AI4Science
The trend of both big AI labs and dedicated AI-first scientific labs publishing fundamental scientific work will continue, and both OpenAI and Anthropic will start releasing scientific discoveries. In the therapeutics space I expect more high-profile “AI-designed drugs” to enter clinical trials (looking at you, Isomorphic Labs), but despite a lot of up-front hype the real-world results will be improvements rather than step changes. My gut feeling is that this will start to take prominence in the justification for big AI labs’ continued spending and raising.
And for what it’s worth, I don’t think that’s a gimmick: automating large portions of the scientific workflow and modelling is a natural use of AI, and in many ways a simpler problem to solve than general conversational agents. The analogy I like is learning a foreign language. (Bear with me on this tangent, it does make sense.) When starting to learn a language, the natural inclination is to say something like “I don’t want to do it too seriously, I’d just like to be able to have a bit of a conversation when I’m on holiday”, completely missing that this is just about the hardest thing to do. It’s listening, composition, and speaking, all at speed, on any topic, and usually full of slang. Whereas if you say “I want to read detailed news articles, or my favourite fiction book”, that feels harder but is much easier to control. You can use a dictionary, you’ve got time to understand each sentence, it’s usually on one topic, and most importantly you’re expecting to check your work as you go.
Slightly tortured analogy aside, this is where I see AI4Science: general ML methods applied to a tight domain, with a concrete line of investigation and a feedback loop of data to fit to and correct against.
Some quantifiable predictions? Anthropic will publish a paper with Claude as a co-author. Isomorphic will have another AI-generated drug in clinical trials. But Navier-Stokes will not be solved in 2026.
(4) There will be a lot of hype around a few new ideas - but no fundamental architecture shift
Lots of labs and research teams are obviously looking for the next step past Transformers: maybe it’s world models, maybe some sort of continual-learning paradigm, maybe something neurosymbolic. Honestly I don’t know, and I don’t think we will know by the end of the year either.
Every single week there will be a new paper hyped by someone, but we’ll have to do a lot of sifting through the noise before we find something good.
But this will be the most exciting area this year, and for my two cents, I back world models. There’s a physical intuition to them that appeals to my background. Maybe I’ll write some more coherent thoughts about this later.
(5) Reality hits - demos have to become products
The natural progression will continue: the flood of pretty cool but often not fundamentally useful product releases will tighten. Investors and customers will keep putting pressure on the labs to release products that give people a genuine sense of productivity. This is much, much harder than making an impressive demo, and as a result I think we’ll start to see smaller organisations falling behind and unable to raise in the way they need to continue.
Again though - we should note how little genuine credit gets given to the successful products. They get taken completely for granted; it’s worth remembering where we were just two years ago.
(6) Voice chat
A bit more on a personal note: late this year I was using ChatGPT voice mode again after neglecting it for 18 months or so, and I was really impressed. It felt almost natural and was easy to use for hours at a time, limited mostly by session-length caps. As someone who historically hates talking to my technology, I was really surprised, and I noted that Claude at least was not in the same league. I think more people will have the same realisation this year, and voice might start feeling like a real option for default use of these models.