
"AI Futures Timelines and Takeoff Model: Dec 2025 Update" by elifland, bhalstead, Alex Kastner, Daniel Kokotajlo
06.1.2026 | 50 Min.
We’ve significantly upgraded our timelines and takeoff model! It predicts when AIs will reach key capability milestones: for example, Automated Coder / AC (full automation of coding) and superintelligence / ASI (much better than the best humans at virtually all cognitive tasks). This post will briefly explain how the model works, present our timelines and takeoff forecasts, and compare it to our previous (AI 2027) models (spoiler: the AI Futures Model predicts about 3 years longer timelines to full coding automation than our previous model, mostly due to being less bullish on pre-full-automation AI R&D speedups). If you’re interested in playing with the model yourself, the best way to do so is via this interactive website: aifuturesmodel.com. If you’d like to skip the motivation for our model and go straight to an explanation of how it works, go here. The website has a more in-depth explanation of the model (starts here; use the diagram on the right as a table of contents), as well as our forecasts. Why do timelines and takeoff modeling? The future is very hard to predict. We don't think this model, or any other model, should be trusted completely. The model takes into account what we think are [...]
---Outline:
(01:32) Why do timelines and takeoff modeling?
(03:18) Why our approach to modeling? Comparing to other approaches
(03:24) AGI timelines forecasting methods
(03:29) Trust the experts
(04:35) Intuition informed by arguments
(06:10) Revenue extrapolation
(07:15) Compute extrapolation anchored by the brain
(09:53) Capability benchmark trend extrapolation
(11:44) Post-AGI takeoff forecasts
(13:33) How our model works
(14:37) Stage 1: Automating coding
(16:54) Stage 2: Automating research taste
(18:18) Stage 3: The intelligence explosion
(20:35) Timelines and takeoff forecasts
(21:04) Eli
(24:34) Daniel
(38:32) Comparison to our previous (AI 2027) timelines and takeoff models
(38:49) Timelines to Superhuman Coder (SC)
(43:33) Takeoff from Superhuman Coder onward
The original text contained 31 footnotes which were omitted from this narration.
--- First published: December 31st, 2025 Source: https://www.lesswrong.com/posts/YABG5JmztGGPwNFq2/ai-futures-timelines-and-takeoff-model-dec-2025-update --- Narrated by TYPE III AUDIO.

"In My Misanthropy Era" by jenn
05.1.2026 | 13 Min.
For the past year I've been sinking into the Great Books via the Penguin Great Ideas series, because I wanted to be conversant in the Great Conversation. I am occasionally frustrated by this endeavour, but overall, it's been fun! I'm learning a lot about my civilization and the various curmudgeons that shaped it. But one dismaying side effect is that it's also been quite empowering for my inner 13-year-old edgelord. Did you know that before we invented woke, you were just allowed to be openly contemptuous of people? Here's Schopenhauer on the common man: They take an objective interest in nothing whatever. Their attention, not to speak of their mind, is engaged by nothing that does not bear some relation, or at least some possible relation, to their own person: otherwise their interest is not aroused. They are not noticeably stimulated even by wit or humour; they hate rather everything that demands the slightest thought. Coarse buffooneries at most excite them to laughter: apart from that they are earnest brutes – and all because they are capable of only subjective interest. It is precisely this which makes card-playing the most appropriate amusement for them – card-playing for [...] The original text contained 3 footnotes which were omitted from this narration. --- First published: January 4th, 2026 Source: https://www.lesswrong.com/posts/otgrxjbWLsrDjbC2w/in-my-misanthropy-era --- Narrated by TYPE III AUDIO.

"2025 in AI predictions" by jessicata
03.1.2026 | 21 Min.
Past years: 2023 2024
Continuing a yearly tradition, I evaluate AI predictions from past years, and collect a convenience sample of AI predictions made this year. In terms of selection, I prefer selecting specific predictions, especially ones made about the near term, enabling faster evaluation. Evaluated predictions made about 2025 in 2023, 2024, or 2025 mostly overestimate AI capabilities advances, although there's of course a selection effect (people making notable predictions about the near-term are more likely to believe AI will be impressive near-term). As time goes on, "AGI" becomes a less useful term, so operationalizing predictions is especially important. In terms of predictions made in 2025, there is a significant cluster of people predicting very large AI effects by 2030. Observations in the coming years will disambiguate.
Predictions about 2025
2023
Jessica Taylor: "Wouldn't be surprised if this exact prompt got solved, but probably something nearby that's easy for humans won't be solved?" The prompt: "Find a sequence of words that is:
- 20 words long
- contains exactly 2 repetitions of the same word twice in a row
- contains exactly 2 repetitions of the same word thrice in a row"
Self-evaluation: False; I underestimated LLM progress [...]
---Outline:
(01:09) Predictions about 2025
(03:35) Predictions made in 2025 about 2025
(05:31) Predictions made in 2025
--- First published: January 1st, 2026 Source: https://www.lesswrong.com/posts/69qnNx8S7wkSKXJFY/2025-in-ai-predictions --- Narrated by TYPE III AUDIO.
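As an aside, scoring answers to that prompt already requires an interpretation choice, since a word repeated three times in a row arguably also contains a word repeated twice in a row. Here is one possible reading expressed as a checker (our illustration, not part of the original post): count maximal runs of identical consecutive words, and require exactly two runs of length 2 and exactly two runs of length 3.

```python
# One possible interpretation of the prompt's constraints (our illustration, not Jessica's):
# exactly two maximal runs where a word repeats twice in a row, and exactly two maximal
# runs where a word repeats three times in a row, in a 20-word sequence.
def check(words: list[str]) -> bool:
    if len(words) != 20:
        return False
    runs = []
    i = 0
    while i < len(words):
        j = i
        while j < len(words) and words[j] == words[i]:
            j += 1
        runs.append(j - i)   # length of this maximal run of identical words
        i = j
    doubles = sum(1 for r in runs if r == 2)
    triples = sum(1 for r in runs if r == 3)
    return doubles == 2 and triples == 2

# Example satisfying this reading: two double runs, two triple runs, ten distinct fillers.
example = (["a", "a", "b", "b", "c", "c", "c", "d", "d", "d"]
           + [f"w{k}" for k in range(10)])
print(check(example))  # True under this interpretation
```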

"Good if make prior after data instead of before" by dynomight
27.12.2025 | 17 Min.
They say you’re supposed to choose your prior in advance. That's why it's called a “prior”. First, you’re supposed to say how plausible different things are, and then you update your beliefs based on what you see in the world. For example, currently you are—I assume—trying to decide if you should stop reading this post and do something else with your life. If you’ve read this blog before, then lurking somewhere in your mind is some prior for how often my posts are good. For the sake of argument, let's say you think 25% of my posts are funny and insightful and 75% are boring and worthless. OK. But now here you are reading these words. If they seem bad/good, then that raises the odds that this particular post is worthless/non-worthless. For the sake of argument again, say you find these words mildly promising, meaning that a good post is 1.5× more likely than a worthless post to contain words with this level of quality. If you combine those two assumptions, that implies that the probability that this particular post is good is 33.3%. That's true because the red rectangle below has half the area of the blue [...]
---Outline:
(03:28) Aliens
(07:06) More aliens
(09:28) Huh?
--- First published: December 18th, 2025 Source: https://www.lesswrong.com/posts/JAA2cLFH7rLGNCeCo/good-if-make-prior-after-data-instead-of-before --- Narrated by TYPE III AUDIO.
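For readers who want to check the 33.3% figure, it is a standard odds-form Bayes update: prior odds of 1:3 that the post is good, multiplied by a likelihood ratio of 1.5. A minimal sketch with the post's numbers (the variable names are ours, not dynomight's):

```python
# Odds-form Bayes update with the numbers from the post.
prior_good = 0.25        # prior probability that a given post is good
likelihood_ratio = 1.5   # good posts are 1.5x as likely to contain words this promising

prior_odds = prior_good / (1 - prior_good)           # 1:3
posterior_odds = prior_odds * likelihood_ratio       # 0.5, i.e. 1:2
posterior_good = posterior_odds / (1 + posterior_odds)
print(round(posterior_good, 3))                      # 0.333, the 33.3% in the post
```

The red-versus-blue rectangle comparison the post refers to is the same arithmetic drawn as areas: 0.25 × 1.5 = 0.375 against 0.75 × 1 = 0.75, and 0.375 is half of 0.75.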

"Measuring no CoT math time horizon (single forward pass)" by ryan_greenblatt
27.12.2025 | 12 Min.
A key risk factor for scheming (and misalignment more generally) is opaque reasoning ability. One proxy for this is how good AIs are at solving math problems immediately without any chain-of-thought (CoT) (as in, in a single forward pass). I've measured this on a dataset of easy math problems and used this to estimate the 50% reliability no-CoT time horizon using the same methodology introduced in Measuring AI Ability to Complete Long Tasks (the METR time horizon paper). Important caveat: To get human completion times, I ask Opus 4.5 (with thinking) to estimate how long it would take the median AIME participant to complete a given problem. These times seem roughly reasonable to me, but getting some actual human baselines and using these to correct Opus 4.5's estimates would be better. Here are the 50% reliability time horizon results: I find that Opus 4.5 has a no-CoT 50% reliability time horizon of 3.5 minutes and that time horizon has been doubling every 9 months. In an earlier post (Recent LLMs can leverage filler tokens or repeated problems to improve (no-CoT) math performance), I found that repeating the problem substantially boosts performance. In the above plot, if [...]
---Outline:
(02:33) Some details about the time horizon fit and data
(04:17) Analysis
(06:59) Appendix: scores for Gemini 3 Pro
(11:41) Appendix: full result tables
(11:46) Time Horizon - Repetitions
(12:07) Time Horizon - Filler
The original text contained 2 footnotes which were omitted from this narration.
--- First published: December 26th, 2025 Source: https://www.lesswrong.com/posts/Ty5Bmg7P6Tciy2uj2/measuring-no-cot-math-time-horizon-single-forward-pass --- Narrated by TYPE III AUDIO.
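For listeners curious about the mechanics: the METR-style time-horizon fit referenced above amounts to fitting a logistic curve of success probability against the log of estimated human solve time, then reading off where the curve crosses 50%. A simplified sketch under that assumption (the data below is made up for illustration and is not the author's; the real methodology fits per-task binary outcomes rather than bucketed rates):

```python
# Simplified sketch of a METR-style 50% time-horizon fit (illustrative data, not the author's).
import numpy as np
from scipy.optimize import curve_fit

def logistic(log2_minutes, a, b):
    # Success probability as a logistic function of log2(estimated human solve time).
    return 1.0 / (1.0 + np.exp(-(a + b * log2_minutes)))

# Hypothetical buckets: estimated human solve time (minutes) and observed no-CoT success rate.
human_minutes = np.array([0.5, 1, 2, 4, 8, 16, 32])
success_rate  = np.array([0.95, 0.90, 0.80, 0.55, 0.35, 0.15, 0.05])

(a, b), _ = curve_fit(logistic, np.log2(human_minutes), success_rate, p0=[0.0, -1.0])
time_horizon_50 = 2 ** (-a / b)   # the logistic crosses 0.5 where a + b * log2(t) = 0
print(f"50% time horizon ≈ {time_horizon_50:.1f} minutes")

# The post's trend claim corresponds to exponential growth of this quantity:
# horizon(t_months) = horizon_now * 2 ** (t_months / 9), i.e. doubling every 9 months.
```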



LessWrong (Curated & Popular)