As recently as late 2022, artificial intelligence was a topic for research conferences and Silicon Valley optimists. By early 2026, it accounts for three-quarters of the gains in the S&P 500, commands trillions of dollars in infrastructure investment, is reshaping corporate strategy across every major industry, and is raising questions about the future of knowledge work that no previous technology has posed with such immediacy. All of this has happened in just over three years - and the pace is not slowing.
The question this article seeks to answer is deceptively simple: is artificial intelligence a trend, or a revolution unlike anything that came before it? The difficulty is that it may be both. In the words of Sam Altman, the CEO of OpenAI: “Are we in a phase where investors as a whole are overexcited about AI? My opinion is yes. Is AI the most important thing to happen in a very long time? My opinion is also yes.” If both are true simultaneously, then the tools we normally use to distinguish speculative excess from genuine transformation may not be sufficient.
To make sense of this moment, this article draws on two voices who approach it from radically different positions. The first is Howard Marks, veteran investor, co-founder of Oaktree Capital, and one of the most respected voices in global finance. The second is Leopold Aschenbrenner, a young German-born former OpenAI researcher who translated his inside knowledge of AI's trajectory into one of the fastest-growing hedge funds in recent history. They do not disagree so much as they illuminate different parts of the same question - and together, they offer a more complete picture than either provides alone.
This article is based substantially on two memos Marks published in late 2025 and early 2026 - "Is It a Bubble?" and "AI Hurtles Ahead" - which I would encourage every reader to engage with in full. What follows builds on his analysis to walk through the economics of this moment: the bubble question, the capital dynamics, the technology itself, and the implications for a generation of students preparing to enter a labour market that may no longer exist in the form they expected. The goal is not to prophesy the future, but to understand the forces at work well enough to engage with them, rather than simply being swept along.
The Voices
Howard Marks has spent five decades watching markets overshoot and correct. He is the co-founder and co-chairman of Oaktree Capital, a leader among global investment managers specialising in alternative investments, with $223 billion in assets under management as of December 2025. His career has been built on a conviction that understanding investor behaviour matters more than understanding any individual technology or asset - and on a track record that has proven him right repeatedly. Warren Buffett has said that when he sees a memo from Marks in his mail, it is the first thing he opens and reads.
Marks called the dot-com bubble in 2000 and warned of the dearth of risk aversion that preceded the Global Financial Crisis in 2005-07. In neither case did he possess expertise in the subject at the centre of the excess - internet stocks in the first instance, sub-prime mortgage-backed securities in the second. What he possessed was a framework for recognising when investor behaviour had become untethered from fundamentals, and the willingness to say so publicly. He is explicit about the limits of that approach when it comes to AI. “I’m no techie,” he writes in his December 2025 memo, “and I don’t know any more about AI than most generalist investors.” He deliberately stays in his lane - market behaviour, investor psychology, historical pattern recognition - and that discipline is precisely what makes his analysis credible. He does not claim to understand the technology. What he understands is what happens when investors become convinced that a technology will change everything.
Leopold Aschenbrenner approaches the AI moment from a different vantage point. Born in Berlin, he enrolled at Columbia University at the age of 15 and graduated as valedictorian (first in his class) at 19. He joined OpenAI's Superalignment team in 2023 - the group tasked with ensuring that AI systems smarter than humans could be reliably controlled - and was fired in 2024 following disputes over internal security practices. Within months, he published "Situational Awareness: The Decade Ahead," a 165-page thesis arguing that artificial general intelligence could arrive by 2027, and founded a hedge fund built on that conviction. In the essay, Aschenbrenner wrote that there were perhaps a few hundred people, mostly in San Francisco and the AI labs, who had genuine awareness of what was coming. They had been dismissed as extreme. They had trusted the trendlines. And they had turned out to be right about the advances of the past few years. Whether they would be remembered as footnotes or as figures on the scale of Oppenheimer, as he put it, remained to be seen.
His fund, Situational Awareness LP, offers one measure of how seriously the market has taken his thesis. It grew from $225 million to $5.5 billion in US equity exposure in a single year, returning 47% net of fees in the first half of 2025 - against a 6% gain for the S&P 500 and a 7% average for tech-focused hedge funds. Aschenbrenner is not outside the bubble looking in. He is operating squarely within it - and so far, that positioning has paid off spectacularly. Whether it continues to do so is precisely the kind of question Marks’ framework is designed to answer.
These two figures are not in disagreement. They are looking at different parts of the same phenomenon. Marks provides the historical and behavioural framework - the patterns of excess, the role of psychology, the risks of capital misallocation. Aschenbrenner provides the inside knowledge of what the technology can actually do, and a case study in what disciplined conviction within a bubble can produce. One reads the behaviour of markets. The other understands the substance of what is driving them. Together, they give us more than either can alone.
Anatomy of a Bubble
Marks begins his analysis with a distinction that is easy to state and difficult to apply. Market bubbles, he argues, are not caused directly by technological or financial developments. They are caused by the application of excessive optimism to those developments. A technology can be genuine, even transformative, and the enthusiasm surrounding it can still be irrational. The boundary between warranted excitement and speculative excess is not a line that can be measured. It is a matter of judgment - and it is almost always identified more clearly in retrospect than in real time. Alan Greenspan’s phrase “irrational exuberance” captures the problem precisely. That investors are applying exuberance to AI is beyond question. Whether that exuberance is irrational is the question no one can yet answer with certainty.
This framing becomes more useful when Marks introduces a distinction drawn from Byrne Hobart and Tobias Huber’s book Boom: Bubbles and the End of Stagnation. Hobart and Huber propose that not all bubbles are alike. They identify two kinds. The first are what they call “mean-reversion bubbles” - speculative manias built around financial innovations that promise returns without broader progress. The South Sea Company in the early 1700s and sub-prime mortgage-backed securities in the mid-2000s are examples. There was no expectation that these developments would move the world forward. There was only a belief that money could be made. When the belief collapsed, the bubble did too, and the world reverted to where it had been. Nothing lasting was built.
The second kind - and the kind that matters for AI - are "inflection bubbles." These form around genuine technological breakthroughs: the railroads, electrification, the internet. In an inflection bubble, the world does not revert to its prior state after the crash. The technology is real, and it endures. But the investors who financed its buildout are often destroyed in the process. As Marks notes, citing the work of Carlota Perez, speculative mania enables what Perez calls the "Installation Period" - a phase in which vast amounts of capital are poured into new infrastructure, much of it unwisely. The bubble's collapse then triggers the "Deployment Period," in which the technology matures and its benefits are realised - but on the foundations that the lost capital built. The people who funded the installation frequently do not survive to benefit from the deployment.
This is the paradox at the heart of inflection bubbles, and it is worth sitting with. If investors remained patient, prudent, and analytical, novel technologies would take many years - perhaps decades - to be built out. Instead, the frenzy of a bubble compresses the process into a dramatically shorter period. Money pours in from every direction, much of it chasing opportunities that will not materialise. A portion funds the genuine winners. The rest is incinerated. But the net effect is that the technology advances faster than it ever could have through disciplined, rational capital allocation alone. As Hobart and Huber write: “Not all bubbles destroy wealth and value. Some can be understood as important catalysts for techno-scientific progress.” The uncomfortable corollary is that the progress and the destruction are inseparable.
The historical record reinforces this pattern with uncomfortable consistency. Marks opens his December memo with a passage describing a transformative technology ascending, requiring unprecedented sums of investment, amid widespread fears that the country's biggest corporations are propping up a bubble. The passage applies word-for-word to the AI moment in 2025 - and it was written about the American railroad boom of the 1860s. The parallel is not a coincidence. It is, in the phrase attributed to Mark Twain that Marks invokes, history rhyming.
The examples multiply. RCA, the Radio Corporation of America, has been described as “the Nvidia of its day.” Its stock price peaked in 1929 and then lost 97% of its value in the crash. Aviation stocks, inflated by the excitement surrounding Charles Lindbergh’s transatlantic flight in 1927, dropped 96% between May 1929 and May 1932. Both technologies were real. Both transformed the world. Both destroyed the investors who believed that transformation guaranteed returns. Warren Buffett made the same point about the automobile. Roughly 2,000 car companies were founded in America in the early twentieth century. Only three survived. “Autos had an enormous impact on America,” Buffett observed, “but the opposite direction on investors.”
The dot-com bubble offers the most recent and most directly comparable case. Between 1995 and its peak on March 10, 2000, the NASDAQ Composite rose 572%, from 751 to 5,048. The price-to-earnings ratio reached 200 - dwarfing anything seen before. When the bubble burst, the index fell 78%, bottoming at 1,114 in October 2002. More than $5 trillion in market value was destroyed. Over half of publicly listed dot-com companies failed by 2004. Venture funding collapsed by 95%. The NASDAQ did not reclaim its March 2000 peak for fifteen years. And yet - Amazon, which fell from $100 to $7 per share, survived and became one of the most valuable companies in history. So did eBay. So did Google, which was founded during the bubble and listed after it burst. The technology was real. The infrastructure that the bubble financed - fibre-optic networks, server farms, broadband rollout - became the backbone of the modern digital economy. The investors who paid for it, in the main, did not benefit.
There is, however, a dimension of this pattern that is especially treacherous for investors: the winner problem. Even when the technology proves transformative, the companies that lead during the bubble are not necessarily the companies that dominate afterward. Marks, drawing on the analysis of his Oaktree co-CEO Bob O’Leary, highlights two cases that illustrate the point. Lycos was the leading search engine before Google arrived and captured the entire market. MySpace dominated social media before Facebook displaced it completely. In both cases, late entrants with superior products overtook the early leaders. The AI landscape demands a more nuanced reading of this pattern. Today’s major players - Microsoft, Alphabet, Amazon - are not fragile startups. They are enormously profitable companies with diversified revenue streams and strong balance sheets; even if their AI investments underperform, they will not disappear. But dominance in one technological era does not guarantee dominance in the next. The real question is not whether these firms survive, but whether they lead - and whether the startups currently attracting billion-dollar valuations with no products and no revenue will exist at all in five years. That uncertainty - not about whether the technology is real, but about who captures its value - is what makes inflection bubbles so dangerous for investors. You can be right about the revolution and still lose by backing the wrong position within it.
The pattern, stated plainly, is this: every transformational technology in modern economic history has generated excessive enthusiasm, attracted more capital than the opportunity could absorb, produced more infrastructure than was immediately needed, and destroyed a significant portion of the wealth that financed its buildout. The technology endured. The investors, by and large, did not. As Derek Thompson concluded in his analysis of the AI moment: “Given the amount of debt now flowing into AI data centre construction, I think it’s unlikely that AI will be the first transformative technology that isn’t overbuilt and doesn’t incur a brief painful correction.” There is an old warning in finance that the four most dangerous words are “this time it’s different.” And yet, sometimes it is. But as Marks notes, it is the behaviour based on the belief that this time is different that tends to ensure it is not.
The AI Capital Machine
If the pattern of inflection bubbles holds - and Marks argues there is no reason to assume it will not - then the question is not whether capital will be misallocated in the AI buildout. It is how much, by whom, and with what consequences. The numbers involved provide a starting point.
JPMorgan analysts have estimated that the total infrastructure bill for the AI buildout - encompassing data centres, chips, power generation, and supporting systems - could reach $5 trillion. The near-term spending commitments are already vast. Close to half a trillion dollars is expected to flow into AI infrastructure in the coming year alone.
It is worth pausing on the sheer scale of the numbers involved, because Marks makes a point that is easy to read past and hard to internalise. A million dollars is a dollar a second for 11.6 days. A billion dollars is a dollar a second for 31.7 years. A trillion dollars is a dollar a second for 31,700 years. The commitments being made in the AI buildout are denominated in trillions. Figures at this scale exceed the human capacity to grasp them intuitively - and when investors and executives can no longer feel the weight of the numbers they are committing, the discipline that should accompany those commitments erodes.
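For readers who want to check the arithmetic, the conversion is trivial to reproduce. The sketch below simply restates Marks' dollar-a-second figures in Python; every number follows from the definitions, nothing is new data.

```python
# Marks' dollar-a-second arithmetic: how long does it take to spend
# a given sum at the rate of one dollar per second?
SECONDS_PER_DAY = 60 * 60 * 24                 # 86,400 seconds
SECONDS_PER_YEAR = SECONDS_PER_DAY * 365.25    # ~31.6 million seconds

for label, amount in [("million", 1e6), ("billion", 1e9), ("trillion", 1e12)]:
    days = amount / SECONDS_PER_DAY
    years = amount / SECONDS_PER_YEAR
    print(f"A {label} dollars: {days:,.1f} days, or {years:,.1f} years")

# A million dollars:  11.6 days
# A billion dollars:  11,574.1 days, or 31.7 years
# A trillion dollars: 11,574,074.1 days, or 31,688.7 years
```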
The five largest spenders - Microsoft, Alphabet, Amazon, Meta, and Oracle - collectively held roughly $350 billion in cash as of late 2025. The arithmetic is plain: the gap between what these companies have and what they have committed to spend has to be filled with debt. And it is being filled with debt - on terms that raise serious questions about risk.
Oracle, Meta, and Alphabet have all issued 30-year bonds to finance AI investments, at yields that exceed those on US Treasuries of comparable maturity by as little as 100 basis points - one percentage point. For an investor considering those bonds, the question is uncomfortable: is it prudent to accept three decades of technological uncertainty in exchange for a fixed-income return barely above the risk-free rate? Will the AI chips and data centres those bonds are financing maintain their productivity - and their value - long enough for the obligations to be repaid? Thirty years is a long time in any industry. In a field evolving as rapidly as AI, it is an eternity.
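One way to make that question concrete is a standard back-of-envelope credit calculation. The sketch below is illustrative rather than a valuation of any actual bond: the 100-basis-point spread comes from the text, while the 40% recovery rate is an assumption of mine.

```python
# Back-of-envelope credit math for a 30-year bond at Treasuries + 100bp.
# A credit spread roughly compensates the lender for expected annual
# default losses:  spread ~= annual_default_prob * (1 - recovery_rate)
spread = 0.01           # 100 basis points, from the text
recovery_rate = 0.40    # assumed recovery on default (my assumption)

breakeven_annual_default = spread / (1 - recovery_rate)
survival_30y = (1 - breakeven_annual_default) ** 30

print(f"Break-even annual default probability: {breakeven_annual_default:.1%}")  # 1.7%
print(f"Implied 30-year survival at that rate: {survival_30y:.0%}")              # 60%
```

On those assumptions, the spread compensates the lender only if the issuer's annual default probability stays below roughly 1.7% for three full decades - a demanding bet in a field this young.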
The question of how to finance a technological revolution is not merely practical. It is structural - and Marks, drawing on his Oaktree co-CEO Bob O’Leary, provides a framework that any economics student will recognise. Technological competitions, O’Leary argues, tend toward winner-takes-all or winner-takes-most outcomes. This has profound implications for how they should be financed.
Consider equity first. If an investor takes equity stakes in ten companies pursuing the same technological opportunity, and nine of those companies fail, the massive gain from the one winner can more than compensate for the losses on the other nine. This is the venture capital formula. The investor does not need to identify the winner in advance. They need to be invested in a pool broad enough that the winner is likely to be among their holdings. The scale of the winner’s success - often measured in hundreds or thousands of percent - absorbs the total losses elsewhere. This is why venture capital has historically been the instrument of choice for financing technological frontiers.
Debt works in precisely the opposite direction. If a lender extends credit to ten companies and nine fail, the lender loses a substantial portion of the capital deployed to the failures. On the one company that succeeds, the lender earns only the agreed coupon - the interest rate on the loan. Unlike equity, debt does not participate in the upside. A lender to the next Nvidia earns the same modest return they would have earned lending to the next failure. The coupon on the winner is grossly insufficient to compensate for the impairments on the losers. As O’Leary puts it bluntly: if you cannot even identify the pool of companies from which the winner will emerge, the distinction between debt and equity is irrelevant - you are a zero either way.
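A toy calculation makes the asymmetry vivid. The numbers below - the winner's 30x multiple, the 20% recovery on failed loans, the 6% coupon - are illustrative assumptions of mine, not figures from O'Leary or Marks; it is the structure of the payoffs that matters.

```python
# Toy model of O'Leary's point: ten companies, nine fail, one wins.
# Equity participates in the winner's upside; debt earns only its coupon.
STAKE = 10.0          # capital deployed per company
N_FAIL, N_WIN = 9, 1

# Equity: failures go to zero; the one winner returns, say, 30x.
WINNER_MULTIPLE = 30.0
equity_back = N_FAIL * 0.0 + N_WIN * STAKE * WINNER_MULTIPLE
equity_return = equity_back / (10 * STAKE) - 1
print(f"Equity portfolio return: {equity_return:+.0%}")   # +200%

# Debt: failures recover, say, 20 cents on the dollar; the winner
# repays principal plus a 6% coupon compounded over 5 years.
RECOVERY, COUPON, YEARS = 0.20, 0.06, 5
debt_back = N_FAIL * STAKE * RECOVERY + N_WIN * STAKE * (1 + COUPON) ** YEARS
debt_return = debt_back / (10 * STAKE) - 1
print(f"Debt portfolio return:   {debt_return:+.0%}")     # about -69%
```

The exact figures are arbitrary; what the structure shows is that the equity investor is paid by the winner while the lender is paid by the coupon - and only one of those scales with success.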
This framework makes it possible to distinguish between what Gil Luria, Head of Technology Research at D.A. Davidson, described as healthy and unhealthy behaviour in the AI buildout - a spectrum Marks examines closely in his December memo.
At the healthy end sit companies like Microsoft, Amazon, and Google. These firms are making large AI investments, but they are doing so backed by enormous existing cash flows from profitable non-AI businesses. They already have customers paying for AI services. When they invest, they are deploying money they have already earned, and they possess the balance sheet strength to absorb losses if their AI bets do not pay off in full. Their AI spending is ambitious, but it is grounded in the discipline of existing revenue. This is sound capital allocation.
At the unhealthy end, the picture is markedly different. Startups with no revenue and no customers are borrowing money to build data centres for other startups that are also losing cash. The investment is backed not by cash flow but by the expectation of future demand that may or may not materialise. The debt is secured against physical assets - data centres, chips - whose future value is entirely uncertain. As Luria frames it: debt is appropriate when you have predictable cash flow or a tangible asset that can back the loan. Equity is appropriate for speculative ventures where the cash flow is unknown. When you start confusing the two - financing speculative ventures with debt instruments - you get yourself in trouble. That confusion, he warns, is increasingly visible in the AI sector.
The speculative end of the market has produced some remarkable illustrations. Marks cites Thinking Machines Lab, a startup founded by a former OpenAI executive, which raised $2 billion at a $10 billion valuation while refusing to tell investors what the company was building. As one investor described the pitch meeting: the founders said they were building an AI company with the best AI people, but could not answer any questions. Months later, the company was reportedly in talks to raise again at a valuation of roughly $50 billion. The pattern extends beyond Marks' examples. Safe Superintelligence, founded by OpenAI's former chief scientist, raised $2 billion at a $32 billion valuation - again with no publicly released product or service. These are, in the most literal sense, lottery tickets. Marks acknowledges the logic: if the potential payoff is large enough, even an overwhelming probability of failure can produce a positive expected value. The problem, as he notes, is that "thinking about a trillion-dollar payout will override reasonableness in any calculation."
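The lottery-ticket logic is simple enough to write out. With hypothetical numbers - a 0.5% chance of success and a trillion-dollar payoff, neither drawn from any actual deal - the expected value of a $2 billion stake is positive despite near-certain failure:

```python
# Expected value of a "lottery ticket" venture bet (hypothetical numbers):
# even a 99.5% chance of total loss yields a positive expected value
# if the success-case payoff is large enough.
stake = 2e9          # capital committed, e.g. a $2bn raise
p_success = 0.005    # assumed 0.5% chance the bet pays off
payoff = 1e12        # assumed trillion-dollar outcome in the success case

expected_value = p_success * payoff - stake   # expected payoff minus stake
print(f"Expected value: ${expected_value / 1e9:.1f}bn")   # $3.0bn > 0
```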
Compounding the concern is the emergence of what appear to be circular transactions between major AI players. The mechanics are worth walking through. In September 2025, Nvidia announced it intended to invest up to $100 billion in OpenAI, tied to the deployment of new data centre infrastructure. The deal generated headlines and helped fuel an AI-infrastructure rally. It was never finalised. By early 2026, Nvidia’s actual investment had landed at $30 billion, as part of OpenAI’s record $110 billion funding round. But the direction of the money is what matters. OpenAI receives Nvidia’s capital - and uses it to buy or lease Nvidia’s chips. The capital flows from investor to recipient and back to investor. Both parties report activity: investment on one side, revenue on the other. The outside observer is entitled to ask how much of this represents genuine economic value creation, and how much is the same capital completing a circle. Goldman Sachs has estimated that approximately 15% of Nvidia’s sales come from arrangements that critics describe as circular. For a company that briefly reached a $5 trillion market capitalisation, the question of how much of its revenue base rests on such arrangements is not trivial.
The scale of circular commitments extends further. OpenAI has made investment commitments to industry counterparties totalling $1.4 trillion - despite having never turned a profit. The company has stated that these commitments are to be paid out of revenues received from the same parties, and that it retains mechanisms to exit the arrangements. But the circularity is difficult to ignore. Marks poses the question directly: has the AI industry developed a perpetual motion machine? The comparison to the telecom boom of the late 1990s, in which fibre-optic companies engaged in reciprocal transactions that allowed both parties to report profits on what was essentially the same money, is one Marks draws explicitly.
Through all of this, Marks himself is not standing on the sidelines. Oaktree has made investments in data centres. Brookfield, Oaktree's parent company, is raising a $10 billion fund for investment in AI infrastructure, backed by its own capital alongside commitments from sovereign wealth funds and Nvidia, to which it intends to apply what it describes as "prudent" debt. Marks does not see this as a contradiction. His distinction - one of the sharpest lines in either memo - is this: "It's okay to supply debt financing for a venture where the outcome is uncertain. It's not okay where the outcome is purely a matter of conjecture. Those who understand the difference still have to make the distinction correctly." He is participating in the buildout while warning that many others are doing so without the discipline to survive a correction. In an inflection bubble, sitting out entirely is its own form of risk. The question is not whether to participate, but how - and with what margin of safety.
What AI Can Actually Do - And How Fast It’s Moving
The financial architecture described in the previous section - trillions in spending, 30-year bonds, circular deals - rests on an assumption: that AI is valuable enough to justify it. To assess that assumption, it is necessary to understand what AI can actually do, how rapidly its capabilities are advancing, and what the economic implications of that advancement look like in practice. Marks’ second memo, “AI Hurtles Ahead,” provides a useful framework.
Marks’ tutorial divides AI’s capabilities into three levels. The first is chat AI - the mode most users encountered in 2023. The user asks a question, the model supplies an answer. It saves time that would otherwise be spent researching and thinking. But it does not do anything with the answer. The user remains in control of every subsequent step.
The second level is tool-using AI. Here, the user instructs the model not just to answer questions but to search for information, analyse it, and perform tasks with the results. The economic value is meaningfully larger, because AI is now saving execution time - not just thought, but action. A financial analyst might instruct the model to pull data from multiple sources, identify trends, and produce a summary. The model does all of this, but only what it is told. It does not take initiative. This was the dominant mode in 2024 and 2025.
The third level - the level now emerging in 2026 - is autonomous agents. At this level, the user does not tell the AI what to do step by step. The user gives it a goal and a set of parameters - the desired output, the constraints, the scope - and the agent does the work. It plans, executes, checks its own output, revises where necessary, and submits a finished product. As Marks’ tutorial puts it: “This is labor replacement at the task level. Not assistance - replacement.” The distinction between Level 2 and Level 3 may sound subtle. It is not. It is the difference between a tool that makes a worker more productive and a system that does the worker’s job. That difference, as the tutorial notes, is what separates a $50 billion market from a multi-trillion-dollar one.
Before examining the evidence for Level 3 in practice, it is worth pausing on a question that Marks himself finds fascinating: can AI think? The skeptics argue that everything an AI model produces is ultimately a sophisticated rearrangement of patterns absorbed from human-generated text - extraordinarily impressive pattern matching, but not thought. Marks’ own reflection, informed by his tutorial, offers a more productive framing. A child is not born knowing how to reason. It absorbs what others have figured out - language, logic, patterns of cause and effect - and through years of accumulation and synthesis, it develops the capacity to produce what looks like original thought. An AI model’s development follows a strikingly similar path. It absorbs vast amounts of human text and learns not just facts but reasoning patterns, argument structures, and how to apply them to novel situations. Whether that constitutes thinking is a fascinating philosophical question. But economically, it is a distinction without a difference. As Claude argued back to Marks in his tutorial: if the model can produce the analytical output of a professional earning substantial compensation, it does not matter to the person paying the bill whether the machine is “really” thinking. What matters is whether the work product is reliable enough to be useful. Increasingly, it is.
The evidence that AI has reached Level 3 capability is no longer speculative. Matt Shumer, CEO of OthersideAI, published a blog post in early 2026 that has been viewed by more than 50 million people. In it, he describes a shift that he says occurred almost overnight with the release of new models on February 5, 2026. “I am no longer needed for the actual technical work of my job,” Shumer writes. He describes telling the AI what he wants built, walking away from his computer for four hours, and returning to find the work completed - not a rough draft requiring correction, but a finished product. The AI writes tens of thousands of lines of code, opens the application itself, clicks through the interface, tests the features, and iterates until it is satisfied with the result. Only then does it return to the user for review. “A couple of months ago,” Shumer notes, “I was going back and forth with the AI, guiding it, making edits. Now I just describe the outcome and leave.”
The technical milestone that accompanies this shift is striking. OpenAI’s GPT-5.3 Codex, released in February 2026, included a detail in its documentation that deserves careful attention: it was, in OpenAI’s own words, “the first model that was instrumental in creating itself.” The AI was used to debug its own training, manage its own deployment, and diagnose its own test results. Dario Amodei, the CEO of Anthropic, has said that AI is now writing “much of the code” at his company and that the feedback loop between current AI and next-generation AI is “gathering steam month by month.” He estimates that we may be “only 1-2 years away from a point where the current generation of AI autonomously builds the next.” The implications of recursive self-improvement - AI that improves the AI that improves the AI - are difficult to overstate and, at present, impossible to forecast with confidence.
The speed of AI’s development has no precedent in the history of technology. Consider the comparison Marks draws with the computer. ENIAC, the first general-purpose electronic computer, was completed in 1945. It was nearly 40 years before IBM began selling personal computers for general business and home use in the early 1980s. The journey from invention to mass adoption spanned four decades. AI has compressed an equivalent journey into a fraction of that time. Artificial intelligence began to be incorporated into devices invisibly - spam filters, recommendation engines - around 2010. It became visible through assistants like Siri and Alexa a few years later. It was recognised as a general-purpose technology affecting knowledge work, education, and consumer decision-making only around 2024. In 2026, it is used by more than 400 million individuals and 75 to 80% of companies. And unlike previous technologies, where infrastructure was built and then waited for demand to materialise, AI is supply-constrained. Demand already exists and is growing rapidly. The infrastructure cannot keep up.
This dynamic helps explain why AI firms’ revenues are growing at rates that would have seemed implausible even recently - and why those rates have a structural logic behind them. Consider the economics from the perspective of an AI company. A tool that helps a junior analyst work 20% faster is worth roughly 20% of that analyst’s compensation to the employer - the firm still needs the analyst. If the analyst earns €80,000, the tool might be worth €16,000. But a tool that performs the analyst’s entire job on a defined category of tasks is worth the analyst’s full compensation for those tasks. The value an AI firm can capture scales directly with the cost of the human labour it replaces. Multiply this across every knowledge worker performing structured analytical work - legal associates, financial analysts, management consultants, software engineers, compliance officers - and the revenue trajectory becomes comprehensible. Anthropic, one of the leading AI model developers, has seen its revenue grow roughly 100-fold in two years. Claude Code, a coding tool Anthropic introduced earlier in 2026, is already running at an annual revenue rate of $1 billion. Cursor, another AI coding tool, went from $1 million in revenue in 2023 to $100 million in 2024 and is expected to reach $1 billion this year. This is not speculative demand chasing a promise. It is real revenue growing at exponential rates, driven by a product that delivers measurable economic value.
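The replacement-versus-augmentation arithmetic can be written out directly. In the sketch below, the €80,000 salary and 20% speedup come from the example above; the one-million-worker addressable market is a hypothetical round number, included only to show how the per-worker gap compounds at scale.

```python
# Value an AI vendor can capture per worker, following the article's logic:
# an augmentation tool is worth the share of labour cost it saves, while
# a replacement agent is worth the full cost of the tasks it takes over.
salary = 80_000          # EUR, the junior analyst in the example
speedup_share = 0.20     # Level 2 tool: makes the analyst 20% faster
replaced_share = 1.00    # Level 3 agent: full task replacement

per_worker_tool = salary * speedup_share       # EUR 16,000
per_worker_agent = salary * replaced_share     # EUR 80,000

workers = 1_000_000      # hypothetical addressable knowledge workers
print(f"Tool market:  EUR {per_worker_tool * workers / 1e9:.0f}bn")    # 16bn
print(f"Agent market: EUR {per_worker_agent * workers / 1e9:.0f}bn")   # 80bn
```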
All of which brings us to the question of pace - and to the observation that may matter most for the remainder of this article. In June 2024, Leopold Aschenbrenner wrote in “Situational Awareness” that it was “strikingly plausible that by 2027, models will be able to do the work of an AI researcher/engineer. That doesn’t require believing in sci-fi; it just requires believing in straight lines on a graph.” At the time, this was regarded as an extraordinarily bold claim - one that positioned Aschenbrenner at the far end of credible AI forecasting. He built a fund on this conviction and was rewarded with one of the most remarkable debut performances in hedge fund history. And yet, by March 2026 - with Level 3 autonomous agents operational, with GPT-5.3 having contributed to its own creation, with Chinese AI models reaching frontier capability within weeks rather than months of Western releases, and with recursive improvement loops deepening across the industry - even Aschenbrenner’s timeline, built on deep expertise and validated by extraordinary returns, appears to be outpaced by reality. When someone with that level of knowledge and that track record finds the pace running ahead of their predictions, the acceleration is no longer a matter of speculation. It is observable. And it shows no sign of slowing.
The Labour Question
If AI can do the work - and the evidence presented in the previous section suggests it increasingly can - then the question shifts from technology to people. What happens to the humans whose labour AI replaces? Howard Marks does not hedge on this point. “I find the resulting outlook for employment terrifying,” he writes in his December memo. It is a striking word from a man whose professional voice is characterised by measured understatement.
The scale of potential disruption is difficult to absorb. Joe Davis, Global Chief Economist at Vanguard, estimates that AI could save roughly 43% of the time people currently spend on their work tasks, across four out of five jobs. Claude, in its tutorial for Marks, draws a sharp line between what came before and what is arriving now. Level 1 and Level 2 AI were faster horses - they made existing workers more efficient. Level 3 agents are the automobile. They do not make the work faster. They do the work.
The professions most exposed are not the ones that previous waves of automation threatened. They are not factory jobs or manual trades. They are the careers that require exactly the kind of structured analytical thinking that economics students are trained to perform: legal associates reviewing case law, financial analysts building models, management consultants producing strategic assessments, software engineers writing code, compliance officers processing regulatory filings. A department head at an e-commerce company told Marks that AI could replace 80% of her advertising copywriting staff. When Marks asked Claude to assess the economic implications, the answer was blunt: multiply the replacement potential across every knowledge worker performing structured analytical work, and you are looking at a meaningful share of a labour market that runs into the trillions annually.
What makes this particularly urgent is not the prospect of sudden mass unemployment. It is something quieter, less visible, and in some ways more insidious. Research from Stanford University and the Federal Reserve Bank of Dallas has identified what might be called a hiring chill. Workers aged 22 to 25 in the occupations most exposed to AI have experienced a 13% decline in employment since late 2022. The job-finding rate for this demographic - the rate at which people entering the labour force secure employment - has dropped by roughly 14% compared with less exposed sectors. The critical detail is the mechanism. This is not showing up as layoffs. It is showing up as a failure to hire. Entry-level roles are quietly disappearing, not because existing employees are being fired, but because their employers are discovering that AI can handle the tasks those roles were designed to perform. Senior workers, meanwhile, use AI to augment their own productivity. The effect is what researchers describe as “pulling up the ladder.” The people at the top become more productive. The rungs at the bottom vanish. And the phenomenon is largely invisible in headline unemployment statistics, because it manifests not as people losing jobs but as people never getting them in the first place.
The European dimension adds a layer of structural concern. Youth unemployment across the EU stood at 14.7% as of late 2025 - nearly 2.9 million people under 25 without work. The German Institute for Employment Research has projected that 1.6 million jobs could be reshaped by or lost to AI in Germany alone over the next fifteen years. Given the pace of AI development documented in the previous section - a pace that has outrun even expert predictions - that fifteen-year horizon may itself prove conservative. The Carnegie Endowment for International Peace has warned that AI disruption is unlikely to manifest as sudden mass redundancy. It is more likely to take the form of what they call “incremental task substitution” - jobs hollowed out before they are eliminated, creating prolonged insecurity rather than the kind of dramatic unemployment event that triggers a policy response. In much of Europe, the challenge is already one of precarity rather than outright joblessness. Spain’s youth unemployment has fallen from 40% to 27% in recent years, but contracts are shorter, wages are stagnant, and the sense of stable career progression that previous generations could rely on has largely evaporated.
Marks raises one further point that deserves particular attention, because its implications compound over time. If AI eliminates a substantial share of junior positions - junior lawyers, junior analysts, junior doctors - where do the experienced professionals of the next generation come from? The expertise that makes a senior lawyer effective, the judgment that makes a seasoned investor valuable, the clinical intuition that distinguishes an experienced physician - these are not qualities that can be acquired from a textbook or a training programme. They are developed through years of practice at the junior level: making mistakes, recognising patterns, learning to exercise judgment under uncertainty. AI cannot provide that apprenticeship. If the bottom rungs of the professional ladder disappear, the ladder itself breaks. The short-term efficiency gain of replacing junior workers with AI may come at the cost of a long-term deficit in human expertise that no technology can fill.
This is not abstract for the readers of this article. We are the generation in the data. The 22-to-25-year-olds whose job-finding rates are declining in AI-exposed occupations are our classmates, our peers, ourselves. We are studying economics, finance, and management at institutions that were built to prepare us for careers in exactly the sectors that AI is now reshaping. The question is no longer whether AI will transform the labour market we are about to enter - the data says it already is. The question is whether we will have the chance to begin our careers in the form we expected, and what we do if the answer is no.
AI and the Future of Investing
For many readers of this article - students of economics and finance, some of whom will seek careers in investment management - there is a more specific question embedded in everything discussed so far. The previous sections have established that AI can do the work, that it is doing the work, and that the labour market is already adjusting. But investing is not structured analytical work in the way that legal review or financial modelling is. It is a profession built on judgment under uncertainty - on the ability to be right when the data is incomplete and the future is unknowable. If AI can replace an analyst, can it replace the person deciding what the analysis means?
Marks offers a framework, drawn from the epidemiologist Marc Lipsitch, that clarifies exactly where the line falls. Lipsitch distinguishes three inputs to any decision: facts, informed extrapolation from analogies to prior experience, and opinion or speculation. AI excels at the first two. It can absorb more data than any human investor, remember it better, and recognise historical patterns that preceded success. It should not feel fear or greed. It is less likely to anchor to preexisting beliefs or overemphasise the most recent information. It is not swayed by the fads exciting everyone else, and it is not afraid of missing out on the trend others are chasing. For anyone whose career plan involves analysing financial data for a living, these are real advantages - and they should not be understated. But when the situation is genuinely new - a novel product, an untested business model, an industry that did not exist five years ago - facts are scarce, analogies are unreliable, and decisions rest on Lipsitch’s third category: speculation. Will AI’s speculation about genuinely new things be consistently superior to that of all humans? Marks believes not. And the reason has less to do with processing power than with something harder to define.
Marks writes that the best investors sense potential risk intuitively, and that this contributes greatly to their success. It is what he calls taste and discernment: a gut feeling built over decades of sitting through situations where the data said one thing and the outcome said another, of having felt the consequences of being wrong and internalised those consequences into a form of judgment that no dataset can replicate. These are assessments that depend not on processing more information but on knowing which information matters - and that knowledge is often inarticulate. The quality of a management team, the reliability of a counterparty, whether a business model that looks elegant on paper will survive contact with reality - none of this reduces to data. The parallel to the junior pipeline problem discussed earlier is direct: this is precisely the kind of expertise that develops through years of practice at the junior level, and precisely what is lost if those junior positions disappear. AI has no equivalent. It does not have skin in the game. It does not feel the weight of concentrated positions or the fear of capital loss. The risk aversion that constrains human investors - and that contributes, paradoxically, to the kind of investing Marks describes - is absent entirely. An investor who has never been burned may take risks that no seasoned professional would accept.
The implication is that AI is about to raise the bar for human investors sharply. As Marks frames it through his son Andrew’s observation: readily available, quantitative information about the present cannot hold the key to superior performance, for the simple reason that everyone has it - and AI can now process it better than everyone. The parallel Marks draws is to indexation. When low-cost index funds demonstrated that most active managers could not justify their fees, entire categories of investment professionals were rendered redundant. AI threatens to do the same to the next tier - the analysts and portfolio managers whose edge rested on processing publicly available information faster or more diligently than their peers. The market’s failure to price in AI’s impact on the software industry before a sharp correction in early February 2026 - despite the information being available for months - illustrates the kind of human blind spot that AI, in principle, does not share. Leopold Aschenbrenner’s fund offers one illustration of what remains possible on the other side of that bar. His performance demonstrates that deep domain expertise in AI, translated into a disciplined investment thesis, can produce extraordinary returns. But the fund is also extremely concentrated and thesis-dependent. If AGI timelines slip, if a broad technology correction arrives, or if the infrastructure buildout decelerates, it faces the prospect of sharp drawdowns. This is the all-in bet in action - validated spectacularly so far, with the “so far” doing important work in that sentence. For every Aschenbrenner whose conviction is rewarded, there are investors whose equal conviction in a different thesis has cost them everything. The distinction between foresight and luck is only ever clear in retrospect - and that is precisely the kind of judgment that neither AI nor any framework can fully resolve.
Where This Leaves Us
Among students our age, conversations about AI tend to reach the same endpoint within minutes. The technology is coming for everything. The jobs we are training for may not exist by the time we graduate. We are, in the phrasing that has become a kind of generational shorthand, cooked. The word carries a particular flavour - not panic, but resignation. A sense that the trajectory is set, that the forces at work are too large and too fast for any individual response to matter, and that the rational thing to do is shrug and move on. This article is the case against that resignation.
The evidence assembled here does not let us off the hook that easily. The question we opened with - trend or revolution? - can now be answered with more precision, even if the answer demands more of us than resignation does.
The historical pattern Marks documents is real, and we should take it seriously. Every transformational technology in modern economic history has generated excessive enthusiasm, overbuilt its infrastructure, and destroyed a significant share of the wealth that financed it. There is no reason to assume AI will be the first exception. But the pace of what we are living through introduces something that Marks’ framework - built on railroads, radio, aviation, the internet - was not designed to accommodate. This is not the railroad, built over decades. It is not even the internet, which took years to move from novelty to mass adoption. AI has gone from a technology that could not do basic arithmetic reliably to one that helps build itself in roughly three years. That compression should unsettle anyone relying on historical analogies for comfort. Aschenbrenner - who is 24, our generation, and who built a fund on the conviction that AGI could arrive by 2027 - has already seen his own timeline outpaced by reality. When even the boldest informed prediction from within our cohort is behind the curve, the acceleration is no longer something we can process through the frameworks of previous centuries. It is something we have to confront on its own terms.
On the labour market, honesty demands an admission of what we do not know. The transformation is not hypothetical. The hiring chill documented earlier in this article is measurable. The jobs being quietly hollowed out are the ones we are studying to perform. All of this is real, and all of it is happening now. But where it leads, nobody can say with confidence. Not Sam Altman, who acknowledges that investors are overexcited about the very technology his company is building. Not Leopold Aschenbrenner, whose own bold predictions are being outrun. Not Howard Marks, who freely admits he is neither enough of a futurist to imagine the jobs that may be created nor enough of an optimist to trust that they will materialise. Pretending to know the destination would be dishonest. What we can say is that the direction is set and the pace is extraordinary, and that taking it seriously is better than pretending it is not happening.
On investing, Marks’ counsel is worth carrying forward: a moderate position, applied with selectivity and prudence. It would be naive to sit out entirely, and for most of us, the more immediate question is not how to allocate capital but how to allocate ourselves. A generation ago, the students who understood what the internet was doing to the structure of their industries - not just the ones who used it, but the ones who grasped its economic logic - gained an advantage that compounded for decades. AI demands the same structural literacy from us. The economics students who treat AI as a curiosity to be observed from a distance will find themselves in the position of the professionals who dismissed the internet in 1998 - not wrong about the risks, but wrong about what it cost them to stand apart. The question is not whether to engage, but how - with what level of conviction, what tolerance for uncertainty, and what willingness to build competence in a technology that is not waiting for us to be ready.
The question we opened with - trend or revolution - may itself be insufficient for what is unfolding. Sam Altman’s observation from the beginning of this article captures the tension as precisely now as it did then: investors are overexcited about AI, and AI is the most important thing to happen in a very long time. Both are true simultaneously. That is what makes this moment so difficult to navigate and so dangerous to ignore. What this article argues is that resignation is not an adequate response to difficulty - and that our generation cannot afford it. The technology is not slowing down. If anything, it is compounding. The choice is not between understanding it and ignoring it. It is between shaping our relationship to it and having that relationship shaped for us.
Final Note: I would like to thank Lenny Lorenz and Leopold Jahn for their valuable contributions to this article, from early brainstorming and shaping its direction, to drafting and refining the final text.
Ivo Bone-Winkel