AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years
AGI and the EMH: what does the market tell us about AI timelines?
TLDR: Markets are not putting much probability on the development of transformative AI (aligned or unaligned!) in the next 30-50 years – as evidenced by low *real interest rates*
This thread is based on new work with @ZMazlish and @tmychow, full post here:
forum.effectivealtruism.org/posts/8c7Lycgt…
We show *real interest rates* would be high if markets were expecting transformative AI
But long-term real rates are low!
(By “transformative AI”, we mean the truly radical changes envisioned in the EA/AI safety communities:
- a 10x increase in GDP growth (as Davidson (2021) notes, on the scale of the industrial revolution), or
- AI-induced human extinction, i.e. the “unaligned AI” of Yudkowsky and others)
Real interest rates are interesting here because:
Real rates are high when
(1) growth is high, or
(2) probability of death is high
SO: real rates would be high if markets were expecting
(1) aligned AI (explosive growth), or
(2) unaligned AI (existential risk)
(1) Why high expected growth pushes up real rates:
If I expect to be astoundingly rich in 2040, there’s little reason to save today: I’ll be rich then anyway
=> lower supply of saving pushes up the real rate
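A minimal sketch of this channel, assuming the textbook consumption Euler equation / Ramsey rule r ≈ ρ + γ·g (the function name and parameter values below are mine, purely for illustration, not numbers from the post):

```python
# Ramsey rule sketch: r ≈ rho + gamma * g
#   rho   = pure rate of time preference
#   gamma = inverse elasticity of intertemporal substitution
#   g     = expected real consumption growth
def real_rate(rho: float, gamma: float, g: float) -> float:
    """Implied equilibrium real interest rate under the Ramsey rule."""
    return rho + gamma * g

# Illustrative values only (not the post's estimates):
print(real_rate(rho=0.01, gamma=1.0, g=0.02))  # ~2% "normal" growth -> ~3% real rate
print(real_rate(rho=0.01, gamma=1.0, g=0.30))  # explosive AI-driven growth -> ~31% real rate
```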
You can see this even in a simple cut of the data: strong correlation between real interest rates and future growth
[AFAIK this simple stylized fact is novel – unlike us, existing lit doesn’t use real rates from inflation-linked bonds (!), and so measures real rates very badly]
(2) Why high existential risk pushes up real interest rates:
If I expect a high probability of being dead in 2040, I'm not going to be willing to lend much today, because I probably won't be around to enjoy the payoff of the loan
=> lower supply of lending pushes up real rate
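One hedged way to capture this second channel in the same sketch is to add an annual extinction hazard δ to the discount rate, so r ≈ ρ + γ·g + δ; this is an illustrative specification of mine, not necessarily the post's exact model:

```python
# Same Ramsey-rule sketch, now with an annual extinction hazard `delta`:
# lenders who may not survive to collect effectively discount the future
# more heavily, raising the equilibrium real rate roughly one-for-one.
def real_rate_with_xrisk(rho: float, gamma: float, g: float, delta: float) -> float:
    return rho + gamma * g + delta

# Illustrative values only (not the post's estimates):
print(real_rate_with_xrisk(0.01, 1.0, 0.02, 0.00))  # no x-risk       -> ~3%
print(real_rate_with_xrisk(0.01, 1.0, 0.02, 0.05))  # 5%/yr x-risk    -> ~8%
```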
Again: either of these – future GDP explosion or extinction risk – would be much more likely if the market were expecting transformative AI
=> real rates at (say) a 30y horizon would be high, if the market were expecting TAI in the next 30 years
But 30y real rates are low!
- US 30y real rate on TIPS is 1.4%
- UK 30y real rate is 0.7%
- even UK 50y real rate is 0.7% (!!)
- other countries with real bonds are similar
=> real rates are low at 30-50 year time horizons
=> markets are not forecasting transformative AI on a 30-50 year time horizon (!)
In particular, using the simplest possible model, we show that markets are decisively rejecting the shortest possible timelines of transformative AI in 0-10 years: real rates would be absurdly high if the singularity were *that* near
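To make “absurdly high” concrete, here is a rough back-of-envelope calculation of my own, reusing the Ramsey-rule sketch above; the probabilities and growth numbers are placeholders, not the post's estimates:

```python
# If the market assigned probability p to TAI arriving within the bond's
# horizon, with growth jumping from g_normal to g_tai in that scenario:
def implied_rate(p: float, g_normal: float, g_tai: float,
                 rho: float = 0.01, gamma: float = 1.0) -> float:
    expected_g = (1 - p) * g_normal + p * g_tai
    return rho + gamma * expected_g

print(implied_rate(p=0.0, g_normal=0.02, g_tai=0.30))  # ~3%: no TAI priced in
print(implied_rate(p=0.5, g_normal=0.02, g_tai=0.30))  # ~17%: even at 50-50 odds
# Compare either to the observed ~1.4% US 30y TIPS rate.
```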
Okay, maybe you think markets are not that efficient, or at least that they’re wrong here – then put your money where your mouth is 🤗
There’s easily a trillion dollars on the table, just from shorting US treasuries alone:
You can EASILY make this trade with a number of ETFs 🤗 we give some specific examples in the post 🤗
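For a rough sense of the payoff, a first-order duration approximation; the ETF duration and the size of the rate move below are made-up placeholders, not the specific examples from the post:

```python
# First-order duration approximation: a bond's price falls by roughly
# (duration x rise in yield), so a short position gains that amount.
def approx_price_change(duration_years: float, yield_rise: float) -> float:
    return -duration_years * yield_rise

# Hypothetical long-duration treasury ETF with ~18y duration,
# if long yields rose by 3 percentage points:
print(approx_price_change(18, 0.03))  # ~ -0.54, i.e. ~54% price drop -> gain for the short side
```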
This is not *not* financial advice 🤗
(Paul Christiano has said publicly he’s short treasuries)
“Get rich or hopefully don’t die tryin”, to paraphrase
The post has a lot more analysis and responses to rebuttals – we really tried to keep it maximally concise
forum.effectivealtruism.org/posts/8c7Lycgt…
“The market-clearing price does not hate you nor does it love you”, as one might say
How I interpret this post: a useful OUTSIDE VIEW on forecasting timelines, which complements but does not fully substitute for inside-view forecasts (e.g. Cotra 2020)
=> only *one* model among a mixture of models that you should consider when thinking about AI timelines (!!)
There is a lot of evidence that financial markets are the best information aggregators produced by the universe (so far?)
=> the (dead simple!) logic here, showing markets are not predicting TAI soon, should be taken pretty seriously