> otherwise everyone places their value in work and then feels hopeless when ai is better than them at work
maybe this is a loaded question lol but how do you conceptualize this for yourself? I don't really understand how people accept the possibility of AGI making e.g. math/coding skills obsolete without feeling hopeless
hmm that's not a loaded question. i'm not super sure tbh but i'm actually very excited!
like idk, i spent a long time learning swe / systems engineering / webdev and enjoy all these things but am still far from "elite" or whatever you want to call it, and these days with cursor + sonnet i can write every type of code so much faster / more painlessly than i could before. i'm objectively doing significantly less coding myself, but it still feels good: the workflow covers my weaknesses while amplifying my strengths (the ability to reason about code / design / systems is still important for guiding the ai and fixing its mistakes). i think if you're attached to the act of personally writing all the code yourself you'll have a tough time, but if you just want good code to be written and don't care who does it then it's probably fine?
i think my perspective on math is a bit weird in that it's something i'm very good at but i've also slowly been getting worse at math since high school, so i've been feeling the decline for quite some time and am not particularly upset about ai being better than me at it haha
it's quite interesting that a lot of economic inequality trends started around 1980... if I recall correctly the pay-productivity gap also opened up around then -- maybe the impact of technology on labor could explain part of the gap. at the same time, our interpretation of antitrust shifted (to the consumer welfare model advocated by Bork), the government started slashing tax rates, and deregulation spread across many industries starting with the Carter / Reagan administrations.
regarding AGI, I think it's generally easier for us to imagine the drawbacks of technology than it is to imagine new industries that will spawn from it. for example, the internet put video stores and print journalism out of business, but few could have predicted the scale of the creator or digital advertising industries that arose from it. I'm not saying this will necessarily happen with AGI, but I do think our margin of error when predicting innovation is a lot greater than when predicting job losses.
agreed that we need to raise taxes on capital. it never made sense to me why the top capital gains rate is so much lower than the top income tax rate. I understand the capital gains rate should be lower in theory to encourage investment, but I'm pretty sure it's been empirically shown in the past that increases to the capital gains rate (e.g. during Obama's term) didn't decrease investment by as much as people expected.
anyways, interesting post and would love to read more breakdowns like these in the future!
yep, totally agree with lots of things happening around 1980!
I'm pretty sure it's been empirically shown in the past that increases to the capital gains rate (e.g. during Obama's term) didn't decrease investment by as much as people expected » yeah, i think the persistence of silicon valley is another indication that higher taxes are fine if there are other advantages to compensate
Where do you get your consensus beliefs? I see that the one explicit reference is derived from the linked ITIF Report, but more broadly I would be curious what foundational reading you would recommend to frame the points in this essay (perhaps with an emphasis on US policy).
hmm tbh i think you get a pretty good idea from reading (both left-leaning and right-leaning) mainstream news
While constraints such as physical infrastructure deployment, regulatory frameworks, and human behavioral adaptation necessarily impose temporal bounds on the rate of change, network effects exhibit fundamentally different dynamics. These effects often follow a superlinear growth pattern, where the system's state variable y(t) initially evolves according to some subcritical growth rate dy/dt ≈ εy(t) for small ε > 0, but transitions to a supercritical regime dy/dt ≈ Cy(t) with C >> ε once some threshold value y* is exceeded. This phenomenon is analogous to the phase transitions observed in various physical systems, where microscopic changes in local parameters can induce macroscopic reorganization of the system's global structure.
The key distinction lies in the nonlinearity of the underlying dynamics: while physical, regulatory, and social constraints typically impose first-order linear damping terms, network effects introduce higher-order nonlinear coupling terms that can dominate the system's behavior once activated. This suggests we should expect a mixed temporal evolution, with some components following smooth, gradual trajectories while others exhibit sharp discontinuities at critical thresholds.
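To make the two-regime picture concrete: below the threshold the solution is y(t) = y0·e^(εt), and above it y(t) = y*·e^(C(t - t*)), so the doubling time collapses from ln(2)/ε to ln(2)/C at the crossing. Here is a minimal Python sketch of that dynamic; all parameter values (eps, C, y_star) are made-up illustrations, not estimates of any real network.

```python
# Minimal sketch of the two-regime growth model described above.
# All parameters (eps, C, y_star) are illustrative assumptions,
# not calibrated to any real system.

def simulate(eps=0.01, C=0.5, y_star=10.0, y0=1.0, dt=0.01, t_max=300.0):
    """Euler-integrate dy/dt = eps * y below the threshold y_star,
    switching to dy/dt = C * y once y crosses it."""
    t, y = 0.0, y0
    trajectory = [(t, y)]
    while t < t_max:
        rate = eps if y < y_star else C  # regime switch at the threshold
        y += rate * y * dt
        t += dt
        trajectory.append((t, y))
    return trajectory

traj = simulate()
# Doubling time is ln(2)/eps (~69 time units) before the crossing and
# ln(2)/C (~1.4) after it: the same system, fifty times faster.
t_cross = next(t for t, y in traj if y >= 10.0)  # 10.0 = default y_star
print(f"threshold crossed at t ~ {t_cross:.1f}; final y = {traj[-1][1]:.3g}")
```

On a linear scale the pre-threshold regime looks almost flat, which is exactly why the post-threshold takeoff reads as a discontinuity even though the underlying rule only changed one coefficient.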
The only task worth automating is AI R&D. Other people's jobs are being automated to raise funding, generate revenue or data, or just as a byproduct of improving AI R&D. This will hold until AI labs (or whatever succeeds them) hit constraints on improving their AIs that they can resolve by taking more control of the economy.
I believe this transition will happen at a high level of general capability: the AIs we have by then will engineer far better solutions to economic and social disruption than we can currently think of. Hence, perhaps the best thing governments can do wrt UBI/welfare/labor distribution for now is... nothing? The default short-term future is not immediate mass unemployment, but rather a period (a year or more) where many workers remain nominally employed while adding little value on top of the AI systems that actually do the work.