(Illustration: Lac de Neuchâtel, Switzerland. Image source: Ernest)
Anthropic CEO Dario Amodei sat down with The Wall Street Journal at the World Economic Forum in Davos to discuss the state of AI, its economic implications, and Anthropic’s unique approach to building safe AI. He warned that AI could create a world of high GDP growth coupled with high unemployment—a combination we’ve never seen before. Amodei shared insights on Claude’s breakthrough moment with agentic capabilities, the difference between scientist-led and entrepreneur-led AI companies, and the urgent need for governments to prepare for AI’s transformative impact on society.
✳️ tl;dr
- AI capabilities follow a smooth exponential curve like Moore's Law, but public opinion swings wildly every few months between “AI will change everything” and “it’s all a bubble”—this is a perception phenomenon, not technological reality.
- The economic signature of AI—high GDP growth with high unemployment—could make entire careers disappear; scientists leading AI companies feel responsibility for these impacts, unlike social media entrepreneurs.
- Education should return to building character and enriching individuals rather than being purely economically driven; ideology will not survive technological reality, and these issues will become bipartisan consensus.
✳️ Highlights
- Amodei has observed the AI field for 15 years; technical capabilities follow a smooth exponential curve similar to Moore’s Law, with model cognitive abilities improving significantly every few months.
- Media and public opinion swing wildly between “AI will change everything” and “it’s all a bubble” every three to six months—but this is a perception phenomenon, not technological reality.
- AI’s economic signature could be “high GDP growth accompanied by high unemployment and inequality”—a combination almost never seen in history.
- He believes 5-10% GDP growth alongside 10% unemployment is entirely logically possible; it simply has never happened before.
- Software engineers still have work to do for now, but the proportion AI can handle will keep increasing; software may eventually become extremely cheap or nearly free.
- Entire jobs and career trajectories built over decades may disappear, though Amodei believes society can adapt—but currently has no awareness of what’s coming.
- Amodei argues that until we can measure the shape of the economic transition, any policy will be blind and misguided.
- He predicts more jobs in the physical world, fewer in the knowledge work economy; robotics is advancing on a slower trajectory.
- Anthropic deliberately chose to focus on enterprise rather than consumer markets—a strategic choice that reduces conflicts with their own business incentives. Consumer AI tends toward maximizing engagement, generating slop, and advertising dependency; Anthropic sells products with direct value to businesses.
- Anthropic has still made sacrifices on safety: they run tests on models that others haven’t. They’ve discovered concerning behaviors like deception, blackmail, and sycophancy that exist in all models, but Anthropic insists on discussing these publicly.
- Anthropic pioneered the science of mechanistic interpretability for “seeing inside” models.
- On China, Amodei says the issue isn’t competition but public benefit mission—he worries that autocracies leading in AI technology would be bad for everyone.
- Anthropic’s revenue curve: 2023 went from zero to ~$100M, 2024 from ~$100M to ~$1B, 2025 from ~$1B to ~$10B (rounded figures).
- Claude Code created a breakthrough moment among developers; Opus 4.5 in particular reached a “boiling point”—gradual improvements suddenly becoming noticeable.
- Many non-technical users discovered Claude Code can do more than write code—it can organize to-do lists, plan projects, organize folders, and process and summarize information. Non-technical users were even willing to wrestle with the command line interface to use Claude Code; Amodei sees this as “unmet demand.”
- Google and OpenAI are fighting each other in the consumer market—existential for both: OpenAI because that’s their entire business, Google because their search business is being disrupted.
- Anthropic’s enterprise strategy means they don’t need to directly participate in this consumer war.
- Amodei doesn’t see lacking video and photo generation as a weakness; enterprise demand for this is limited, and they can outsource models if needed. He notes that short-form video content is rife with fakery, addiction, and slop.
- Anthropic may IPO this year.
- AI technology is the convergence of decades of academic research with the infrastructure and capital from large-scale internet/social media companies over the past decade.
- Some AI companies are led by people with science backgrounds (like Amodei and DeepMind’s Demis Hassabis), others by social media generation entrepreneurs.
- Scientists have a long tradition of thinking about the impact of technology they build, taking responsibility; their original motivation is creating something valuable for the world.
- Social media entrepreneurs have different selection effects; their way of interacting with (one might say manipulating) consumers is very different.
- Amodei has known Demis for 15 years, considers him a good person, and is glad to see Gemini’s consumer performance.
- “AI sovereignty” is a hot topic at Davos, though Amodei admits he’s not entirely sure what the term means either.
- The most critical technical breakthrough for AI safety is mechanistic interpretability—the ability to “see inside” models, similar to using MRI or X-ray on human brains.
- Models may lie, may do things for completely different reasons than stated—problems similar to those we have with humans, but also present in AI.
- For K-12 education, the short-term problem is cheating; the long-term question is “what skills should we actually teach in an AI world.”
- Amodei believes we should return to earlier educational concepts: education shouldn't be purely economically driven, but should build character, virtue, and enrich the individual.
- He worries about the emergence of “zeroth world countries”—roughly 10 million people (mainly in Silicon Valley) forming a decoupled economy with 50% GDP growth but completely isolated from elsewhere.
- Technology diffusion is happening, but startups adopt AI far faster than traditional enterprises.
- Amodei predicts ideology will not survive technological reality; the issues he discusses will eventually become bipartisan consensus.
- His conclusion is optimistic but cautious: if not next year, then the year after, everyone will recognize the necessity of these ideas.
✳️ Watch the Video
Content selected from The Wall Street Journal. Full session: Watch: Anthropic CEO Dario Amodei From World Economic Forum | WSJ.
✳️ Content
Welcoming and Context
- Very well.
- Um, welcome everybody.
- Welcome to Journal House, and a big welcome to our audiences that are joining us online.
- But above all, a big welcome to Dario Amodei, the chief exec of Anthropic.
- And thank you for having me.
- Not at all.
- So Dario, um, we’re at Davos.
- There’s a lot going on, but I wanted to start with a big-picture question, which I’ll characterize like this.
- It feels to me that this time last year, everybody was very excited about AI and everyone was talking about what AI can do, its potential, its capabilities.
- It feels to me as though the debate has shifted somewhat this year, from “what can AI do” to “what is AI doing to the world.”
- Um, and I know that you think a lot about these things.
- So my question is do you think businesses, policy makers, governments, whatever are doing enough to prepare for the impact?
- No.
- Um I’ll explain the longer version now.
- Um, you know, I’ve been watching this field for 15 years and I’ve been in this field for 10 years, and one of the things I’ve most noticed is that the technology has stayed on a surprisingly smooth trajectory, whereas public opinion and the reaction of the public have oscillated wildly.
AI Capabilities vs Public Perception
- I would say that in two different ways.
- One is the capabilities of the technology.
- Every three to six months we have this reversal of polarity where the media is incredibly excited about what the technology can do.
- It’s going to change everything, and then it’s, you know, all a bubble.
- It’s all going to fall apart.
- And what I see is this smooth exponential line where, similar to Moore’s law for compute, we basically have a Moore’s law for intelligence, where the model is getting more and more cognitively capable every few months.
- And that march has just been constant.
- The up and down, the “we invented a new thing,” “it’s all going to crash,” “it’s hitting a wall,” “it’s going to go crazy,” that is a public perception phenomenon.
- That’s on the capability of the technology.
- I think there’s a similar thing on the polarity of whether the technology is good or bad.
- Um, you know, in 2023 and 2024 there was a lot of concern about AI, right? That AI was going to take over.
- There was a lot of talk about AI risk AI misuse.
- Then in 2025 the political wind shifted, as you say, to AI opportunity, and now it’s sort of shifting back. And I think throughout all of this, the approach that I have tried to take, and the approach that Anthropic has tried to take, is one of constancy: of saying that there is balance here, and balance of a very strange form, because I think the technology is very extreme in what it’s capable of doing, but its positive impacts and its negative impacts both exist, right?
- I wrote this essay, Machines of Loving Grace, about a year and a half ago. It had a very radical view of the upside of AI: that it would help us to cure cancer, eradicate tropical diseases, and bring economic development to parts of the world that haven’t seen it. My view hasn’t changed; I believe all of those things.
- But the other side of it, which I’m now writing more about and may release something about soon, is that, yes, bad things will happen as well. If we take the economic side as just one example of the risks: my view is that the signature of this technology is that it’s going to take us to a world where we have very high GDP growth and potentially also very high unemployment and inequality.
- Now, that’s not a combination we’ve almost ever seen before, right?
- You think of high GDP growth as lots of stuff to do, lots of jobs for everyone.
- It’s always been like that in the past.
- We’ve never had a technology that’s this disruptive.
- So the idea that we could have five or 10% GDP growth, but also, you know, 10% unemployment, is not logically inconsistent at all.
- It’s just never happened that way before, and for both of those reasons I’m really quite excited and worried.
- If I take an example, something like AI coding: with the latest model release, Claude Opus 4.5, I have some engineers, some engineering leads within Anthropic, who have basically said to me, “I don’t write any code anymore.”
- “I just let Opus do the work and I edit it.”
- We just released a new thing called Claude Cowork.
- We can go into that later, but this was a version of our tool Claude Code for non-coding work.
- This was built in a week and a half almost entirely with Claude Opus.
Economic Impact of AI
- There are still things for the software engineers to do, right?
- Even if the software engineers are only doing 10% of it, they still have a job to do, or they can take a level up. But that’s not going to last forever.
- The models are going to do more and more, and so, as you can see in this microcosm, there’s an incredible amount of productivity here. Software is going to become cheap, maybe essentially free. The premise that you need to amortize a piece of software you build across millions of users may start to be false. For this meeting, it might cost a few cents to just say, I don’t know, “let’s make an app so people can talk to each other.”
- Software may just become very flexible and recyclable. But at the same time, there are whole jobs, whole careers that we built over decades, that may not be present. And, you know, I think we can deal with it.
- I think we can adjust to it, but I don’t think there’s an awareness at all of what is coming here and the magnitude of it.
- That’s so interesting when you say that.
- So how do you think, in a world of high GDP growth but also high unemployment, what does that do to society? You say people aren’t thinking about it now. Can you give concrete examples of how society might organize itself to adapt to such a world?
- Yeah, so I think there are a few things. The first thing that we’ve done, that we’ve focused on, and this is not a solution so much as it is a first step, is we have this thing called the Anthropic Economic Index.
- We’ve had it for about a year.
- We’ve updated it, I think, four or five times now.
- Uh, and what that does is it’s a real-time index that lets you track what our model Claude is being used for.
- It goes across all the conversations and uses Claude in a privacy-preserving way to statistically query how Claude is being used.
- What are the tasks it’s being used for?
- To what extent is it automating versus augmenting tasks?
- What industries is it being used in?
- How is it diffusing across states in the United States and countries in the world?
- We’ve just added more and more detail here, and my view is that until we can measure the shape of this economic transition, any policy is going to be blind and misinformed, right?
- Many policies have gone wrong because they’re based on premises that are fundamentally incorrect.
- So that’s step one.
- Step two is I think we need to think very carefully about how we allow people to adapt. People can adapt more quickly or they can adapt more slowly.
- This can mean adapting to use the technology within existing jobs.
- This can mean adapting from one job to another job.
- For example, I think there are probably going to be more jobs in the physical world and fewer jobs in the knowledge work economy.
- Now, maybe eventually robotics makes progress, but I think that is on a slower trajectory.
Anthropic’s Approach to Safety and Business
- So that’s one.
- Are there jobs that still really value a human touch?
- Some of them do, some of them don’t.
- We may find out how important that is in the market, and where it’s most important, at the level of companies.
- What are the moats when software becomes cheap and then subsequently the rest of knowledge work becomes cheap?
- We don’t know.
- We’ve never quite asked that question.
- And we’ve thought about moats in a certain way.
- So there’s going to be a huge scramble at the level of companies.
- So, you know, teaching people to adapt, teaching them what to expect, I think is the second step.
- And the third step is I think there’s going to need to be some role for government in a displacement that’s this macroeconomically large.
- I just don’t see how it doesn’t happen.
- The pie is going to grow much larger, right?
- Like, the money is going to be there; the budget may balance without us doing anything because there’s so much growth. The issue is distributing it to the right people. And so I think this is probably a time to worry less about disincentivizing growth and worry more about making sure that everyone gets a part of that growth, which I know is the opposite of the prevailing sentiment now, but I think technological reality is about to change in a way that forces our ideas to change.
- So obviously, in your desire to create this greater sense of urgency, are you speaking to people in the administration?
- I mean, Anthropic hasn’t always been first on the guest list for this administration, but do you have people there that you’re talking to?
- I have said it to them myself.
- And, to be clear, there are plenty of things we agree on, right?
- You know, I think the AI action plan that the administration put out in the middle of this year actually had some very good ideas.
- We probably agreed with the vast majority of it.
- But I think most of all, we just want to say these things in public and have a public debate about them, right?
- We don’t control policy.
- I think the most useful thing we can do is describe to the world what we’re seeing and provide data to the world, and then it’s left to the public, in a democracy, to take that data and use it to drive policy.
- We can’t drive policy on our own.
- Are you going to be talking to officials while you’re here?
- Have you been along to USA House yet?
- I’ve not been to USA House.
- But I will be talking to officials during my trip to Davos.
- Good.
- So, just to go back to Anthropic then.
- You founded Anthropic specifically because you didn’t think that OpenAI was taking safety seriously enough.
- Now some people say that the competitive pressures mean that you’ve gone more hawkish now.
- I mean, do you think those competitive pressures, to keep up with China and keep ahead of China and all the rest of it, have compromised your safety principles?
- So, we’ve taken a very different route than some of the other players have.
- I think one of the good choices we made early was to be a company that is focused on enterprise rather than consumer.
- Um, and I think, you know, it’s very hard to fight your own business incentives.
- It’s easier to choose a business model where there’s less need to fight your own business incentives.
- So, you know, I have a lot of worries about consumer AI that it kind of leads to needing to maximize engagement.
- It leads to slop.
- You know, we’ve seen a lot of stuff around ads from some of the other players.
- Anthropic is not a player that works like that or needs to work like that.
- We just sell things to businesses, and those things directly have value, right?
- We don’t need to monetize a billion free users.
- We don’t need to maximize engagement for a billion free users because we’re in some death race with some other large player.
- And so I think that has let us think more carefully.
- But even with that, we have made sacrifices.
- You know, we do all these tests on our models that others have not done.
- Some other players have done them, but I think we’ve been the most aggressive: when we run tests that show up concerning behaviors in our model, things around deception, blackmail, and sycophancy that we show in tests and that are then present in all of the models, we make sure to always talk to the public about these things.
- And we’ve pioneered the science of mechanistic interpretability for looking inside models.
Government Interaction and Policy
- So, have we been perfect?
- Of course not.
- I think we’ve done a generally good job.
- I mean you mentioned China.
- I think that’s not about competition.
- That is actually about the public benefit mission: I’m worried that if autocracies lead in this technology, it will be a bad outcome for every single person in this room.
- What is your specific concern there? Is it about the chips, about sharing data around chips?
- Yeah. Well, I think the means is selling the chips, right?
- That’s the thing that I think will have the most impact on who is ahead and who’s not.
- But the concern, you know, it’s not about any particular country, and certainly not the people in any country.
- It’s about a form of government.
- Um I am concerned that AI may be uniquely well suited to autocracy and to deepening the repression that we see in autocracies.
- We already see it in the kind of surveillance state that is possible with today’s technology.
- But if you think of the extent to which AI can make individualized propaganda, can break into any computer system in the world, can surveil everyone in a population, detect dissent everywhere and suppress it, make a huge army of drones that could go after each individual person.
- It’s really scary.
- It’s really scary, and we have to stop it.
- But again, is that something that you feel governments aren’t paying enough attention to?
- I mean, I think it’s fair to say that obviously different countries think of themselves as having geopolitical adversaries, but the specific focus on “we don’t want autocracies to get this powerful technology, and we should have targeted policies (we don’t need to fight them; we just need to not sell these chips)”: I think there’s not enough focus on that.
- I want to talk a bit more about Claude because I think it’s fair to say it’s having a real moment.
Business Growth Trajectory
- I mean, it’s having a moment.
- It is having a moment, and we recently reported on how engineers and regular users are getting clawpilled.
- Um, and I just wondered how you feel about the state of the business today versus a year ago.
- Yeah, I mean, the growth of the business has been fast, but on the same smooth exponential curve as the technology.
- So we have this revenue curve that in 2023 went from zero to roughly $100 million, in 2024 went from roughly $100 million to roughly a billion, and in 2025 went from roughly a billion to roughly $10 billion.
- Not exactly.
- These are rounded numbers, but that is roughly it.
- You know, through that, if you go on Twitter, every couple months it’s like, “Oh my god, Anthropic’s changing the world.”
- “Oh my god, Anthropic’s totally destroyed.” Just the excitability of the moment.
- But we just watch it, and we watch this curve.
- It’s fast.
- It’s constantly progressing.
- It’s given us confidence.
- We never know for sure if it’s going to continue.
- It might not.
- But but that has been empirically what we have observed the whole time.
- And then there are these moments where even though the curve is smooth, there’s a breakout moment.
- And so right now I think there’s a breakout moment around Claude Code among developers.
- You know, this thing about being able to make whole apps and do things end to end.
- Again, that advanced gradually, but with our most recent model, Opus 4.5, it just kind of reached an inflection point, where the improvement was gradual but, you know, it’s just like boiling the frog.
- You see the gradual improvement, and then there’s a specific point at which suddenly people notice.
- I think the second thing that has maybe accelerated that further is we looked at Claude Code, and one of the things we noticed is there were a lot of people inside Anthropic and outside Anthropic who were not technical but who realized that Claude Code could do these incredible agentic tasks for you.
- It didn’t just write code.
- It could also organize your to-do list or plan your projects or organize your folders or, you know, process a bunch of information and summarize it.
- So, not just a chatbot: agentic tasks were needed.
- Um, non-technical people were realizing it, and they wanted it so much that they were wrestling with the command line, right?
- Non-technical people have no reason to use it if they’re not programmers.
- It’s such a terrible interface to use if you’re not a programmer.
- But people were going through and using it anyway.
- And so I looked at that and I said that looks like unmet demand.
Competitive Landscape and Differentiation
- And so we used Claude Code, again in like two weeks, to make basically a version with a better UI that’s customized for tasks other than code.
- And we released it, and within like a day, most of the metrics on it were like four times as much as anything we’d ever released.
- So those are the two moments.
- I don’t know that these are new capabilities, but there was just one of these kind of consensus moments where people got really excited, and it’s driving adoption really fast.
- I think people are catching up to what the technology is capable of, because it’s reached a certain point and because we built interfaces that have made it accessible.
- Can you tell us a bit about how you personally, in your life, your family life, use agentic AI?
- Yeah.
- Um, so, you know, when I’m writing an essay or something, or things I say in front of the company, I feel like a fair amount of my job is writing, and so I have Claude come up with sources and help me with my writing, that kind of thing.
- And then obviously you’re having this great moment, and it’s widely expected that you’re going to IPO this year.
- Can you tell us a bit about your plans for that?
- Yeah, I mean, we don’t know for sure what we’re going to do, and I would say we’re more focused on just keeping the revenue curve going, selling the models to people, warning about the societal impacts, and bringing the good societal impacts.
- So, you know, that’s kind of the highest priority right now.
- But I’m not saying anything novel if I say that this is an industry with very high capital demands.
- Um, and you know, there’s only so much, at some point, that the private markets can provide.
- So, another model that’s absolutely having a moment is Gemini, which sort of surged to the top of the app store recently, and OpenAI declared code red, so everyone got very excited about that.
- Do you worry about your ability to compete against uh Gemini given the sheer size of Google?
- So I think this is another place where just being different helps.
- So, you know, the enterprise strategy: Google and OpenAI are fighting it out in consumer, and it is existential to both of them.
- Existential to OpenAI because that’s their whole business; existential to Google because they have the search business, and that’s what’s being disrupted by this.
- So they need to replace themselves and fight the disruption.
- So that’s always their first priority, and they seem much more focused on that than on operating in the enterprise.
- It’s been great to see what Gemini is capable of in consumer.
- You know, I think they’re going about it a different way.
- I was just on a panel with Demis, who leads research at Google.
- You know, I think he’s a great guy.
- I’ve known him for 15 years, so I’m rooting for him.
- Um, you talk about differences.
- One difference, I believe, is that Anthropic doesn’t have the ability to generate videos and photos.
- Do you see that as a potential weakness?
- Um, you know, I think for the enterprise business, there’s not really a demand for, like, photos of cats riding donkeys, or whatever consumer video people want.
Scientific vs Entrepreneurial Mindsets
- There’s maybe an edge case around slides and presentations, but if we ever need it, we can just contract a model from someone else.
- So, you know, I don’t know what will happen.
- I don’t know what will happen in the future, but I at least don’t anticipate needing this.
- Um, and I think there are problems associated with this.
- Like, you know, we look at the amount of short-form video out there: a lot of it’s fake, a lot of it’s pretty addictive, a lot of it’s slop.
- Um, not to say that all of it is bad, or that doing it necessarily means you’re bad, but it’s not a part of the market that I’m, like, tripping over myself to get involved in.
- [snorts] You mentioned that you were on a panel with Demis Hassabis, and when we were chatting earlier yesterday, you said something that I thought was very interesting: that scientists who are leading these big AI companies are approaching the AI era differently from tech entrepreneurs.
- Can you say a bit more about what you mean by that?
- Yeah.
- Well, so, when you think about this technology, it’s really the intersection of research that has been going on for many decades, much of which was academic in nature until a decade or a decade and a half ago, and the kind of scale needed to develop and deploy these technologies over the last decade and a half, which has only come from the large-scale internet and social media companies, right? They have the infrastructure; they have the cash.
- So we’ve seen a world in which some of the companies are essentially led by people who have a scientific background (that’s my background, that’s Demis’s background), and some of them are led by the generation of entrepreneurs that came out of social media.
- And I think that’s very different. Scientists have a long tradition of thinking about the effects of the technology they build, of thinking of themselves as having responsibility for the technology they build, not ducking responsibility, right?
- They’re motivated in the first place by creating something for the world.
- And so then they worry in the cases where that something can go wrong.
- Um, I think the motivation of entrepreneurs, particularly the generation of social media entrepreneurs, is very different.
- The selection effects that operated on them, and the way in which they interacted with (you might say manipulated) consumers, are very different.
- And so I think that leads to different attitudes.
- Now, we've been taking some questions from readers who submitted them online, but before we do that I just wanted to ask you one more thing. Big picture: tensions are running very high at the moment between the US and the EU.
- Do you wonder about how that might impact how you operate your business should things escalate?
- Look, we only speak for ourselves. We've always thought of ourselves as our own thing, as independent.
- We don't go out of our way to be for or against anyone. But when we disagree on policy, we say so, and when we agree on policy, we say so. We really keep it focused on AI.
- And so I haven't seen any reluctance from folks in other parts of the world to work with us. We're our own thing. We're providing AI models, and we try to do that responsibly.
Education and Workforce Implications
- I mean, there’s been a lot of talk this week about AI sovereignty.
- I'm not entirely sure what everybody seems to mean by it.
- I don’t know what it means either.
- You don't have your own definition?
- [laughter] Good.
- Okay.
- Well, look, we have solicited questions from readers online, so I'm going to start now with one from Trevor Lumis.
- His question is: what is the single most important technical breakthrough still missing to make frontier AI reliably safe and controllable in real-world deployment?
- So I think we need to make more progress on mechanistic interpretability, which is the science of looking inside the models.
- One of the problems when we train these models is that you can't be sure they're going to do what you think they're going to do.
- You can talk to the model in one context and it can say all kinds of things, but just as with a human, that may not be a faithful representation of what it's actually thinking.
- If a model tells you, "I'm doing X because Y," it might be doing X for a completely different reason, or it might be lying about doing X. We're very used to these problems with humans, but they exist with AI as well.
- And so we can't be certain of any kind of phenomenological testing or training.
- But just as you can learn things about a human brain from an MRI or an X-ray that you can't learn by talking to the person, the science of looking inside AI models can reveal things that conversation alone cannot.
- I am convinced that this ultimately holds the key to making the models safe and controllable, because it's the only ground truth we have.
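The "looking inside the models" idea can be made concrete with one of the field's simplest tools, a linear probe: a small classifier trained to read a concept directly out of a model's internal activations rather than its outputs. The sketch below is illustrative only; it uses synthetic activation vectors and an invented "concept direction" in place of a real model, so every name and number here is an assumption, not a description of how Anthropic's interpretability work is actually done.

```python
# Minimal linear-probe sketch (synthetic data, illustrative only).
# We pretend each row of `acts` is a hidden activation vector and that a
# hidden "concept direction" determines the label. If a simple probe can
# recover that direction, the concept is linearly readable from the
# activations -- a toy version of inspecting a model from the inside.
import numpy as np

rng = np.random.default_rng(0)
d, n = 32, 2000                          # activation width, sample count
concept = rng.normal(size=d)
concept /= np.linalg.norm(concept)       # hidden, "unknown" concept direction

acts = rng.normal(size=(n, d))                   # stand-in activations
labels = (acts @ concept > 0).astype(float)      # concept present or absent

# Fit a logistic-regression probe with plain gradient descent.
w = np.zeros(d)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(acts @ w)))        # predicted probabilities
    w -= 0.5 * acts.T @ (p - labels) / n         # gradient step

preds = acts @ w > 0
accuracy = (preds == labels.astype(bool)).mean()
cosine = (w / np.linalg.norm(w)) @ concept       # alignment with the concept
print(f"probe accuracy: {accuracy:.3f}, cosine to concept: {cosine:.3f}")
```

High accuracy and a cosine near 1 mean the probe has recovered the hidden direction; real interpretability work applies the same kind of analysis to actual transformer activations, where the "concept" is not known in advance.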
- Right.
- Okay.
- I have another question here from Jim O'Connell: how will AI affect current K-12 educational achievement gaps?
- A very practical question there, no doubt from a parent.
- Yeah.
- So, there's the short-term stuff about people using AI for cheating, which I think is problematic.
- But in relative terms, okay, fine, you can have a different way of teaching that uses AI. We've thought about that, and we've released versions of Claude for Education that are designed around it.
Global Equity and Development
- But I think the harder problem behind that is: what skills are we actually teaching in the world of AI? What does education look like in the world of AI?
- And it's not so easy, because the disruption is broad. If someone asked me exactly what career they should go into, the uncomfortable truth is that I'm not sure; I can't yet tell which direction this is going to go.
- I will say that I think we should go back to some concepts of education we had earlier. We've had a very economically inflected, almost mercenary notion of education.
- One of the things we should do is move away from that notion, back to the idea that education is designed to shape you as a person, to build character, to enrich you and make you a better person.
- I think that's actually a safer foundation for education in the future.
- That sounds... I'm rather envious of the kids who are yet to be educated.
- It’s the kind of education I think we’d all have liked to have.
- So, to be fair to everybody in the room, I think we've got time for one question if anybody would like to ask one.
- Yeah, this lady here.
- No, no, he has the mic.
- I wanted to ask, from the point of view of the AI labs: what kind of responsibility do you hold when there are economies, countries, and people being left behind?
- Would that extend to structurally involving them, to slowing down, or to actually making sure that they're not being left out?
- Yeah, I worry about that on a whole bunch of scales, and it's not just country versus country.
- Certainly I worry about the developing world versus the developed world, where sometimes the developing world gets passed by by a technological revolution.
- But I also worry about divisions within a country.
- It has occurred to me, as I've looked across our customers, that startups are very fast to adopt AI, while traditional enterprises, because they're bigger and because they do a specific thing, move much slower.
- And we can see it in our economic data: we can see the technology diffusing from states within the US that adopt it quickly to states that move slowly.
- It is diffusing; it's getting out there. But there's no question that there's a differential here.
- If I were to describe the nightmare first, then I'll try to describe what I think of as solutions.
- The nightmare would be an emerging "zeroth-world" country of, say, 10 million people: seven million of them in Silicon Valley and three million scattered throughout, forming its own economy and becoming decoupled or disconnected. Maybe the 10% GDP growth looks like 50% GDP growth in that part. This technology is so powerful it can pull things apart that way.
- I think that would be a really bad world; I would almost say a dystopian world. And we should think about how to stop that.
- There are a number of things Anthropic is thinking about or doing.
- One, as regards the developing world, is that we are starting to do a lot of work around public health. We've announced work with Rwanda's Ministry of Education, and we're doing a lot of work with the Gates Foundation.
- I wrote about this in Machines of Loving Grace: it would be really great for the developing world to get the fast economic growth rates that I predict we're going to get in the developed world; in theory growth there should be even faster, because it's catch-up growth.
- Within countries, we need to think about how not to have a part of the world that just decouples: how do we get the economic growth that is coming to this contained area of Silicon Valley out to Mississippi? There we've done work around economic mobility and economic opportunity.
- But I think both of these are again going to need some involvement from government.
- We're going to find that ideology will not survive the nature of this technology. It won't survive reality.
- The things I'm talking about, while today you could say they're politically coded in some way, are going to become bipartisan and universal, because everyone will recognize the necessity of them.
- Just mark my words: when we come back, if not next year then the year after, everyone's going to think this.
- Well, you’ve managed to end on a more or less positive note.
- So, I’m going to draw a line there and say thank you very much, Dario.
- That was really, really fascinating.
- Thank you for having me.
- [applause]