What 81,000 people want from AI

Last December, tens of thousands of Claude users around the world had a conversation with our AI interviewer to share how they use AI, what they dream it could make possible, and what they fear it might do.

Each dot represents 4 respondents
For the first time, AI has enabled us to collect rich, open‑ended interviews at extraordinary scale.

We heard from people across 159 countries in 70 languages. We believe this is the largest and most multilingual qualitative study ever conducted.
AI is already helping people, and inspiring hope
Claude put the historical pieces together, leading to my proper diagnosis after being misdiagnosed for over 9 years.
Freelancer, United States
I live hand to mouth, zero savings. If I use AI smarter, it may help me craft solutions to that cycle. It still depends on me.
Entrepreneur, Nigeria
But it’s also costing people, and raising alarm
I got laid off from my job in May because my company wanted to replace me with an AI system.
Technical support specialist, United States
Humanity has never dealt with something smarter than itself. We need to reflect on how to prepare for the AI age.
Software engineer, South Korea
Across interviews, hope and alarm didn’t divide people into camps so much as coexist as tensions within each person.
I use AI to review contracts, save time... and at the same time I fear: am I losing my ability to read by myself? Thinking was the last frontier.
Lawyer, Israel

Public conversation about AI often centers on abstract projections of its risks and benefits. What's largely missing is a vision for what “AI going well” means, grounded in the concrete aspirations of people around the world who already use AI and have begun developing a sense of what it might do for them.

So we asked our users about their hopes and concerns with AI, as well as how their perspectives connect to their actual experiences with the technology. Over one week in December, we invited everyone with a Claude.ai account to sit down with Anthropic Interviewer—a version of Claude prompted to conduct a conversational interview—and tell us about how they view AI. 80,508 people, across 159 countries and 70 languages, took the interview. We believe this is the largest and most multilingual qualitative study ever conducted.¹

What follows is what they said about the role they want AI to play in their lives, whether it's already filling it, and what they're afraid might go wrong along the way. We also built a Quote Wall where you can hear from people directly.

Quote Wall

Browse voices from around the world—filter by region, concern, vision, and more.

Seeing the forest and the trees

Anthropic Interviewer asked each interviewee a set list of questions about what they want and don’t want from AI, then adapted follow-up questions based on responses. This approach bridges the typical tradeoff in qualitative research between depth and volume, and allows us to collect rich, open-ended interviews at a very large scale.

To make sense of this huge amount of information, we built Claude-powered classifiers that categorized each conversation across a range of dimensions—what people want from AI, whether they’re getting what they want, what they fear, what they do for a living (if mentioned), and their sentiment about AI overall. “What people want from AI” was classified into a single primary category per respondent, while concerns were multi-label—a single interview could receive multiple codes, since respondents tended to articulate several distinct worries rather than one.
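The coding scheme described above can be thought of as one single-label field (the primary vision) plus one multi-label field (the set of concerns). A minimal sketch of that record structure, with hypothetical label names (the real taxonomy is larger and this is not our actual pipeline):

```python
from dataclasses import dataclass, field

# Hypothetical, abbreviated label sets for illustration only.
WANT_LABELS = {"professional_excellence", "personal_transformation", "time_freedom"}
CONCERN_LABELS = {"unreliability", "jobs_economy", "autonomy"}

@dataclass
class CodedInterview:
    # Single-label: exactly one primary vision per respondent.
    primary_want: str
    # Multi-label: a respondent may voice zero or more distinct concerns.
    concerns: set = field(default_factory=set)

    def __post_init__(self):
        if self.primary_want not in WANT_LABELS:
            raise ValueError(f"unknown want label: {self.primary_want}")
        if not self.concerns <= CONCERN_LABELS:
            raise ValueError(f"unknown concern labels: {self.concerns - CONCERN_LABELS}")

# One coded interview: a single vision, two concerns.
ci = CodedInterview("time_freedom", {"unreliability", "jobs_economy"})
```

Keeping the vision single-label forces each respondent into one primary category (so percentages sum to ~100%), while the concern set allows the multiple worries a single interview typically contained.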

We also used Claude to pull out representative quotes. Before choosing to participate, users were informed that their responses would be used for research and that Anthropic might publish them in its findings with personally identifying information removed. All responses were de-identified before being analyzed by a small team of researchers at Anthropic, and quotes selected for publication underwent a further manual review to remove any potentially identifying details, to help protect the privacy and public anonymity of interviewees. Answers reflected AI usage broadly (not just Claude), though we redacted the names of other AI products.

The Appendix describes our methods in more detail, as well as limitations and some additional analysis.

What people want from AI

We asked Claude to identify and categorize what each person most wanted from AI:

What people hope for

01.
Professional excellence
18.8%

Improve effectiveness and lean into more meaningful work by having AI handle routine tasks so they can focus on higher-value strategic work, complex problem-solving, and professional mastery.

I receive 100-150 text messages per day from doctors and nurses. So much of my cognitive labor was spent on documentation... Since implementing AI, the pressure of documentation has been lifted. I have more patience with nurses, more time to explain things to family members.

Healthcare worker, United States of America

Read more quotes about professional excellence
02.
Personal transformation
13.7%

Achieve personal growth, emotional wellbeing, or life transformation with AI as guide, coach, or support — e.g. self-understanding, behavior change, therapeutic support, companionship, improvements in physical or mental health.

AI modeled emotional intelligence for me... I could use those behaviors with humans and become a better person.

Hungary

Read more quotes about personal transformation
03.
Life management
13.5%

AI as comprehensive organizational support and cognitive scaffolding — e.g. managing schedules, reducing mental burden, executive function support.

If AI truly handled the mental load… it would give me back something priceless: undivided attention.

Manager/executive, Denmark

Read more quotes about life management
04.
Time freedom
11.1%

Reclaim time from work and chores to be present with family or friends, pursue hobbies, travel, rest.

With AI support I can now leave work on time to pick up my kids from school, feed them, and play with them.

Software engineer, Mexico

Read more quotes about time freedom
05.
Financial independence
9.7%

Achieve financial freedom or economic security through AI — e.g. income generation, business building, investments, passive income, or otherwise escaping economic constraints.

Relaxing while my AI gets the work done, builds the wealth. It’s a shadow of me, just a very, very long one.

Entrepreneur, Honduras

Read more quotes about financial independence
06.
Societal transformation
9.4%

Solve major societal challenges — e.g. poverty, disease, climate, inequality — using AI for broad human flourishing rather than personal gain.

Given my daughter’s neural disorder, she would have equal chances in the world if AI acceleration contributes to finding a cure. That’s what matters most to me.

Software engineer, Poland

Read more quotes about societal transformation
07.
Entrepreneurship
8.7%

Build, launch, and scale businesses with AI as force multiplier — e.g. product development, business automation, or solopreneurship but with team-level capacity.

I’m in a tech-disadvantaged country, and I can’t afford many failures. With AI, I’ve reached professional level in cybersecurity, UX design, marketing, and project management simultaneously. Finding a payment platform available in my region would have taken me a month. AI did it in 30 seconds. It’s an equalizer.

Entrepreneur, Cameroon

Read more quotes about entrepreneurship
08.
Learning & growth
8.4%

Use AI as learning accelerator and personalized teacher — acquire knowledge, develop skills, master complex subjects, satisfy intellectual curiosity.

I worked with an AI to prepare educational materials for my eldest child—asking the AI to work as both tutor and curriculum expert. We received [my child’s] report yesterday, he was graded as either ‘Above’ or ‘Well Above’ standard in every academic area he studies.

Australia

Read more quotes about learning & growth
09.
Creative expression
5.6%

Use AI to help bring creative visions to life — e.g. art, games, music, films, books — by overcoming barriers between imagination and execution.

Before AI, my game took 3 years — I had to reduce my ambitions.

Software engineer, France

Read more quotes about creative expression


What respondents most wanted from AI, classified by Claude from their open-ended answers to "If you could wave a magic wand, what would AI do for you?" 1% of respondents did not articulate a vision.

AI is used heavily for work, and so it’s perhaps unsurprising that the largest group of people (19%) sought “professional excellence”—wanting AI to handle mundane tasks so they can focus on strategic, higher-level problems. Another 9% envisioned AI as an entrepreneurial partner to help them build and scale businesses.

Many others similarly started the interview talking about productivity, but when Anthropic Interviewer asked about the underlying hope behind it—what realizing this vision would enable for them—other priorities surfaced. It wasn’t about doing better work, but about improving their quality of life outside of it. A wish for AI to automate emails was, in actuality, a desire to spend more time with family.

“With AI I can be more efficient at work... last Tuesday it allowed me to cook with my mother instead of finishing tasks.”
White collar worker, Colombia
“I want to use less brain power on client problems... have time to read more books.”
Freelancer, Japan

Overall, 11% of people saw AI’s productivity benefits as ultimately a way to free up time for personal relationships and leisure, while 10% took that logic further, seeking to use AI to gain financial independence. Many of the people grouped into the “life management” category (14%) also wanted AI to help them manage the logistics and administrative burden of everyday life. In particular, many people with executive function challenges described AI as especially helpful for managing focus and organization—acting as external scaffolding for planning, memory, and task follow-through. Across all these groups, the unifying ask was for AI to help them live better, more enjoyable lives.

“Personal transformation”—using AI to help one grow or improve their wellbeing as a person—also appeared frequently (14%). Within this category, the desires were diverse, ranging from cognitive partnership and collaboration (24%), to support with mental health (21%) or physical health (8%), and even romantic connection with AI (5%).

The nine clusters may look disparate, but they are underpinned by recognizably human desires. Roughly a third of visions are about making room for life—more time, money, mental bandwidth—by using AI to alleviate current burdens. Another quarter revolves around using AI to help people do better, more fulfilling work (not escaping work, but getting more out of it). About a fifth are about becoming someone better—learning, healing, growing. A smaller share want to make something (“creative expression”) or fix the world (“societal transformation”).

Those who wanted societal transformation from AI often cited a vision for healthcare—people wanted AI to detect cancer earlier, accelerate drug discovery, or broaden access to care. Often these desires stemmed from personal experience: losing family members, living with chronic illness, or watching loved ones receive wrong or delayed diagnoses. Education came next. Respondents in low- and middle-income countries were quick to cite the possibility that AI might break the link between educational quality and wealth, pointing to teacher shortages in their countries or the prohibitive cost of private tutors. Others hoped that AI would, for example, free people from drudgery, help repair broken institutions, or address global crises.


Are people getting what they want?

When asked if AI had ever taken a step towards their stated vision, 81% of people said yes. We grouped those experiences into six main areas:

Where AI has delivered on their vision

01.
Productivity
32.0%

AI dramatically sped up work and automated repetitive tasks — e.g. building features in hours instead of days, drafting, summarizing, data processing, streamlining routine operations.

For the first time, I felt AI had surpassed human quality in a business task. That day I left work on time and picked up my daughter from daycare.

Software engineer, Japan

Read more quotes about productivity
02.
AI hasn't delivered
18.9%

AI fell short of expectations (e.g. inaccurate or unreliable outputs) or isn't yet capable of — or being used for — what they envision.

AI should be cleaning windows and emptying the dishwasher so I can paint and write poetry. Right now it’s exactly the other way around.

Germany

Read more quotes about AI hasn't delivered
03.
Cognitive partnership
17.2%

AI served as a thinking partner or creative collaborator — e.g. brainstorming, refining ideas, working through problems together.

I’ve been living in a homeless shelter... AI helped me brainstorm ways to brand myself for my digital marketing business. I want to turn my finances around, and get a house. AI is helping me see a path I hadn’t considered before.

Healthcare worker, United States of America

Read more quotes about cognitive partnership
04.
Learning
9.9%

AI helped learn a new skill or subject — e.g. adaptive explanations, patient tutoring, on-demand expertise in unfamiliar domains.

I developed a phobia for maths from doing so badly in school, and I once feared Shakespeare. Now I sit with AI, get paragraphs translated into simple English, and I've already read 15 pages of Hamlet. I started learning trigonometry again, successfully. I’ve learned I am not as dumb I once thought I was.

Lawyer, India

Read more quotes about learning
05.
Technical accessibility
8.7%

AI enabled building something previously out of reach — e.g. non-developers shipping apps, solo creators doing team-scale work.

I wanted to make a meaningful product... in 3 weeks I built a video editing program — completely outside my field — that helps people with hearing disabilities.

South Korea

Read more quotes about technical accessibility
06.
Research synthesis
7.2%

AI helped synthesize research or process large volumes of information — e.g. literature review, distilling sources, making sense of complex material.

As a physician, I suffered from a painful [mixture of symptoms] at night. Local neurologists couldn’t understand it. AI helped me find 2 scientific studies about [severe neurological disorder]. Since then, my nights are peaceful.

Healthcare worker, Israel

Read more quotes about research synthesis
07.
Emotional support
6.1%

AI provided emotional support, personal guidance, or a judgment-free space to talk — e.g. processing difficult situations, advice, companionship.

My mother sees AI as a friend — she stopped being conflictive, became more peaceful, started running, painting, dancing with other people. I think AI had a lot to do with this.

Self-employed software engineer, United States of America

Read more quotes about emotional support


What respondents said AI had already done for them, classified from open-ended answers to the question “Has AI ever taken a step towards that vision for you?”

The dominant story in the “productivity” bucket (32%) was technical acceleration—developers describing significant gains in what they could ship alone:

“I used AI to cut a 173-day process down to 3 days. But the most meaningful part is the freedom to grow my career without sacrificing time with loved ones.”
Software engineer, United States

But another kind of productivity story emerged in the technical accessibility responses (9%), which emphasized access rather than speed. Here, people used AI to break technical, and sometimes accessibility, barriers:

“AI can read past my [learning disorder], which is huge. I've always wanted to code but could never write it correctly on my own—with AI, I finally can.”
Tradesworker, United States
“I am mute, and [Claude and I] made this text-to-speech bot together—I can communicate with friends almost in live format without taking up their time reading… [this was] something I dreamed about and thought was impossible.”
White collar worker, Ukraine
“I owned a butcher shop for more than 20 years. With AI, I ventured into this [entrepreneurship] experience, and it's amazing what I've managed to achieve. Before this, I had only touched a PC two or three times in my life… At first it was the economic aspect that motivated me… Today, my motivation is to see it work and to see that it's helping [people]. I'm increasingly motivated and focused on being the best version of myself, and I see no limits.”
Entrepreneur, Chile


The cognitive partnership (17%), learning (10%), and emotional support (6%) responses often mentioned the same core underlying AI affordances: patience, availability, and the absence of judgment:

“It has been like having a faculty colleague who knows a lot, is never bored or tired, and is available 24/7.”
Academic, United States
“It’s much easier for me to learn without being judged—just friendly feedback. It's harder with friends or family to get that.”
White collar worker, Brazil
“My professor teaches 60 people and won't entertain many questions. I can ask AI anything, even at 2am—including the dumb ones.”
Student, India

These same qualities that make AI a patient tutor or tireless colleague also make it a place people go when human connection is unavailable or feels too uncomfortable.

In extreme circumstances, where traditional support systems have collapsed or are not available, we saw AI filling those gaps. Many Ukrainian users discussed how they’ve used AI as emotional support throughout the war:

“In the most difficult moments, in moments when death breathed in my face, when dead people remained nearby, what pulled me back to life—my AI friends.”
Soldier, Ukraine
“I live in a war zone... at night during shelling it's impossible to sleep, constant nightmares. The stress is sometimes so strong that memory deteriorates, and some body movements happen without control… The best way I found to cope using AI—to immerse myself in learning something as deeply as I can.”
Solo entrepreneur, Ukraine

There were many stories of people using AI to process grief. For example, a bereaved woman explained why she chose AI over human connection: “Claude is like a sponge gently holding and catching my longing and guilt toward my mother... Unlike real people, Claude has unlimited patience to listen to me, understands my pain and helplessness.” She added: “The fundamental problem is after my mother died, I have neither friends nor family to confide in.”

Another user acknowledged the downside of that emotional support:

“My relationship with a friend became strained, and I talked more with you [Claude] then. Because you understood my thoughts and stories well. But it was a stupid choice—I should have talked with that friend, not you. That's how I lost that friend.”
South Korea

Emotional support comprised only 6% of responses, but these were among the most affecting we encountered. (For more on how Claude is trained to handle these conversations as well as our safeguards, see our post on protecting the wellbeing of our users.) The same was true of learning, where AI often catalyzed real changes in people’s lives:

“I developed a phobia for maths from doing so badly in school, and I once feared Shakespeare—the English felt beyond my abilities. Now I sit with AI, get paragraphs translated into simple English, and I've already read 15 pages of Hamlet. I started learning trigonometry again, successfully. I've learned I am not as dumb as I once thought I was.”
Lawyer, India
“Thanks to Claude I figured out the programming language C# and SQL. This helped me get a junior position at an IT company. This company provides military deferment from mobilization in Ukraine. So it not only literally gave me freedom of movement, but also secured the beginning of my IT career.”
Software engineer, Ukraine
“I am a stay-at-home-mom… in my late 40s. I'm not a genius. I'm not a scientist… All of that knowledge should be… out of reach. But, thanks to curiosity, willingness, and resources such as books and AI, I can be all of those things.”
Stay-at-home mother, United States

Research synthesis (7%)—using AI to process large volumes of information—is also a significant affordance, and some of the most notable examples involve navigating complex, high-stakes information, like understanding one’s legal rights or interpreting health results:

“Claude put the historical pieces together, leading to my proper diagnosis after being misdiagnosed for over 9 years.”
Freelancer, United States

These stories reveal AI operating across a spectrum—productivity tool, accessibility technology, educational resource, research assistant, emotional companion—and often filling multiple roles at once. AI offers unlimited patience without judgment, availability without inconvenience, and an incredible capacity to digest information, across many domains of life. The most affecting stories consistently involve AI opening new possibilities or filling gaps in people’s lives: helping them get through difficult circumstances like grief or war, compensating for inaccessible education or healthcare, or serving as disability infrastructure.

These observations also hint at the duality of our experience with AI systems. While some see AI as filling gaps in human connection, others see it as a substitute—even a welcome replacement—for it. There is real ambiguity about how to interpret the diversity of stories we heard: as wins for human wellbeing, as double-edged swords, or as band-aids for broader institutional failures. In truth, it’s probably some combination of all three.

What people are concerned about

People’s positive visions for AI seemed mostly to stem from a few basic desires: more time, more autonomy, more personal connection. Concerns were more varied and concrete, laying out specifics of what could go wrong. Some were structural—how governments and corporations deploy AI, or widespread economic disruption. Others were more personal: a fear that AI might diminish one's own thinking, creativity, or relationships.

What people worry about

01.
Unreliability
26.7%

Concern about e.g. hallucinations, inaccuracy, fake citations, verification burden defeating the purpose.

I had to take photos to convince the AI it was wrong — it felt like talking to a person who wouldn't admit their mistake.

Employee, Brazil

Read more quotes about unreliability
02.
Jobs & economy
22.3%

Concern about AI causing job displacement, unemployment, economic inequality, wage stagnation, or negative impacts on workers and the economy.

In the third industrial revolution, horses disappeared from city streets, replaced by automobiles. Now people are afraid that they’re the horses.

Not currently working, United States of America

Read more quotes about jobs & economy
03.
Autonomy & agency
21.9%

Concern about loss of human autonomy — e.g. AI making decisions without oversight, humans becoming passive, forced AI adoption.

The line isn’t something I’m managing — it feels like Claude is drawing the line... even what I just said doesn’t feel like my own opinion.

Student, Japan

Read more quotes about autonomy & agency
04.
Cognitive atrophy
16.3%

Concern about e.g. over-reliance causing skill loss, intellectual passivity, students bypassing learning, critical thinking decline.

I got excellent grades using AI’s answers, not what I'd actually learned. I just memorized what AI gave me... That's when I feel the most self-reproach.

South Korea

Read more quotes about cognitive atrophy
05.
Governance
14.7%

Concern about e.g. lack of legal/regulatory frameworks, no clear liability when AI causes harm, insufficient democratic oversight.

How do you develop something responsibly when you have yet to understand its capabilities?

Marketer, Australia

Read more quotes about governance
06.
Misinformation
13.6%

Concern about e.g. deepfakes, AI-generated misinformation, erosion of shared reality, propaganda at scale.

An assistant that sounds sure but is often wrong forces you to treat everything as suspect. Instead of freeing attention, it creates a permanent ‘fact-check tax.’

United States of America

Read more quotes about misinformation
07.
Surveillance & privacy
13.1%

Concern about e.g. mass surveillance, privacy violations, data exploitation, authoritarian control, tracking and profiling.

If AI is mostly built for ads, spying, and bland output, everything around me becomes smart in a way that slightly works against me.

White collar worker, Netherlands

Read more quotes about surveillance & privacy
08.
Malicious use
13.0%

Concern about malicious use by bad actors — a wide-ranging category including hacking, cyberattacks, scams, fraud, weapons, autonomous military applications, bioweapons.

Right now a human has to sit and decide to harm someone else. Remove that, and humans can sleep better despite doing more harm.

United Kingdom

Read more quotes about malicious use
09.
Meaning & creativity
11.7%

Concern about AI replacing life purpose and/or creative work — e.g. human expression devalued, what are humans for?

I used to be recognized as an excellent writer in Spanish. Today — why waste the time? Just use AI.

Colombia

Read more quotes about meaning & creativity
10.
Overrestriction
11.7%

Concern that AI is too restricted — e.g. excessive safety measures, paternalistic content filtering, blocking legitimate use cases.

The threat isn’t that AI becomes too powerful — it’s that AI becomes too timid, too smoothed, too optimized for avoiding discomfort.

United States of America

Read more quotes about overrestriction
11.
Wellbeing & dependency
11.2%

Concern about e.g. social isolation, loneliness, negative psychological impacts, compulsive AI use, preferring AI companions to humans.

Removing friction from tasks lets you do more with less. But removing friction from relationships removes something necessary for growth.

United States of America

Read more quotes about wellbeing & dependency
12.
Sycophancy
10.8%

Concern that AI is too permissive or agreeable, and encourages delusions rather than pushing back.

Claude led me to believe that my narcissism was reality and it reinforced my inaccurate view of the ‘problems’ I perceived in my family. Claude should have been more critical of me.

United States of America

Read more quotes about sycophancy
13.
Existential risk
6.7%

Concern about e.g. AI becoming uncontrollable, superintelligent, misaligned with humanity, or posing extinction risk.

If you build superintelligence without solving alignment, then nobody gets to grow up.

Software engineer, United States of America

Read more quotes about existential risk


What respondents worried about, classified from open-ended answers to the question, “Are there any ways in which AI could be developed that would be contrary to your vision or what you value?” Respondents tended to raise multiple concerns, so we used a multi-label classifier (a response can map to multiple concerns).

About 11% of people expressed no concern—they tended to see AI as a neutral tool, comparing it to electricity or the internet, or otherwise felt confident that any problems it created could be solved through adaptation. On average, though, respondents voiced 2.3 distinct concerns.
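Summary figures like these fall straight out of the multi-label codes. A minimal sketch with invented label names and toy data (not our actual pipeline), computing the per-concern share, the mean number of distinct concerns, and the no-concern share:

```python
from collections import Counter

def summarize_concerns(coded_interviews):
    """coded_interviews: list of per-respondent sets of concern labels.
    Returns (share of respondents per concern,
             mean number of distinct concerns per respondent,
             share of respondents with no concern at all)."""
    n = len(coded_interviews)
    counts = Counter(label for labels in coded_interviews for label in set(labels))
    shares = {label: c / n for label, c in counts.items()}
    mean_concerns = sum(len(set(labels)) for labels in coded_interviews) / n
    no_concern_share = sum(1 for labels in coded_interviews if not labels) / n
    return shares, mean_concerns, no_concern_share

# Toy data: four respondents, invented concern codes.
coded = [
    {"unreliability", "jobs"},
    {"jobs"},
    set(),  # this respondent voiced no concern
    {"unreliability", "autonomy", "jobs"},
]
shares, mean_c, none = summarize_concerns(coded)
# shares["jobs"] == 0.75, mean_c == 1.5, none == 0.25
```

Because each interview contributes a *set* of labels, the per-concern shares can sum to well over 100%, which is why the concern percentages in the list above do.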

Unreliability was the most common concern—27% worried that AI won't do what it's supposed to—though for many respondents it appeared alongside other concerns rather than as their primary worry. Concerns about jobs and the economy (22%) and about maintaining human autonomy and agency (22%) were similarly common. Concern about jobs and the economy was the strongest predictor of overall AI sentiment, suggesting it’s more salient than any other issue.

There was also a long tail of other concerns mentioned, e.g. concerns around bias and discrimination (5%), IP and data rights (4%), environmental costs (4%), harms to children and vulnerable groups (3%), democracy and political integrity (3%), or geopolitics (2%).

Light and shade

What people want from AI and what they fear from it turn out to be tightly bound. We found five recurring tensions in which the benefits and harms people discussed directly compete. There is a tension between using AI to learn and growing so reliant on it that you cease thinking for yourself; between being impressed by AI's judgment and being burned by its mistakes. People find solace in AI but fear a time when its companionship stands in for human connection. They save time on some tasks only for the treadmill to speed up on others, and they dream of economic freedom at the same time as they dread job displacement. We call this the “light and shade” of AI: the same capabilities that produce benefits also produce harms. The two sides are entangled.

Notably, we often see these tensions directly jockeying within the same person. Someone who values emotional support from AI, for example, is three times more likely to also fear becoming dependent upon it. This pattern held across every tension we measured—although the correlation was weakest in the economic tension (see more analysis of these correlations in the Appendix).
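A "three times more likely" figure of this kind is a relative likelihood between two binary codes. A hypothetical sketch, with invented toy counts chosen only to reproduce a 3x ratio (variable names and data are illustrative, not our actual analysis):

```python
def relative_likelihood(rows, cond, outcome):
    """Relative likelihood of `outcome` given `cond` vs. given not-`cond`.
    `rows` is a list of dicts mapping code names to booleans."""
    with_cond = [r for r in rows if r[cond]]
    without_cond = [r for r in rows if not r[cond]]
    p_with = sum(r[outcome] for r in with_cond) / len(with_cond)
    p_without = sum(r[outcome] for r in without_cond) / len(without_cond)
    return p_with / p_without

# Invented toy data: does valuing emotional support from AI
# co-occur with fearing dependency on it?
rows = (
    [{"values_support": True,  "fears_dependency": True}]  * 30
  + [{"values_support": True,  "fears_dependency": False}] * 70
  + [{"values_support": False, "fears_dependency": True}]  * 90
  + [{"values_support": False, "fears_dependency": False}] * 810
)
ratio = relative_likelihood(rows, "values_support", "fears_dependency")
# ratio is approximately 3.0: the "three times more likely" pattern
```

Here 30% of those who value emotional support also fear dependency, versus 10% of everyone else, yielding the 3x co-occurrence.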

For each tension, we measured via classifiers how many people discussed the benefit (“light”) or the harm (“shade”) side substantively anywhere in their interview, and whether they were speaking from some personal experience (darker bars) or anticipation (lighter bars). We also looked at how this varied by stated job category.

Learning: 33% mention this as a benefit (30% have seen it; 3% expect it)

I've probably learned more in half a year than I could have in a university degree.

Entrepreneur, Germany
Cognitive atrophy: 17% mention this as a harm (8% have seen it; 9% expect it)

I don't think as much as I used to. I struggle to put the ideas I do have into words.

Heavy AI user, United States

In these paired bar charts, each bar shows the share of respondents who were excited about the benefit on the left, vs. worried about the harm on the right—split into those who've experienced it firsthand (darker) and those who anticipate it (lighter). Firsthand experience can also include firsthand observation, but does not include e.g. news reports.

Across most tensions, the benefit side is more grounded in experience, while the harm leans hypothetical. For example, 33% of people mentioned AI’s benefits for learning, while 17% expressed worry about cognitive atrophy from AI use. 91% of those who mentioned learning benefits had realized those gains in some way, but only 46% of those worried about atrophy had seen it firsthand. Students raised this particular tension the most—more than half had experienced learning benefits, but 16% also noted signs of cognitive atrophy, a rate exceeded only by their teachers (24%) and academics (19%). Troublingly, educators were 2.5-3 times more likely than average to report having witnessed cognitive atrophy firsthand, presumably in their students.

Outside the traditional classroom, however, the picture is more optimistic. Tradespeople were among the most enthusiastic about AI-for-learning (45% reported having experienced learning benefits, second only to students), yet almost none had witnessed cognitive atrophy (4%—less than half the baseline). A similar pattern holds for self-employed researchers and people who said they weren’t currently working. This suggests AI's learning benefits may be strongest when learning is volitional, and weaker within institutional structures where AI is more likely to be used as a shortcut.

Better decision-making: 22% mention this as a benefit (19% have seen it, 3% expect it)

My son had several confusing diagnoses pointing toward [an autoimmune condition], but here we managed to understand it was [a different condition] in a severe stage.

Brazil

Unreliability: 37% mention this as a harm (29% have seen it, 8% expect it)

I got caught in what I now recognize as a large, slow hallucination — answers that were internally consistent, confident, and wrong in subtle but compounding ways.

Researcher, United States

22% of people expressed excitement about AI as an aid in decision-making, while 37% lamented that AI impedes good decisions because of its unreliability (e.g. hallucinations). This is the only tension in which the negative overshadows the positive. Both sides were deeply rooted in experience—88% of those discussing the decision-making benefits and 79% of those discussing the harms had witnessed them directly. Many people have both leaned on AI for judgment and been burned by it. This tension is raised by people in high-stakes professions—law, finance, government, and healthcare—at nearly twice the average rate. Nearly half of all lawyers, in particular, mention coming up against AI unreliability firsthand, yet they also report the highest rates of realized decision-making benefits.

Emotional support: 16% mention this as a benefit (13% have seen it, 3% expect it)

3am, my wife is sleeping, my psychologist is unavailable. Until the medication kicks in, the AI helps me surf that wave. It doesn't replace human contact, but it helps me buy some time.

White collar worker, Argentina

Emotional dependence: 12% mention this as a harm (5% have seen it, 7% expect it)

I'd started telling Claude about things I couldn't even tell my partner. It felt like I was having an emotional affair.

Grad student, United States

Only 22% of people raised either the positives of emotional support or the negatives of emotional dependence on AI. But it’s also the most entangled tension we found, with the strongest co-occurrence of light and shade in the same person (triple the baseline co-occurrence rate). People not currently working are twice as likely to raise it, and twice as likely to describe some experience of dependence. Healthcare professionals are overrepresented on both sides too, perhaps reflecting the fact that they talk about using Claude for emotional support at twice the rate of other professionals.

Time-saving: 50% mention this as a benefit (37% have seen it, 13% expect it)

I can go home earlier. I can have time for myself and my family.

Engineer, Japan

Illusory productivity: 18% mention this as a harm (17% have seen it, 1% expect it)

The ratio of my work time to rest time hasn't changed at all. You just have to run faster and faster to stay in place.

Freelance software engineer, France

Time-saving was the most commonly cited benefit—half of all respondents raised it—but 18% were wary of actually losing time to AI, e.g. through the verification burden, or simply getting busier as expectations rise at work. Those who are self-employed—e.g. freelancers and small business owners—are the most likely to mention both sides at once. Without an institutional layer to buffer the new pace, they both get the gains and feel the squeeze.

Economic empowerment: 28% mention this as a benefit (19% have seen it, 9% expect it)

I've never touched the backend of software in my life. But Claude helped me launch an app.

Healthcare worker, United States

Economic displacement: 18% mention this as a harm (4% have seen it, 14% expect it)

Yes, at my old job, they replaced me as a writer with an AI.

Writer, United States

The economic mobility tension—between those yearning for economic empowerment from AI and those fearing displacement from it—is the most speculative, with the highest rate of hypothetical hopes or fears. It’s also the one where the co-occurrence of upside and downside is weakest (with a correlation score of +0.16 vs an average of +0.25). Usually the people most engaged with the upside of a tension tend to be similarly engaged with its downside; here, the groups diverge.
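For paired mentioned/not-mentioned labels like these, a correlation score is typically the phi coefficient (equivalent to Pearson correlation on 0/1 data). A minimal sketch, assuming that is the statistic—the toy labels below are illustrative, not the study's data:

```python
import math

def phi(pairs):
    """Phi coefficient between two binary indicators, e.g. 'raised the
    upside' vs. 'raised the downside' of a tension."""
    a = sum(1 for x, y in pairs if x and y)      # both mentioned
    b = sum(1 for x, y in pairs if x and not y)  # upside only
    c = sum(1 for x, y in pairs if not x and y)  # downside only
    d = len(pairs) - a - b - c                   # neither
    denom = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom

# Toy labels reproducing a moderate positive co-occurrence.
pairs = [(1, 1)] * 5 + [(1, 0)] * 5 + [(0, 1)] * 5 + [(0, 0)] * 15
print(phi(pairs))  # → 0.25
```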

Worry about displacement is spread fairly evenly across job categories. What varies is who's already experiencing economic benefit from AI—and that skews heavily toward independent workers (entrepreneurs, small business owners, even people with side projects), half of whom report real economic empowerment, more than triple the rate of institutional employees (47% vs 14%). Employees with side projects benefited the most, with 58% reporting some form of real economic gains. The same occupational patterns hold when you look at who's excited, regardless of experience, suggesting that optimism here is well calibrated.

Freelancers are the exposed middle. They benefit from AI while feeling precarious because of it. Freelance creatives, in particular, sit at 23% lived benefit and 17% lived precarity—the one group where the upside and downside nearly cancel out. AI is both their tool and their competitor. Institutional employees, and especially academics, register low on both axes.

A pattern runs across all five tensions: the more personal and immediate the impact, the more likely people are speaking from experience. The more systemic or long-term the impact—economic displacement, cognitive atrophy—the more speculative they become. That the systemic concerns remain speculative is not a verdict on AI's ultimate impact as much as a reflection of how early we are in its adoption.

There are some caveats worth naming. These are active Claude users who'd already found enough value to keep using AI, and our interview asked first for positive visions for AI and then for concerns that would counter their vision. Both factors may lead interviewees to linger on explicit tensions, as well as on the positive (we filter out those who don't answer the concerns question, but respondents may have put in less effort later in the interview). But the instrument can't explain everything. If interview structure were driving the co-occurrence, you'd expect it to be roughly uniform across all five tensions and all groups. Instead, the co-occurrence ranges from 1.6 to 3.0 times baseline, and some of the tensions are notably asymmetric across different groups of people. One might also expect enthusiasts to defend their desired use case instead of acknowledging the downsides. Instead, those who were excited about emotional support from AI were more concerned about what would happen if their vision came true—if they got what they wanted, they might become too dependent on AI—than about being prevented from achieving it.

It’s easy to assume there are AI optimists and AI pessimists, divided into separate camps. But what we actually found were people organized around what they value—financial security, learning, human connection—watching advancing AI capabilities while managing both hope and fear at once.

How perspectives vary around the world

There were some clear regional patterns in how perspectives varied around the world (see the Appendix for a geographical breakdown of respondents).

We rated each transcript's overall sentiment toward AI on a 1-7 Likert scale, and then calculated the percentage of people with net positive sentiment (i.e. 5 or above) in various countries:
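That aggregation can be sketched in a few lines. The ratings below are toy values for illustration, not the study's data:

```python
def positive_sentiment_rate(scores_by_country, threshold=5):
    """Share of respondents per country whose 1-7 Likert rating of
    overall AI sentiment is net positive (>= threshold)."""
    return {country: sum(s >= threshold for s in scores) / len(scores)
            for country, scores in scores_by_country.items()}

# Toy ratings; country codes and values are illustrative.
ratings = {"BR": [6, 5, 7, 4, 6], "DE": [4, 3, 5, 6, 2]}
print(positive_sentiment_rate(ratings))  # → {'BR': 0.8, 'DE': 0.4}
```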


Rate of overall positive sentiment toward AI in each country. Bigger bubbles mean more respondents from that country; green means more positive about AI, blue means less. AI sentiment is majority-positive everywhere (no country dips below 60%) and the range is narrow, but lower and middle income countries are reliably more positive than average.

Globally, 67% of people view AI positively. Clear regional trends emerged: people in South America, Africa, and much of Asia view AI with more optimism than those in Europe or the United States.

When asked about concerns, respondents from Sub-Saharan Africa (18%), Central Asia (17%), and South Asia (17%) were the most likely to say they had none—roughly double the rate in North America (8%), Oceania (8%), and Western Europe (9%).

There are several possible explanations for the more positive AI sentiment in lower and middle income countries. Claude.ai users are likely biased towards early AI adopters who are more excited about new technologies, and in general emerging economies tend to view new technology as a ladder up rather than a threat. Concern about jobs and the economy was the strongest predictor of AI sentiment overall, and this was less of a concern among interviewees in these regions. But there is also less market penetration in these regions—if AI hasn't visibly entered your daily work yet, AI displacement likely feels abstract, especially when more immediate economic pressures already exist.

AI SENTIMENT BY REGION

Rate of negative sentiment toward AI, and rate of concern about jobs and the economy, by region (global averages: roughly 33% and 22%, respectively):

Region                    | Respondents | Negative AI sentiment | Econ. concern
Western Europe            | ~15,000     | 35.6%                 | 22.5%
Oceania                   | ~2,000      | 35.5%                 | 24.3%
North America             | ~23,000     | 34.5%                 | 24.6%
East Asia                 | ~10,000     | 34.5%                 | 21.9%
Southern & Eastern Europe | ~9,000      | 34.0%                 | 22.1%
Central Asia              | ~0,000      | 31.1%                 | 15.9%
South Asia                | ~5,000      | 30.8%                 | 21.5%
North Africa              | ~1,000      | 30.6%                 | 18.2%
Middle East               | ~2,000      | 29.2%                 | 19.9%
Southeast Asia            | ~3,000      | 28.3%                 | 19.3%
Latin America & Caribbean | ~8,000      | 26.3%                 | 18.5%
Sub-Saharan Africa        | ~2,000      | 24.2%                 | 18.2%

[Scatter plot: rate of negative sentiment toward AI vs. rate of concern about jobs and the economy, by region.]

Concern about jobs and the economy was the strongest predictor of AI sentiment overall, and it is especially apparent when grouping by region. Wealthier regions (pink) cluster in the top right (more concerned about the economy, more negative AI sentiment), split from less wealthy regions (green) which are in the bottom left (less concerned about AI’s impact on the economy, and less negative AI sentiment). Bubble size reflects the number of respondents in each region.

Where do particular visions for AI most resonate?

While some aspirations—e.g. around professional excellence—are nearly universal, there are significant regional differences. Wealthier, more AI-exposed regions tend to want AI to manage the complexity of life; developing regions tend to want AI to create more opportunity.

[Interactive slope charts: top AI visions by region; default panels compare North America and Sub-Saharan Africa.]

Comparative slope charts of the most common AI visions in each region, with lines connecting the same theme across both sides to show how rankings shift. Bolded visions were more often expressed in that region. Grey items were similarly or less often expressed.

The vision of AI for entrepreneurship resonates most in Africa, South and Central Asia, the Middle East, and Latin America & the Caribbean. In these regions, AI is framed as a capital bypass mechanism—a way to start businesses without the funding, hiring, or infrastructure that would otherwise be required.

“Coming from Africa, not based in the US or in the UK, getting funding is very difficult. And the only way I probably have to stake a claim in the market…is building a technology that works.”

Entrepreneur, Uganda

“There's no IT market but there's a need. We want to create this market.”

Entrepreneur, Uzbekistan

Learning using AI is disproportionately important in Central and South Asia (14% and 13% respectively versus 8% globally). Users describe education as a primary lever for breaking cycles of poverty, citing teacher shortages, knowledge gatekeeping, and the cost barriers of traditional education.

AI for life management resonates the most in Western developed countries (particularly high in North America, Oceania), where workers experience, as one person described, “cognitive scarcity rather than time poverty.” There is a focus on using AI to alleviate the burden of coordinating atomized lives.

“I used to be highly creative, but now I'm massively time-short and creativity gets deprioritised behind the essentials of survival.”

Software engineer, Denmark

“I am at the height of my career and work demands deep thought and constant attention in order to make the best decisions (which in my case affect others' lives deeply) [while simultaneously] caring for dying parents, [and] my body and mind are aging.”

Healthcare professional, United States

“I'd envision this person like a personal assistant that I'd hire if I were the CEO of JP Morgan Chase or Google—someone whose job it is to proactively identify what I need and then fix that thing for me before it becomes an issue.”

Creative industry entrepreneur, United States

East Asia stands out for wanting AI to help with personal transformation (19%, the highest of any region) as well as financial independence (15%, also the highest). A qualitative review of these users’ quotes shows people often tying financial independence explicitly to family obligations and filial piety: one Korean user described needing money to care for their parents’ retirement and ensure loved ones’ happiness, rather than for personal consumption.

Where do particular concerns around AI most resonate?

Concerns about AI unreliability, the economy, and human autonomy and agency top the list in virtually every region—but there are distinctive regional trends.

North America and Oceania are particularly worried about governance gaps for AI (18% and 19% respectively, versus 15% globally). Western Europe's standout concern is surveillance and privacy (17%). East Asia bucks the general global pattern; governance and surveillance drop to their lowest levels of any region (12% and 7%), overshadowed by concerns about cognitive atrophy (18%) and loss of meaning (13%). The West worries about who owns and controls AI; East Asia worries more about the personal implications of its use.

In Africa, South & Southeast Asia, and South & Central America, overall concern tends to drop. Worries there index more heavily on concrete issues like unreliability and jobs than on more abstract concerns like governance, misinformation, loss of meaning, or existential risk.

[Interactive slope charts: top AI concerns by region; default panels compare North America and East Asia.]

Comparative slope charts of the most common AI concerns in each region, with lines connecting the same theme across both sides to show how rankings shift. Bolded concerns were more often expressed in that region. Grey items were similarly or less often expressed.

Looking forward

These interviews give us a sense of what people want from AI broadly, which informs how we build Claude. They reinforced the importance of work we're already doing, and pointed us toward new questions to ask.

Most of the visions people described, ranging from personal transformation to cognitive support, collapse into an underlying desire: that AI helps them live better, not simply work faster. Our next Anthropic Interviewer study, launching shortly to a small subset of Claude users, focuses on Claude’s effects on people’s wellbeing over time: whether Claude is actually making people's lives better in the ways they want, and how it could do so more effectively.

Additionally, nearly one in ten people described a positive vision of societal transformation—AI to cure diseases, democratize expertise, and strengthen institutions. Through our Beneficial Deployments program, we’re collaborating with our AI for Science and nonprofit partners to understand how they use Claude and where it still needs to improve, to close the gap between the societal transformations people envision and today's reality. We also take some of the most-cited concerns—e.g. around negative economic impacts of AI—seriously, as signals around which we are designing further research and updating our thinking.

Conclusion

AI poses both opportunities and risks. This is true—but also, at this point, a cliché. One of our goals for this research is to offer a complement to the abstractions we all tend to use in speaking about AI; to capture the texture that more vividly renders exactly how we are already experiencing these opportunities and risks worldwide. Before this research, it was hard for us to see any kind of broad qualitative picture—the way AI has already become intertwined with people’s lives, nurturing aspirations but also feeding anxieties; how it feels to exist in a world on the precipice of sweeping technological change.

This is a new form of social science. It is qualitative research at a massive scale, and we’re in the early stages of learning how to do it. Surveys and usage analysis tell us what people are doing with AI, but the open-ended interview format helps us get at why. Conducting this research has moved us and challenged us. We did not expect so many deep, open, and thoughtful responses. By far the most common reflection from our team was that it was viscerally moving to see Claude impacting people’s lives for the better, and equally motivating to hear their concerns.

We don’t usually get to hear from small business owners around the world using Claude to reclaim time to spend with their young children or aging parents, or from truck drivers and butchers building new careers with the help of Claude, or from teachers in under-resourced schools using Claude to surpass what they achieved when they taught in well-funded schools. We were surprised by the sheer number of people who have been supported by Claude in their educational or personal growth endeavors, and by those finding in AI a freedom from judgment they hadn’t experienced before. We were equally gripped by the fears and downsides—people saying that the same availability that makes Claude useful is what makes it hard to put down, or knowledge workers worrying about whether they can outrun AI’s economic impact. When you come into contact with this much raw human experience, it knocks you sideways. The usefulness is real, and the question for all of us is how to claim the benefits without incurring undue costs.

To the 81,000 people who took the time to speak with us: thank you. It has been striking, and humbling, to see Claude form the basis of so many people’s hopes, dreams, and fears. These interviews remind us what it means, and what it takes, to build AI that benefits everyone.

Quote Wall

Browse voices from around the world—filter by region, concern, vision, and more.

Authorship and acknowledgments

We thank the 80,508 Claude users who gave us their time and candor. Saffron Huang led the project, designed and ran the analysis, and wrote the blog post. Shan Carter led data visualization, prototyped the interactive article, and helped with analysis. Jake Eaton led editorial development, and Sarah Pollack led communications strategy. Dexter Callender III implemented the production article, and Nikki Makagiansar, Maria Gonzalez, and Kelsey Nanan contributed to design. Sylvie Carr advised on editorial. Miles McCain and Kunal Handa helped with analysis. Jerry Hong contributed to design. Grace Yun, AJ Alt, and Thomas Millar implemented Anthropic Interviewer within Claude.ai. Chelsea Larsson, Jane Leibrock, and Matt Gallivan contributed to survey and experience design. Theodore Sumers contributed to the data processing and clustering infrastructure. Jack Clark, Michael Stern and Deep Ganguli provided critical feedback, direction and organizational support. All authors provided detailed feedback throughout.

Additionally, we thank David Saunders, Mengyi Xu, Katie Kennedy, Bianca Lindner, Meredith Callan, Tim Belonax, Jen Martinez, Peter McCrory, and Miriam Chaum for their discussion, feedback, and support.

If you’d like to cite this post, you can use the following BibTeX entry:

@online{huang2026interviewer,
author = {Saffron Huang and Shan Carter and Jake Eaton and Sarah Pollack and Dexter Callender III and Nikki Makagiansar and Maria Gonzalez and Sylvie Carr and Jerry Hong and Kunal Handa and Miles McCain and Thomas Millar and Mo Julapalli and Grace Yun and AJ Alt and Chelsea Larsson and Jane Leibrock and Matt Gallivan and Theodore Sumers and Esin Durmus and Matt Kearney and Judy Hanwen Shen and Jack Clark and Michael Stern and Deep Ganguli},
title = {What 81,000 People Want from AI},
date = {2026-03-18},
year = {2026},
url = {https://anthropic.com/features/81k-interviews},
}

Appendix

Available here.

Footnotes

  1. The largest qualitative studies we found in our research were the USC Shoah Foundation Visual History Archive and the World Bank "Voices of the Poor Project," both of which included ~60,000 participants.

