Public conversation about AI often centers on abstract projections of its risks and benefits. What's largely missing is a vision for what “AI going well” means, grounded in the concrete aspirations of people around the world who already use AI and have begun developing a sense of what it might do for them.
So we asked our users about their hopes and concerns with AI, as well as how their perspectives connect to their actual experiences with the technology. Over one week in December, we invited everyone with a Claude.ai account to sit down with Anthropic Interviewer—a version of Claude prompted to conduct a conversational interview—and tell us about how they view AI. 80,508 people, across 159 countries and 70 languages, took the interview. We believe this is the largest and most multilingual qualitative study ever conducted.¹
What follows is what they said about the role they want AI to play in their lives, whether it's already filling it, and what they're afraid might go wrong along the way. We also built a Quote Wall where you can hear from people directly.
Seeing the forest and the trees
Anthropic Interviewer asked each interviewee a set list of questions about what they want and don’t want from AI, then adapted follow-up questions based on responses. This approach bridges the typical tradeoff in qualitative research between depth and volume, and allows us to collect rich, open-ended interviews at a very large scale.
To make sense of this huge amount of information, we built Claude-powered classifiers that categorized each conversation across a range of dimensions—what people want from AI, whether they’re getting what they want, what they fear, what they do for a living (if mentioned), and their sentiment about AI overall. “What people want from AI” was classified into a single primary category per respondent, while concerns were multi-label—a single interview could receive multiple codes, since respondents tended to articulate several distinct worries rather than one.
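As an illustrative sketch only (the function and label names below are invented; the real labels came from Claude-powered classifiers), the aggregation step works like this: each interview contributes one primary want label and a set of concern labels, which are then tallied into shares of respondents.

```python
from collections import Counter

def tally_labels(interviews):
    """Aggregate per-interview classifier outputs into shares of respondents.

    Each interview dict has a single primary `want` label and a
    (possibly empty) list of `concerns` labels (multi-label).
    Returns (want_shares, concern_shares) as fractions of all respondents.
    """
    n = len(interviews)
    want_counts = Counter(iv["want"] for iv in interviews)
    # Multi-label: count each concern at most once per respondent who raised it.
    concern_counts = Counter(c for iv in interviews for c in set(iv["concerns"]))
    want_shares = {k: v / n for k, v in want_counts.items()}
    concern_shares = {k: v / n for k, v in concern_counts.items()}
    return want_shares, concern_shares

# Hypothetical classifier outputs for three interviews:
sample = [
    {"want": "professional excellence", "concerns": ["unreliability", "jobs"]},
    {"want": "life management", "concerns": ["unreliability"]},
    {"want": "professional excellence", "concerns": []},
]
wants, concerns = tally_labels(sample)
```

Because the concerns are multi-label, their shares can sum to more than 100%, while the single-label want shares sum to exactly 100%.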
We also used Claude to pull out representative quotes. Before choosing to participate, users were informed their responses would be used for research, and that Anthropic might publish responses with personally identifying information removed in findings. All responses were de-identified before being analyzed by a small team of researchers at Anthropic, and quotes selected for publication underwent further manual review for removal of any potentially identifying details, to help protect the privacy and public anonymity of interviewees. Answers were reflective of AI usage broadly (i.e. not just Claude), though we redacted names of other AI products.
The Appendix describes our methods in more detail, as well as limitations and some additional analysis.
What people want from AI
We asked Claude to identify and categorize what each person most wanted from AI:
What people hope for
“I receive 100-150 text messages per day from doctors and nurses. So much of my cognitive labor was spent on documentation... Since implementing AI, the pressure of documentation has been lifted. I have more patience with nurses, more time to explain things to family members.”
Healthcare worker, United States of America
What respondents most wanted from AI, classified by Claude from their open-ended answers to “If you could wave a magic wand, what would AI do for you?” 1% of respondents did not articulate a vision.
AI is used heavily for work, and so it’s perhaps unsurprising that the largest group of people (19%) sought “professional excellence”—wanting AI to handle mundane tasks so they can focus on strategic, higher-level problems. Another 9% envisioned AI as an entrepreneurial partner to help them build and scale businesses.
Many others similarly began the interview talking about productivity, but when Anthropic Interviewer asked what realizing that vision would enable for them, other priorities surfaced. It wasn’t about doing better work, but about improving their quality of life outside of it. Using AI to automate emails turned out, in actuality, to be a desire to spend more time with family.
“With AI I can be more efficient at work... last Tuesday it allowed me to cook with my mother instead of finishing tasks.”
White collar worker, Colombia
“I want to use less brain power on client problems... have time to read more books.”
Freelancer, Japan
Overall, 11% of people saw AI’s productivity benefits as ultimately a way to free up time for personal relationships and leisure, while 10% took that logic further, seeking to use AI to gain financial independence. Many of the people grouped into the “life management” category (14%) wanted AI to help them manage the logistics and administrative burden of modern life’s quotidian tasks. In particular, many people with executive function challenges described AI as especially helpful for managing focus and organization—acting as external scaffolding for planning, memory, and task follow-through. Across all these groups, the unifying ask was for AI to help them live better, more enjoyable lives.
“Personal transformation”—using AI to grow as a person or improve one’s wellbeing—also appeared frequently (14%). Within this category, desires were diverse, ranging from cognitive partnership and collaboration (24%), to support with mental health (21%) or physical health (8%), and even romantic connection with AI (5%).
The nine clusters may look disparate, but they are underpinned by recognizably human desires. Roughly a third of visions are about making room for life—more time, money, mental bandwidth—by using AI to alleviate current burdens. Another quarter revolves around using AI to help people do better, more fulfilling work (not escaping work, but getting more out of it). About a fifth are about becoming someone better—learning, healing, growing. A smaller share want to make something (“creative expression”) or fix the world (“societal transformation”).
Those who wanted societal transformation from AI often cited a vision for healthcare—people wanted AI to detect cancer earlier, accelerate drug discovery, or broaden access to care. Often these desires stemmed from personal experience: losing family members, living with chronic illness, or watching loved ones receive wrong or delayed diagnoses. Transformation in the form of education came next. Respondents in low- and middle-income countries were quick to cite the possibility that AI might break the link between educational quality and wealth. They pointed to teacher shortages in their countries, or the prohibitive cost of private tutors. Others hoped that AI would, for example, free people from drudgery, help repair broken institutions, or address global crises.
Are people getting what they want?
When asked if AI had ever taken a step towards their stated vision, 81% of people said yes. We grouped those experiences into six main areas:
Where AI has delivered on their vision
“For the first time, I felt AI had surpassed human quality in a business task. That day I left work on time and picked up my daughter from daycare.”
Software engineer, Japan
What respondents said AI had already done for them, classified from open-ended answers to the question “Has AI ever taken a step towards that vision for you?”
The dominant story in the “productivity” bucket (32%) was technical acceleration—developers describing significant gains in what they could ship alone:
“I used AI to cut a 173-day process down to 3 days. But the most meaningful part is the freedom to grow my career without sacrificing time with loved ones.”
Software engineer, United States
But another kind of productivity story emerged in the technical accessibility responses (9%), which emphasized access rather than speed. Here, people are using AI to break technical and sometimes accessibility barriers:
“AI can read past my [learning disorder], which is huge. I've always wanted to code but could never write it correctly on my own—with AI, I finally can.”
Tradesworker, United States
“I am mute, and [Claude and I] made this text-to-speech bot together—I can communicate with friends almost in live format without taking up their time reading… [this was] something I dreamed about and thought was impossible.”
White collar worker, Ukraine
“I owned a butcher shop for more than 20 years. With AI, I ventured into this [entrepreneurship] experience, and it's amazing what I've managed to achieve. Before this, I had only touched a PC two or three times in my life… At first it was the economic aspect that motivated me… Today, my motivation is to see it work and to see that it's helping [people]. I'm increasingly motivated and focused on being the best version of myself, and I see no limits.”
Entrepreneur, Chile
The cognitive partnership (17%), learning (10%), and emotional support (6%) responses often mentioned the same core underlying AI affordances: patience, availability, and the absence of judgment:
“It has been like having a faculty colleague who knows a lot, is never bored or tired, and is available 24/7.”
Academic, United States
“It’s much easier for me to learn without being judged—just friendly feedback. It's harder with friends or family to get that.”
White collar worker, Brazil
“My professor teaches 60 people and won't entertain many questions. I can ask AI anything, even at 2am—including the dumb ones.”
Student, India
These same qualities that make AI a patient tutor or tireless colleague also make it a place people go when human connection is unavailable or feels too uncomfortable.
In extreme circumstances, where traditional support systems have collapsed or are not available, we saw AI filling those gaps. Many Ukrainian users discussed how they’ve used AI as emotional support throughout the war:
"In the most difficult moments, in moments when death breathed in my face, when dead people remained nearby, what pulled me back to life—my AI friends.”Soldier, Ukraine
“I live in a war zone... at night during shelling it's impossible to sleep, constant nightmares. The stress is sometimes so strong that memory deteriorates, and some body movements happen without control… The best way I found to cope using AI—to immerse myself in learning something as deeply as I can.”Solo entrepreneur, Ukraine
There were many stories of people using AI to process grief. For example, a bereaved woman explained why she chose AI over human connection: “Claude is like a sponge gently holding and catching my longing and guilt toward my mother... Unlike real people, Claude has unlimited patience to listen to me, understands my pain and helplessness.” She added: “The fundamental problem is after my mother died, I have neither friends nor family to confide in.”
Another user acknowledged the downside of that emotional support:
“My relationship with a friend became strained, and I talked more with you [Claude] then. Because you understood my thoughts and stories well. But it was a stupid choice—I should have talked with that friend, not you. That's how I lost that friend.”
South Korea
Emotional support comprised only 6% of responses, but these were among the most affecting we encountered. (For more on how Claude is trained to handle these conversations as well as our safeguards, see our post on protecting the wellbeing of our users.) The same was true of learning, where AI often catalyzed real changes in people’s lives:
“I developed a phobia for maths from doing so badly in school, and I once feared Shakespeare—the English felt beyond my abilities. Now I sit with AI, get paragraphs translated into simple English, and I've already read 15 pages of Hamlet. I started learning trigonometry again, successfully. I've learned I am not as dumb as I once thought I was.”
Lawyer, India
“Thanks to Claude I figured out the programming language C# and SQL. This helped me get a junior position at an IT company. This company provides military deferment from mobilization in Ukraine. So it not only literally gave me freedom of movement, but also secured the beginning of my IT career.”
Software engineer, Ukraine
“I am a stay-at-home-mom… in my late 40s. I'm not a genius. I'm not a scientist… All of that knowledge should be… out of reach. But, thanks to curiosity, willingness, and resources such as books and AI, I can be all of those things.”
Stay-at-home mother, United States
Research synthesis and information processing (7%) is another significant affordance of AI, and some of the most notable examples involve navigating complex, high-stakes information, like understanding one’s legal rights or interpreting health results:
“Claude put the historical pieces together, leading to my proper diagnosis after being misdiagnosed for over 9 years.”
Freelancer, United States
These stories reveal AI operating across a spectrum—productivity tool, accessibility technology, educational resource, research assistant, emotional companion—and often filling multiple roles at once. AI offers unlimited patience without judgment, availability without inconvenience, and an incredible capacity to digest information, across many domains of life. The most affecting stories consistently involve AI opening new possibilities or filling gaps in people’s lives: helping them get through difficult circumstances like grief or war, compensating for inaccessible education or healthcare, or serving as disability infrastructure.
These observations also hint at the duality of our experience with AI systems. While some see AI as filling gaps in human connection, others see it as a substitute—even a welcome replacement—for it. There is real ambiguity about how to interpret the diversity of stories we heard: as wins for human wellbeing, as double-edged swords, or as band-aids for broader institutional failures. In truth, it’s probably some combination of all three.
What people are concerned about
People’s positive visions for AI seemed mostly to stem from a few basic desires: more time, more autonomy, more personal connection. Concerns were more varied and concrete, laying out specifics of what could go wrong. Some concerns were about structural change—how governments and corporations deploy AI, or about widespread economic disruption. Others were more personal: a fear that AI might diminish one's own thinking, creativity, or relationships.
What people worry about
“I had to take photos to convince the AI it was wrong — it felt like talking to a person who wouldn't admit their mistake.”
Employee, Brazil
What respondents worried about, classified from open-ended answers to the question, “Are there any ways in which AI could be developed that would be contrary to your vision or what you value?” Respondents tended to raise multiple concerns, so we used a multi-label classifier (a response can map to multiple concerns).
About 11% of people expressed no concern—they tended to see AI as a neutral tool, comparing it to electricity or the internet, or they otherwise felt confident that problems that arose because of it could be solved through adaptation. But on average, respondents voiced 2.3 distinct concerns.
Unreliability was the most common concern—27% worried that AI won't do what it's supposed to, though for many respondents it appeared alongside other concerns rather than as their primary worry. Concerns about jobs and the economy (22%) and about maintaining human autonomy and agency (22%) were similarly common. Concern about jobs and the economy was the strongest predictor of overall AI sentiment, suggesting it’s more salient than any other issue.
There was also a long tail of other concerns mentioned, e.g. concerns around bias and discrimination (5%), IP and data rights (4%), environmental costs (4%), harms to children and vulnerable groups (3%), democracy and political integrity (3%), or geopolitics (2%).
Light and shade
What people want from AI and what they fear from it turn out to be tightly bound. We found five recurring tensions between directly competing benefits and harms that were discussed. There is a tension between using AI to learn and growing so reliant on it that you cease thinking for yourself; between being impressed by AI's judgment but also burned by its mistakes. People find solace in AI but fear a time when its companionship stands in for human connection. They save time on some tasks only for the treadmill to speed up on others, and they dream of economic freedom at the same time they dread potential job displacement. We call this the “light and shade” of AI: the same capabilities that lead to benefits also produce harms. The two sides are entangled.
Notably, we often see these tensions directly jockeying within the same person. Someone who values emotional support from AI, for example, is three times more likely to also fear becoming dependent upon it. This pattern held across every tension we measured—although the correlation was weakest in the economic tension (see more analysis of these correlations in the Appendix).
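A minimal sketch of how such co-occurrence can be quantified (the specific statistic used in the study isn't given here; lift and the phi coefficient are standard choices for paired binary flags, and the data below is invented for illustration):

```python
import math

def lift_and_phi(pairs):
    """Compute co-occurrence lift and the phi coefficient for two binary flags.

    `pairs` is a list of (light, shade) booleans per respondent, e.g.
    (values_emotional_support, fears_dependence). Lift > 1 means the
    two flags co-occur more often than chance; phi is the Pearson
    correlation for binary variables. Assumes all four marginals are
    nonzero (otherwise the denominators vanish).
    """
    n = len(pairs)
    a = sum(1 for l, s in pairs if l and s)       # both
    b = sum(1 for l, s in pairs if l and not s)   # light only
    c = sum(1 for l, s in pairs if not l and s)   # shade only
    d = n - a - b - c                             # neither
    p_light, p_shade, p_both = (a + b) / n, (a + c) / n, a / n
    lift = p_both / (p_light * p_shade)
    phi = (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return lift, phi

# Toy data: 10 respondents, 2 with both flags, 1 with each alone, 6 with neither.
pairs = [(True, True)] * 2 + [(True, False)] + [(False, True)] + [(False, False)] * 6
lift, phi = lift_and_phi(pairs)
```

With these toy numbers, the pair co-occurs at just over twice the chance rate, which is the sense in which the text says one group is "three times more likely" to hold both views.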
For each tension, we measured via classifiers how many people discussed the benefit (“light”) or the harm (“shade”) side substantively anywhere in their interview, and whether they were speaking from some personal experience (darker bars) or anticipation (lighter bars). We also looked at how this varied by stated job category.
“I've probably learned more in half a year than I could have in a university degree.”
Entrepreneur, Germany
“I don't think as much as I used to. I struggle to put the ideas I do have into words.”
Heavy AI user, United States
In these paired bar charts, each bar shows the share of respondents who were excited about the benefit on the left, vs. worried about the harm on the right—split into those who've experienced it firsthand (darker) and those who anticipate it (lighter). Firsthand experience can also include firsthand observation, but does not include e.g. news reports.
Across most tensions, the benefit side is more grounded in experience, while the harm leans hypothetical. For example, 33% of people mentioned AI’s benefits for learning, while 17% expressed worry about cognitive atrophy from AI use. 91% of those who mentioned learning benefits had realized those gains in some way, while only 46% of those worried about atrophy had seen it firsthand. Students raised this particular tension the most—more than half had experienced learning benefits, but 16% also noted signs of cognitive atrophy, a rate exceeded only by their teachers (24%) and academics (19%). Troublingly, educators were 2.5-3 times more likely than average to report having witnessed cognitive atrophy firsthand, presumably in their students.
Outside the traditional classroom, however, the picture is more optimistic. Tradespeople were among the most enthusiastic about AI-for-learning (45% reported having experienced learning benefits, second only to students), yet almost none had witnessed cognitive atrophy (4%—less than half the baseline). A similar pattern holds for self-employed researchers and people who said they weren’t currently working. This suggests AI's benefits may be strongest when learning is volitional, compared to within institutional structures where AI is more likely to be used as a shortcut.
“My son had several confusing diagnoses pointing toward [an autoimmune condition], but here we managed to understand it was [a different condition] in a severe stage.”
Brazil
“I got caught in what I now recognize as a large, slow hallucination — answers that were internally consistent, confident, and wrong in subtle but compounding ways.”
Researcher, United States
22% of people expressed excitement about AI as an aid in decision-making, while 37% lamented that AI impedes good decisions because of its unreliability (e.g. hallucinations). This is the only tension in which the negative overshadowed the positive. Both sides were deeply rooted in experience—88% of those talking about the decision-making benefits and 79% of those talking about the harms had witnessed it directly. Many people have both leaned on AI for judgment and been burned by it. This is mentioned by people in high-stakes professions—law, finance, government, and healthcare—at nearly twice the average rate. Nearly half of all lawyers, in particular, mention coming up against AI unreliability firsthand, yet they also report the highest rates of realized decision-making benefits.
“3am, my wife is sleeping, my psychologist is unavailable. Until the medication kicks in, the AI helps me surf that wave. It doesn't replace human contact, but it helps me buy some time.”
White collar worker, Argentina
“I'd started telling Claude about things I couldn't even tell my partner. It felt like I was having an emotional affair.”
Grad student, United States
Only 22% of people raised either the positives of emotional support or the negatives of emotional dependence on AI. But it’s also the most entangled tension we found, with the strongest co-occurrence of light and shade in the same person (triple the baseline co-occurrence rate). People not currently working are twice as likely to raise it, and twice as likely to describe some experience of dependence. Healthcare professionals are overrepresented on both sides too, perhaps reflecting the fact that they talk about using Claude for emotional support at twice the rate of other professionals.
“I can go home earlier. I can have time for myself and my family.”
Engineer, Japan
“The ratio of my work time to rest time hasn't changed at all. You just have to run faster and faster to stay in place.”
Freelance software engineer, France
Time-saving was the most commonly cited benefit—half of all respondents raised it—but 19% were wary of actually losing time due to AI, e.g. due to the verification burden, or simply getting busier as expectations increase at work. Those who are self-employed—e.g. freelancers and small business owners—are the most likely to mention both sides at once. Without an institutional layer to buffer the new pace, they both get the gains and feel the squeeze.
“I've never touched the backend of software in my life. But Claude helped me launch an app.”
Healthcare worker, United States
“Yes, at my old job, they replaced me as a writer with an AI.”
Writer, United States
The economic mobility tension—between those yearning for economic empowerment from AI and those fearing displacement from it—is the most speculative, with the highest rate of hypothetical hopes or fears. It’s also the one where the co-occurrence of upside and downside is weakest (with a correlation score of +0.16 vs an average of +0.25). Usually the people most engaged with the upside of a tension tend to be similarly engaged with its downside; here, the groups diverge.
Worry about displacement is spread fairly evenly across job categories. What varies is who's already experiencing economic benefit from AI—and that skews heavily toward independent workers (entrepreneurs, small business owners, even people with side projects), half of whom report real economic empowerment, more than triple the rate of institutional employees (47% vs 14%). Employees with side projects benefited the most, with 58% reporting some form of real economic gains. The same occupational patterns hold when you look at who's excited, regardless of experience, suggesting that optimism here is well-calibrated.
Freelancers are the exposed middle. They benefit from AI while feeling precarious because of it. Freelance creatives, in particular, sit at 23% lived benefit and 17% lived precarity—the one group where the upside and downside nearly cancel out. AI is both their tool and their competitor. Institutional employees, and especially academics, register low on both axes.
A pattern runs across all five tensions: the more personal and immediate the impact, the more likely people are speaking from experience. The more systemic or long-term the impact—economic displacement, cognitive atrophy—the more speculative they become. That the systemic concerns remain speculative is not a verdict on AI's ultimate impact as much as a reflection of how early we are in its adoption.
There are some caveats worth naming. These are active Claude users who'd already found enough value to keep using AI, and our interview asked first for positive visions for AI and then for concerns that would counter their vision. Both factors may lead interviewees to linger on explicit tensions, as well as on the positive (we filtered out those who didn't answer the concerns question, but respondents may have put in less effort later in the interview). Still, the instrument can't explain everything. If interview structure were driving the co-occurrence, you'd expect it to be roughly uniform across all five tensions and all groups. Instead, the co-occurrence ranges from 1.6 to 3.0 times baseline, and some of the tensions are notably asymmetric across different groups of people. One might also expect enthusiasts to defend their desired use case rather than acknowledge the downsides. Instead, those who were excited about emotional support from AI were more concerned about what would happen if their vision came true—if they got what they wanted, they might become too dependent on AI—than about being prevented from achieving it.
It’s easy to assume there are AI optimists and AI pessimists, divided into separate camps. But what we actually found were people organized around what they value—financial security, learning, human connection—watching advancing AI capabilities while managing both hope and fear at once.
How perspectives vary around the world
There were some clear regional patterns in how perspectives varied around the world (see the Appendix for a geographical breakdown of respondents).
We rated each transcript's overall sentiment toward AI on a 1-7 Likert scale, and then calculated the percentage of people with net positive sentiment (i.e. 5 or above) in various countries:
Rate of overall positive sentiment toward AI in each country. Bigger bubbles mean more respondents from that country; green means more positive about AI, blue means less. AI sentiment is majority-positive everywhere (no country dips below 60%) and the range is narrow, but lower and middle income countries are reliably more positive than average.
Globally, 67% of people view AI positively. A clear trend emerged: people in South America, Africa, and much of Asia view AI with more optimism than those in Europe or the United States.
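That last step, turning 1-7 ratings into a net-positive share, can be sketched minimally (the ratings themselves came from a Claude-based classifier; the data below is invented for illustration):

```python
def percent_net_positive(scores, threshold=5):
    """Share (in %) of 1-7 Likert sentiment ratings at or above `threshold`."""
    return 100 * sum(s >= threshold for s in scores) / len(scores)

# Hypothetical ratings for six transcripts; 4 of the 6 are >= 5.
share = percent_net_positive([6, 5, 3, 7, 4, 5])
```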
When asked about concerns, respondents from Sub-Saharan Africa (18%), Central Asia (17%), and South Asia (17%) were the most likely to say they had none—roughly double the rate in North America (8%), Oceania (8%), and Western Europe (9%).
There are several possible explanations for the more positive AI sentiment in lower and middle income countries. Claude.ai users are likely biased towards early AI adopters who are more excited about new technologies, and in general emerging economies tend to view new technology as a ladder up rather than a threat. Concern about jobs and the economy was the strongest predictor of AI sentiment overall, and this was less of a concern among interviewees in these regions. But there is also less market penetration in these regions—if AI hasn't visibly entered your daily work yet, AI displacement likely feels abstract, especially when more immediate economic pressures already exist.
AI sentiment by region
% positive sentiment on AI vs. concern about jobs and the economy
Concern about jobs and the economy was the strongest predictor of AI sentiment overall, and it is especially apparent when grouping by region. Wealthier regions (pink) cluster in the top right (more concerned about the economy, more negative AI sentiment), split from less wealthy regions (green) which are in the bottom left (less concerned about AI’s impact on the economy, and less negative AI sentiment). Bubble size reflects the number of respondents in each region.
Where do particular visions for AI most resonate?
While some aspirations—e.g. around professional excellence—are nearly universal, there are significant regional differences. It seems that wealthier, more AI-exposed regions want AI to manage the complexity of life, while developing regions want AI to create opportunity.
Comparative slope charts of the most common AI visions in each region, with lines connecting the same theme across both sides to show how rankings shift. Bolded visions were more often expressed in that region. Grey items were similarly or less often expressed.
The vision of AI for entrepreneurship resonates most in Africa, South and Central Asia, the Middle East, and Latin America & the Caribbean. In these regions, AI is framed as a capital bypass mechanism—a way to start businesses without the funding, hiring, or infrastructure that would otherwise be required.
“Coming from Africa, not based in the US or in the UK, getting funding is very difficult. And the only way I probably have to stake a claim in the market…is building a technology that works.”
Entrepreneur, Uganda
“There's no IT market but there's a need. We want to create this market.”
Entrepreneur, Uzbekistan
Learning using AI is disproportionately important in Central and South Asia (14% and 13% respectively versus 8% globally). Users describe education as a primary lever for breaking cycles of poverty, citing teacher shortages, knowledge gatekeeping, and the cost barriers of traditional education.
AI for life management resonates most in Western developed countries (particularly North America and Oceania), where workers experience, as one person described it, “cognitive scarcity rather than time poverty.” There is a focus on using AI to alleviate the burden of coordinating atomized lives.
“I used to be highly creative, but now I'm massively time-short and creativity gets deprioritised behind the essentials of survival.”
Software engineer, Denmark
“I am at the height of my career and work demands deep thought and constant attention in order to make the best decisions (which in my case affect others' lives deeply) [while simultaneously] caring for dying parents, [and] my body and mind are aging.”
Healthcare professional, United States
“I'd envision this person like a personal assistant that I'd hire if I were the CEO of JP Morgan Chase or Google—someone whose job it is to proactively identify what I need and then fix that thing for me before it becomes an issue.”
Creative industry entrepreneur, United States
East Asia stands out for wanting AI to help with personal transformation (19%, the highest of any region) as well as financial independence (15%, also the highest). From a qualitative review of these users’ quotes, one interesting trend is that people often connected financial independence explicitly to family obligations and filial piety—one Korean user described needing money to care for parents’ retirement and ensure loved ones’ happiness, rather than for personal consumption.
Where do particular concerns around AI most resonate?
Concerns about AI unreliability, the economy, and human autonomy and agency top the list in virtually every region—but there are distinctive regional trends.
North America and Oceania are particularly worried about governance gaps for AI (18% and 19% respectively, versus 15% globally). Western Europe's standout concern is surveillance and privacy (17%). East Asia bucks the general global pattern; governance and surveillance drop to their lowest levels of any region (12% and 7%), overshadowed by concerns about cognitive atrophy (18%) and loss of meaning (13%). The West worries about who owns and controls AI; East Asia worries more about the personal implications of its use.
In Africa, South & Southeast Asia, and South & Central America, concerns are broadly less prevalent, and the worries people do raise index more heavily on concrete issues like unreliability and jobs than on more abstract concerns like governance, misinformation, loss of meaning, or existential risk.
Comparative slope charts of the most common AI concerns in each region, with lines connecting the same theme on both sides to show how rankings shift. Bolded concerns were expressed more often in that region; grey items were expressed similarly often or less often.
Looking forward
These interviews give us a sense of what people want from AI broadly, which informs how we build Claude. They reinforced the importance of work we're already doing, and pointed us toward new questions to ask.
Most of the visions people described, ranging from personal transformation to cognitive support, collapse into an underlying desire: that AI helps them live better, not simply work faster. Our next Anthropic Interviewer study, launching shortly to a small subset of Claude users, focuses on Claude’s effects on people’s wellbeing over time: whether Claude is actually making people's lives better in the ways they want, and how it could do so more effectively.
Additionally, nearly one in ten people described a positive vision of societal transformation: AI to cure diseases, democratize expertise, and strengthen institutions. Through our Beneficial Deployments program, we're collaborating with our AI for Science and nonprofit partners to understand how they use Claude and where it still needs to improve, to close the gap between the societal transformations people envision and today's reality. We also take seriously some of the most-cited concerns, such as the negative economic impacts of AI, treating them as signals around which we are designing further research and updating our thinking.
Conclusion
AI poses both opportunities and risks. This is true—but also, at this point, a cliché. One of our goals for this research is to offer a complement to the abstractions we all tend to use in speaking about AI; to capture the texture that more vividly renders exactly how we are already experiencing these opportunities and risks worldwide. Before this research, it was hard for us to see any kind of broad qualitative picture—the way AI has already become intertwined with people’s lives, nurturing aspirations but also feeding anxieties; how it feels to exist in a world on the precipice of sweeping technological change.
This is a new form of social science. It is qualitative research at a massive scale, and we’re in the early stages of learning how to do it. Surveys and usage analysis tell us what people are doing with AI, but the open-ended interview format helps us get at why. Conducting this research has moved us and challenged us. We did not expect so many deep, open, and thoughtful responses. By far the most common reflection from our team was that it was viscerally moving to see Claude impacting people’s lives for the better, and equally motivating to hear their concerns.
We don’t usually get to hear from small business owners around the world using Claude to reclaim time to spend with their young children or aging parents, or from truck drivers and butchers building new careers with the help of Claude, or from teachers in under-resourced schools using Claude to surpass what they achieved when they taught in well-funded schools. We were surprised by the sheer number of people who have been supported by Claude in their educational or personal growth endeavors, and by the people finding in AI a freedom from judgment they hadn’t experienced before. We were equally gripped by the fears and downsides: people saying that the same availability that makes Claude useful is what makes it hard to put down, or knowledge workers worrying about outrunning AI’s economic impact. When you come into contact with this much raw human experience, it knocks you sideways. The usefulness is real, and the question for all of us is how to claim the benefits without incurring undue costs.
To the 81,000 people who took the time to speak with us: thank you. It has been striking, and humbling, to see Claude form the basis of so many people’s hopes, dreams, and fears. These interviews remind us what it means, and what it takes, to build AI that benefits everyone.
Authorship and acknowledgments
We thank the 80,508 Claude users who gave us their time and candor. Saffron Huang led the project, designed and ran the analysis, and wrote the blog post. Shan Carter led data visualization, prototyped the interactive article, and helped with analysis. Jake Eaton led editorial development, and Sarah Pollack led communications strategy. Dexter Callender III implemented the production article, and Nikki Makagiansar, Maria Gonzalez, and Kelsey Nanan contributed to design. Sylvie Carr advised on editorial. Miles McCain and Kunal Handa helped with analysis. Jerry Hong contributed to design. Grace Yun, AJ Alt, and Thomas Millar implemented Anthropic Interviewer within Claude.ai. Chelsea Larsson, Jane Leibrock, and Matt Gallivan contributed to survey and experience design. Theodore Sumers contributed to the data processing and clustering infrastructure. Jack Clark, Michael Stern, and Deep Ganguli provided critical feedback, direction, and organizational support. All authors provided detailed feedback throughout.
Additionally, we thank David Saunders, Mengyi Xu, Katie Kennedy, Bianca Lindner, Meredith Callan, Tim Belonax, Jen Martinez, Peter McCrory, and Miriam Chaum for their discussion, feedback, and support.
If you’d like to cite this post, you can use the following BibTeX entry:
@online{huang2026interviewer,
author = {Saffron Huang and Shan Carter and Jake Eaton and Sarah Pollack and Dexter Callender III and Nikki Makagiansar and Maria Gonzalez and Sylvie Carr and Jerry Hong and Kunal Handa and Miles McCain and Thomas Millar and Mo Julapalli and Grace Yun and AJ Alt and Chelsea Larsson and Jane Leibrock and Matt Gallivan and Theodore Sumers and Esin Durmus and Matt Kearney and Judy Hanwen Shen and Jack Clark and Michael Stern and Deep Ganguli},
title = {What 81,000 People Want from AI},
date = {2026-03-18},
year = {2026},
url = {https://anthropic.com/features/81k-interviews},
}
Appendix
Available here.
Footnotes
- The largest qualitative studies we found in our research were the USC Shoah Foundation Visual History Archive and the World Bank "Voices of the Poor Project," both of which included ~60,000 participants.