Anthropic AI Interviews 1,250 Professionals, Uncovers Varied Perceptions of Technology

Dr. Aurora Chen

Anthropic has developed an AI tool, named Interviewer, capable of conducting in-depth conversations with human subjects, generating interview outlines, asking follow-up questions, and performing thematic and emotional analysis. The company recently utilized this tool to interview 1,250 professionals, creators, and scientists, aiming to understand their perceptions and interactions with AI. This initiative marks a shift in AI application, where the technology itself becomes a research instrument to study human perspectives.

Interviewer is designed to function as a professional researcher. It formulates hypotheses, sets research goals, initiates interviews, adapts questions in real time, and quantifies emotional responses. The process begins with the AI automatically drafting interview outlines based on research objectives, identifying key topics, and anticipating emotional signals. During the 10-15 minute interviews, the AI adjusts its pace and redirects the conversation if participants stray from the main topic. Afterward, a separate analyzer tool, Clio, processes the recorded data, performing thematic clustering, extracting insights, and generating "emotional radar charts" categorized by occupation and industry.
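The post-interview analysis described above can be sketched in miniature. The sketch below is an illustrative assumption, not Anthropic's actual Clio schema or code: it shows how coded interview records might be aggregated into per-occupation emotion averages (the radar-chart data) and theme frequencies.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical coded output for three interviews; the field names
# ("occupation", "themes", "emotions") are illustrative assumptions.
interviews = [
    {"occupation": "creator",   "themes": ["efficiency", "identity"],
     "emotions": {"optimism": 0.6, "anxiety": 0.8}},
    {"occupation": "scientist", "themes": ["reliability"],
     "emotions": {"optimism": 0.9, "anxiety": 0.2}},
    {"occupation": "creator",   "themes": ["efficiency"],
     "emotions": {"optimism": 0.7, "anxiety": 0.6}},
]

def radar_by_occupation(records):
    """Average each emotion axis per occupation (one radar chart per group)."""
    buckets = defaultdict(lambda: defaultdict(list))
    for r in records:
        for axis, score in r["emotions"].items():
            buckets[r["occupation"]][axis].append(score)
    return {occ: {axis: round(mean(scores), 2) for axis, scores in axes.items()}
            for occ, axes in buckets.items()}

def theme_counts(records):
    """Count how often each theme is mentioned across all interviews."""
    counts = defaultdict(int)
    for r in records:
        for theme in r["themes"]:
            counts[theme] += 1
    return dict(counts)
```

With the toy data above, `radar_by_occupation` would report creators averaging 0.65 on optimism and 0.7 on anxiety, and `theme_counts` would show "efficiency" mentioned twice.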

Anthropic made the anonymized interview content public for external research. Participant feedback indicated high satisfaction, with over 97% reporting positive experiences and a general consensus that the AI accurately captured their thoughts. This research expands qualitative human research methods by leveraging AI for large-scale, consistent analysis.

AI Reveals Professional Nuances

The 1,250 interviews revealed diverse attitudes toward AI across professions. The findings challenged the assumption of a universal view of AI, highlighting distinct emotional and practical responses in each group. An "emotional tendency bubble chart" categorized themes, with blue indicating pessimism, yellow indicating optimism, and larger bubbles representing more frequently mentioned themes.
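The chart's visual encoding can be expressed as a small mapping function. This is a sketch of the encoding described in the article, with the scaling factor and the sentiment threshold chosen as illustrative assumptions:

```python
def bubble(theme, mentions, sentiment):
    """Map a theme to bubble-chart attributes.

    sentiment ranges from -1.0 (fully pessimistic) to +1.0 (fully optimistic).
    The size multiplier (10) and the zero threshold for color are
    illustrative assumptions, not the article's actual chart spec.
    """
    color = "yellow" if sentiment > 0 else "blue"  # yellow = optimism, blue = pessimism
    return {"theme": theme, "size": mentions * 10, "color": color}
```

For example, a heavily discussed, broadly positive theme like "efficiency" would render as a large yellow bubble, while a rarely mentioned, negative theme would render as a small blue one.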

Among ordinary professionals, "efficiency" was the most discussed topic, with 86% reporting faster work and 65% expressing satisfaction with current AI usage. However, a hidden concern emerged: 69% admitted to downplaying their AI use, fearing it might be perceived as unprofessional or compromise their standing within a team. While they described their AI use as collaborative, Anthropic's analysis of Claude usage logs indicated a higher proportion of automation, where tasks were directly delegated to the model with minor human adjustments. Despite this, professionals often emphasized their role in "supervising AI, managing processes, and retaining judgment."

For creators, interviews showed a duality of efficiency and anxiety, with inspiration intertwined with identity crises. Many reported significant efficiency gains, such as a photographer reducing retouching cycles from 12 weeks to 3. However, 70% worried about their work appearing "too AI-generated," potentially damaging their brand or originality. Some also expressed concern over income displacement, citing examples in voice acting and product photography. A creative director noted, "I know every time I use AI, it means a photographer loses another day's income."

Scientists exhibited different concerns. They were less anxious about economic pressure or professional image. Their primary concern was reliability, with 79% stating AI was not stable enough for critical tasks like hypothesis generation or experimental design. Consequently, they primarily used AI for literature reviews, code debugging, and paper writing, reserving core tasks for human expertise. Scientists also cited "tacit knowledge," such as subtle changes in cell cultures or the "feel" of an instrument, as non-digitizable aspects that machines cannot perceive. Despite these reservations, 91% expressed optimism about a future AI research partner, attributing current distrust to technological immaturity rather than professional anxiety.

Underlying Professional Structures

The study suggests that emotional differences alone do not fully explain these varied responses. Instead, the attitudes reflect underlying pressures within each profession's structure. Ordinary professionals' caution stems from an organizational environment where impression management is crucial. For them, AI use is a signal that could alter perceptions of their professionalism.

Creators' tension arises from direct market competition, where their income, style, and work value are constantly assessed. The use of AI raises questions about maintaining their unique identity and originality. Scientists' approach is rooted in the long-term accumulation of judgment and reliability, where the cost of error is high. For them, AI is a system with potential for mistakes, and caution is driven by the demands of their discipline.

Anthropic's research indicates that AI reveals the "irreplaceable core" of each profession. The technology itself is neutral, but its application within different work structures triggers distinct psychological responses. These findings suggest that differences in attitudes toward AI are not solely technology-driven but are shaped by professional structures, evaluation systems, and survival strategies.

Future of Human-AI Collaboration

Anthropic views Interviewer as a step toward understanding "relational variables" often overlooked in AI development—how people build relationships with AI beyond chat interactions. The company aims to understand users' feelings, expectations, and boundaries regarding AI, which are crucial for determining the future influence of large models.

By employing in-depth interviews rather than traditional questionnaires, Interviewer allows users to express implicit information, such as how they balance efficiency and identity, explain their AI usage, and articulate their concerns. This information, not typically found in chat logs, is vital for guiding product iteration and shaping human-model collaboration. Anthropic's goal is to train future models to integrate into diverse professional lives without disrupting existing structures. The 1,250 interviews represent an initial effort to shift the perspective back to human experiences, aiming for models that evolve from the "human-AI relationship" rather than solely from technological advancements.