A new study from Anthropic examines how university students use the language model Claude in their daily academic work. The analysis reveals subject-specific usage patterns and raises questions about the influence of AI on learning and academic integrity.
The researchers started with one million conversations from accounts with university email addresses, collected over an 18-day period. After filtering for relevance, 574,740 academic chats remained.
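Anthropic hasn't published its filtering pipeline, but the basic shape of the step is easy to picture. The sketch below is a hypothetical illustration: the field names, the `.edu` check, and the keyword-based relevance test are all stand-ins for the study's actual automated, privacy-preserving classification.

```python
from dataclasses import dataclass

@dataclass
class Conversation:
    user_email: str  # hypothetical field name, not from the report
    text: str

def is_university_account(email: str) -> bool:
    # Simplified stand-in for matching higher-education email domains.
    return email.lower().endswith(".edu")

def looks_academic(convo: Conversation) -> bool:
    # Placeholder relevance check; the study used automated classification,
    # not keyword matching.
    keywords = ("homework", "assignment", "lecture", "exam", "essay", "course")
    lowered = convo.text.lower()
    return any(word in lowered for word in keywords)

def filter_academic(conversations: list[Conversation]) -> list[Conversation]:
    """Keep only academic chats from university accounts."""
    return [c for c in conversations
            if is_university_account(c.user_email) and looks_academic(c)]
```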
Usage was heavily concentrated among students in STEM fields. According to the data, computer science students made up 38.6 percent of users, even though they represent only 5.4 percent of the US student population, making them significantly overrepresented in the dataset.
Anthropic's researchers categorized student interactions with Claude into four main patterns: direct or collaborative conversations, each aimed at either problem-solving or content generation. Each of the four modes accounted for between 23% and 29% of conversations, and the two direct modes together made up nearly half (47%). In those, students appeared to delegate the more demanding work to Claude with little back-and-forth.
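The four modes form a simple two-by-two grid: conversation style crossed with goal. The labels come from the report; the counting code below is a hypothetical sketch of how conversations could be tallied across those modes once they have been labeled.

```python
from collections import Counter
from enum import Enum

class Style(Enum):
    DIRECT = "direct"                # single request, little back-and-forth
    COLLABORATIVE = "collaborative"  # iterative dialogue with the model

class Goal(Enum):
    PROBLEM_SOLVING = "problem_solving"
    CONTENT_GENERATION = "content_generation"

def mode_shares(labels: list[tuple[Style, Goal]]) -> dict[tuple[Style, Goal], float]:
    """Fraction of labeled conversations falling into each of the four modes."""
    counts = Counter(labels)
    total = len(labels)
    return {mode: count / total for mode, count in counts.items()}

def direct_share(labels: list[tuple[Style, Goal]]) -> float:
    """Share of all direct conversations (both goals combined),
    which the study put at roughly 47%."""
    return sum(1 for style, _ in labels if style is Style.DIRECT) / len(labels)
```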
Some usage patterns reported by Anthropic are questionable: students have Claude answer multiple-choice questions on machine learning, produce direct answers to English-language test questions, or rephrase marketing and business assignments to get past plagiarism checkers. Even in more collaborative sessions – such as step-by-step explanations of probability problems – the AI still shoulders much of the cognitive load.
The challenge, according to the researchers, is that without knowing the context for each interaction, it’s difficult to draw a clear line between genuine learning and cheating. Using AI to check one's own work on practice problems could be intelligent self-study; using it to complete graded homework is another story.
Students primarily delegate higher-order thinking tasks to the AI. Mapped onto Bloom's Taxonomy, 39.8% of prompts fell into the "Create" category and 30.2% into "Analyze," while simpler actions like "Apply" (10.9%) and "Understand" (10%) were much less frequent.
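To make the skew concrete, the reported shares can be laid side by side; the figures below are the ones quoted above, and the arithmetic simply shows that the two higher-order levels cover roughly 70 percent of prompts.

```python
# Shares of student prompts by Bloom's Taxonomy level, as reported in the study.
bloom_shares = {
    "Create": 0.398,
    "Analyze": 0.302,
    "Apply": 0.109,
    "Understand": 0.100,
}

# The two higher-order levels together account for about 70% of prompts.
higher_order = bloom_shares["Create"] + bloom_shares["Analyze"]
print(f"Higher-order share: {higher_order:.1%}")  # -> 70.0%
```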
The patterns also differ by subject area. Science and math students primarily use AI for problem-solving, such as guided explanations. Education students, by contrast, mainly use Claude to create teaching materials and lesson plans, which account for 74.4% of their use cases.
The study has some limitations. The analysis covered a short period and focused on early adopters, which may not reflect general student behavior. The method for classifying conversations could also lead to misinterpretations; for example, some chats from university staff might have been aggregated with the student data. The Anthropic team calls for further research into how AI actually affects learning – and how schools should respond.
Meanwhile, AI companies are racing to plant their flag in higher education. Anthropic recently launched Claude for Education, a campus-oriented offering with dedicated learning modes. Universities like Northeastern University, the London School of Economics, and Champlain College are already implementing it, and Anthropic plans to integrate it into existing platforms like Canvas LMS.
OpenAI isn't standing still either. Its ChatGPT Edu launched in May 2024, offering universities discounted access to the latest models along with features like data analysis and document summarization. Oxford, Wharton, and Columbia are already using it for tutoring, assessments, and administrative tasks.
The goal is clear: get students using AI – specifically *their* AI – early and often. Once these students graduate and enter the workforce, the hope is that they'll retain these tools in their daily workflow, thus promoting both AI adoption and brand loyalty.
As language models become increasingly integrated into academic workflows, the line between legitimate educational support and inappropriate shortcuts is blurring. The next big question is how institutions will draw – and enforce – that line.