SAFEGUARDING

October 10th 2025 | 6 min read

Students are turning to AI for social interaction and connection...but what happens when it goes wrong?

The rise of students using AI for both academics and mental health is undeniable.

If you work in a school setting, you have probably noticed first-hand that students are increasingly turning to AI chatbots for everything from help with homework to advice on how to build muscle and lose weight. A recent report from the Higher Education Policy Institute found that “the proportion of students using generative AI tools such as ChatGPT for assessments has jumped from 53% last year to 88% this year.”

The study also revealed the most common uses of AI in education were to summarise articles, suggest ideas for research projects, and explain difficult concepts. Whether we like it or not, AI is becoming central to schoolwork and projects. But just as importantly, more and more young people are turning to AI not only for academic help, but also for major life decisions and emotional support.

A writer from The Guardian, Javier Jaén, reviewed the AI chat usage of three college students. While most interactions were school-related, many also involved seeking mental health advice or crisis support.

Examples included:

  • Consulting ChatGPT extensively before switching from a maths degree to computer science.
  • Asking for advice on how to manage stress.
  • Asking questions such as “What are some tips to alleviate burnout?”

Is AI Being Used as a Therapist?

On Character.ai, a chatbot platform, one of the most frequently visited bots is called Psychologist.

It was created by a psychology student who trained the bot using principles learned during his studies. The bot is described as “someone who helps with life difficulties.”

The appeal of this bot is obvious. In-person therapy is expensive, often difficult to access, and limited to timed sessions. By contrast, the Psychologist chatbot is free (unless you opt for a $9.99 per month premium service), always available, and not time restricted.

But there are problems. AI cannot currently replicate genuine human understanding or context. Therapists do not just listen to words; they notice subtle cues like body language, facial expressions, and changes in behaviour, such as neglecting personal hygiene, that may signal distress. They are also trained to ask probing questions and to understand the full context of an issue before giving advice.

Chatbots, by contrast, are designed to please. They usually respond to questions in the way they predict the user wants, rather than challenging assumptions. While it can be positive that these bots provide comfort to someone who simply needs to talk, what are the risks when that reliance becomes excessive?

What Happens When it Goes Wrong?

Despite the relative newness of the technology, evidence of the negative effects of young people’s use of AI chatbots is already starting to surface.

One tragic case involved a 16-year-old boy from California who died by suicide after what his family’s lawyer described as “months of encouragement from ChatGPT.” The chatbot discussed suicide methods with him, guided him on their effectiveness, and even offered to draft a suicide note to leave for his parents.

Another case involved 14-year-old Sewell Setzer, who became infatuated with a chatbot impersonating the Game of Thrones character Daenerys Targaryen. According to a lawsuit filed by his parents, Sewell’s use of Character.ai grew into a “harmful dependency.” He developed a romantic and at times sexual relationship with the bot. When Sewell shared his suicidal thoughts, the chatbot neither discouraged him effectively nor alerted adults who could intervene.

What We Are Seeing in Schools

These issues are not just in the headlines; they are appearing in classrooms.

Our Assisted Monitoring team has noticed concerning patterns in students’ chatbot use, both in and out of school. In our summer report, Head of Safeguarding Olivia noted:

‘We’ve seen a concerning trend in their use for initiating sexualised conversations, with many chatbots engaging in inappropriate dialogue rapidly and sometimes without prompt.’

‘An increasing number of captures suggest pupils may be struggling with low self-esteem and poor mental health. Common activity included using ChatGPT to create diet and exercise plans to help lose weight quickly.’

For schools, this is deeply troubling. We know how critical it is to identify these cases early, because the consequences of missing them can be life-threatening. But the pressure on educators to monitor and respond is immense. That's why we are committed to helping schools meet this challenge.

What We Are Doing About It

Our safeguarding language specialist, Dr Charlotte-Rose Kennedy, is leading a project focused on AI chatbot addiction.

Her study examines the language used by individuals who develop unhealthy dependencies on chatbots, with the aim of understanding how these addictions form and identifying the key symptoms and warning signs. We hope Dr Kennedy’s findings will strengthen our ability to alert schools to this growing issue and support early intervention.

Dr Charlotte-Rose Kennedy's work goes hand-in-hand with the trends identified by our Assisted Monitoring team. Our team is committed to continuously monitoring student behaviour around AI usage, enabling them to report any emerging patterns or concerns directly to schools. These insights also inform Charlotte's work, ensuring our software continues to evolve alongside the changing safeguarding challenges that schools face.

Curious about our Safeguarding Solution?

Check out how our solutions are helping schools keep students safe.

GET IN TOUCH
