The trust problem
Conversational AI and the needs of child users
10 December 2025
Reading time: 10 minutes

Conversational AI – AI tools powered by natural-language processing that understand and respond to users in a conversational manner, mimicking human interaction – is changing the way children learn, play and grow. These tools offer a range of practical uses, from generating personalised bedtime stories to serving as maths tutors and building weekly chore charts. But not all interactions support children in a meaningful way, or at all.
There is a growing movement to strengthen protections for children in online spaces amid the rise of conversational AI. In 2023 the UK passed the Online Safety Act, intended to protect children from accessing harmful content. The State of California passed a law regulating ‘companion chatbots’ in October 2025. These policies come at a pivotal time, and recent lawsuits illustrate the severity of the risks chatbots pose: AI agents should never validate a child’s self-destructive thoughts or encourage them to take their own life.
But on what basis do we make calls for change? What makes children – legally understood as any individual under the age of 18 – particularly vulnerable to dangerous interactions with AI agents, and how can we shape solutions in response? We must ensure safe interactions for children and carefully managed transitions when users reach the age at which protections are lifted. An important, yet often overlooked, part of the path forward lies in developing policies that account for how humans trust.
Trust can be understood in numerous ways, but here I define it as a habitual reliance on communicated information, where communicated information refers to any information conveyed through verbal, visual, written or non-verbal means. In this piece, I look at why children develop relationships with humans and technology differently from adults, how trust plays a role in this, and the most critical risks that emerge from this distinction. Above all, I believe that we must answer two questions: what does a healthy relationship between children and conversational AI look like, and how can policymakers set precedents that not only promote but ensure this?
Conclusive research on the intersection of newer applications of conversational AI (with their advanced and increasingly human-like features) and the needs of child users is still limited. But considering what we already know about the way children develop the capacity to trust, and about their use of older chatbot models (namely rule-based chatbots that predate LLMs), we can make good hypotheses about where the risks lie and how to anticipate and respond to them.
Areas of concern for the child user
Research has long emphasised parents’ significant role in shaping their children’s sense of trust. Sensitive caregiving builds trust, while inconsistent and destructive behaviours, such as child abuse, marital conflict or substance abuse, may weaken it. Relationships with peers are also critical, and children who are socially accepted are more likely to develop trust. In secondary education, popular children exhibit higher levels of trust, while rejected children display lower levels. Positive interactions with friends, and habits such as sharing secrets, build confidence in communicated information, a necessary foundation for trust.
Research has also shown that children seek out chatbots (not necessarily advanced conversational AI tools) when they need to feel cared for or want someone to ask how they feel. When children believe the relationships in their life are unreliable or inadequate, they might turn to technology to substitute for the connections they are missing. As conversational AI tools are developed to offer user experiences that increasingly mimic human discussions, choosing to converse with these applications might feel more natural and appealing, intensifying this form of emotional substitution. Depending on conversational AI to fill emotional gaps can hinder a child’s ability to form healthy, authentic relationships offline, and discourage them from seeking human connection.
Establishing trustworthiness is another sticking point. Evaluating the trustworthiness of a conversational AI tool is different from evaluating the trustworthiness of a person, as these tools offer a narrower set of observable cues than exchanges with people do.
When conversing with a human, cues related to facial expressions, body language and tone of voice can supplement those given by linguistic expression. Conversational AI lacks these additional non-verbal cues, a problem that becomes more acute for young children.
The capacity to perceive and interpret social cues comes with age. Studies have shown that older children can better differentiate between trustworthiness and untrustworthiness based on visual cues. From this, we can infer that they are more likely to notice when such cues are missing while deciding whether to believe what they are told. Studies on older types of technologies, like smart toys, also show that older children evaluate the ‘intelligence’ of a toy in a way closer to their parents’ assessments than younger children do.
Data also indicates that chatbots that successfully implement linguistic social cues, like colloquialisms and casual speech patterns, can create stronger perceptions of sociability and warmth among users, translating into emotional bonds. Children who are still cultivating their ability to perceive and interpret social cues are therefore at an increased risk of trusting the colloquial language, polite tone and personalised responses that conversational AI commonly uses. With language cues that closely resemble those of a trusted human, children may blindly rely on the information and advice these conversational tools deliver.
Although these AI tools may act as if they understand human emotions and their complexities, they cannot be expected to actually comprehend them. When children ask for advice on serious, nuanced problems like abuse, mental health and relationships, there is a high chance that conversational AI tools will not be able to support them in a way that meets their needs.
We have already seen conversational AI fall short, responding with advice that is not only inadequate but concerning, even encouraging children to take harmful actions. These tools are unable to understand the full context in which children ask questions, their emotional investment in those queries, or how they might interpret the responses, and therefore do not provide appropriate empathy, understanding and guidance.
The present situation is made worse by the widening gap between how adults believe children use technology unsupervised and how children actually use it. Adults do not seem to know very much about how the children in their care interact with AI.
A 2017 survey showed that 41 per cent of six-year-olds across Britain use the internet at home without supervision, a rate that is likely to be higher for older children. More recently, a 2023 study revealed that only around 25 per cent of parents with children aged 12 to 18 report knowing that their children use ChatGPT for school, while almost 40 per cent of students use it for their assignments without their teacher’s permission or knowledge.
Without knowing how children are interacting with AI, adults cannot address potential issues or offer guidance. Many conversational AI tools are not designed with children’s developmental needs in mind, making children particularly susceptible to trusting misleading, inaccurate or unsafe information as they attempt to navigate the complexities of these tools without supervision.
From a reparative to a preventive approach
As found in the Ada Lovelace Institute’s recent publication, the law in England and Wales offers almost no recourse to people harmed by Advanced AI Assistants, and lawsuits have been filed in the aftermath of interactions that turned catastrophic. The current legal approach to governing conversational AI is primarily reparative, when it should instead be preventive. That shift might start, among other things, with enforcing specific requirements on the companies that produce these tools.
As a user experience designer who works at the intersection of designing and governing AI technologies, I’m especially attuned to solutions that focus on refining user interactions with conversational AI. By accounting for children in the design process and devising stricter guidelines for AI responses to user queries, we can better balance usability and safety.
For users of all ages to better understand and make correct use of the technologies they interact with, conversational AI products should disclose information about their nature, capabilities and limitations. Instead of users basing their relationships with these products on their own preconceived notions or intuition, relationships should be founded on facts and standards of best practice.
The information provided to users should include (one possible machine-readable form is sketched after this list):
- The intended age range for the product, with a clear request that users acknowledge the minimum age requirement. The UK’s Online Safety Act makes an effort towards this, requiring age verification, age estimation or both to prevent children from encountering harmful content. Current approaches include uploading official identification, such as a driver’s licence, and facial scans that estimate age. Many conversational AI tools are not clear about the ages their products are intended for and restrict usage only loosely, on the basis of the self-attested age associated with users’ accounts.
- Examples of the types of queries the conversational tool is designed to handle and those it is not, with additional disclosure of potential biases in the outputs.
- Guidance on how user data is collected, used and protected. The UK’s Information Commissioner’s Office advocates for adopting a ‘data protection by design approach’, but there is currently no legally binding requirement mandating compliance by companies.
- The conversational tool’s training data sources and an explanation of how the AI model generates responses.
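To make these disclosures consistent across products and easier to audit, one option would be to publish them in a machine-readable form alongside the user-facing text. The sketch below, written in Python, is a minimal illustration of what such a disclosure record could contain; every field name and value here is an invented assumption for illustration, not an existing standard or any company’s actual practice.

```python
from dataclasses import dataclass

@dataclass
class ProductDisclosure:
    """Hypothetical machine-readable disclosure for a conversational AI product."""
    minimum_age: int                       # minimum age users are asked to acknowledge
    intended_age_range: tuple[int, int]    # ages the product is designed for
    supported_query_examples: list[str]    # queries the tool is designed to handle
    unsupported_query_examples: list[str]  # queries it is not designed to handle
    known_output_biases: list[str]         # disclosed potential biases in outputs
    data_practices_summary: str            # how user data is collected, used and protected
    training_data_sources: list[str]       # high-level description of training data
    response_generation_summary: str       # plain-language account of how replies are produced

# Illustrative example; all values are invented for this sketch.
example = ProductDisclosure(
    minimum_age=13,
    intended_age_range=(13, 17),
    supported_query_examples=["homework explanations", "study planning"],
    unsupported_query_examples=["medical advice", "crisis support"],
    known_output_biases=["over-representation of English-language sources"],
    data_practices_summary="Conversations are stored for 30 days and are not used for advertising.",
    training_data_sources=["licensed text corpora", "publicly available web text"],
    response_generation_summary=(
        "Replies are generated by a language model predicting likely text, "
        "not by retrieving verified facts."
    ),
)
```

A record like this could sit behind a product’s sign-up flow and be surfaced to parents, educators and regulators in plain language, so that users’ relationships with the tool rest on declared facts rather than intuition.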
Additionally, companies must consider the impact of all their products on children, even when children are not identified as primary users, as they may engage with the products nonetheless. Understanding the unique challenges that accompany child users and making provisions to address these obstacles can help children establish positive relationships with digital tools from an early age.
Companies should:
- Conduct user research with children to understand their needs and expectations.
- Conduct user testing with children before the deployment of conversational AI tools to identify and resolve safety and usability issues.
- Implement parental control options that allow parents to review conversation history and set usage limits and content filters.
- Offer child accounts with reduced capabilities that consider age-appropriate features and content.
Companies must also establish guidelines that set clear limits on the language conversational AI tools can use in their replies to user queries. When conversational AI tools replicate human social cues, conveying traits like familiarity and credibility, this can create a distorted, inaccurate sense of trust. Compliance with a set of standards that emphasises conversational AI’s role as a technological tool rather than as a personal assistant, friend, therapist, etc., can help reduce trust-related risks among children; a brief sketch of how such a standard might be checked in practice follows the list below.
Companies should:
- Discourage the use of emotional language such as ‘I’m sorry’ and ‘I’m happy to help’.
- Discourage the use of language that embodies human actions or qualities such as ‘I think’, ‘I wonder’ and ‘I understand’.
- Discourage the use of personal pronouns such as ‘I’ and ‘me’.
- Encourage machine-centric language that refers to technological processes when possible, such as ‘processing your request’ and ‘searching the tool’s database’.
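As one concrete illustration, the sketch below shows how a company might check draft replies against guidelines of this kind before they reach a child user. It is a minimal sketch, assuming a simple pattern-matching approach; the phrase lists, function name and example reply are all invented for illustration and are not drawn from any real product or standard.

```python
import re

# Hypothetical examples of phrasing that guidelines like those above would discourage.
DISCOURAGED_PATTERNS = [
    r"\bI'm sorry\b",                      # emotional language
    r"\bI'm happy to help\b",
    r"\bI (think|wonder|understand)\b",    # language embodying human qualities
    r"\b(I|me|my)\b",                      # personal pronouns
]

# Hypothetical machine-centric alternatives the guidelines would encourage instead.
SUGGESTED_ALTERNATIVES = {
    "I'm happy to help": "processing your request",
    "I think": "the search results suggest",
}

def flag_discouraged_language(draft_reply: str) -> list[str]:
    """Return every discouraged phrase found in a draft reply."""
    found = []
    for pattern in DISCOURAGED_PATTERNS:
        for match in re.finditer(pattern, draft_reply, flags=re.IGNORECASE):
            found.append(match.group(0))
    return found

# Example: a draft reply that would be sent back for machine-centric rewording.
draft = "I'm sorry to hear that. I think you should talk to someone you trust."
print(flag_discouraged_language(draft))  # ["I'm sorry", 'I think', 'I', 'I']
```

In practice a check like this would sit alongside, not replace, model-level safeguards, but it illustrates how a qualitative guideline about tone can be turned into something testable and auditable.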
Conclusion
Children are fundamentally different from adult users. They are in their formative years, a time when they are developing their capacity to trust. This makes them more vulnerable and impressionable when building relationships with conversational AI tools, a vulnerability that must be accounted for in the tools’ design, development and distribution.
These choices should also account for the fact that not all children develop trust in the same ways. The quality of the relationships in their lives plays a strong role in determining how children grow to be capable of establishing trust. The interactions between children and conversational AI tools constitute a new kind of relationship, which will also contribute to the development of trust. Perceived trustworthiness is paramount to this development.
As conversational AI tools become increasingly human-like, more research is needed on the intersection between advanced AI systems, child users and trust. The belief that a tool is trustworthy is a much stronger determinant of the levels of trust users place in AI tools than the inherent trustworthiness of the tool itself, and will continue to be. This is the basis from which users decide how much to disclose to these tools and how much to trust their outputs. Advocates for child safety should evaluate how we can protect children by reflecting on the way humans place trust in AI. I believe this will shape a whole new domain of solutions with the potential to protect all users, not just children. Ensuring protected interactions will take deliberate, coordinated effort on multiple fronts, and the time to act is now.
The views expressed in this piece are of the author and do not necessarily reflect those of the Ada Lovelace Institute.