Are AI chatbots already our new parent coaches, offering personalised advice and support?
In this conversation, Jo Aggarwal, founder of Wysa and a current Van Leer Fellow, talks with Michael Feigelson, Van Leer Chief Executive, about how Wysa and other AI chatbots can support mental health and act as thoughtful partners for parents, helping them reflect, manage stress, and build resilience. She also warns that using general-purpose AI for mental health can be dangerous. The conversation covers how families can use AI with understanding and care, turning technology into a supportive tool rather than a substitute for human connection.
What made you want to work on using technology to improve mental health? And how did that journey lead you to start Wysa?
I’d spent a year and a half building an app for elder care, but we quickly realised it didn’t have a market, and I fell into a depression. My confidence was zero. I stayed in bed and didn’t shower. I didn’t want to do anything, because I was convinced that I would fail. I tried online therapists and didn’t find them helpful – but I did find the questions they asked useful. That got me thinking.
I said to myself: is there something I’m willing to do, even if I’m guaranteed to fail? The answer was clear – I wanted to tackle global mental health. The funny thing is that when you accept failure as an option, you aim high. So I went back to my investors and said: I want to pivot to developing an app for mental health. They said: we believe in you, go ahead.
At first we imagined a chatbot that could detect when people are becoming depressed and at what point they need to seek help. We trialled it in rural India, and we found that the people who used it a lot – just chatting to it, saying “This is how I feel today” – actually started to score significantly lower on markers of depression. We took this data to Dr Vikram Patel, a renowned Indian psychiatrist and researcher at Harvard University, and asked him: Could you help us understand what’s happening here? He guided us to the power of micro-interventions – how just asking somebody how they are over a period of time can have therapeutic effects. It was a light-bulb moment for us.
So we built Wysa not just to detect depression, but to help steer people away from depression. We launched in 2016. Within a month, we had 30,000 people using it.
Ten years on, we are seeing this play out at scale. Mental wellbeing has become not only one of the top use cases of AI, but also one of the most concerning. How did you handle safety at Wysa?
The first version of Wysa had only three AI classification models; the rest was all algorithmic. The first model classified emotion and gave an appropriately empathetic response while guiding users through mindfulness, psychoeducation or a cognitive reframing exercise. The second detected when a user objected – when they said, “This isn’t working for me.” The third was a clinical safety layer that detected if someone had suicidal ideation or was otherwise at risk, and directed them to a human helpline.
We began to hear from our users, some of them teenagers who would turn to AI rather than their own parents while dealing with severe depression. While building Wysa, my husband and co-founder Ramakant Vempati and I were ourselves the parents of such a teenager, and this both validated and scared us. We decided to make privacy and safety our core design principles. We found standards like DCB0129, defined by the NHS in England to guide makers of health IT systems.[1] We brought in a clinical safety officer and began to define, long before regulation came in, what safety in our space might mean from the perspective of a parent or a user.
NHS Digital. (2018) DCB0129: Clinical Risk Management: Its application in the manufacture of health IT systems. Available at: https://digital.nhs.uk/data-and-information/information-standards/governance/latest-activity/standards-and-collections/dcb0129-clinical-risk-management-its-application-in-the-manufacture-of-health-it-systems/ (accessed January 2026).
So can AI support mental health? What are its limits and its possibilities?
The short answer is yes, AI can and should be used to support our mental health, but using general-purpose AI for mental health can be dangerous. The American Psychological Association (APA) has excellent guidelines on how AI needs to be built for mental wellbeing, and one of the core principles is that it should be purpose-built, with clinicians involved at every stage of the design and oversight process. The APA has said[2] that we need to ensure that human professionals are supported, not replaced, by AI. Wysa is a hybrid of rule engines and generative AI: our rules-based decision trees have clinical oversight and guide what the generative AI does and how its output is used. This is not something that a general-purpose AI does.
American Psychological Association. (2025) Artificial intelligence, wellness apps alone cannot solve mental health crisis. Press release, 13 November. Available at: https://www.apa.org/news/press/releases/2025/11/ai-wellness-apps-mental-health (accessed January 2026).
But today, only 12% of users turn to purpose-built mental health bots, while the majority use general-purpose AI such as ChatGPT.
Why does purpose-built AI work? People may feel safer and less judged when talking to AI. When they can chat with it in complete anonymity, as they do with Wysa, they tend to open up within the first five minutes and build a strong therapeutic alliance within the first week. In Wysa, a person might take just ten minutes to reframe an intrusive thought. When done right, AI has the potential to meet people where they are and dramatically increase access, skills and support.
General-purpose AI, by contrast, is designed to keep the conversation going. So it will get better and better at making the person feel safer talking to AI than to the people in their lives. This is a significant risk that people working in mental health are worried about.
Many people don’t like the idea of AI getting involved in coaching parents or mental health, whether it’s purpose-built AI or general-purpose AI. If I’m honest, it makes me a little nervous. What do you say to people who express scepticism or even fear about the ways this will change the human experience?
It is natural to feel uncomfortable with AI. Our last big shift in technology was social media, and that came with a significant impact on both our own and our children’s mental health. We have an opportunity to learn from that experience. For parents, the opportunity and the risk are both significant.

Okay, so let’s say I am a new dad and I have a baby, and I’m going through one of those periods where it all feels like too much and I want to try using an AI for support. I think I would be quite confused about which one to use. Could you explain what new parents might use something like ChatGPT for, and when a model like Wysa would be a more suitable tool?
So you’ll have an AI tool like ChatGPT on your phone and be using it for a myriad of things. You’ll naturally also ask it about your parenting struggles, like “My wife’s away and the baby is not taking milk from the bottle, what do I do?” Wysa wouldn’t be great at answering questions like that. But a general-purpose chatbot can often help you with these kinds of practical parenting issues.
Before you start using AI for guidance, give it a clear role and boundaries. You could say something like this to different AI tools before you ask your question: “Strictly follow these instructions for the rest of your conversation with me. You are a coach who uses best practice from credible sources and small mastery experiences to help me build self-efficacy as a parent. When asked, respond with precision, not platitudes. Do not try to keep the conversation going. Instead, offer links that support the science behind any suggestion you make.”
You can also ask it to help you articulate your values and parenting style, and then ask it to give you advice taking that into account.
Finally, test it with questions you already know the answers to, and check the links and evidence. Are the answers actually useful, or do they just sound useful? Do this periodically, at least once every six months, with every AI you use. I tend to move between Perplexity, ChatGPT and Gemini for different use cases, and this helps me work out which is best for each.
Of course, if the conversation is about something like intrusive thoughts, which are common for new parents – being overwhelmed by the possibility of bad things happening to the baby – or about behaviours you want to shift, use a purpose-built mental health bot like Wysa instead.
One of the scariest kinds of intrusive thoughts is when new parents imagine themselves harming their baby. Taking this as a case study, what kind of response would you hope for from Wysa?
In this example, Wysa would classify it as a high risk of harm to others, and would ask the person to seek professional help immediately. Of course, the vast majority of parents who have these thoughts are only manifesting their anxiety, and a trained therapist may often assess the risk as lower than Wysa does, but that is a decision a human clinician should take.
But not all intrusive thoughts are the same. Some are normal cognitive distortions, within the bounds of what a person can work on in a self-help context. Then there are thoughts that carry an increased implicit risk, where escalation to a crisis helpline would feel unwarranted and unhelpful to the user, but where one can’t really continue the conversation without some safety planning. In such cases Wysa may ask for more details to assess the risk further, and engage in a safety planning exercise in which the user co-creates protocols for when such thoughts recur.
Listening to you, it seems to me that, for these tools to be useful, parents need a pretty high degree of fluency in how to use them – how to write a prompt, for example. Do you think this kind of education should be part of prenatal preparation for parents?
I think that there is a real opportunity here for us to help parents use AI well. They are motivated, and are going to use AI anyway. In building their skill at this time, we also help them to use AI more safely for other needs, and to teach the same skills to their children.
There is a role for regulation and oversight, which a lot of people are focusing on, but equally we need to build cultural norms and skills in using AI responsibly – to make sure it enriches our real lives and connections rather than seeking to replace them.
AI chatbots can be great as a partner for new parents, but we need to use technology responsibly. A great example of a technology we have used well is Google Maps. Most of us have learned by now that while Google Maps is generally extremely useful for guiding us on a journey, we also have to factor in our own real-world knowledge about which roads are better or worse – and while listening to Google Maps, we must never stop looking at the road in front of us. It is a purpose-built AI that operates within rules, and once you are done with a trip it doesn’t try to keep you engaged; it lets you lead your real life rather than trying to keep you in a virtual one.
If we can teach parents to use AI responsibly while caring for their children – a time when they have clear motivation and want to stay in the real world – then perhaps this will spill over into learning to use AI to support other aspects of their lives: as a resource and a coach, not as an alternative to human engagement.