
“I built my own AI model specifically to help me parent my son”

Every new technology brings controversy and concern about how it might affect children’s development.

Daanish Masood Alavi
Technologist and AI Advisor
Interviewed by: Michael Feigelson


Featured in Journal 2026

Daanish's son in his black hole costume, co-designed with AI. Photo: Daanish Masood

Every new technology brings controversy and concern about how it might affect children’s development, and generative AI is no exception. The technology is so new that research into AI and parenting remains limited, so we interviewed someone with personal expertise in both areas.

What follows is a conversation between Michael Feigelson, a dad struggling to get his head around AI (and also Van Leer Foundation Chief Executive) and Daanish Masood, a dad (and AI expert) co-writing an epic bedtime story with his 4-year-old son and an AI trained on the Tao Te Ching and Rumi’s poems.

I’m a beginner when it comes to AI, and not yet sure how I feel about involving it in my parenting. I want to start by asking how this technology is shaping your personal experience as a dad.

For me, using AI helps to clear my cognitive clutter and free up more time to spend with my son.

Let’s say I want to enforce boundaries at bedtime, but in a gentle way. I’ll describe the problem to an AI model, ask for ideas, and add something like: “Please be thorough and rigorous and draw on extant and current research on early childhood development, and show me your sources.”

I’ll then read the AI’s output, and I’ll go look at the sources that it’s citing, and make sure those sources actually exist – sometimes an AI can outright hallucinate them – and that they’re consistent with what it said in its output. What I’m doing here is just shortening the amount of time that it would have taken for me to do the research on Google, going through the journal articles myself.

The same goes for any number of other questions that come up as a parent. Here’s what I’m feeding my child: are they getting sufficient nutrients? Here are my local schooling options: which one might be best suited to my child’s character?

Or simpler questions around logistics: I’ve got three hours this afternoon and I’d like to go for a swim with my toddler and play in a park. What are my options?

That makes a lot of sense. But what about from an educational perspective – do you also use AI in your parenting in that way?

Sometimes I’ll use AI together with my son to do a deep dive into a subject he’s interested in. At the moment that’s sea creatures, specifically cetaceans. I’ll set up ChatGPT in voice mode, prompted so that it knows his name and some of the things he likes. I’ll sit there as a facilitator, supervising as he has a conversation about various cetaceans – what do they like to feed on, how do they interact with each other?

In these conversations he’ll refer to ChatGPT as “robot”, and he knows that the robot sometimes makes mistakes. He’ll say things like: “Robot? That’s not quite right.” And that’s exactly what I want – he understands that the robot is a way for us to query the body of human knowledge, but the robot’s not perfect, the robot gets things wrong.

What lands for me, as both a dad and a researcher, is the scale gap. My son has picked up a surprising amount about whales from a few dozen short chats and picture books. The model that answers him was trained on hundreds of billions of pieces of text. When he asks, “Do orcas have friends?” he isn’t retrieving a fact. He is testing a small theory about how social life works, and then updating it in real time. The machine finds patterns. He builds understanding.

Using AI helps to clear my cognitive clutter and free up more time to spend with my son.

You told me you’re writing a bedtime story with an AI. That sounded really cool, could you explain how that works?

So, this isn’t with one of the widely available AIs. For this, I built my own AI model specifically to help me parent my son. I use it to generate the story I read to him at bedtime. It’s a narrative epic that involves characters from different parts of the world who are on a quest in space. I generate new chapters every day, and I’ve been reading him this story for over a year. I intend to keep it going until he’s around 8 or 10.

I’ve done this for two reasons. The first is that I can choose what kind of worldview I want to encode in the story. I’m trying to impart to my son a cosmological vision inspired by spiritual wisdom, where all things are seen as connected. So I get my AI model to generate stories that give primacy to texts such as the Tao Te Ching or Rumi’s poems.

What is life about? Why are we here? What is a good way to be? Those are questions that I have wrestled with my whole life, and I want my son to have the tools to seek the answers for himself – acknowledging one’s fears and vulnerabilities, standing up for one’s values, and so on.

I try to keep the story just ahead of him. Last month the crew met people who speak only in colors. He wrestled with it for three nights. Then he said, “Maybe they mix feelings like paint.” That was his bridge, not mine. You see this a lot with young children: from very little data, they make clean, original moves. Large models tend to stay inside what they have already seen. Children jump the gap.

The second advantage of generating his bedtime story chapter by chapter on a daily basis is that I can make it current. I can get it to respond to whatever challenges he is dealing with at that moment. For example, if I notice that he’s wanting more and more toys, I can prompt for a story that gently illustrates a critique of materialism: the idea that happiness isn’t connected to possessing more objects.

He is not a passive audience. If a scene rings false, he stops me and rewrites it. Last week he decided the space pirates should become gardeners “because taking things makes them sad inside.” That is a child building a causal story about feelings and consequences from a handful of examples. It is also a reminder that values stick when children co-create them.

I can choose what kind of worldview I want to encode in the story.

This feels very exciting to me, but also kind of inaccessible – it’s possible only because you know a lot about AI. It makes me wonder whether, if I tried to do the same, it might cause problems rather than help.

I get that, and I’m not suggesting everyone train their own model, so let’s talk about the well-known AIs that everyone can access through an app on their phone, for free or with a subscription. ChatGPT is the best known. Others include Gemini, Claude, Perplexity, Grok, and Copilot.

I think these AI models can be very helpful for parents, with two big provisos. The first proviso is that you do have to develop literacy in AI. It’s like developing literacy in news. Most of us are well aware that when we encounter news online, we can’t just believe it – we have to bear in mind all the reasons why it might be wrong, and try to assess for ourselves how much trust to place in it.

As someone who builds AI models, I have an advantage, but everyone can make an effort to develop their awareness of the problems you can encounter when using popular AI models.

Those models, and those problems, change all the time. Until recently, for example, sycophancy was a big problem: ChatGPT would tend to affirm whatever you said and tell you that you’re special. That’s not such an issue in newer models, but other problems – such as hallucination – are proving much more intractable.

Another quirk is the “noisy-TV” problem. Some systems get absorbed by things that aren’t interesting to us, like static on a TV. My 4-year-old filters that out better than most adults. When the AI wanders into irrelevant whale trivia he cuts in, “Robot, that’s not what I asked.” I want him to keep that sense of relevance.

The first proviso is that you do have to develop literacy in AI.

The second big proviso is that AI models are not a substitute for human interaction. Nothing beats human connection.

One reason children can do so much with so little is that every word they hear comes wrapped in context. Eye contact, gestures, a look, a point, a touch. Studies show that when you read daily to a child, they hear roughly 1.4 million more words by age 5 on top of everyday conversation. That makes a big difference, but it’s tiny compared to AI models: GPT-3 was trained on about 300 billion tokens – word-sized pieces of text.

Yet a short, live exchange between two people carries far more signal than a page of text. Children squeeze much more meaning out of each word because it comes with human connection.

That said, the reality is that there isn’t always a human at the ready when we need to talk. I talk to my therapist every week, for example, and a child psychologist every two weeks. But as a first port of call, just to test my feelings and get feedback immediately in the moment, I find it’s extremely useful to be able to query a compressed body of human knowledge using AI.

That makes sense. It’s very similar actually to something Jo Aggarwal said in another interview, about including AI literacy in prenatal education. So can you guide me on how to write a good prompt? If I want to ask an AI model how Rumi might talk to a child about big feelings, how should I phrase the prompt?

Give it the scene and the limits. I’d say:

I’m speaking with my 4-year-old about big feelings. Use a calm, Rumi-like voice. Keep it under 100 words. Plain language. One image a child understands, like gardens or stars. End with one small thing we can do together right now. If you’re unsure, say so, and suggest one place we can check.

That prompt usually gives me something gentle and usable on the first try.

Response from the AI:

Sometimes feelings are like big clouds that fill up the whole sky inside us. Happy clouds, sad clouds, angry clouds. They all belong. Just like flowers in a garden, some are tall, some are small, but each one is important. When a big feeling comes, we can be like a tree – strong and still, with deep roots. The feeling will pass like wind through our branches.

Let’s try something now. Put your hand on your heart. Feel it beating? Let’s take three slow breaths together and feel our hearts being brave.


Are there things that worry you about using AI with your son?

Yes. I worry about cognitive outsourcing. When he spends ten minutes figuring out how to span a gap with blocks, he is building spatial sense, patience, and frustration tolerance. If a model hands him the answer, we cut that growth short. I also worry about speed. Machines answer instantly. Children need time. The slower path is where depth comes from.

So we keep simple defaults that are easy to follow: sit together, let him guess first, ask the model to cite or to say “I don’t know”, then close off-screen by doing something with our hands. One thing we’re doing with our hands now is making a Hallowe’en costume. He wants to be a black hole for Hallowe’en. We worked with AI to design a costume, which he is really excited about wearing.

Okay, so one last question, a more philosophical one. How different is training an AI from raising a child?

There’s a temptation to draw parallels. Like any parent, I want to impart values to my child. As an AI researcher I’m also interested in the question of how we impart values to an AI.

I need to be careful here, because I’m not saying that AIs are like children and we’re developing them like children. I separate intelligence from consciousness – I don’t think AIs have feelings or inner, subjective experiences.

But one of the framings that I’ve thought about more and more is the pedagogy of the machines. We train the machines to, in turn, teach us. Ultimately, the machines have to be useful to humans, so they have to serve the particular makeup of the human brain, the human nervous system, and how humans develop. Imagine if every child had their own highly individualised AI tutor – it could be a huge levelling factor.

But then the people who train AIs need to be aware of early childhood development. Could the scaffolding we offer young children be abstracted and made useful in how we train AIs to work with children? What insights from early childhood development around iterative feedback loops could be embedded into the models?

The people who train AIs need to be aware of early childhood development.

Could we find ways to embed AIs in communities to observe how people react in human moments, such as birth, death and marriage? Would that help AIs be better able to support humans in working through the cosmological questions: why are we here, what is a good way to be?

Questions like these are very new and experimental, but I think it would be well worth exploring research that brings together experts in early childhood development and those involved in training and building AI models.

Send us feedback about this article

This feedback is private and will go to the editors of Early Childhood Matters.

    Early Childhood Matters
    Privacy Overview

    This website uses cookies so that we can provide you with the best user experience possible. Cookie information is stored in your browser and performs functions such as recognising you when you return to our website and helping our team to understand which sections of the website you find most interesting and useful.