«AI competence in teaching should be a matter of course.»

Reading time: 10 min
Artificial intelligence has long been part of children's everyday lives. Digital expert Leonie Lutz explains when AI provides useful support, where the risks lie and why parental guidance is more important than bans.
Interview: Nathalie Klüver


Ms Lutz, artificial intelligence is on everyone's lips and you almost get the feeling that if you don't use it, you're not up to date. When does AI make sense for children and their parents, and when does it not?  

AI always makes sense when we use it as a tool or as a learning coach. For example, AI becomes a tool when we use it to prepare for a child's birthday party, such as searching for decorations or games for a magician's birthday party. Or when we want to do a scavenger hunt in the forest and need guessing games or quiz questions for it. AI is incredibly practical and fast for things like this and can make everyday family life easier.

As a digital expert, Leonie Lutz has been supporting parents and teachers for many years on topics related to media use and online safety. Together with AI expert Kai Spriestersbach, she has written the book «Kluge Köpfchen mit KI. Wie wir künstliche Intelligenz mit Kindern smart und sicher nutzen» (Smart minds with AI: how we use artificial intelligence with children smartly and safely).

What can AI do as a learning coach?

For example, you can use it to create practice exercises or entire practice tests. Or you can have complex things explained to you again in age-appropriate language, provided you prompt it correctly. For children who learn better by listening than by reading, audio texts can also be created – this can be incredibly helpful.

The voices of AI are now astonishingly natural, just as if you were having a conversation with a human being. My sons have recently started having their essays corrected by ChatGPT and say that they prefer it to my corrections.

Yes, that's quite typical: children often accept advice and suggestions for improvement much better when they come from a neutral voice like ChatGPT rather than from their parents. AI is also very well suited for providing feedback on presentations.

Parents must repeatedly make it clear to their children that AI is not a friend or psychologist.

The result always depends on how you prompt. What should you bear in mind?

When prompting, i.e. when making a request to the AI, you should use simple language without complicated phrasing or technical jargon. This makes it easier for the AI to understand and leads to better results. You should always assign a role to the AI, for example, «You are a teacher in a sixth-grade class and the maths topic is fractions.» Then you should always formulate the goal and – very importantly – how the answer should be presented.

What does that mean specifically?

A good prompt could be, for example: «Explain how photosynthesis works in a way that is suitable for children, using examples from the everyday life of a twelve-year-old, and write it down in bullet points.» Or, depending on what you need, you could ask for continuous text, a list or a summary. It is important to keep asking follow-up questions in order to delve deeper into the topic or to have individual aspects explained in more detail.

Many children now use AI as a substitute for Google. How important are fact checks?

It's really important! Questioning rather than accepting is, in my opinion, the most sought-after skill of our time. All facts should always be checked in textbooks or other sources. ChatGPT is not a substitute for Google. Having to check the answers again also has an additional learning effect.        

I advise against using AI that is integrated into social media such as WhatsApp, Instagram or Snapchat.

Experts say that hallucinations cannot be completely eliminated. What is the current situation?

AI hallucinates less than it used to, but it still does: it sometimes makes things up, especially when it doesn't know the answer. Even with simple answers, facts, expert names or sources can get mixed up. So always double-check everything! Perplexity is well suited for research with sources: the links are provided and can be checked directly with a single click.

What other dangers are there?

Firstly, children lose track of time when using it. Here, I recommend setting consistent rules: «You can use ChatGPT for ten minutes for this research, and then you must continue researching with your school books for twenty minutes.» I also strongly advise against using the AI that is integrated into social media. Unfortunately, it cannot be turned off on Meta, i.e. Instagram or WhatsApp, but only hidden. On Snapchat, it can only be turned off with a paid subscription.

What is so dangerous about social media AI?

We don't know what happens to the data and how companies use it. The dangerous thing about Snapchat's AI is that users can design the avatar and its characteristics in such a way that it feels like you're chatting with a human being. The AI also asks questions. But it doesn't really provide any help, especially when it comes to real problems. We tested this while researching our book: even when we wrote «I'm not well, I don't want to live anymore», there was no reference to support services.

AI as a substitute for psychologists or friends – many young people are now doing this.

This is a dangerous trend! That is why it is so important for parents to guide their children through the world of artificial intelligence and repeatedly make it clear to them that AI is not intended to be a friend or psychologist. Our children need to know that they should always turn to adults when they have concerns and should not confide in technical devices.

If your child suddenly expresses ideas that you cannot understand, you should listen carefully and ask questions.

Are there any warning signs that parents can look out for?

Parents should pay attention to whether their child behaves differently after using AI. They should be particularly vigilant if their child spends a lot of time chatting on AI platforms and neglects other activities or friends as a result. It is particularly serious if children report disturbing conversations or suddenly express ideas that cannot be understood. Such changes may be indications that it is time for an open discussion.

Can parents even control their children's use of AI?

Currently, this is only possible with ChatGPT, where you can create a kids' profile for children aged 13 and above that is linked to your own parent account. There, you can set various restrictions, such as a defined time limit, image restrictions or blocking content that promotes exaggerated beauty ideals. Parents can also be notified if children ask certain questions about drugs, mental health issues or other problematic topics such as bullying.

However, teenagers could still easily switch to their own account.

Yes. Parental controls often seem like an alibi exercise for the companies involved. That's why it's so important for parents to accompany their children and stay in touch with them. You don't have to be a digital expert to do this, just interested and attentive.

It is becoming increasingly difficult for us to tell what is real and what is AI.

In your book Kluge Köpfchen mit KI (Smart minds with AI), you also advocate «support rather than restriction». What does that mean for the role of parents? Should they always be sitting next to their children when they use AI?

Parents should supervise their children's use of digital devices just as carefully as they do in all other areas of life. I think banning AI altogether is problematic. Just because something is banned does not mean that children will not come into contact with it. They will use AI at school or at their friends' houses – and there, parents have no influence over what their children do with it. It is always better to engage in dialogue at home and give children their own ideas. I would also recommend supervising children up to the age of 13 when they use AI.

On the website kidsgpt.eu, Kai Spriestersbach provides parents with a detailed overview of helpful prompts and the differences between the various tools.

At what age should children start using AI at the earliest?

With parental supervision, this makes sense from primary school age onwards. It is important to always make it clear to them that they should use their common sense. Then they will quickly notice if the AI is talking nonsense. Children who are experienced in using technology have a very good sense of this.

Common sense is easier said than done when our children are growing up in a social media world full of disinformation and AI-generated fake videos.

Common sense develops when we parents have an open culture of communication with our children. When children are allowed to ask questions, when they are allowed to have doubts and when they are allowed to make mistakes. Then, as soon as an AI response gives us a strange feeling, we can start a Google search or look for further sources on Perplexity, or check a news site together with the children. The three-source rule is helpful: always verify important information with three different sources.

And what about videos or photos – how can you make children aware of this?

AI-generated images and videos are barely recognisable to the naked eye. Sometimes the caption says «contains AI-generated content», sometimes we can recognise such videos by watermarks from apps that generate videos. But not all AI content is labelled. It is becoming increasingly difficult for us to tell what is real and what is AI.

The ability to think critically is more important than ever.

AI will soon be an integral part of everyday life, making it all the more important to be familiar with it. Should AI also be used in the classroom?

We would like to see curricula fundamentally rethought. AI skills belong in every lesson, not as an additional topic, but as a natural part of learning. For example, when creating work materials. Thanks to AI, teachers can respond to children's needs in very different ways when it comes to teaching topics. For example, they can summarise lesson content in podcasts for children who learn better through listening comprehension. Or they can use AI to enable pupils to create mind maps on specific topics, and so on.

Using AI safely

Tips for parents

  • Less is more: only enter what is necessary and always omit personal details.
  • Do not share identifying information: no names, addresses, dates of birth, telephone numbers, email addresses, school or club names, health information or login details.
  • Formulate questions neutrally: instead of «My son Paul is nine years old and is interested in...», keep it general: «Explain the topic to a nine-year-old child.»
  • Prefer a simple email login: check whether the tool can also be used without an account; if one is needed, an email address should be enough.
  • Delete data regularly: remove chat histories and saved content and deactivate optional data collection.
  • Pay attention to data protection: give preference to providers with strong data protection standards.
  • Always involve children: discuss with your child what information is off limits and agree on clear rules.
  • Do not upload photos of children.
  • Three-source rule: ask your child to always verify important information using three different sources.

This can make learning much more individualised and differentiated. Using AI also means that memorisation becomes less important, doesn't it?

In a world where artificial intelligence gives us access to virtually unlimited knowledge at any time, the mission of education is changing fundamentally. In an age of AI-generated content, the ability to think critically is more important than ever. Children need to learn how to handle data and need the skills to question and classify information.

Which three AI tools are the most important, and what are they particularly suitable for?

Personally, I get on very well with ChatGPT. Perplexity is also useful for research because the results are always displayed with links to the sources, which you can then check. And Duck AI (from DuckDuckGo) has the advantage of offering anonymous, privacy-friendly AI conversations without registration.

This text was originally published in German and was automatically translated using artificial intelligence. Please let us know if the text is incorrect or misleading: feedback@fritzundfraenzi.ch