Will our kids’ education suffer with robots running rampant? We turned to a media literacy expert to help navigate this rapidly evolving new normal.

Like it or not, text-generating artificial intelligence (AI) robots have arrived. These tools are rapidly taking over written tasks that would otherwise take a human brain many hours, such as summarizing entire books in seconds. They can even do a human-like job with essays, blogs and news articles (but rest assured, this article was 100 percent written by a tired mom propped up by caffeine). More advanced text-generating AI, such as ChatGPT (short for Chat Generative Pre-trained Transformer), can also write code, lists, recipes, emails and even poems.

As with any innovation, there is the capacity for good (resume writing, for example) as well as not-so-good, namely plagiarism. There’s also the pressure this tech places on parents and educators to ensure all kids receive a quality education rooted in ethics.

We spoke with Matthew Johnson, Director of Education for MediaSmarts, about the impact this technology may have on kids, homework, and what parents can do to support their children as they grow up in a robot-driven world.

What are some educational ramifications of kids using tools such as ChatGPT (reports, summaries, essays, poems, the list goes on)?

Matthew Johnson: The most obvious ramification is that it is now possible to have a chat AI create a fairly coherent piece of writing. That being said, it’s going to be a while before the quality is good enough to make online plagiarism more of an issue than it already is.

In many cases, the telltale signs, such as a sudden jump in a student’s writing fluency, will still be there. In general, the strategies teachers will need to adapt to tools like ChatGPT are the ones that are already best practices for dealing with online plagiarism:

  • don’t use the same essay topics every year
  • develop unique essay topics that emerge from classroom discussion
  • require students to hand in all of the process steps
  • follow up with interviews to gauge how well the student understands what they wrote

What additional pressure does AI place on educators, and how can parents support teachers?

MJ: [AI] definitely places pressure on teachers to adapt and update their teaching practices. At least for a while, it probably will increase the number of plagiarism cases they have to deal with.

The best way parents can support teachers is to talk to their kids about the ethical issues around AI and to work together with them to find a solution when there is a plagiarism case.

How can parents talk to their kids about AI? What should they say?

MJ: First, parents should help their kids understand the role that algorithms, and their use of our personal information, play in our lives. Our resource #ForYou: A Game About Algorithms is a great way to start that conversation.

Second, they should help their kids learn how to use online sources responsibly. Start by prioritizing learning over grades: research has shown that when more attention is paid to the value of the content and the learning process than to grades, rates of plagiarism drop. MediaSmarts has a number of resources that parents can use to discuss this topic with their kids as well, including our guides Your Connected Life and On The Loose.

How could language-based AI be used for good in a learning environment?

MJ: There’s definitely potential for these tools to eventually become the equivalent of calculators in math class—automating steps that are largely mechanical, such as writing summaries.

A growing number of writers and researchers are already using these tools to write executive summaries and identify key points from research, and it makes sense for students in the higher grades to learn to do the same.

Are there greater ethical questions surrounding the use of AI, considering today’s kids will become tomorrow’s adults?

MJ: MediaSmarts’ research has found that kids are more likely to see the online world as an ethical space than kids did in the past. However, the online world still has a number of features that make it easier for us to fall prey to moral disengagement (i.e., when individuals convince themselves that ethical standards don’t apply to them; think sharing demeaning photos, denying the harm done by sharing, shifting the moral responsibility for sharing, or blaming the victim).

It’s important to help kids avoid the “empathy traps” of communicating online (i.e., when empathy is created through manipulation) and to recognize the ways we can convince ourselves that it’s okay to do something we know is wrong, or to not do something we know we should.

More broadly, we’re still in the early stages of exploring the ethics of content-generating machine learning tools like ChatGPT, and there is growing evidence that the data sets they were trained on were, in some cases, unethically obtained. Whether and how we use these tools, and which ones we choose to use, is going to be an ongoing ethical discussion.

What’s the biggest takeaway for parents regarding AI?

MJ: What’s probably most important to understand is that, in general, kids don’t understand AI any better than adults do. While most older kids are conscious of things like the recommendation algorithms used by YouTube and TikTok—in part because they, and the influencers they follow, often talk about strategies for “gaming” them—they don’t generally know how algorithms draw on personal information to make decisions.

Nor do kids realize that AI often draws inferences that can be inaccurate or even unjust, such as discriminating based on race or gender. They also don’t realize how data collected about them now may influence things like jobs, insurance rates or mortgages for years to come.

It’s important to keep perspective on new tools: don’t be swayed by the hype or fall into a moral panic. By taking a digital media literacy approach, we can find ways to integrate technological tools into our lives, including our learning, that are both ethical and effective.