When used responsibly, generative AI (“GenAI”) can be a wonderful (and conversational) tool to boost creativity and productivity. But it’s important to be discerning, and to teach our kids how to identify AI-generated media.

From helping students with research and writing to empowering businesses across all industries to aiding families with budget plans and personalized vacation itineraries, there’s a lot to like about GenAI.

Along with providing text answers to queries and writing lines of code, GenAI can also be leveraged to create images to freely use on your website, in a school project or a boardroom presentation. It can also generate music and videos, program interactive games and apps, and much more.

The problem is, there are always those who abuse rather than appropriately use new technology—and GenAI is no exception.

Cybercriminals are relying on GenAI to write more convincing scams (without the spelling and grammatical mistakes that used to be red flags). AI tools can scour your family’s social media presence for voice clips, then make a call sound like your son or granddaughter asking you for money. Sometimes the motivation is political rather than financial, such as a “deepfake” video impersonating a politician, made to look and sound like they’re saying or doing something they are not.

While it can be challenging to spot something AI-generated, there are a few telltale signs you can look for, along with other tips to avoid falling for manipulated media.

Identifying AI-Generated Photos

When looking at photos of people, pay close attention to the person’s skin, which could look too smooth or waxy, or to hair that looks just too perfect.

AI sometimes has issues with fingers and toes, often adding or removing one, and limbs may be an unusual shape or bent in an odd manner.

A person’s eyes may also look blank or “lifeless,” or may look unnaturally shiny.

Behind or beside the people in question, you may see some inconsistencies in lighting or shadows, or there may be perfect patterns to the images, like an AI-generated fence in a backyard scene where all pickets have exactly the same shape and texture.

You can also look for objects that seem out of place or physically wrong, like a fire hydrant that isn’t quite touching the ground but hovering ever so slightly above it.

So, what to do?

If you see an image you want to double-check, run through this list:

  • Carefully examine the photo for any of the signs mentioned above.
  • Use tools like Google.com (or the Google app) to perform a “reverse image search” to see if that photo has been used anywhere else.
  • While not perfect, AI detection tools like BrandWell, AI Or Not, Illuminarty, Hugging Face or Deepfake Detector can help.
  • Apply common sense. Would Oprah, for example, endorse a risky drug or an investment opportunity?
  • If the photo claims to be from a source, such as your bank or favourite online retailer, reach out to the organization and ask if the image is in fact from them. But always contact companies using phone numbers you already have, not one provided in a text, email or direct message.
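Metadata can offer one more clue. Some image generators and editors leave identifying text (a software tag, or C2PA “content credential” data) inside the file itself. The short Python sketch below scans a file’s raw bytes for a few example signatures. To be clear, this is an illustrative, hedged check, not a real detector: the signature list is a guess at common markers, and because metadata is easily stripped when an image is re-saved or shared, finding nothing proves nothing.

```python
# Scan an image file's raw bytes for text some AI generators leave behind.
# SIGNATURES is an illustrative list, not an exhaustive or authoritative one,
# and a clean result does NOT mean the image is genuine -- metadata is easily
# stripped when a file is re-saved, screenshotted or shared.
SIGNATURES = [b"stable diffusion", b"midjourney", b"dall-e", b"c2pa", b"aigc"]

def find_ai_markers(data: bytes) -> list[str]:
    """Return any known generator signatures found in the file's raw bytes."""
    lowered = data.lower()  # case-insensitive search
    return [sig.decode() for sig in SIGNATURES if sig in lowered]

# Usage with a real file:
# with open("photo.png", "rb") as f:
#     print(find_ai_markers(f.read()))

# Demo on a fake PNG text chunk containing a generator tag:
sample = b"\x89PNG...tEXtSoftware\x00Stable Diffusion v1.5..."
print(find_ai_markers(sample))  # -> ['stable diffusion']
```

A match is a strong hint worth following up on; an empty result should simply send you back to the other checks in the list above.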

Identifying AI-Generated Videos

As with photos and audio, AI-generated videos can be challenging to detect, and the task is certainly tougher today than even a year or two ago, as AI evolves rapidly.

Look at the video carefully and you may see evidence that it’s AI-generated. Body movement, facial expressions and hand gestures can be a dead giveaway if the motion seems too jerky or too still, or if the video is only available in slow motion. Eyes may not blink.

Just like photos, skin may look too smooth or waxy, and eyes may lack “life.” Lighting and shadows may not behave the way they would in the real world (look at how the sun casts shadows on the ground or on water).

There may be other background inconsistencies, like objects that disappear, morph or change shape unexpectedly. These visual glitches, called “artifacting,” may appear when the AI attempts to fill in missing information or render busy scenes.

If the video has speech, the lip-syncing may be off, and the audio won’t perfectly match the mouth movements.

So, what to do?

A few suggestions:

  • Don’t just examine the video for anything that looks odd; study the voice, too. Does it lack emotion, or have a strange rhythm or cadence?
  • Always be wary of videos from unknown and unsolicited sources.
  • Check the source of the video. Is it from a reputable news outlet or verified social media account? Does it make sense that the people in the video would be doing what’s shown?
  • AI detection tools can analyze the video in question. Examples include AI or Not, Deepware Scanner, WeVerify Deepfake Detector and Intel FakeCatcher.
  • Remain skeptical, as these videos are looking better all the time.

Identifying AI-Generated Audio

Whether a caller sounds like a friend or family member, a celebrity or someone claiming to be from a business or government entity (like the Canada Revenue Agency), you may be able to tell something is off.

It could be a fully preprogrammed synthetic voice reading a typed-out script or generating AI responses, or a real human masking their voice with technology that makes it sound like someone else (known as “voice cloning”).

Listen to the audio carefully and you may realize it’s a deepfake, especially if the intonation lacks variation or the rhythm seems off. Sometimes the voice sounds a little robotic, or flat.

Another “tell” is inconsistency in breathing and pauses; AI voices often can’t replicate the natural spaces between sentences. Also listen for inconsistencies in the speaker’s accent or dialect.

Sometimes there are anomalies in the background noise, such as looped audio or a “whoosh” sound, which may indicate AI manipulation.

So, what to do?

  • Some cybersecurity experts say to let calls from unknown numbers go to voicemail; if it’s important, the caller will leave a message or try to reach you another way.
  • If you do accept the call, be aware of deepfakes that may sound like someone you know. If they start asking for money, your spidey senses should be tingling.
  • When in doubt, hang up and contact the person or company at a number you already have for them. If it’s your grandson, reach out with the phone number you have for him. If it’s your bank, only call the number on the back of your debit or credit card. In other words, verify the source.
  • Know that banks, retailers and the CRA will never call and ask you to urgently confirm your personal information. The same rule applies to someone claiming to call from Microsoft, Apple or your internet service provider’s tech support, telling you there’s an issue that needs addressing. On a related note, a sense of urgency is itself a sign of a scam, as they’re trying to get you to act without thinking.
  • Software like Audacity or Adobe Audition can be used to analyze an audio file’s waveform and frequency spectrum, which may reveal anomalies indicative of AI-generated or manipulated voices. Other tools exist specifically to detect synthetic speech, such as detectors from Resemble AI and ElevenLabs.

A Note About Written Content

Text was probably the first (and easiest) nut cracked by AI technology. With search engines already many generations into development, it was less complicated to build a tool that writes believable content than anything else. And it’s scary how humanlike good AI copy can sound (to a trained eye it still often reads as robotic, clunky or repetitive, but it has come a very long way from its beginnings).

But, as a media outlet, it’s our responsibility to educate our audience on two major things when it comes to what you read on the internet—especially since we are talking to information-sponge parents who just want to do what’s best for their kids. Read on for the two golden rules of discerning good, reliable information from less trustworthy material.

  1. Be your own factchecker. It doesn’t matter what you’re researching—we’re talking anything from potty training tips to how to handle difficult conversations during puberty—you cannot rely on every single website to speak the truth. We’re not saying you can’t dive down a Reddit rabbit hole if it helps you to hear from other people with similar experiences, but consider where you’re getting the information. If you’re on a forum, a blog, an “expert” site from someone who hasn’t made their credentials known, even a website for a business or organization…everything you read needs vetting. There are some things you can take with a grain of salt, of course—not everything is so serious that you need to debunk each claim—but if you’re looking for information to aid in the management of health, mental health, finances, relationships, etc., do your due diligence.
  2. The source material matters. To build on the first point, fact-checking doesn’t just mean running a Google search for the facts at hand. Where you research also counts. Major hospitals, universities and colleges, major nonpartisan media outlets, professional organizations and government websites are all examples of trusted sources. If you’ve read something featuring a citation from a study, the study should be peer-reviewed (meaning evaluated by professionals in the field) with appropriate sample sizes, and either affiliated with an academic institution or published in an established academic journal.

The bottom line is a good reminder for both parents and kids: If you can’t find a solid trail of data, from either well-credentialed experts or trusted sources of information, to back up a claim, move on. Flimsy assertions are, at best, called out as poorly researched in a school paper or test and, at worst, to blame for hurtful and even dangerous decisions people make for themselves and their families. When in doubt, check it out. And in this day and age? Then check it out again.