Most Social Media Users Struggle to Recognize AI, Says Report

A media literacy report reveals that merely 39% of adult Australians feel confident in their ability to identify disinformation online.

A recent report from Australia highlights that the increasing prevalence of artificial intelligence (AI) is outpacing adult media literacy. This gap poses growing risks for internet users, who are becoming more susceptible to misinformation, according to the authors of the research.

The AI sector surged following the launch of ChatGPT, a chatbot and virtual assistant introduced in late 2022 by OpenAI, an American AI research organization. Since then, the industry has drawn billions of dollars in investment, with major technology companies such as Google and Microsoft offering tools like image and text generators.

Despite this technological advancement, users exhibit low confidence in their digital media skills, as detailed in the report titled ‘Adult Media Literacy in 2024’ by Western Sydney University.

In a study involving 4,442 adult Australians, participants were surveyed about their confidence in completing a set of 11 media-related tasks that required both critical and technical skills or knowledge. On average, respondents felt they could confidently perform just four of the 11 tasks.

The findings show that these results have remained “largely unchanged” since a similar study was conducted in 2021. Notably, the ability to recognize online misinformation has not improved; research from both 2021 and 2024 indicated that only 39% of respondents believed they could verify the truthfulness of the information they encountered online.

The recent advent of generative AI on online platforms makes it “even more difficult for citizens to know who or what to trust online,” according to the report.

The slow progression of media literacy is particularly alarming given the capacity of generative AI tools to create high-quality deepfakes and misinformation. Associate Professor Tanya Notley, one of the report’s authors, emphasized this point, noting that it is becoming increasingly difficult to tell when AI has been used. “It’s going to be used in more sophisticated ways to manipulate people with disinformation, and we can already see that happening,” she cautioned.

Addressing this issue requires regulatory measures, though progress in this area has been sluggish, Notley noted.

In related news, last week the US Senate passed a bill aimed at safeguarding individuals from the non-consensual use of their likeness in AI-generated pornographic content. This legislation followed a scandal involving deepfake pornographic images of pop singer Taylor Swift that circulated on social media earlier this year.

The report also observed that Australians now prefer online sources for news and information over television and print newspapers, marking a significant shift in how they consume media.

Mathilde Moreau contributed to this report for TROIB News