When ChatGPT entered the public imagination in 2022, Canadians were curious, hopeful and anxious, and they had plenty of questions. Just three years later, our new report, The State of Generative AI Use in Canada 2025, finds that two-thirds of Canadians have already experimented with generative AI (GenAI) tools.
That is an astonishing rate of adoption for a technology so novel, and it speaks to the profound impact it’s already having on our lives.
But alongside this rapid uptake is a sobering reality: most Canadians are still unsure about what these tools are, how they work or how they affect society. Our new national survey of 1,500 adults, conducted in February and March, reveals that while GenAI use is widespread, deep understanding is not.
Canadians are being ushered into a new era of AI-powered productivity, creativity and communication. But they are forging ahead without the digital literacy needed to navigate AI technologies and their impacts effectively, safely and critically.
News and politics
Only 38% of respondents indicated they felt confident using these tools effectively. Even fewer—36%—told us they were familiar with the rules and ethics around GenAI. These numbers should concern all of us.
Nowhere is this tension clearer than in how Canadians view GenAI’s impact on information, media and politics. Canadians’ comfort with GenAI use in newsrooms varies sharply by topic: people are relatively at ease with AI-generated content in entertainment and lifestyle reporting, but far less so with sensitive subjects such as politics, crime or global affairs.
Our survey also reveals that two‑thirds (67%) worry GenAI could be used to manipulate voters or interfere with democratic processes. At the same time, trust in political information online is eroding, with 59% saying they no longer trust the political news they see online due to concerns that it may be fake or manipulated.
Although GenAI tools like chatbots could help voters assess policies proposed by different parties and their potential implications, most Canadians (54%) are unlikely to use them to get information about elections or politics.
Responsible innovation
So what are Canadians asking for? More than anything, our findings show overwhelming support for regulatory guardrails. Canadians want clear rules for companies that develop, use or provide GenAI-powered tools and services.
Seventy-eight percent of Canadians say GenAI companies should be held accountable when their tools cause harm. Nearly eight in 10 also support both the regulation of current state-of-the-art GenAI tools and the proactive regulation of GenAI tools on the horizon.
This is a call for leadership and action. Canada has the chance to set a global standard for responsible AI governance, but must act quickly and decisively. We offer three core recommendations to help chart that path:
1. Policy leadership: Considering the ongoing race among GenAI companies to build the most advanced model, the principles of privacy by design should not be sacrificed simply to gather more user data. The risks associated with data breaches and accidental leaks of personal information in GenAI outputs are significant.
This means prompts and other user inputs should not be used for fine-tuning or training future models without obtaining meaningful consent first. Furthermore, to address Canadians’ concerns about how GenAI companies manage personal information, the Office of the Privacy Commissioner of Canada should take stock of popular GenAI tools and proactively review their privacy and data use policies to ensure compliance with existing privacy regulations.
2. Education reform: Given the relatively low level of GenAI literacy among Canadians, integrating GenAI—and AI literacy more broadly—into the education system is essential. From K-12 through post-secondary, students must learn not just how to use GenAI tools effectively (for example, prompt engineering). They should also understand how these technologies function, where the training data come from and how to evaluate outputs for accuracy and potential biases.
3. GenAI use transparency: Organizations deploying GenAI must clearly disclose when and how these tools are being used, alongside mandatory risk assessments for high-impact deployments. This transparency is particularly important for for-profits, media outlets and public sector entities, as these groups are viewed with the highest levels of distrust among Canadians regarding the safe and ethical use of GenAI.
Dizzying change
As researchers who have spent years studying technology’s impact on society, we are both excited and cautious about what GenAI means for Canada. The pace of change is dizzying, but speed alone is not a measure of progress. What matters is whether this technology serves the public good.
Canadians are not anti-technology. They are curious, pragmatic and hopeful, but they are also alert to the risks. They want to be part of the conversation, and they want to see that conversation lead to thoughtful, inclusive action.
We urge policymakers, educators, tech companies and civil society to listen closely and act urgently. GenAI is not a passing trend. It is reshaping how we work, learn and spend leisure time. Whether that transformation uplifts or undermines society depends on our current choices.
This article is republished from The Conversation under a Creative Commons license.
Citation: Two-thirds of Canadians have experimented with generative AI, but most don’t understand its impacts (2025, April 24), retrieved 24 April 2025.