Shadow AI Explained: What It Is and Why It's Growing
Natasha Tatta · Sep 4 · 6 min read · Updated: Sep 8

Shadow AI is gaining ground in today’s workplace. It refers to the use of artificial intelligence tools or systems within an organization without official approval or adequate oversight.
Freelancers and self-employed workers are also affected, as we'll cover in the next section.
In other words, it's an employee who sidesteps internal rules or policies to adopt an AI tool, often believing they're doing the right thing or simply trying to save time. The scenarios are countless:
- An employee who uses ChatGPT to write emails, prepare a report, or create a presentation without informing their IT department.
- An analyst who trains or deploys a machine learning model on an external cloud platform to speed up their analyses.
- A marketing manager who hands an AI image generator the job of producing advertising visuals without going through the usual creation and validation channels.
- A customer service agent who relies on a chatbot to respond to customers without the exchanged data being secured.
- An employee who uploads sensitive internal documents to a free generative AI tool for a quick summary or translation.
In all cases, the common thread remains: AI adopted without transparency or official approval.
Shadow AI Among Freelancers and the Self-Employed
The phenomenon of shadow AI doesn't just affect large companies. Self-employed workers, freelancers, and consultants are affected as well, and the stakes are sometimes even higher.
These professionals often manage their engagements alone, without an ethics committee or IT department to supervise them. The temptation is therefore strong to quietly integrate AI into their work, especially to:
- save time on tedious tasks,
- deliver faster to meet tight deadlines, and
- reduce costs to stay competitive.
Some concrete examples:
- A translator or editor who relies on an AI tool to do their work without mentioning it in their contract.
- A consultant who analyzes a client's data with an AI model without checking the terms of use or data protection safeguards.
- A graphic designer who generates visuals with MidJourney without disclosing it in their deliverables.
The main risk here is a breach of trust. In a service relationship, transparency is essential: the client expects to know how their project is being carried out, especially when sensitive data or original creative work is at stake.
If discovered, the hidden use of AI can:
- harm the professional's reputation,
- call into question the real value of their work, or even
- lead to disputes if the client considers that the service delivered doesn't conform to what was agreed.
Good practice therefore consists of adopting a policy of transparency: clearly indicating whether and how AI is used, explaining the benefits for the client, and highlighting the added human value (verification, adaptation, expertise).
In other words, for freelancers, AI isn't a problem in itself; it's the lack of transparency and communication that can quickly erode trust and damage client relationships.
Why are workers turning to shadow AI?
The answer is simple: because AI works.
Employees and freelance professionals alike are primarily looking to complete their work faster, reduce repetitive tasks, and produce quality results. However, the solutions approved by companies don't always meet their needs:
- they're slow or limited,
- they simply don't exist, or
- implementation takes too long due to administrative burdens.
Shadow AI therefore becomes an attractive shortcut, but the consequences can sometimes be serious.
The main characteristics of shadow AI
⚠ Unregulated use
Employees or professionals adopt tools autonomously, bypassing approval processes.
⚠ Security risks
Potentially sensitive data is processed by external platforms, sometimes without encryption or guarantee of confidentiality.
⚠ Misalignment with internal policies
Tools don't always comply with the company's infrastructure requirements or ethical rules.
⚠ Efficiency gains
Even so, fast and tangible results encourage workers to keep relying on AI.
Risks associated with shadow AI
🚨 Compliance Violations
In Europe, using an unapproved tool that processes personal data may violate the General Data Protection Regulation (GDPR). In Canada, it may violate provincial privacy laws.
🚨 Data fragmentation
Data can end up scattered across non-integrated tools, creating inconsistencies and redundancies.
🚨 Quality and liability issues
If a chatbot generates an erroneous result, who is responsible? The employee, the AI tool provider, the employer?
🚨 Damage to reputation
AI-generated content published without validation can tarnish a company's image if it contains errors or biases.
The risks are not only organizational: the widespread adoption of AI also raises environmental issues, which we explore in The Environmental Footprint of AI: Google Finally Lifts the Veil.
Shadow AI in numbers
Two recent studies show the extent of the shadow AI phenomenon.
In the first, Section surveyed more than 5,000 knowledge workers in September 2024 in the United States, Canada, and the United Kingdom and found that:
- 11% of companies explicitly prohibit AI, yet 43% of employees in those companies use it regularly (6% daily, 23% at least once a week).
- 26% of companies have no clear policy; there, 64% of employees still use AI, including 33% every week.
- Most employees use free tools, which are less secure and less efficient.

The result: employees who use AI in the shadows feel less competent and use it less effectively than those whose company openly supports AI use.
The second study, conducted by Censuswide in May 2025, surveyed 4,000 full-time office workers in the United States, Canada, Mexico, and Brazil:
- 79% of Canadian office workers use AI, but only 25% with enterprise-grade solutions.
- 97% believe that AI improves their productivity.
- 46% would leave their job for an employer that better leverages AI.
- Shadow AI added an average of $308,000 to data breach costs in 2025.
According to this study, Canadian workers see AI as a true performance lever:
- 86% feel comfortable using AI.
- 80% say it frees up time for strategic or creative tasks.
- 55% save one to three hours of work per week, and 26% save up to six hours.
Main benefits cited:
- faster execution (61%),
- better workload management (43%),
- increased accuracy (40%), and
- enhanced creativity (39%).
These numbers show that banning AI or staying silent about it doesn't work. Employees always find a way to use it... and so do students.
Which Sectors See the Most Shadow AI Use?
According to the Section study, shadow AI use breaks down by sector as follows:
- 12% in health care,
- 10% in education,
- 9% in retail,
- 9% in foodservice,
- 8% in finance, and
- 7% in consulting and professional services.
These percentages speak for themselves: the more pressure a sector is under (efficiency demands, lack of resources), the more shadow AI takes hold.
How to manage shadow AI
✅ Establish clear governance
Companies should develop clear policies on what is and is not allowed.
✅ Provide safe and approved solutions
If employees are given effective tools, they'll have less need to look elsewhere.
✅ Offer training
Supervision must be accompanied by training that is both technical (how to use AI) and ethical (when and why to use it).
✅ Establish a culture of responsible innovation
Encourage employees to experiment, but within a defined framework.
✅ For freelancers and self-employed professionals: focus on transparency
Include a clause in your service contracts specifying how AI is used, explain to your clients how and why you use it, and clearly state what that implies for quality, confidentiality, and liability.
The role of government and practical guides
Several government initiatives and practical guides aim to regulate and support the use of AI, and these resources offer solid guidance for safe and transparent adoption.
The Business Dilemma
Banning AI or staying silent about it only pushes employees further into the shadows. The outcome? Greater risks, weaker oversight, and, ironically, lower efficiency. By contrast, embracing and regulating AI creates space for:
- higher productivity,
- stronger security,
- a more skilled workforce, and
- a culture built on transparency and trust.
From Shadow to Light: Building a Responsible AI Culture
Shadow AI isn't just jargon. It's a reality that's already well established in businesses and among self-employed professionals. Its rise reveals both a thirst for efficiency and innovation and a lack of supervision and trust.
The numbers are crystal clear: banning or ignoring AI just doesn't work.
Employees will continue to use AI, but in riskier and less effective ways. The way forward is responsible, transparent, and regulated adoption, turning AI into a collective asset rather than a shadow practice.
Governance and transparency are part of a broader shift in our digital tools, one that includes the rise of smart web browsers, starting with Claude.
📩 Need help?
Would you like to structure the use of AI in your business or professional practice? Get inspired by our Artificial Intelligence Usage Policy and our AI Manifesto. We can help you draft and integrate your own AI policies and strategic documents into your business. Write to us at bonjour@infoiaquebec.com.

Natasha Tatta, C. Tr., trad. a., réd. a.
A bilingual language specialist, Natasha pairs word accuracy with impactful ideas. Infopreneur and AI consultant, she helps professionals embrace generative AI and content marketing. She also teaches IT translation at Université de Montréal.
🌱 Each Google review is like a seed that helps Info IA Québec grow. Leave us a review and help us inform more people, so AI becomes accessible to everyone! ⭐⭐⭐⭐⭐ Click here!



