AI and Data Protection: Knowing What You’re Throwing Into the Pot

Sophie is a CPA at a small accounting firm in Laval. Monday morning, she opens ChatGPT, copy-pastes a client’s financial statement (amounts, company name, tax number) and asks the AI to generate an executive summary for her 11 a.m. meeting. One minute later, it’s done.
Beautiful.
Except she just transmitted confidential information to an American server, without end-to-end encryption and without the client’s consent. AI data protection isn’t some niche debate for tech geeks. It affects every professional using generative AI tools to save time at work. ChatGPT, Claude, Gemini, Copilot… they all operate on the same fundamental principle.
What Happens to Your Data When You Hand It Over to AI?
When we type a prompt into a generative AI tool, it can feel like a private conversation, but that’s not really the case.
With most free versions, conversations are stored on the provider’s servers. Some companies use them to improve their models. Others retain them for varying periods of time, in a contractual gray zone, and many share them with third-party contractors for moderation, fine-tuning, or security analysis.
In fact, a recent investigation by Radio-Canada highlighted that many users still don’t fully understand what happens to the data they send to ChatGPT or the privacy risks involved when personal or professional information is shared carelessly.
In other words, an email pasted into ChatGPT for rewriting could technically end up in model training pipelines. A client contract uploaded to Gemini could land in Google’s analysis logs. An HR document shared with Claude could potentially be reviewed by Anthropic teams during security inspections.
This reality has already come at a high cost for some companies. In 2023, Samsung banned employees from using ChatGPT after an engineer pasted proprietary source code into the tool for debugging purposes. Overnight, AI confidentiality stopped being a theoretical concern and became a governance issue.
AI Data Protection in Quebec: Law 25 Changes the Game
In Quebec, responsibility does not fall solely on the AI provider. It also falls on the person transmitting the data. Since September 2023, Law 25 (formerly Bill 64) has regulated how Quebec businesses handle personal information. If personal data is transferred to a tool whose servers are hosted outside Quebec, which is the case for most mainstream AI tools, then organizations:
- must have assessed the risks,
- must have informed the individuals concerned, and
- must be able to justify the transfer in the event of a complaint.
In practice, very few Quebec SMEs have actually gone through this exercise. And yet, the penalties can reach up to $25 million or 4% of global annual revenue, whichever is greater. That’s not exactly a minor administrative detail.
Best Practices Before Clicking Send
You don’t need to be a lawyer or a developer to reduce the risks. A few simple habits can make all the difference.
1. Classify the Data Before Sharing It
- Public data: content already online.
- Internal data: meeting notes.
- Confidential data: contracts, financial figures.
- Sensitive data: medical records, social insurance numbers.

The rule is simple: public content is usually fine, internal data depends on the context, confidential data requires serious caution, and sensitive data should never be pasted into a public AI tool.
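For teams that want to make this triage systematic, the four tiers can be sketched as a simple pattern check run before anything is pasted into a chat window. This is a minimal illustration, not a real data-loss-prevention tool: the keyword and number patterns below are assumptions chosen for the example, and a real deployment would need far richer rules.

```python
import re

# Hypothetical detection rules, ordered from most to least sensitive.
# Each rule maps a tier to a pattern chosen purely for illustration.
RULES = [
    ("sensitive",    re.compile(r"\b\d{3}-\d{3}-\d{3}\b")),    # SIN-like number
    ("confidential", re.compile(r"\$[\d,]+|contract", re.I)),  # figures, contracts
    ("internal",     re.compile(r"meeting notes|agenda", re.I)),
]

def classify(text: str) -> str:
    """Return the highest sensitivity tier matched; default to public."""
    for tier, pattern in RULES:
        if pattern.search(text):
            return tier
    return "public"

print(classify("SIN: 123-456-789"))        # flagged as sensitive
print(classify("our public blog post"))    # nothing matched: public
```

Even a crude gate like this forces the question the article keeps asking: which tier does this text belong to, and is it allowed to leave the building?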
2. Always Anonymize
Replace names with “Client A,” exact figures with rough estimates, and addresses with codes. The AI can still produce useful output from depersonalized information.
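The substitution itself can be automated. Below is a minimal sketch, assuming you maintain your own list of client names and that rounding dollar amounts to the nearest thousand is an acceptable level of blurring; both choices are illustrative, not prescriptive.

```python
import re

def anonymize(text: str, clients: list[str]) -> str:
    """Swap known client names for generic labels and blur exact
    dollar figures before the text is pasted into an AI tool."""
    # Pseudonymize each known client name ("Client A", "Client B", ...)
    for i, name in enumerate(clients):
        label = f"Client {chr(ord('A') + i)}"
        text = re.sub(re.escape(name), label, text)

    # Round exact dollar figures (e.g. $482,317) to the nearest thousand
    def round_amount(m: re.Match) -> str:
        value = int(m.group(1).replace(",", ""))
        return f"~${round(value, -3):,}"

    return re.sub(r"\$([\d,]+)", round_amount, text)

print(anonymize("Dupont Inc. owes $482,317 in Q3.", ["Dupont Inc."]))
# prints: Client A owes ~$482,000 in Q3.
```

Keep the name-to-label mapping on your side: as long as the key never leaves your machine, the AI sees only “Client A” while you can still translate its answer back.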
3. Disable Model Training
Most platforms allow users to disable the use of conversations for future model training. It takes about 30 seconds in the settings, and it should be done immediately.
4. Choose a Paid Subscription
Paid plans generally provide contractual guarantees that your data will not be used for training purposes, along with stronger encryption and clearer data-hosting policies.
For example, OpenAI states that data from Team, Enterprise, and API users is not used to train models by default. The company also outlines several security, encryption, and compliance measures on its Security & Privacy page.
5. Read the Privacy Policy at Least Once
Not line by line, but enough to answer three questions: Where are the servers located? How long is the data retained? Who has access to it?
6. Keep a Record
Which tools are being used, for what types of data, and under what conditions. It helps during audits and forces organizations to think before blindly clicking Send.
Back to Sophie: What She Could Have Done Differently
Sophie could have rewritten the financial statement by replacing the client’s name with “Company X” and using rounded figures instead of exact amounts. The summary would have been just as useful. She also could have used the Team version of ChatGPT, which does not train on conversations and offers a clearer contractual framework for professional firms.
More importantly, she could have taken thirty minutes to understand these principles before learning them the hard way. Because once a data leak happens, there’s no Ctrl+Z for confidentiality.
Going Further: A Practical Webinar to Make Sense of It All
AI data protection isn’t a topic reserved for big IT departments. It’s becoming an everyday skill, just like managing your emails or securing your passwords. And like any skill, it’s learned one step at a time. At the end of the day, before handing your data over to an AI tool, it’s worth understanding what you’re sharing and who you’re sharing it with.
👀 An essential webinar (in French) designed for professionals using AI at work. No jargon. No legal panic. Just practical habits to avoid throwing the wrong ingredients into the pot. It’s part of La cuisine d’Info IA Québec, a French-speaking AI learning community on Skool.

Natasha Tatta, C. Tr., trad. a., réd. a. A bilingual language specialist, I pair word accuracy with impactful ideas. As an infopreneur and GenAI consultant, I help professionals embrace AI and content marketing. I also teach IT translation at Université de Montréal.




