Strategy · 10 November 2024 · Updated 19 April 2026
ChatGPT in Your Company: 6 Mistakes to Avoid
The most common mistakes when rolling out ChatGPT in Swiss SMEs - from missing policies to Swiss Data Protection Act (DSG) breaches. With concrete remedies.
Author
ai-edu Team
AI Training Experts
As of April 2026. The most common implementation mistakes remain similar across model generations. Model names and tariff details change, but the logic of the remedies stays the same.
Many Swiss SMEs dive into AI without a clear strategy. From our consulting work, we see the same six mistakes repeatedly. Each one costs time, money, or in the worst case unwanted compliance attention.
Mistake 1: No clear policies
The problem: Employees use ChatGPT without guidance - private accounts, mixed data, no documented responsibilities. In the event of a data protection audit, the company has no traceable practice to show.
The solution: An internal tool policy (5-7 pages) with clear answers to:
- Which tools are permitted? Which are banned?
- Which data categories may flow into which tools?
- Who decides on new tool approvals?
- How is usage documented?
A template with the five mandatory sections is available in the DSG guide for Swiss SMEs.
Mistake 2: Not checking outputs
The problem: AI-generated texts, translations, or analyses are used without review. Hallucinated numbers, incorrect citations, or inappropriate tone go straight to the customer.
The solution: Four-eyes principle before any external send-off. Fact-check against verifiable sources. For legally or financially relevant texts, human final review is mandatory - the responsibility stays with the company.
Mistake 3: Prompts that are too vague
The problem: “Write me an email” produces generic results that need more rework than writing the email yourself would.
The solution: A structured prompt with context, audience, tone, and desired length.
Bad:
Write an email to a customer.
Good:
Write a professional, friendly email to our long-standing
customer Mr. Mueller. He asked about the status of his order
#12345, which is 3 days behind schedule. Apologise and offer a 10 %
discount on his next order. Max. 150 words, formal business English.
Eight more copy-paste templates are available in the prompt templates.
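The structure of the “good” prompt above can also be enforced programmatically, for example when teams share prompt templates internally. A minimal sketch (function and field names are illustrative, not from any specific tool):

```python
# Minimal sketch: a prompt builder that enforces the four structural
# fields from the article (context, audience, tone, length).
# All names here are hypothetical, for illustration only.

def build_prompt(task: str, context: str, audience: str,
                 tone: str, max_words: int) -> str:
    """Assemble a structured prompt; fail loudly if a field is missing."""
    fields = {"task": task, "context": context,
              "audience": audience, "tone": tone}
    missing = [name for name, value in fields.items() if not value.strip()]
    if missing:
        raise ValueError(f"Missing prompt fields: {', '.join(missing)}")
    return (
        f"{task}\n"
        f"Context: {context}\n"
        f"Audience: {audience}\n"
        f"Tone: {tone}\n"
        f"Length: max. {max_words} words."
    )

prompt = build_prompt(
    task="Write an apology email about a delayed order.",
    context="Order #12345 is 3 days behind schedule; offer 10 % discount.",
    audience="Long-standing customer Mr. Mueller",
    tone="Professional, friendly, formal English",
    max_words=150,
)
print(prompt)
```

The point is not the code itself but the discipline: a prompt without context, audience, tone, and length should never reach the model.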
Mistake 4: Missing training
The problem: Only tech-savvy employees use AI effectively. Everyone else falls behind, which leads to shadow IT (private accounts, unregulated data flows).
The solution: Training per department with concrete use cases - HR, finance, sales, and marketing need different examples. Rule of thumb: 2-3 hours per employee per newly introduced tool, plus an internal point of contact during the first few weeks.
Mistake 5: No success measurement
The problem: You do not know whether AI is paying off - the discussion with the CFO comes down to gut feeling.
The solution: Three simple, measurable KPIs from day one:
- Time saved per typical task (sample test with and without AI over 2 weeks).
- Quality indicator (complaints, correction loops, NPS in affected areas).
- Adoption (how many employees actively use the tool weekly?).
Without measurement, tool usage gets lost in the general productivity debate. With measurement, it becomes defensible.
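The three KPIs can be computed from a simple two-week sample. A sketch of the calculation (all numbers and field names are made up for the example):

```python
# Illustrative sketch of the three KPIs from the article, computed
# from a two-week sample test. The data below is invented.

from statistics import mean

def kpi_report(minutes_without_ai, minutes_with_ai,
               correction_loops, weekly_active_users, total_users):
    """Return the three KPIs as percentages / averages."""
    time_saved_pct = 100 * (1 - mean(minutes_with_ai) / mean(minutes_without_ai))
    return {
        "time_saved_pct": round(time_saved_pct, 1),
        "avg_correction_loops": round(mean(correction_loops), 2),
        "adoption_pct": round(100 * weekly_active_users / total_users, 1),
    }

report = kpi_report(
    minutes_without_ai=[45, 50, 40],   # same task done manually
    minutes_with_ai=[20, 25, 15],      # same task with AI assistance
    correction_loops=[1, 0, 2],        # review rounds per AI draft
    weekly_active_users=18,
    total_users=30,
)
print(report)
```

Three numbers like these turn the CFO conversation from gut feeling into a defensible before-and-after comparison.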
Mistake 6: Ignoring DSG obligations
The problem: Personal data ends up in the free or Plus tier of ChatGPT, where it is used for training by default. Applicant pre-screening runs without any option for human review. There is no privacy notice describing the AI usage.
The solution: Four concrete steps to a compliance baseline:
- Tariff audit. Personal data belongs in Team/Enterprise tiers with training use disabled - not in Free or Plus.
- Regulate data processing. Sign a DPA with the provider (Art. 9 DSG).
- Meet the transparency obligation. Applicants, customers, and employees must be informed about the AI usage (Art. 19 DSG).
- Build training documentation. In the event of an audit by the Federal Data Protection and Information Commissioner (EDOEB), a traceable training practice is an important part of the “appropriate measures” under Art. 8 DSG.
Covered in depth in the DSG guide - including a tool matrix with DSG status and a 5-point policy.
What comes next
AI is a tool, and like any tool it needs training, rules, and success measurement. The six mistakes described above can be fixed within 4-6 weeks once IT, HR, and the executive board sit down together.
In our training programmes we walk your team through these six points - including a draft policy, a compliance check, and employee workshops.
Related reading on ai-edu.ch:
- DSG and AI in Swiss SMEs - the practical guide
- Prompt templates: 8 frameworks for daily work
- Prompt engineering fundamentals
- Chatbot strategies for SMEs
Sources:
- Swiss Federal Act on Data Protection (DSG) - official text
- EDOEB - Federal Data Protection and Information Commissioner (FDPIC)
- OpenAI Enterprise Privacy - training-use and DPA notes