Compliance · 19 April 2026 · Updated 19 April 2026
DSG and AI in the Swiss SME - the practical guide
How small and mid-sized Swiss companies use ChatGPT, Claude and Copilot in a DSG-compliant way. With tool matrix, decision tree and policy template.
Author
ai-edu Team
AI Training Experts
As of April 2026. This post is a practice-oriented overview and does not replace legal advice. For concrete questions of interpretation - in particular around sensitive personal data or automated individual decisions - we recommend consulting a law firm specialised in data protection or making a direct enquiry to the Federal Data Protection and Information Commissioner (EDÖB).
Since the revised Swiss Data Protection Act (DSG) came into force on 1 September 2023, the picture is clear: anyone who processes personal data in an AI tool is the controller in the sense of the law - even if the tool is operated by OpenAI, Anthropic or Microsoft. For Swiss SMEs this does not mean banning AI. It means taking five duties seriously and implementing them operationally.
What does the DSG require from a Swiss SME using AI?
The DSG requires five things that every AI project in a Swiss SME must satisfy: a clear purpose, purpose limitation, transparency, data minimisation and data security. Not perfect data knowledge - traceable decisions. The five principles in detail:
- Lawfulness and good faith - processing must be lawful, in good faith and proportionate (Art. 6 paras. 1-2 DSG).
- Purpose limitation - data may only be used for the purpose for which they were collected.
- Transparency - data subjects must know that, and for what, their data are being processed (Art. 19 DSG, duty to inform).
- Data minimisation - process only what is strictly necessary.
- Data security - appropriate technical and organisational measures (Art. 8 DSG).
In the case of commissioned processing (i.e. when an AI provider processes data on behalf of the company), a written contract governing the duties is additionally required - Art. 9 DSG. OpenAI, Anthropic and Microsoft offer standardised Data Processing Addenda (DPA) for this purpose.
Which AI uses are DSG-sensitive in a Swiss SME?
Four constellations trigger DSG duties; everyday uses outside them are usually unproblematic. Not every ChatGPT prompt is a compliance case, but these four are:
1. Personal data in prompts
As soon as employees paste names, email addresses, application documents or customer emails into an AI input, data processing begins. In the consumer version of ChatGPT (chatgpt.com without a Team/Enterprise subscription), these inputs are used for training by default - a repurposing inconsistent with Art. 6 para. 3 DSG.
Practical consequence: Free and Plus plans are fine for internal brainstorming, not for personal data. For real-world SME workflows, Team or Enterprise plans with training use switched off belong on the table.
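Data minimisation can be partly automated before a prompt ever leaves the company. A minimal sketch - the patterns and the `redact` helper are illustrative, not an exhaustive PII filter (real detection needs more than regexes, e.g. named-entity recognition and customer-name dictionaries):

```python
import re

# Illustrative patterns only - deliberately narrow, to be extended per company.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+41[\s\d]{9,12}"),          # Swiss phone numbers
    "AHV":   re.compile(r"756\.\d{4}\.\d{4}\.\d{2}"),  # Swiss social insurance no.
}

def redact(prompt: str) -> str:
    """Replace obvious identifiers with placeholders before sending a prompt."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

text = "Customer anna.muster@example.ch (AHV 756.1234.5678.97) complains about invoice 443."
print(redact(text))
# -> Customer [EMAIL] (AHV [AHV]) complains about invoice 443.
```

Such a filter does not replace a plan with training use switched off - it reduces what ends up in a prompt in the first place.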
2. Cloud processing outside Switzerland
The DSG only permits disclosure abroad if the receiving country offers adequate data protection (Art. 16 DSG). The EDÖB list of states classifies the EU/EEA states as adequate - but the United States only under the Swiss-US Data Privacy Framework for certified companies.
Practical consequence: tools with pure US hosting (standard ChatGPT, standard Claude) need either a certified provider status under the DPF, standard contractual clauses, explicit consent from data subjects, or relocation to an EU or CH region.
3. Automated individual decisions (Art. 21 DSG)
If an algorithm decides about a person without human involvement - for instance in automatic CV screening, credit rejections or tariff setting - Art. 21 DSG applies: data subjects must be informed of the decision and may request to state their position and to have the decision reviewed by a natural person.
Practical consequence: human-in-the-loop is not only a design principle but a legal duty in Swiss SME contexts. An AI may pre-sort, but the decision must be taken by a person - and documented.
4. Sensitive personal data
Health, religious and trade-union data as well as biometric identifiers are particularly worthy of protection under Art. 5 lit. c DSG. Anyone who feeds such data into AI tools must obtain explicit consent or carry out a Data Protection Impact Assessment (DSFA) under Art. 22 DSG.
Practical consequence: for HR workflows with health data, insurance calls or fiduciary mandates, the rule is: no AI tool without a documented DSFA and without a DPA explicitly tailored to the purpose.
Which AI use case needs which documentation?
The documentation depth depends on two questions: are personal data processed, and does the provider sit in a country with adequate data protection? The two questions yield a compact decision tree:
```text
Are personal data being processed?
├── No  -> no DSG duties (e.g. public research)
└── Yes
    ├── Sensitive data? -> DSFA + explicit consent
    └── Regular data
        ├── Provider in CH/EU -> DPA + duty to inform
        └── Provider in US/third country
            ├── DPF-certified -> DPA + duty to inform
            └── Not certified -> standard contractual clauses + DSFA
```
In practice, about 80% of SME use cases (text generation without customer data, research, code review) can be covered without major effort - the other 20% require the full documentation chain.
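The tree above can be written down as a single function - useful as the core of an internal intake form for new use cases. A sketch; the parameter names are our own, not DSG terminology:

```python
def required_steps(personal_data: bool, sensitive: bool,
                   provider_region: str, dpf_certified: bool) -> list[str]:
    """Mirror of the decision tree: which DSG steps does a use case trigger?"""
    if not personal_data:
        return []                                  # no DSG duties
    if sensitive:
        return ["DSFA", "explicit consent"]
    if provider_region in ("CH", "EU"):
        return ["DPA", "duty to inform"]
    if dpf_certified:                              # US / third country branch
        return ["DPA", "duty to inform"]
    return ["standard contractual clauses", "DSFA"]

# A US-hosted tool without DPF certification, regular personal data:
print(required_steps(True, False, "US", dpf_certified=False))
# -> ['standard contractual clauses', 'DSFA']
```

An empty result corresponds to the "no DSG duties" branch; everything else names the documentation that belongs in the file before the tool goes live.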
Which AI tools are DSG-compliant for Swiss SMEs?
Eight common AI providers compared directly - hosting region, training-use default, DPA availability and fitness for personal-data workflows in a Swiss SME:
| Tool | Hosting | Training-use | DPA available | DSG-suitable for SMEs |
|---|---|---|---|---|
| ChatGPT Free / Plus | USA | Yes by default | No | Not for personal data |
| ChatGPT Team / Enterprise | USA (DPF) | Off | Yes | Yes, with DPA + informing data subjects |
| Azure OpenAI Service | incl. Switzerland North, Switzerland West | Off | Yes (Microsoft DPA) | Yes, highest compliance tier |
| Microsoft 365 Copilot | Data stays within the M365 tenant | Off | Yes (M365 DPA) | Yes, if the M365 tenant is already compliant |
| Claude (claude.ai Free/Pro) | USA | No by default | No (Pro), Yes (Team/Enterprise) | Pro only for non-critical content |
| Claude API / Enterprise | USA, EU | No | Yes | Yes, with DPA |
| Mistral Le Chat / La Plateforme | EU (France) | Configurable | Yes | Yes, EU hosting simplifies disclosure |
| Google Gemini Workspace | EU region selectable | Off (Workspace) | Yes | Yes, with a Workspace plan |
Note on Azure Switzerland regions: Microsoft operates Switzerland North (Zurich) and Switzerland West (Geneva) as two cloud regions with data residency inside Switzerland. Azure OpenAI is available in both regions - for SMEs with elevated data-protection requirements or in regulated industries, this is often the simplest choice. (Azure Geographies overview)
The table is a snapshot. Conditions, training defaults and hosting options change - before signing a contract, the provider’s current DPA belongs on the data-protection officer’s desk.
What belongs in an AI usage policy for a Swiss SME?
An AI usage policy needs five sections - no more: scope, permitted data categories, tool approval, transparency and training. An internal policy does not replace the law, but it creates clarity for employees and accountability towards the EDÖB. For most Swiss SMEs the following skeleton is enough:
- Scope and definitions - which tools are permitted, which are banned? What counts as an "AI system" in the sense of this policy? Employees with private ChatGPT accounts also act on behalf of the company as soon as they enter business data - that belongs in the scope.
- Permitted and prohibited data categories - customer personal data without pseudonymisation? Prohibited on free plans. Sensitive data (health, religion, criminal records)? Only with written approval from the data protection officer.
- Tool selection and approval - who decides which AI tools may be used in the company? Usually IT leadership together with management. New tools need an internal onboarding protocol: hosting region verified, DPA signed, training use deactivated.
- Transparency externally and internally - when and how are customers, candidates and employees informed about the use of AI? At minimum: a chatbot disclaimer on the website, an application-form notice for automated pre-screening, a note in the employee handbook.
- Training and documentation - who is trained, and when? Which training records are archived for an EDÖB audit? Rule of thumb: a mandatory annual update, plus coverage for new employees during onboarding.
These five sections fit on 5-7 A4 pages. Anything longer misses the point - a policy that nobody reads protects no one.
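The tool-approval step described above boils down to a gate: no approval until every box is ticked. A sketch with hypothetical field names, mirroring the onboarding protocol (hosting region verified, DPA signed, training use off):

```python
from dataclasses import dataclass

@dataclass
class ToolOnboarding:
    """Approval gate from the policy: a tool passes only when all checks do."""
    name: str
    hosting_region_verified: bool = False
    dpa_signed: bool = False
    training_use_disabled: bool = False

    def approved(self) -> bool:
        return all((self.hosting_region_verified,
                    self.dpa_signed,
                    self.training_use_disabled))

tool = ToolOnboarding("ExampleAI", hosting_region_verified=True, dpa_signed=True)
print(tool.approved())   # False - training use not yet confirmed off
```

The value of the explicit structure: a half-finished onboarding is visible as such, instead of a tool quietly going live with an unsigned DPA.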
What has the EDÖB publicly signalled about AI so far?
The EDÖB has not yet issued an AI-specific ruling, but three clear lines are visible in published position statements and recommendations:
- Applicability is clear: generative AI is not a lawless zone. Existing DSG duties apply unchanged.
- The duty of transparency is taken seriously: if AI systems prepare decisions that affect people, that must be recognisable to the data subjects.
- International data flows are a focus: for US-based providers, DPF certification or an alternative guarantee construct is expected.
Anyone who subscribes to the EDÖB newsletter or follows the annual activity reports will spot early where supervision is looking.
FAQ for management, HR and IT
Management: we have been using ChatGPT Plus for months. Do we have to document every input retroactively? No. The DSG does not require retroactive documentation, but from today on the data-protection practice must be traceable. A useful approach is a cut-off date from which a tool policy applies, plus a brief internal communication to all employees.
HR: may we have applications pre-filtered by an AI? Yes, but with conditions: the candidates must be informed (a notice in the job ad is enough), the final decision must be taken by a person, and candidates have a right to a reasoned explanation and review (Art. 21 DSG).
IT: do we have to switch from standard ChatGPT to Azure OpenAI Switzerland? If personal data are regularly processed: yes - or at the very least to ChatGPT Enterprise with a DPA and training use switched off. If the tool is used only for non-personal tasks (code snippets, marketing copy without customer data), a Plus plan with a clear internal rule is sufficient.
Management: do we really need a data protection advisor? Under the DSG, appointing a data protection advisor is voluntary for private companies (Art. 10 DSG) - but recommended. Do not confuse this with the separate 250-employee rule: companies with fewer than 250 employees are exempt from keeping a register of processing activities, unless they process sensitive data on a large scale or carry out high-risk profiling (Art. 12 para. 5 DSG).
HR: what about private AI tools on company devices? Tricky. Anyone who uses ChatGPT on a company laptop through a private account and enters business data creates a processing relationship with the provider that no contract governs. The clean solution: centrally managed tool licences plus a clear ban on private AI accounts for business content.
Which steps are next for Swiss SMEs?
Three concrete steps in the next 30 days take an SME from DSG grey zone to documented compliance. AI compliance is not a one-off project, but the first month sets the shape of what follows:
- Inventory - which AI tools are already being used in the company today? (Often more than management knows.)
- Plan audit - are the tools in use on a plan that permits processing of personal data? If not: upgrade to Team/Enterprise or switch to a CH or EU region.
- Policy draft - 5-7 pages along the skeleton above, signed off by IT and the data protection officer, approved by management.
In our training programmes for Swiss SMEs we work through exactly these three steps with your team - from tool audit to policy draft to employee training. DSG-compliant and without legal buzzword bingo.
Sources and further reading:
- Swiss Federal Act on Data Protection (DSG) - the official legal text
- Federal Data Protection and Information Commissioner (EDÖB) - position statements, activity reports, point of contact
- EDÖB list of states with adequate data protection - which countries allow data transfers without an additional contract
- Azure Geographies including the Switzerland regions - Microsoft Cloud with Swiss data residency