1. Which AI features exist in the Service
- AI Assistant — conversational interface that calls the same internal tool catalogue the application uses (margin queries, engagement lookups, draft emails). Available on Pro and Premium plans. Disabled by default for new tenants; an Admin must turn it on at Settings → AI.
- AI timesheet auto-fill — reads the User’s Microsoft Graph calendar and Slack / Teams / Jira webhooks (where the User has connected them), and proposes Timesheet_Line rows. The User must accept each suggestion explicitly; we never write a timesheet on the User’s behalf.
- Engagement health narrative — turns the existing numeric health score into a single English sentence on the home dashboard. No new data is sent to the model that is not already visible on the page.
2. Which AI providers we use
Our current provider for foundation-model inference is Anthropic, PBC (Claude). The default model is claude-haiku-4-5-20251001 on the Pro plan and, on demand, claude-opus-4-7 on the Premium plan.
We may add additional providers in future (for example, OpenAI or Google for fallback). When we do, we will update this notice and publish the model name, the contractual data-handling terms with that provider, and which features call which model. We commit not to add a new AI provider that does not match the data-handling commitments in Section 4 below.
3. What is sent to the model, and when
A prompt is sent to the AI provider only when a User explicitly invokes a feature listed in Section 1 (clicks “Ask the assistant”, accepts an auto-fill suggestion, or loads the home dashboard while the narrative feature is enabled). The prompt contains:
- The User’s natural-language question (Assistant only).
- The system prompt that describes the available tools, the User’s role, the tenant’s name, and the company filter in effect.
- Only the data that the model needs to answer — loaded via the same tool catalogue, with the same tenant- and company-scoping, as the Service uses everywhere. The model cannot see data outside the User’s authorised scope.
Customer Data classified as “sensitive” in the application’s data dictionary (for example: government identification numbers, bank account numbers, health information) is excluded from prompts unless the User has explicitly chosen to include a record in their question.
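As an illustration only, the exclusion described above can be sketched as a filter applied before prompt assembly. The field names and function below are hypothetical and do not reflect the Service’s actual implementation:

```python
# Hypothetical sketch of the sensitive-field exclusion described above.
# Field names are illustrative; the real data dictionary drives the actual list.
SENSITIVE_FIELDS = {"gov_id_number", "bank_account_number", "health_notes"}

def prompt_payload(record: dict, user_included_record: bool = False) -> dict:
    """Return a copy of `record` that is safe to place in a model prompt.

    Sensitive fields are dropped unless the User explicitly chose to
    include this record in their question.
    """
    if user_included_record:
        return dict(record)
    return {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}
```

In this sketch, `prompt_payload({"client": "Acme", "bank_account_number": "…"})` would return only the `client` field, while passing `user_included_record=True` would return the record unfiltered.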
4. What the AI provider does with the data
We use the Anthropic zero-retention API tier. Under that tier:
- Prompts and completions are deleted by Anthropic as soon as the response is returned. They are not persisted on Anthropic’s infrastructure beyond the time needed to process the request.
- Customer Data is not used to train, fine-tune, evaluate, red-team or otherwise improve any Anthropic model.
- Our agreement with Anthropic contractually prohibits use of Customer Data for any purpose other than processing the specific request.
- Anthropic is identified as a sub-processor in our Sub-processor list and the relevant transfer mechanism (EU SCCs) is in place for transfers from the EU / UK to the United States.
We do not use Customer Data to train, fine-tune or evaluate any model, internal or external. If we ever wish to do so in the future, we will obtain explicit opt-in consent from the affected Customer first; this notice will be updated to reflect any such change, and the change will not be retroactive.
5. Hallucinations & accuracy
Generative models can produce plausible-looking but incorrect text. That is true of all generative AI today. Our approach to mitigating this risk:
- Tool calls over guessing. The Assistant prefers calling a tool and reading the result over generating an answer from training data. When it cannot ground an answer, it says so.
- Numbers come from the tool, not the model. Where the Assistant quotes a margin figure or an invoice amount, that number comes from the application’s database via a tool call. The model only narrates the number; it does not produce it.
- Humans approve writes. The Assistant cannot create, modify or delete records on its own. Every write requires explicit human approval inside the application.
- You can disable AI. Workspace Admins can turn AI off for the whole tenant from Settings → AI. Users can opt out of timesheet auto-fill suggestions in their personal settings.
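To make the human-approval mitigation concrete, here is a minimal sketch of a write gate: the Assistant can only queue a proposed change, and nothing executes until a human approves it. The class and method names are hypothetical illustrations, not the Service’s actual code:

```python
from dataclasses import dataclass

@dataclass
class ProposedWrite:
    """A record change the Assistant suggests; inert until approved."""
    description: str
    approved: bool = False

class WriteGate:
    """Queues Assistant-proposed writes; executes only on human approval."""

    def __init__(self) -> None:
        self.pending: list[ProposedWrite] = []
        self.executed: list[str] = []

    def propose(self, description: str) -> ProposedWrite:
        # The Assistant can only add to the pending queue.
        write = ProposedWrite(description)
        self.pending.append(write)
        return write

    def approve_and_execute(self, write: ProposedWrite) -> None:
        # A human approval is the only path to execution.
        write.approved = True
        self.pending.remove(write)
        self.executed.append(write.description)
```

The design point is that the execution path simply does not exist without the approval call, which mirrors the commitment that every write requires explicit human approval inside the application.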
Despite these mitigations, AI outputs are suggestions for human review, not advice you should rely on without question. They are not professional financial, accounting, tax or legal advice. Our liability for AI output is addressed in Sections 11 and 17 of the Terms of Service.
6. EU AI Act — classification and obligations
We have assessed Alsvior EMS against the EU AI Act and conclude:
- Risk class: limited risk. The Service is not a high-risk AI system within the meaning of Annex III of the AI Act. It does not make decisions about people in employment, credit, education, law-enforcement or critical infrastructure.
- Article 50(1) — chatbot transparency. When you are interacting with the AI Assistant, the interface clearly labels it as AI (the “Assistant” pill in the chat surface). This notice is the corresponding written disclosure.
- Article 50(4) — AI-generated text. Where the Service produces text intended for human reading (for example the engagement-health narrative), each output is generated under the editorial responsibility of the Customer’s Director / Owner role, who can edit or remove it. We rely on the editorial-review carve-out for that reason, but we still make this general disclosure for transparency.
- General-purpose AI providers. Anthropic is the provider of the underlying general-purpose model and is responsible for the AI-Act obligations attaching to that role. We are a downstream deployer.
7. DPIA & legitimate-interests assessment
We have completed a Data Protection Impact Assessment for the AI Assistant and a Legitimate Interests Assessment for the engagement-health narrative. Both are available to Customers on request from dpo@alsviorglobal.com under NDA.
8. Material change log
We will list material changes to AI providers, models or data-handling commitments here. When we add an item, we will email Workspace Owners at least 30 days in advance, so they can disable AI before the change takes effect if they wish.
- 12 May 2026 — Initial publication. Anthropic Claude Haiku 4.5 / Opus 4.7 on the zero-retention API tier.
9. Contact & opt-out
Workspace Admins can disable AI for the whole tenant from Settings → AI at any time. For questions about how we use AI, write to dpo@alsviorglobal.com.
