EU AI Act Compliance
1. Introduction
The European Union Artificial Intelligence Act (Regulation (EU) 2024/1689) establishes a comprehensive regulatory framework for the development and deployment of AI systems in the European market. As a provider of AI-powered products serving EU-based organizations, TOV «Sinaptic AI» (“Sinaptic”) is committed to full compliance with the EU AI Act.
This document describes how Sinaptic classifies its products under the EU AI Act’s risk-based framework, the conformity assessment procedures we follow, our transparency and human oversight mechanisms, our technical documentation practices, and our post-market monitoring systems.
2. Risk Classification of Sinaptic Products
The EU AI Act classifies AI systems into four risk categories: unacceptable, high, limited, and minimal. Sinaptic has assessed each of its products against these categories based on the intended purpose, the domain of application, and the potential impact on fundamental rights.
2.1 Browser DLP — Minimal Risk
Classification: Minimal Risk (no mandatory requirements; voluntary codes of conduct encouraged under Article 95)
Browser DLP is a data loss prevention tool that classifies and monitors data transfers within web browsers. It does not make autonomous decisions that affect individuals’ rights or access to services. The AI component performs data classification (e.g., identifying personally identifiable information, financial data, intellectual property) and applies pre-configured rules set by the organization’s administrator.
Rationale: Browser DLP functions as a security tool that enforces policies defined by humans. It does not perform biometric identification, social scoring, or any processing that falls within the high-risk or unacceptable-risk categories. The AI classification operates as a support tool for human administrators who retain ultimate authority over enforcement actions.
2.2 Sinaptic AI Intent Firewall® — Limited Risk
Classification: Limited Risk (specific transparency obligations under Article 50)
The Sinaptic AI Intent Firewall® intercepts and evaluates AI agent actions before execution. While the system itself is an AI that evaluates other AI systems’ intended actions, its primary function is to enforce safety boundaries rather than to interact directly with natural persons. The limited-risk classification is driven by transparency obligations that arise because the system makes determinations about whether actions should proceed.
Rationale: The Sinaptic AI Intent Firewall® operates within an AI-to-AI context with human oversight. It does not interact directly with end-users in a way that could be confused with human interaction. However, because it makes automated decisions that could indirectly affect individuals (by blocking or allowing AI agent actions), we apply limited-risk transparency obligations to ensure clear audit trails and explanations.
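The audit trails and explanations referenced above could take the shape of a structured, append-only decision record. The following Python sketch is illustrative only: the field names (`agent_id`, `matched_rules`, and so on) are hypothetical and do not represent the actual Sinaptic AI Intent Firewall® schema.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class FirewallDecision:
    """Hypothetical audit record for one allow/deny determination."""
    agent_id: str
    intended_action: str
    verdict: str              # "allow" or "deny"
    matched_rules: list[str]  # policy rules that contributed to the verdict
    explanation: str          # human-readable rationale for the decision
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_audit_log(self) -> str:
        # Serialize as a single JSON line for an append-only audit trail.
        return json.dumps(asdict(self))

decision = FirewallDecision(
    agent_id="agent-42",
    intended_action="export_customer_table",
    verdict="deny",
    matched_rules=["DLP-003: bulk PII export"],
    explanation="Blocked: bulk export of PII requires human approval.",
)
print(decision.to_audit_log())
```

One JSON object per decision keeps the trail machine-parseable for auditors while remaining human-readable.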
2.3 Sinaptic® DROID+ — Variable Risk (Depends on Use Case)
Classification: Variable — Minimal to High Risk depending on deployment context
Sinaptic® DROID+ is a platform for deploying and managing AI agents. The risk classification of any given Sinaptic® DROID+ deployment depends on the specific use case and the domain in which the agent operates:
- Minimal Risk: Agents performing internal workflow automation, data aggregation, or reporting tasks.
- Limited Risk: Agents interacting with external parties (requiring disclosure that the interaction is AI-powered).
- High Risk: Agents deployed in domains listed in Annex III of the EU AI Act, such as employment, creditworthiness assessment, or access to essential services.
Rationale: As a platform provider, Sinaptic ensures that Sinaptic® DROID+ includes the technical capabilities necessary to meet the requirements of any risk category. For high-risk deployments, we provide the conformity assessment tooling, documentation templates, and monitoring infrastructure required by the regulation.
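The use-case-dependent classification above can be expressed as a simple decision rule. This Python sketch is illustrative: the `classify_deployment` helper and the domain labels are hypothetical, and Annex III of the regulation remains the authoritative list of high-risk areas.

```python
# Illustrative subset of Annex III high-risk domains (not exhaustive).
ANNEX_III_DOMAINS = {
    "employment",
    "credit_scoring",
    "essential_services",
    "education",
    "law_enforcement",
}

def classify_deployment(domain: str, interacts_with_persons: bool) -> str:
    """Sketch of the risk tier for an agent deployment.

    Annex III domains are high risk; agents interacting with natural
    persons carry limited-risk disclosure obligations; everything else
    defaults to minimal risk.
    """
    if domain in ANNEX_III_DOMAINS:
        return "high"
    if interacts_with_persons:
        return "limited"  # disclosure obligations apply
    return "minimal"
```

A real classification also weighs intended purpose and fundamental-rights impact, so a lookup like this would only be a first-pass screen.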
3. Conformity Assessment
For products or deployments classified as high-risk, Sinaptic follows the conformity assessment procedures outlined in the EU AI Act:
- Quality Management System: Sinaptic maintains a quality management system covering the entire AI lifecycle, from design through deployment and decommissioning, as required by Article 17.
- Risk Management System: A risk management system operates throughout the lifecycle of each high-risk AI system, identifying, analyzing, evaluating, and mitigating risks in accordance with Article 9.
- Data Governance: Training, validation, and testing datasets are subject to governance practices that ensure relevance, representativeness, accuracy, and completeness as specified in Article 10.
- Technical Documentation: Comprehensive technical documentation is prepared prior to market placement, covering system architecture, design specifications, development process, validation methodology, and performance metrics as required by Article 11 and Annex IV.
- Record Keeping: High-risk AI systems automatically generate and retain logs (Article 12) that enable traceability of the system’s functioning throughout its lifecycle.
- Internal Conformity Assessment: Where applicable, Sinaptic performs internal conformity assessments based on internal control in accordance with Annex VI, or engages a notified body under the Annex VII procedure where Article 43 so requires.
- EU Declaration of Conformity: For each high-risk AI system, Sinaptic prepares an EU Declaration of Conformity in accordance with Article 47, confirming that the system meets all applicable requirements.
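The automatic log generation required by Article 12 might be sketched as a structured event logger that emits one traceability record per system event. The function and field names below are hypothetical, not part of any actual Sinaptic product.

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_system.trace")

def log_event(system_id: str, event: str, **details) -> dict:
    """Emit one traceability record (Article 12-style event log), sketch.

    Each record carries the system identifier, the event type, a UTC
    timestamp, and any event-specific details, serialized as one JSON
    line so the log remains machine-parseable over the system lifecycle.
    """
    record = {
        "system_id": system_id,
        "event": event,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        **details,
    }
    logger.info(json.dumps(record))
    return record
```

In production such records would be shipped to tamper-evident storage with a retention period matching the system's documented lifecycle.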
4. Transparency Obligations
Sinaptic meets the transparency obligations established by the EU AI Act through the following measures:
- AI Disclosure: Where Sinaptic® DROID+ agents interact with natural persons, the deploying organization is provided with clear mechanisms to disclose AI involvement. Sinaptic provides configurable disclosure notifications that inform users they are interacting with an AI system.
- Decision Explanations: The Sinaptic AI Intent Firewall® provides clear, human-readable explanations for each allow/deny decision, including the specific rules and signals that contributed to the determination.
- Model Information: For each AI component, we document and make available the model’s purpose, capabilities, limitations, and known biases through model cards and technical documentation.
- Instructions for Use: Clear instructions for use are provided to deployers of our AI systems, enabling them to understand the system’s intended purpose, capabilities, and limitations, and to use the system in compliance with the regulation.
- Generated Content Marking: Where AI systems generate synthetic content (e.g., text generated by Sinaptic® DROID+ agents), the content is marked in a machine-readable format as AI-generated, enabling downstream identification.
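Machine-readable marking of the kind listed above can be sketched as wrapping each output with provenance metadata. The `mark_generated` helper and its fields are hypothetical; production systems would typically adopt an established provenance standard such as C2PA content credentials rather than an ad hoc envelope.

```python
def mark_generated(text: str, model_id: str) -> dict:
    """Wrap AI-generated text with machine-readable provenance (sketch).

    The returned envelope flags the content as AI-generated and names
    the generator, so downstream consumers can identify synthetic text
    without parsing the content itself.
    """
    return {
        "content": text,
        "ai_generated": True,
        "generator": model_id,
    }
```

The envelope travels with the content; downstream systems check the `ai_generated` flag instead of guessing from the text.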
5. Human Oversight
In accordance with Article 14 of the EU AI Act, Sinaptic designs its AI systems to be effectively overseen by natural persons during the period the system is in use. Our human oversight framework includes:
- Understanding and Monitoring: Products include dashboards and monitoring tools that enable overseers to fully understand the AI system’s capabilities and limitations and to properly monitor its operation.
- Intervention Capability: Human operators can intervene in the AI system’s operation at any time, including the ability to override, pause, or terminate the system via emergency stop controls.
- Output Interpretation: AI outputs are presented in a manner that allows natural persons to correctly interpret them, including confidence scores, uncertainty indicators, and contextual information.
- Escalation Workflows: Configurable escalation rules automatically route high-stakes or low-confidence decisions to human reviewers before action is taken.
- Automation Bias Countermeasures: User interfaces are designed to prevent over-reliance on AI outputs, including presenting alternative interpretations and requiring active human confirmation for consequential actions.
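The escalation behavior described above can be sketched as a routing rule that sends high-stakes or low-confidence outputs to a human reviewer before any action is taken. The `route_decision` helper and its 0.85 threshold are illustrative assumptions, not Sinaptic's actual configuration.

```python
def route_decision(confidence: float, stakes: str,
                   threshold: float = 0.85) -> str:
    """Route a model output: auto-apply, or escalate to a human reviewer.

    High-stakes decisions always go to human review regardless of model
    confidence; low-stakes decisions escalate only when confidence falls
    below the configured threshold (0.85 here, purely illustrative).
    """
    if stakes == "high" or confidence < threshold:
        return "human_review"
    return "auto_apply"
```

Note that high stakes override confidence: a 99%-confident decision in a high-stakes domain still reaches a human, which is the behavior Article 14 oversight is meant to guarantee.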
6. Technical Documentation
Sinaptic prepares and maintains technical documentation for each AI system in accordance with Article 11 and Annex IV of the EU AI Act. Documentation covers:
- A general description of the AI system, including its intended purpose, the persons involved in its design and development, and its date of development.
- A detailed description of the elements of the AI system and of the process for its development, including training methodologies, training data, design choices, and assumptions.
- Detailed information about the monitoring, functioning, and control of the AI system, including its performance and accuracy levels, known or foreseeable circumstances that may lead to risks, and human oversight measures.
- A description of the appropriateness of the performance metrics used for the specific AI system.
- A detailed description of the risk management system.
- A description of any changes made to the system through its lifecycle.
- Information about the data used for training, validation, and testing, including data collection methodology and data governance procedures.
7. Post-Market Monitoring
Sinaptic operates a post-market monitoring system proportionate to the nature and risk level of each AI system, as required by Article 72:
- Continuous Performance Monitoring: Automated monitoring tracks system performance, accuracy, and behavior patterns, alerting engineering teams to degradation or anomalies.
- Incident Reporting: We have established procedures for reporting serious incidents and malfunctions to relevant market surveillance authorities in accordance with Article 73.
- Feedback Collection: We systematically collect feedback from deployers and users regarding system performance, unexpected behavior, and potential risks.
- Periodic Reviews: Post-market monitoring data is reviewed at least quarterly to assess whether the AI system continues to comply with requirements and whether its risk classification remains appropriate.
- Corrective Actions: Where monitoring reveals non-compliance or unacceptable risks, we implement corrective actions including model updates, feature restrictions, or product recalls as necessary.
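Continuous performance monitoring of the kind described above could be implemented as a sliding-window accuracy tracker that raises an alert when accuracy drops below a configured floor. The class below is a sketch; the window size, minimum sample count, and threshold are illustrative assumptions.

```python
from collections import deque

class AccuracyMonitor:
    """Sliding-window accuracy tracker that flags degradation (sketch)."""

    def __init__(self, window: int = 500, threshold: float = 0.90):
        # Keep only the most recent `window` outcomes.
        self.outcomes = deque(maxlen=window)
        self.threshold = threshold

    def record(self, correct: bool) -> None:
        """Record one labeled outcome (True if the prediction was correct)."""
        self.outcomes.append(correct)

    def degraded(self) -> bool:
        """True when windowed accuracy has fallen below the threshold."""
        # Require a minimum sample count before alerting, so a handful
        # of early errors does not trigger a spurious alarm.
        if len(self.outcomes) < 100:
            return False
        return sum(self.outcomes) / len(self.outcomes) < self.threshold
```

A degradation alert would then feed the corrective-action process: triage, root-cause analysis, and, where needed, a model update or feature restriction.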
For high-risk AI systems, a post-market monitoring plan is documented as part of the technical documentation and updated throughout the system’s lifecycle.
8. Regulatory Engagement
Sinaptic proactively engages with regulatory developments related to the EU AI Act, including participating in stakeholder consultations, monitoring guidance from the European AI Office and national competent authorities, and adapting our compliance framework as implementing acts and harmonized standards are adopted. We maintain an internal regulatory tracking system to ensure timely adaptation to new requirements and guidance.
Request Compliance Information
For questions about our EU AI Act compliance or to request technical documentation, contact us at hello@sinaptic.ai.