# EU AI Act Compliance
The EU AI Act establishes a risk-based regulatory framework for AI systems operating in the European Union. Surfinguard includes built-in tools to help assess and document your AI system’s compliance requirements.
## Risk Classification
The EU AI Act classifies AI systems into four risk levels:
| Level | Description | Example Systems |
|---|---|---|
| Unacceptable | Banned outright | Social scoring, real-time biometric surveillance |
| High | Strict requirements | Medical devices, legal decisions, employment screening |
| Limited | Transparency obligations | Chatbots, deepfakes, emotion recognition |
| Minimal | No specific requirements | Spam filters, video games, inventory management |
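When client code needs to branch on the tier returned by an assessment, the taxonomy above can be mirrored as a small lookup table. This is an illustrative sketch only; `RISK_TIERS` and `deployment_allowed` are not part of the Surfinguard SDK:

```python
# Hypothetical helper: map each EU AI Act risk tier to whether
# deployment is permitted and whether formal obligations apply.
RISK_TIERS = {
    "unacceptable": {"permitted": False, "obligations": True},
    "high":         {"permitted": True,  "obligations": True},
    "limited":      {"permitted": True,  "obligations": True},
    "minimal":      {"permitted": True,  "obligations": False},
}

def deployment_allowed(risk_level: str) -> bool:
    """Return True unless the system falls in the banned tier."""
    return RISK_TIERS.get(risk_level, {}).get("permitted", False)
```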
## Assessing Your System

Use the `assessCompliance()` method to determine your AI system’s risk classification and applicable requirements.

**JavaScript**

```javascript
import { Guard } from '@surfinguard/sdk';

const guard = new Guard({
  mode: 'api',
  apiKey: 'sg_live_your_key_here',
});

const assessment = await guard.assessCompliance({
  systemType: 'chatbot',
  domain: 'customer-service',
  hasHumanOversight: true,
  processesPersonalData: true,
  isAutonomous: false,
});

console.log(assessment);
// {
//   riskLevel: "limited",
//   requirements: [
//     "Transparency: Users must be informed they are interacting with an AI",
//     "Record-keeping: Maintain logs of system operation",
//     "Human oversight: Ensure human review capability exists"
//   ],
//   recommendations: [...]
// }
```

**Python**
```python
from surfinguard import Guard

guard = Guard(api_key="sg_live_your_key_here")

assessment = guard.assess_compliance({
    "system_type": "chatbot",
    "domain": "customer-service",
    "has_human_oversight": True,
    "processes_personal_data": True,
    "is_autonomous": False,
})

print(assessment.risk_level)    # "limited"
print(assessment.requirements)  # [...]
```

**API**
```shell
curl -X POST https://api.surfinguard.com/v2/compliance/assess \
  -H "Authorization: Bearer sg_live_..." \
  -H "Content-Type: application/json" \
  -d '{
    "systemType": "chatbot",
    "domain": "customer-service",
    "hasHumanOversight": true,
    "processesPersonalData": true,
    "isAutonomous": false
  }'
```

## System Profile Fields
| Field | Type | Description |
|---|---|---|
| `systemType` | string | Type of AI system (`chatbot`, `recommendation`, `classification`, `generation`, `autonomous_agent`) |
| `domain` | string | Application domain (`healthcare`, `finance`, `legal`, `education`, `customer-service`, `general`) |
| `hasHumanOversight` | boolean | Whether humans can intervene in decisions |
| `processesPersonalData` | boolean | Whether the system handles personal data |
| `isAutonomous` | boolean | Whether the system acts without human approval |
| `targetRegion` | string | Target deployment region (`EU`, `US`, `global`) |
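For stricter request handling, the profile fields above can be mirrored in a typed structure with light validation before the API is called. A sketch under assumptions (the `SystemProfile` dataclass and its allowed-value set are illustrative, not SDK types):

```python
from dataclasses import dataclass

# Allowed systemType values, as listed in the field table above.
ALLOWED_TYPES = {"chatbot", "recommendation", "classification",
                 "generation", "autonomous_agent"}

@dataclass
class SystemProfile:
    """Hypothetical client-side mirror of the system profile payload."""
    system_type: str
    domain: str
    has_human_oversight: bool
    processes_personal_data: bool
    is_autonomous: bool
    target_region: str = "EU"

    def __post_init__(self):
        # Fail fast locally rather than sending an invalid profile.
        if self.system_type not in ALLOWED_TYPES:
            raise ValueError(f"unknown systemType: {self.system_type}")
```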
## Requirements by Risk Level

### High Risk
Systems classified as high-risk must meet these requirements:
- Risk management system: Continuous identification and mitigation of risks
- Data governance: Training data quality, relevance, and representativeness
- Technical documentation: Detailed system documentation for authorities
- Record-keeping: Automatic logging of events for traceability
- Transparency: Clear information to deployers about capabilities and limitations
- Human oversight: Ability for humans to understand, intervene, and override
- Accuracy and robustness: Consistent performance, resilience to errors
- Cybersecurity: Protection against threats specific to AI systems
- Conformity assessment: Third-party or self-assessment before deployment
- Registration: Registration in the EU database for high-risk systems
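Teams often track these obligations as a checklist during conformity work. A minimal, non-SDK sketch that reports which of the high-risk obligations above remain open:

```python
# Illustrative checklist keys, one per obligation listed above.
HIGH_RISK_REQUIREMENTS = [
    "risk_management", "data_governance", "technical_documentation",
    "record_keeping", "transparency", "human_oversight",
    "accuracy_robustness", "cybersecurity",
    "conformity_assessment", "registration",
]

def outstanding(completed: set) -> list:
    """Return the obligations not yet marked complete."""
    return [r for r in HIGH_RISK_REQUIREMENTS if r not in completed]
```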
### Limited Risk
- Transparency: Users must know they are interacting with AI
- Record-keeping: Maintain operational logs
- Labeling: AI-generated content must be identifiable
### Minimal Risk
- Voluntary codes of conduct: Encouraged but not required
- Best practices: Follow industry standards
## Getting Requirements
Retrieve the full requirements list for any risk level:
```shell
curl "https://api.surfinguard.com/v2/compliance/requirements?level=high" \
  -H "Authorization: Bearer sg_live_..."
```

Response:
```json
{
  "level": "high",
  "requirements": [
    {
      "id": "EAIA-RM",
      "name": "Risk Management System",
      "article": "Article 9",
      "description": "Establish, implement, document, and maintain a risk management system...",
      "actions": [
        "Identify and analyze known and foreseeable risks",
        "Estimate and evaluate risks from intended and misuse scenarios",
        "Adopt risk management measures"
      ]
    }
  ]
}
```

## How Surfinguard Helps
Surfinguard supports EU AI Act compliance in several ways:
| Requirement | How Surfinguard Helps |
|---|---|
| Risk management | Continuous threat scoring of AI actions identifies risks in real-time |
| Record-keeping | Event pipeline logs all checked actions with scores and reasons |
| Human oversight | CAUTION results flag actions for human review before execution |
| Cybersecurity | 152 threat patterns protect against AI-specific attacks |
| Transparency | CheckResult provides clear, human-readable reasons for every decision |
| Accuracy | Deterministic heuristic scoring ensures consistent, reproducible results |
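Requirements fetched from the `/v2/compliance/requirements` endpoint can also be folded into internal documentation. A hedged sketch that summarizes the response shape shown earlier (the `summarize_requirements` helper is illustrative, not part of the SDK):

```python
import json

def summarize_requirements(payload: str) -> list:
    """Condense a requirements response into 'Article: Name' strings."""
    data = json.loads(payload)
    return [f'{r["article"]}: {r["name"]}' for r in data["requirements"]]

# Abbreviated example payload matching the documented response shape.
example = '''{"level": "high", "requirements": [
  {"id": "EAIA-RM", "name": "Risk Management System",
   "article": "Article 9", "description": "...", "actions": []}]}'''
```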
## Compliance Reporting

Generate a compliance summary for your system:

```javascript
const report = await guard.assessCompliance({
  systemType: 'autonomous_agent',
  domain: 'finance',
  hasHumanOversight: true,
  processesPersonalData: true,
  isAutonomous: true,
});

// Use the report for documentation
console.log(`Risk Level: ${report.riskLevel}`);
console.log(`Requirements: ${report.requirements.length}`);
report.requirements.forEach((req) => {
  console.log(`- ${req}`);
});
```
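The same assessment data can be rendered straight into documentation artifacts. A small illustrative helper (not part of the SDK; the Markdown layout is an assumption) that turns a risk level and requirements list into a Markdown section:

```python
def to_markdown(risk_level: str, requirements: list) -> str:
    """Render an assessment as a Markdown section for compliance docs."""
    lines = ["## Compliance Assessment",
             f"**Risk level:** {risk_level}",
             ""]
    lines += [f"- {req}" for req in requirements]
    return "\n".join(lines)
```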