
EU AI Act Compliance

The EU AI Act establishes a risk-based regulatory framework for AI systems operating in the European Union. Surfinguard includes built-in tools to help assess and document your AI system’s compliance requirements.

Risk Classification

The EU AI Act classifies AI systems into four risk levels:

| Level | Description | Example Systems |
| --- | --- | --- |
| Unacceptable | Banned outright | Social scoring, real-time biometric surveillance |
| High | Strict requirements | Medical devices, legal decisions, employment screening |
| Limited | Transparency obligations | Chatbots, deepfakes, emotion recognition |
| Minimal | No specific requirements | Spam filters, video games, inventory management |
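The tiering above can be sketched as a small decision function. This is purely illustrative: the set names and the ordering of checks below are assumptions for the example, not how Surfinguard's assessment actually works, which weighs many more signals.

```python
# Illustrative sketch of the four-tier classification from the table above.
# The category sets below mirror the "Example Systems" column; they are
# not an exhaustive reading of the EU AI Act.

BANNED_USES = {"social_scoring", "realtime_biometric_surveillance"}
HIGH_RISK_DOMAINS = {"healthcare", "legal", "employment"}
TRANSPARENCY_TYPES = {"chatbot", "deepfake", "emotion_recognition"}

def classify_risk(system_type: str, domain: str) -> str:
    """Map a system type and domain to one of the four risk levels."""
    if system_type in BANNED_USES:
        return "unacceptable"
    if domain in HIGH_RISK_DOMAINS:
        return "high"
    if system_type in TRANSPARENCY_TYPES:
        return "limited"
    return "minimal"
```

Note that the checks cascade from most to least severe: a chatbot deployed in a high-risk domain is classified high, not limited.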

Assessing Your System

Use the assessCompliance() method to determine your AI system’s risk classification and applicable requirements.

JavaScript

import { Guard } from '@surfinguard/sdk';
 
const guard = new Guard({
  mode: 'api',
  apiKey: 'sg_live_your_key_here',
});
 
const assessment = await guard.assessCompliance({
  systemType: 'chatbot',
  domain: 'customer-service',
  hasHumanOversight: true,
  processesPersonalData: true,
  isAutonomous: false,
});
 
console.log(assessment);
// {
//   riskLevel: "limited",
//   requirements: [
//     "Transparency: Users must be informed they are interacting with an AI",
//     "Record-keeping: Maintain logs of system operation",
//     "Human oversight: Ensure human review capability exists"
//   ],
//   recommendations: [...]
// }

Python

from surfinguard import Guard
 
guard = Guard(api_key="sg_live_your_key_here")
 
assessment = guard.assess_compliance({
    "system_type": "chatbot",
    "domain": "customer-service",
    "has_human_oversight": True,
    "processes_personal_data": True,
    "is_autonomous": False,
})
 
print(assessment.risk_level)     # "limited"
print(assessment.requirements)   # [...]

API

curl -X POST https://api.surfinguard.com/v2/compliance/assess \
  -H "Authorization: Bearer sg_live_..." \
  -H "Content-Type: application/json" \
  -d '{
    "systemType": "chatbot",
    "domain": "customer-service",
    "hasHumanOversight": true,
    "processesPersonalData": true,
    "isAutonomous": false
  }'

System Profile Fields

| Field | Type | Description |
| --- | --- | --- |
| systemType | string | Type of AI system (chatbot, recommendation, classification, generation, autonomous_agent) |
| domain | string | Application domain (healthcare, finance, legal, education, customer-service, general) |
| hasHumanOversight | boolean | Whether humans can intervene in decisions |
| processesPersonalData | boolean | Whether the system handles personal data |
| isAutonomous | boolean | Whether the system acts without human approval |
| targetRegion | string | Target deployment region (EU, US, global) |
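If you build the profile programmatically, it can help to validate the enumerated fields before calling the API. A minimal sketch using a dataclass (the class itself is not part of the SDK; field names follow the Python SDK's snake_case convention):

```python
from dataclasses import dataclass

# Allowed values taken from the System Profile Fields table above.
ALLOWED_SYSTEM_TYPES = {"chatbot", "recommendation", "classification",
                        "generation", "autonomous_agent"}
ALLOWED_DOMAINS = {"healthcare", "finance", "legal", "education",
                   "customer-service", "general"}

@dataclass
class SystemProfile:
    """Client-side validation of an assessment profile (illustrative)."""
    system_type: str
    domain: str
    has_human_oversight: bool
    processes_personal_data: bool
    is_autonomous: bool
    target_region: str = "EU"

    def __post_init__(self) -> None:
        if self.system_type not in ALLOWED_SYSTEM_TYPES:
            raise ValueError(f"unknown system_type: {self.system_type!r}")
        if self.domain not in ALLOWED_DOMAINS:
            raise ValueError(f"unknown domain: {self.domain!r}")
```

Rejecting unknown values locally surfaces typos before they become API errors.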

Requirements by Risk Level

High Risk

Systems classified as high-risk must meet these requirements:

  • Risk management system: Continuous identification and mitigation of risks
  • Data governance: Training data quality, relevance, and representativeness
  • Technical documentation: Detailed system documentation for authorities
  • Record-keeping: Automatic logging of events for traceability
  • Transparency: Clear information to deployers about capabilities and limitations
  • Human oversight: Ability for humans to understand, intervene, and override
  • Accuracy and robustness: Consistent performance, resilience to errors
  • Cybersecurity: Protection against threats specific to AI systems
  • Conformity assessment: Third-party or self-assessment before deployment
  • Registration: Registration in the EU database for high-risk systems

Limited Risk

  • Transparency: Users must know they are interacting with AI
  • Record-keeping: Maintain operational logs
  • Labeling: AI-generated content must be identifiable

Minimal Risk

  • Voluntary codes of conduct: Encouraged but not required
  • Best practices: Follow industry standards
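The three tiers above amount to a simple lookup from risk level to requirement list. A sketch (the short names below paraphrase this section's lists and are not the SDK's internal representation):

```python
# Requirement names per risk level, paraphrasing the lists above.
REQUIREMENTS_BY_LEVEL = {
    "high": [
        "Risk management system", "Data governance",
        "Technical documentation", "Record-keeping",
        "Transparency", "Human oversight",
        "Accuracy and robustness", "Cybersecurity",
        "Conformity assessment", "Registration",
    ],
    "limited": ["Transparency", "Record-keeping", "Labeling"],
    "minimal": ["Voluntary codes of conduct", "Best practices"],
}

def requirements_for(level: str) -> list[str]:
    """Return requirement names for a risk level."""
    if level == "unacceptable":
        raise ValueError("unacceptable systems are banned outright")
    return list(REQUIREMENTS_BY_LEVEL[level])
```

There is no entry for the unacceptable tier because such systems may not be deployed at all.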

Getting Requirements

Retrieve the full requirements list for any risk level:

curl "https://api.surfinguard.com/v2/compliance/requirements?level=high" \
  -H "Authorization: Bearer sg_live_..."

Response:

{
  "level": "high",
  "requirements": [
    {
      "id": "EAIA-RM",
      "name": "Risk Management System",
      "article": "Article 9",
      "description": "Establish, implement, document, and maintain a risk management system...",
      "actions": [
        "Identify and analyze known and foreseeable risks",
        "Estimate and evaluate risks from intended and misuse scenarios",
        "Adopt risk management measures"
      ]
    }
  ]
}
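In Python, the response can be parsed and summarized with the standard library. The payload below is the sample response above; the one-line summary format is illustrative:

```python
import json

# Sample response body from the requirements endpoint (copied from the
# example above; in practice this arrives over HTTP).
body = '''
{
  "level": "high",
  "requirements": [
    {
      "id": "EAIA-RM",
      "name": "Risk Management System",
      "article": "Article 9",
      "description": "Establish, implement, document, and maintain a risk management system...",
      "actions": [
        "Identify and analyze known and foreseeable risks",
        "Estimate and evaluate risks from intended and misuse scenarios",
        "Adopt risk management measures"
      ]
    }
  ]
}
'''

data = json.loads(body)
summary = [f'{r["article"]}: {r["name"]} ({len(r["actions"])} actions)'
           for r in data["requirements"]]
print("\n".join(summary))
# Article 9: Risk Management System (3 actions)
```

The `article` field is useful for cross-referencing each requirement against the EU AI Act text itself.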

How Surfinguard Helps

Surfinguard supports EU AI Act compliance in several ways:

| Requirement | How Surfinguard Helps |
| --- | --- |
| Risk management | Continuous threat scoring of AI actions identifies risks in real time |
| Record-keeping | Event pipeline logs all checked actions with scores and reasons |
| Human oversight | CAUTION results flag actions for human review before execution |
| Cybersecurity | 152 threat patterns protect against AI-specific attacks |
| Transparency | CheckResult provides clear, human-readable reasons for every decision |
| Accuracy | Deterministic heuristic scoring ensures consistent, reproducible results |

Compliance Reporting

Generate a compliance summary for your system:

const report = await guard.assessCompliance({
  systemType: 'autonomous_agent',
  domain: 'finance',
  hasHumanOversight: true,
  processesPersonalData: true,
  isAutonomous: true,
});
 
// Use the report for documentation
console.log(`Risk Level: ${report.riskLevel}`);
console.log(`Requirements: ${report.requirements.length}`);
report.requirements.forEach(req => {
  console.log(`- ${req}`);
});
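A Python equivalent of the reporting loop above, returning the summary as a single string so it can be written straight into documentation (the helper function is illustrative, not part of the SDK):

```python
def render_report(risk_level: str, requirements: list[str]) -> str:
    """Format an assessment into a plain-text compliance summary."""
    lines = [f"Risk Level: {risk_level}",
             f"Requirements: {len(requirements)}"]
    lines.extend(f"- {req}" for req in requirements)
    return "\n".join(lines)

print(render_report(
    "limited",
    ["Transparency: Users must be informed they are interacting with an AI"],
))
```

The returned string can be appended to your technical documentation or archived alongside the raw assessment for record-keeping.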