AI Transparency

Last updated: April 12, 2026 · Operated by Steeled Inc. · EU AI Act Article 11 Technical Documentation

EU AI Act Compliance: CyberStackHub is committed to full compliance with the EU Artificial Intelligence Act (Regulation (EU) 2024/1689) ahead of the August 2, 2026 enforcement date. This page constitutes our Article 11 technical documentation and Article 50 transparency disclosure. A machine-readable version is available at /api/ai-transparency.

Contents

  1. AI System Risk Classification
  2. AI Models and Providers
  3. How Outputs Are Generated
  4. Data Inputs and Governance
  5. Known Limitations and Biases
  6. Human Oversight Mechanisms
  7. Your Rights
  8. Contact and Feedback

1. AI System Risk Classification

Under the EU AI Act, every AI system falls into one of four risk categories: unacceptable, high, limited, or minimal risk. The classification of each CyberStackHub AI feature is shown below.

Risk Assessment Tool
Scores organizational security posture 0–100
Risk level: Limited Risk
Basis: Informs but does not replace human judgment; no automated decision-making with legal effects.
Obligations: Article 50: AI-generated label, methodology disclosure.

AI Document Generators
Security policies, incident response plans, compliance gap reports, vendor risk assessments, cyber insurance readiness
Risk level: Limited Risk
Basis: Generates template documents; all outputs require human review before use; no autonomous decisions.
Obligations: Article 50: AI-generated label, transparency statement.

Compliance Readiness Checker
SOC 2, ISO 27001, HIPAA, GDPR, PCI DSS readiness scoring
Risk level: Limited Risk
Basis: Advisory tool; scores are informational only; not a certified audit or compliance determination.
Obligations: Article 50: AI-generated label, limitation disclaimers.

Questionnaire / Assessment Bot
20-question cybersecurity assessment with grading
Risk level: Limited Risk
Basis: Informational scoring; no automated action taken based on responses; human decides next steps.
Obligations: Article 50: Pre-interaction AI disclosure, output labeling.

Breach Exposure Simulation
AI-generated educational breach simulation
Risk level: Minimal Risk
Basis: Purely educational simulation; clearly labeled as AI-generated; not connected to live breach databases.
Obligations: Voluntary best practice: clear AI-generated label.

Password Strength Analyzer
Rule-based + AI analysis of password strength
Risk level: Minimal Risk
Basis: Deterministic analysis plus optional AI explanation; no personal data retained.
Obligations: Voluntary best practice: no special obligations.

Phishing Indicator Analyzer
Analyzes email text and URLs for phishing signals
Risk level: Minimal Risk
Basis: Educational advisory tool; user makes all decisions; no automated filtering or blocking.
Obligations: Voluntary best practice: AI-generated label on output.
Classification note: None of CyberStackHub's AI systems fall into the "High Risk" categories defined in EU AI Act Annex III (e.g., critical infrastructure, employment, credit scoring, law enforcement). Our tools are informational and advisory only — humans retain all decision-making authority.

2. AI Models and Providers

CyberStackHub uses large language models (LLMs) from third-party providers to generate outputs. We do not train, fine-tune, or host our own models.

GPT-4 / GPT-4o
Provider: OpenAI, Inc. (San Francisco, CA)

Used for generating structured security documents, compliance reports, risk recommendations, and narrative explanations. OpenAI processes inputs under their Data Processing Agreement. Inputs are not used to train OpenAI models.

Claude (Sonnet / Haiku)
Provider: Anthropic, PBC (San Francisco, CA)

Used for complex analytical assessments, vendor risk analysis, and longer-form document generation. Anthropic processes inputs under their commercial API terms. Inputs are not used to train Anthropic models.

Model selection: The specific model version used may vary based on task complexity, availability, and cost. All models used are commercially licensed general-purpose LLMs with no special training on customer data.
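As an illustration, routing of this kind can be expressed as a preference table with provider fallback. The sketch below is hypothetical; the task names, model identifiers, and fallback order do not reflect our production configuration:

```ts
// Hypothetical sketch of per-task model routing. Task names, model
// identifiers, and fallback order are illustrative placeholders only.
type Task = "document_generation" | "vendor_risk_analysis" | "narrative_explanation";

const preference: Record<Task, string[]> = {
  document_generation: ["gpt-4o", "claude-sonnet"],
  vendor_risk_analysis: ["claude-sonnet", "gpt-4"],
  narrative_explanation: ["claude-haiku", "gpt-4o"],
};

// Pick the first preferred model whose provider is currently reachable.
function selectModel(task: Task, providerUp: (provider: string) => boolean): string {
  const model = preference[task].find((m) =>
    providerUp(m.startsWith("gpt") ? "openai" : "anthropic"),
  );
  if (model === undefined) throw new Error("no AI provider available");
  return model;
}
```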

EU data residency: API calls to OpenAI and Anthropic are routed through US-based infrastructure. For organizations with EU data residency requirements, note that assessment inputs (company name, answers to questionnaire questions) are transmitted to US-based AI providers. Personal data is minimized and inputs contain no special category data.

Machine-readable AI system card: https://cyberstackhub.ai/api/ai-transparency (JSON, public)
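The card can be retrieved with a plain HTTP GET. The field names in this sketch are an illustrative assumption; the live endpoint's JSON schema is authoritative:

```ts
// Fetch the public machine-readable AI system card. The interface
// below is an illustrative guess at the shape; consult the live
// endpoint for the authoritative schema.
interface AISystemCard {
  lastUpdated: string;
  models: { provider: string; family: string }[];
  features: { name: string; riskLevel: string }[];
}

async function fetchSystemCard(): Promise<AISystemCard> {
  const res = await fetch("https://cyberstackhub.ai/api/ai-transparency");
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return (await res.json()) as AISystemCard;
}
```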

3. How Outputs Are Generated

3.1 Methodology overview

When you use a CyberStackHub tool, the following steps occur (a simplified sketch follows the list):

  1. Input collection: You provide structured inputs (company name, industry, answers to specific security questions). These are sanitized and validated server-side before being passed to the AI model.
  2. Prompt construction: Our engineering team has developed structured system prompts that instruct the AI model on the cybersecurity framework to apply, the output format required, and the disclaimers to include. These prompts encode expert cybersecurity knowledge from publicly available frameworks (NIST CSF, ISO 27001, SOC 2, HIPAA, PCI DSS, CMMC).
  3. AI inference: The structured prompt plus your inputs are sent to the AI provider API. The model generates a response based on patterns learned during its pre-training.
  4. Post-processing: Outputs are parsed, validated for format compliance, and returned to you. Numeric scores are calculated programmatically from your answers — not by AI inference — ensuring consistency.
  5. Logging: The session ID, tool used, and output metadata are stored. Raw inputs and full AI outputs are stored in our database to support your access to historical reports.
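In outline, and as a hedged sketch only (every function and field name below is illustrative, not our production code), the pipeline can be pictured like this:

```ts
// Simplified sketch of the five steps above. All names are
// illustrative; this is not CyberStackHub's actual implementation.
interface ToolInput {
  companyName: string;
  industry: string;
  answers: Record<string, number>; // e.g. one 0-4 value per question
}

// Step 1: server-side sanitization and validation (stubbed).
function sanitizeAndValidate(raw: any): ToolInput {
  return {
    companyName: String(raw.companyName ?? "").slice(0, 200),
    industry: String(raw.industry ?? "").slice(0, 100),
    answers: raw.answers ?? {},
  };
}

// Step 2: structured system prompt encoding framework guidance
// and the required disclaimers.
function buildPrompt(framework: string, input: ToolInput): string {
  return (
    `Apply ${framework} to ${input.companyName} (${input.industry}). ` +
    `Label all output as AI-generated and include limitation disclaimers.`
  );
}

async function runTool(
  raw: any,
  callModel: (prompt: string) => Promise<string>,        // step 3: provider API call
  scoreAnswers: (a: Record<string, number>) => number,   // step 4: programmatic scoring
): Promise<{ score: number; narrative: string }> {
  const input = sanitizeAndValidate(raw);
  const narrative = await callModel(buildPrompt("NIST CSF", input));
  const score = scoreAnswers(input.answers);
  console.log({ tool: "risk-assessment", score });       // step 5: log session metadata
  return { score, narrative };
}
```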

3.2 Scoring methodology

Risk scores and compliance readiness percentages are calculated using a weighted scoring algorithm applied to your questionnaire answers. AI models supplement this with qualitative recommendations and narrative explanations — they do not determine the numeric score. This separation ensures scores are reproducible and auditable.
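As a minimal illustration of that separation, assuming hypothetical question keys and weights (our actual weights are internal), a deterministic readiness score can be computed entirely without any AI model:

```ts
// Illustrative weighted scoring over questionnaire answers, each
// scored 0-4. Question keys and weights are hypothetical examples.
const weights: Record<string, number> = {
  access_control: 3,
  incident_response: 2,
  backup_strategy: 2,
  employee_training: 1,
};

function readinessScore(answers: Record<string, number>): number {
  let earned = 0;
  let possible = 0;
  for (const [question, weight] of Object.entries(weights)) {
    earned += Math.min(answers[question] ?? 0, 4) * weight;
    possible += 4 * weight;
  }
  // Deterministic and auditable: identical answers always yield the
  // identical score, independent of any AI model output.
  return Math.round((earned / possible) * 100);
}
```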

3.3 Calibration and quality

Our system prompts are reviewed by cybersecurity professionals and updated when major framework versions are released. We conduct periodic accuracy reviews against published industry benchmarks. We do not claim perfect accuracy — see Section 5 (Limitations).

4. Data Inputs and Governance

4.1 What data is used as AI input

AI model inputs consist exclusively of data you provide during tool use: your company name, industry, and your answers to each tool's security questions (see Section 3.1).

We do not input into AI models: special category data, data gathered from any source other than your direct tool inputs, or anything beyond what is needed to generate the output you requested (see Sections 2 and 4.2).

4.2 No training on user data (EU AI Act Article 10)

CyberStackHub does not use your inputs to train, fine-tune, or improve AI models — ours or any third-party provider's. Your data is processed solely to generate the output you requested.

Both OpenAI and Anthropic have commercial API agreements that prevent API inputs from being used for model training by default. We have verified this with both providers.

4.3 Data retention

AI-generated outputs are stored in your account so you can access historical reports. Inputs are retained for the same period to allow output re-generation or audit. You can request deletion of all AI-generated content associated with your account by emailing privacy@cyberstackhub.ai. We will process deletion requests within 30 days.

5. Known Limitations and Biases

Transparency requires honesty about what our AI systems cannot do well. The following limitations are known and documented.

Training data cutoff

LLMs have knowledge cutoff dates. Newly disclosed CVEs, recently updated compliance standards, or emerging threat actors may not be reflected in outputs. We note the approximate knowledge reference date in full reports.

Self-reported input bias

Our tools rely entirely on inputs you provide. If inputs are incomplete, inaccurate, or optimistic, outputs will reflect those biases. We do not independently verify your security posture.

Geographic and sector gaps

Our prompts are primarily calibrated to US/EU cybersecurity frameworks and English-language standards. Organizations in other regions or operating under non-standard frameworks may receive less accurate guidance.

Non-determinism

LLM outputs are probabilistic. Running the same inputs twice may produce slightly different recommendations. Numeric scores are deterministic; AI-generated narratives are not.

No live system access

We do not perform active penetration testing, live network scans, or real-time vulnerability detection. All assessments are based solely on what you tell us, not what we observe.

Hallucination risk

LLMs can generate plausible-sounding but incorrect information, especially for very specific technical details, regulatory citations, or statistics. Always verify critical findings against primary sources.

6. Human Oversight Mechanisms

CyberStackHub's AI systems are designed with meaningful human oversight at every stage. No AI output triggers automated action — you decide what to do with every result.

7. Your Rights

Under the EU AI Act and GDPR, you have the following rights in relation to CyberStackHub AI systems:

  1. To be told you are interacting with an AI system before the interaction begins (see Section 1).
  2. To have AI-generated outputs clearly labeled as such.
  3. To access the historical reports and inputs stored in your account (see Section 4.3).
  4. To request deletion of all AI-generated content associated with your account (see Section 4.3).
  5. To report inaccurate outputs or practices inconsistent with this documentation (see Section 8).

8. Contact and Feedback

For questions about this AI transparency documentation, EU AI Act compliance, or to exercise your rights, email privacy@cyberstackhub.ai (see Section 4.3).

For general feedback on AI output accuracy: feedback@cyberstackhub.ai

Documentation currency: This page is reviewed and updated quarterly. If you notice outdated information or an inconsistency with our actual practices, please report it so we can correct it promptly. Last review: April 2026.