Compliance & Transparency
Last updated: April 12, 2026 · Operated by Steeled Inc. · EU AI Act Article 11 Technical Documentation
Under the EU AI Act, all AI systems must be classified into one of four risk categories: unacceptable risk, high risk, limited risk, and minimal risk. The table below shows how each CyberStackHub AI feature is classified.
| AI Feature / Tool | Risk Level | Basis | Obligations |
|---|---|---|---|
| **Risk Assessment Tool** (scores organizational security posture 0-100) | Limited Risk | Informs but does not replace human judgment; no automated decision-making with legal effects | Article 52: AI-generated label, methodology disclosure |
| **AI Document Generators** (security policies, incident response plans, compliance gap reports, vendor risk assessments, cyber insurance readiness) | Limited Risk | Generates template documents; all outputs require human review before use; no autonomous decisions | Article 52: AI-generated label, transparency statement |
| **Compliance Readiness Checker** (SOC 2, ISO 27001, HIPAA, GDPR, PCI DSS readiness scoring) | Limited Risk | Advisory tool; scores are informational only; not a certified audit or compliance determination | Article 52: AI-generated label, limitation disclaimers |
| **Questionnaire / Assessment Bot** (20-question cybersecurity assessment with grading) | Limited Risk | Informational scoring; no automated action taken based on responses; human decides next steps | Article 52: pre-interaction AI disclosure, output labeling |
| **Breach Exposure Simulation** (AI-generated educational breach simulation) | Minimal Risk | Purely educational simulation; clearly labeled as AI-generated; not connected to live breach databases | Voluntary best practice: clear AI-generated label |
| **Password Strength Analyzer** (rule-based + AI analysis of password strength) | Minimal Risk | Deterministic analysis plus optional AI explanation; no personal data retained | Voluntary best practice: no special obligations |
| **Phishing Indicator Analyzer** (analyzes email text and URLs for phishing signals) | Minimal Risk | Educational advisory tool; user makes all decisions; no automated filtering or blocking | Voluntary best practice: AI-generated label on output |
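To illustrate the AI-generated labeling obligation that recurs in the table above, here is a minimal sketch of how a label and transparency statement could travel with tool output. The `label_ai_output` helper and its wording are hypothetical, not CyberStackHub's actual implementation.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class LabeledOutput:
    """AI output wrapped with the transparency metadata the table above requires."""
    body: str
    label: str
    generated_on: date

def label_ai_output(body: str) -> LabeledOutput:
    # Hypothetical sketch: attach a clear AI-generated disclosure so the
    # label travels with the content wherever it is displayed or exported.
    return LabeledOutput(
        body=body,
        label="This content was generated by an AI system and requires human review before use.",
        generated_on=date.today(),
    )

report = label_ai_output("Your organization scored 72/100 on the risk assessment.")
```

Keeping the label in the same object as the body (rather than injecting it into the text) lets every rendering surface, export, and API response display it consistently.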
CyberStackHub uses large language models (LLMs) from third-party providers to generate outputs. We do not train, fine-tune, or host our own models.
OpenAI: Used for generating structured security documents, compliance reports, risk recommendations, and narrative explanations. OpenAI processes inputs under their Data Processing Agreement. Inputs are not used to train OpenAI models.
Anthropic: Used for complex analytical assessments, vendor risk analysis, and longer-form document generation. Anthropic processes inputs under their commercial API terms. Inputs are not used to train Anthropic models.
Model selection: The specific model version used may vary based on task complexity, availability, and cost. All models used are commercially licensed general-purpose LLMs with no special training on customer data.
EU data residency: API calls to OpenAI and Anthropic are routed through US-based infrastructure. For organizations with EU data residency requirements, note that assessment inputs (company name, answers to questionnaire questions) are transmitted to US-based AI providers. Personal data is minimized and inputs contain no special category data.
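One way the minimization described above can be implemented is with a field allow-list applied before any API call leaves for a US-based provider. The field names and the `minimize_for_provider` helper below are illustrative assumptions, not CyberStackHub's actual schema or code.

```python
# Hypothetical allow-list: only non-personal assessment fields are forwarded
# to US-based AI providers; anything not listed is dropped before transmission.
ALLOWED_FIELDS = {"company_name", "industry", "questionnaire_answers"}

def minimize_for_provider(raw_input: dict) -> dict:
    """Strip any field not explicitly allow-listed before the API call."""
    return {k: v for k, v in raw_input.items() if k in ALLOWED_FIELDS}

payload = minimize_for_provider({
    "company_name": "Acme GmbH",
    "contact_email": "cto@acme.example",   # personal data: removed here
    "questionnaire_answers": {"q1": "yes"},
})
# contact_email never reaches the provider
```

An allow-list (rather than a block-list) is the safer default for minimization: any new field added to the product is excluded from provider transmission until it is explicitly reviewed and added.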
When you use a CyberStackHub tool, your inputs are processed as described below.
Risk scores and compliance readiness percentages are calculated using a weighted scoring algorithm applied to your questionnaire answers. AI models supplement this with qualitative recommendations and narrative explanations — they do not determine the numeric score. This separation ensures scores are reproducible and auditable.
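The separation described above can be sketched as follows. The weights, question keys, and narrative wording are illustrative assumptions, not the production scoring algorithm; the point is that the numeric score is computed deterministically first, and the AI step only adds narrative around it.

```python
# Illustrative weights per questionnaire area (assumed values, not the real ones).
WEIGHTS = {"access_control": 0.4, "backups": 0.3, "training": 0.3}

def risk_score(answers: dict[str, float]) -> float:
    """Deterministic weighted score on a 0-100 scale; reproducible and auditable."""
    return round(100 * sum(WEIGHTS[k] * answers[k] for k in WEIGHTS), 1)

def narrative(answers: dict[str, float], score: float) -> str:
    # In production this step would call an LLM for qualitative recommendations;
    # crucially, the score above is already fixed and is never altered by it.
    return f"Score {score}: prioritize the lowest-rated areas for remediation."

answers = {"access_control": 0.8, "backups": 0.5, "training": 1.0}
score = risk_score(answers)   # → 77.0
summary = narrative(answers, score)
```

Because `risk_score` has no AI dependency, the same answers always yield the same number, which is what makes the score reproducible and auditable even though the accompanying narrative may vary between runs.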
Our system prompts are reviewed by cybersecurity professionals and updated when major framework versions are released. We conduct periodic accuracy reviews against published industry benchmarks. We do not claim perfect accuracy — see Section 5 (Limitations).
AI model inputs consist exclusively of data you provide during tool use:
We do not input into AI models:
CyberStackHub does not use your inputs to train, fine-tune, or improve AI models — ours or any third-party provider's. Your data is processed solely to generate the output you requested.
Our commercial API agreements with both OpenAI and Anthropic provide that API inputs are not used for model training by default. We have verified this with both providers.
AI-generated outputs are stored in your account so you can access historical reports. Inputs are retained for the same period as the corresponding outputs, to allow re-generation or audit. You can request deletion of all AI-generated content associated with your account by emailing privacy@cyberstackhub.ai. We will process deletion requests within 30 days.
Transparency requires honesty about what our AI systems cannot do well. The following limitations are known and documented.
LLMs have knowledge cutoff dates. Newly disclosed CVEs, recently updated compliance standards, or emerging threat actors may not be reflected in outputs. We note the approximate knowledge reference date in full reports.
Our tools rely entirely on inputs you provide. If inputs are incomplete, inaccurate, or optimistic, outputs will reflect those biases. We do not independently verify your security posture.
Our prompts are primarily calibrated to US/EU cybersecurity frameworks and English-language standards. Organizations in other regions or operating under non-standard frameworks may receive less accurate guidance.
LLM outputs are probabilistic. Running the same inputs twice may produce slightly different recommendations. Numeric scores are deterministic; AI-generated narratives are not.
We do not perform active penetration testing, live network scans, or real-time vulnerability detection. All assessments are based solely on what you tell us, not what we observe.
LLMs can generate plausible-sounding but incorrect information, especially for very specific technical details, regulatory citations, or statistics. Always verify critical findings against primary sources.
CyberStackHub's AI systems are designed with meaningful human oversight at every stage. No AI output triggers automated action — you decide what to do with every result.
Under the EU AI Act and GDPR, you have the following rights in relation to CyberStackHub AI systems:
For questions about this AI transparency documentation or EU AI Act compliance, or to exercise your rights:
For general feedback on AI output accuracy: feedback@cyberstackhub.ai