Vulnerability Disclosure Policy
AI Red Team Labs is built for defensive validation. We welcome responsible security research that helps us
improve the safety, reliability, and integrity of the platform.
Safe Mode & Testing Statement
Our product operates in a Safe Evaluation Mode: it is designed to validate defenses and provide recommendations.
It does not execute live attacks or target external systems.
Please do not:
- Attempt to access data that is not yours
- Disrupt service availability (e.g., DDoS / stress testing)
- Use social engineering against employees/users
- Test physical security or third-party providers outside the stated scope
In Scope
Targets covered by this policy:
https://airedlabs.com (web app + static pages)
https://api.airedlabs.com (API endpoints)
Examples of in-scope classes:
- Authentication & authorization flaws (JWT/session issues, access control bypass)
- Data exposure (PII/secrets leakage) within our systems
- Input validation issues (XSS, injection, SSRF where applicable)
- Billing/entitlement logic issues
- Security misconfigurations (headers, TLS, CORS)
Out of Scope
- Denial-of-service testing or load/stress testing
- Social engineering (phishing, vishing, pretexting)
- Third-party services not controlled by AI Red Team Labs
- Reports that only describe “best practices” without a demonstrable security impact
How to Report
Email: security@airedlabs.com (preferred)
If that mailbox is unavailable, you can reach us at: john@airedlabs.com
Include:
- A clear description of the issue and impact
- Steps to reproduce (proof-of-concept if helpful)
- Affected URLs/endpoints and request/response examples (redact sensitive data)
- Your contact info and preferred attribution name (optional)
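For illustration only, a minimal report might look like the example below. All details (the endpoint, field name, and payload) are hypothetical and do not describe a real issue in our systems; redact any sensitive data, such as tokens or personal information, before sending.

    Title: Stored XSS in project name field (hypothetical example)
    Affected endpoint: POST https://api.airedlabs.com/projects (hypothetical)
    Steps to reproduce:
      1. Create a project whose name contains <script>alert(1)</script>
      2. Open the dashboard; the script runs in the viewer's browser
    Impact: Script execution in the session of any user viewing the dashboard
    Request (redacted):
      POST /projects HTTP/1.1
      Host: api.airedlabs.com
      Authorization: Bearer [REDACTED]

      {"name": "<script>alert(1)</script>"}
    Contact: researcher@example.com (attribution: optional handle)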
Response Targets
- Acknowledgement: within 2 business days
- Initial triage: within 5 business days
- Fix timeline: varies by severity; critical issues prioritized immediately
We will keep you updated and coordinate disclosure timing when appropriate.
Safe Harbor
We consider research conducted under this policy to be authorized, and we will not pursue legal action for
good-faith testing that respects user privacy, avoids disruption, and is reported promptly.
Good-faith guidelines:
- Use the minimum amount of data necessary to demonstrate impact
- Stop testing once you confirm the issue
- Do not share or publicize details before coordinated disclosure
Last updated: