Vulnerability Disclosure Policy

AI Red Team Labs is built for defensive validation. We welcome responsible security research that helps us improve the safety, reliability, and integrity of the platform.

Safe Mode & Testing Statement

Our product operates in a Safe Evaluation Mode: it is designed to validate defenses and provide recommendations. It does not execute live attacks or target external systems.
Please do not:
  • Attempt to access data that is not yours
  • Disrupt service availability (e.g., DDoS / stress testing)
  • Use social engineering against employees/users
  • Test physical security or third-party providers outside the stated scope

In Scope

Targets covered by this policy:
Examples of in-scope classes:

Out of Scope

How to Report

Email: security@airedlabs.com
Include:
  • A clear description of the issue and impact
  • Steps to reproduce (proof-of-concept if helpful)
  • Affected URLs/endpoints and request/response examples (redact sensitive data; see the sample report below)
  • Your contact info and preferred attribution name (optional)
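
For reference, a minimal report might look like the following. Every detail in this sample is an illustrative placeholder (the endpoint, parameter, and payload are invented for demonstration), not a real finding:

  Summary: Reflected XSS via the "q" parameter on /reports/search
  Impact: A crafted link runs attacker-controlled script in a victim's session
  Steps to reproduce:
    1. Sign in, then open /reports/search?q=<script>alert(1)</script>
    2. Observe the payload reflected unescaped in the results heading
  Request/response: excerpt attached, with the session cookie shown as [REDACTED]
  Contact: researcher@example.com (preferred attribution: "example-handle")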

Response Targets

We will keep you updated on the status of your report and coordinate disclosure timing when appropriate.

Safe Harbor

We consider research conducted under this policy to be authorized, and we will not pursue legal action for good-faith testing that respects user privacy, avoids disruption, and is reported promptly.
Good-faith guidelines:
  • Use the minimum amount of data necessary to demonstrate impact
  • Stop testing once you confirm the issue
  • Do not share or publicize details before coordinated disclosure