Braintrust AWS Breach Prompts Urgent API Key Rotation After Unauthorized Access

Braintrust AI Platform Breach Exposes Customer API Keys in AWS Account

Severity: HIGH
Published: May 10, 2026
Last Updated: May 11, 2026
Cloud Security · Data Breach · Supply Chain Attack

Impact Scope

Affected Companies

Braintrust

Industries Affected

Technology · Other

Related Entities (initial)

Organizations

Amazon Web Services (AWS)

Other

Braintrust

Full Report (when first published)

Executive Summary

Braintrust, a platform for evaluating and monitoring artificial intelligence models, has disclosed a security incident involving unauthorized access to one of its Amazon Web Services (AWS) accounts. The breach, detected on May 4, 2026, exposed sensitive API keys belonging to its customers. These keys are used to connect to various cloud-based AI services, making this a significant supply chain attack risk for the AI development community. In response, Braintrust has urged all customers to immediately rotate any API keys stored within its platform. The incident underscores the critical importance of secure secret management and vendor risk assessment in the rapidly expanding AI industry.

Threat Overview

On May 4, 2026, Braintrust's security team identified suspicious activity within one of its AWS cloud environments. The investigation revealed that an unauthorized actor had gained access to an account that contained customer API keys. These keys are essentially passwords that grant programmatic access to third-party AI services (like OpenAI, Anthropic, etc.) and cloud platforms.

Upon discovery, Braintrust initiated its incident response protocol, which included:

  • Locking down the compromised AWS account.
  • Rotating all internal credentials and secrets.
  • Conducting a full audit of access across related systems.

Customer notifications began on May 5, with the strong recommendation to revoke and regenerate all API keys that had been entrusted to the Braintrust platform. While Braintrust reports that only one customer was directly affected by the unauthorized access, three other customers have reported suspicious spikes in their AI service usage, which are now under investigation.

Technical Analysis

While the initial access vector was not disclosed, the nature of the breach strongly points to the use of compromised credentials. This aligns with the MITRE ATT&CK technique T1078.004 - Valid Accounts: Cloud Accounts. In this scenario, an attacker obtains legitimate credentials for a cloud account (through phishing, infostealers, or other means) and uses them to access the environment, appearing as a legitimate user and bypassing perimeter defenses.

Once inside the AWS account, the attacker likely performed discovery actions to locate and exfiltrate the stored API keys. The goal would be to abuse these keys to perform actions on behalf of Braintrust's customers, such as:

  • Incurring large costs by making expensive AI model API calls (Financial Theft).
  • Stealing proprietary data or models from the customers' AI service accounts (Data Theft).
  • Using the compromised access to pivot into the customers' own cloud environments.

The suspicious usage spikes reported by other customers suggest attackers may have already begun to abuse the stolen keys.

Impact Assessment

The impact of this breach extends beyond Braintrust to its entire customer base:

  • Supply Chain Risk: Braintrust's customers are now at risk of having their own AI service accounts compromised using the stolen keys. This could lead to financial loss, data breaches, and operational disruption for them.
  • Financial Loss: The direct abuse of AI API keys can be extremely costly. Attackers can rack up huge bills in a short amount of time, as seen in the suspicious usage spikes.
  • Data Exfiltration: Attackers could use the keys to access and steal sensitive data that customers were processing with their AI models.
  • Erosion of Trust: The incident damages trust in third-party AI development platforms, particularly those that require access to sensitive secrets like API keys.

IOCs — Directly from Articles

No specific technical Indicators of Compromise (IOCs) were mentioned in the source articles.

Cyber Observables — Hunting Hints

Customers of Braintrust and similar platforms should proactively hunt for signs of compromise:

  • Cloud Cost Anomalies: Monitor cloud and AI service provider billing dashboards for any sudden, unexplained spikes in usage or cost. This is often the first indicator of API key abuse.
  • API Log Auditing: Review API access logs from AI providers (e.g., OpenAI, Anthropic, AWS Bedrock) for requests originating from unexpected IP addresses or geographic locations.
  • Unusual Model Usage: Look for API calls to models that your organization does not typically use, or a high volume of calls at unusual times (e.g., overnight, weekends).
  • CloudTrail Log Analysis: In AWS, monitor CloudTrail logs for suspicious activity related to secrets management services (e.g., Secrets Manager, Parameter Store) or IAM role usage.
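The CloudTrail hunting step above can be sketched as a small log filter. This is a minimal sketch assuming CloudTrail events have been exported as JSON; the `eventSource`, `eventName`, and `sourceIPAddress` fields are standard CloudTrail record fields, but the trusted-IP allowlist and the sample records below are illustrative assumptions, not data from this incident.

```python
import json

# Illustrative egress IP allowlist -- replace with your organization's real ranges.
TRUSTED_IPS = {"203.0.113.10", "203.0.113.11"}

# CloudTrail API calls worth scrutinizing around a secrets-exposure incident.
SENSITIVE_CALLS = {
    ("secretsmanager.amazonaws.com", "GetSecretValue"),
    ("ssm.amazonaws.com", "GetParameter"),
    ("ssm.amazonaws.com", "GetParameters"),
}

def flag_suspicious(records):
    """Return CloudTrail records that read secrets from an untrusted source IP."""
    hits = []
    for rec in records:
        key = (rec.get("eventSource"), rec.get("eventName"))
        if key in SENSITIVE_CALLS and rec.get("sourceIPAddress") not in TRUSTED_IPS:
            hits.append(rec)
    return hits

# Sample (fabricated) CloudTrail records for demonstration.
sample = json.loads("""[
  {"eventSource": "secretsmanager.amazonaws.com", "eventName": "GetSecretValue",
   "sourceIPAddress": "198.51.100.77"},
  {"eventSource": "secretsmanager.amazonaws.com", "eventName": "GetSecretValue",
   "sourceIPAddress": "203.0.113.10"}
]""")

for rec in flag_suspicious(sample):
    print(rec["eventName"], "from", rec["sourceIPAddress"])
```

In production, the same filter logic would typically run as a CloudWatch Logs metric filter or an Athena query over the CloudTrail S3 bucket rather than an ad hoc script.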

Detection & Response

  • API Key Rotation: The primary and most urgent response action is to revoke all API keys that were stored in Braintrust and generate new ones. This immediately invalidates the stolen credentials.
  • Usage Monitoring (D3FEND: D3-RAPA - Resource Access Pattern Analysis): Implement real-time monitoring and alerting on API usage. Set thresholds for cost and call volume, and trigger alerts for any significant deviation from the established baseline.
  • Credential Scanning: Use tools to scan code repositories and other assets for hardcoded API keys to ensure they are not inadvertently exposed.
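The usage-monitoring step (D3-RAPA) can be illustrated with a simple baseline-deviation check. This is a minimal sketch in pure Python; the daily call counts and the 3-sigma threshold are illustrative assumptions, and a real deployment would pull these counts from provider billing or usage APIs.

```python
from statistics import mean, stdev

def usage_alerts(baseline, observed, sigmas=3.0):
    """Flag observed daily API-call counts that deviate sharply from baseline.

    baseline: historical daily call counts used to establish normal behavior.
    observed: list of (day, count) pairs to evaluate against the threshold.
    """
    mu = mean(baseline)
    sd = stdev(baseline)
    threshold = mu + sigmas * sd
    return [(day, count) for day, count in observed if count > threshold]

# Illustrative numbers: a stable week of usage, then one abnormal spike
# of the kind reported by Braintrust customers.
baseline = [980, 1010, 995, 1020, 1005, 990, 1000]
observed = [("2026-05-04", 1015), ("2026-05-05", 9800)]

for day, count in usage_alerts(baseline, observed):
    print(f"ALERT {day}: {count} calls exceeds baseline threshold")
```

A static sigma threshold is deliberately simple; for spiky workloads, a rolling window or seasonal baseline reduces false positives.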

Mitigation

  • Secrets Management: Avoid storing raw API keys in third-party platforms whenever possible. Use dedicated secrets management solutions (e.g., HashiCorp Vault, AWS Secrets Manager) that provide temporary, scoped credentials and robust audit trails.
  • Least Privilege Principle: When creating API keys, grant them the minimum permissions necessary to perform their function. For example, a key for a specific AI model should not have access to all models or administrative functions.
  • IP Allowlisting: If the service provider supports it, restrict API key usage to a specific list of trusted IP addresses, such as your application's egress IPs.
  • Vendor Security Assessment: Thoroughly vet the security practices of any third-party vendor before entrusting them with sensitive secrets or data. Inquire specifically about how they store, encrypt, and audit access to customer credentials.
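As an illustration of the least-privilege principle above, a role used only to invoke a single model can be scoped with an IAM policy like the following. This is a hypothetical sketch using Amazon Bedrock's `bedrock:InvokeModel` action; the model ARN is a placeholder, and the exact resource format should be verified against current AWS documentation.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "InvokeOneModelOnly",
      "Effect": "Allow",
      "Action": "bedrock:InvokeModel",
      "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0"
    }
  ]
}
```

Because the policy names a single action and a single model resource, a stolen credential bound to it cannot enumerate other models, read secrets, or touch administrative APIs.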

Timeline of Events

1. May 4, 2026: Braintrust detects unauthorized access to one of its Amazon Web Services (AWS) accounts.
2. May 5, 2026: Braintrust begins notifying customers and advises them to rotate API keys.
3. May 10, 2026: This article was published.

Article Updates

May 11, 2026

Expanded technical analysis of Braintrust breach, detailing additional MITRE ATT&CK techniques, potential 'model-jacking' abuse, and enhanced detection/mitigation strategies.

Further analysis of the Braintrust AI platform breach reveals additional MITRE ATT&CK techniques involved, including T1528 (Steal Application Access Token), T1539 (Steal Web Session Cookie), and T1496 (Resource Hijacking). The report highlights the potential for attackers to engage in 'model-jacking' or 'cryptojacking for AI' using stolen keys, incurring costs or manipulating models. New detection methods like AWS GuardDuty alerts and expanded mitigation strategies, including D3FEND Cloud User and Group Permissions, CSPM/CWPP, and billing alerts, are also detailed.


Article Author

Jason Gomes

• Cybersecurity Practitioner

Cybersecurity professional with over 10 years of specialized experience in security operations, threat intelligence, incident response, and security automation. Expertise spans SOAR/XSOAR orchestration, threat intelligence platforms, SIEM/UEBA analytics, and building cyber fusion centers. Background includes technical enablement, solution architecture for enterprise and government clients, and implementing security automation workflows across IR, TIP, and SOC use cases.

Threat Intelligence & Analysis · Security Orchestration (SOAR/XSOAR) · Incident Response & Digital Forensics · Security Operations Center (SOC) · SIEM & Security Analytics · Cyber Fusion & Threat Sharing · Security Automation & Integration · Managed Detection & Response (MDR)

Tags

AI · API Keys · AWS · Braintrust · Cloud Security · Data Breach · Supply Chain Attack
