OpenAI invests in Adaptive Security to tackle deepfake threats

OpenAI's first investment in cybersecurity amounts to $43 million


On April 2, 2025, New York-based cybersecurity startup Adaptive Security announced a $43 million Series A funding round, co-led by OpenAI’s Startup Fund and Andreessen Horowitz (a16z).[1] This marks OpenAI’s first-ever investment in the cybersecurity sector, a move that signals growing concern over AI-driven cyber threats.

With generative AI fueling sophisticated attacks like deepfakes and phishing, Adaptive Security’s focus on AI-powered defense has caught the attention of major players.

Adaptive Security, founded by Brian Long and Andrew Jones, offers a platform that simulates real-world cyberattacks using AI-generated scenarios, such as deepfake voice calls or targeted phishing emails. Since its public launch in January 2023, the company has signed more than 100 enterprise clients, showing strong early traction.

The round, which also drew backing from Abstract Ventures and executives at Google and Shopify, will allow Adaptive to expand its technology to counter the rising wave of AI-enabled social engineering attacks.

The investment comes at a critical time. Just last year, the Hong Kong office of a multinational firm lost $25 million after criminals used AI-generated likenesses of its executives to deceive an employee during a video call. Posts on X reflect a mix of excitement and caution, with some users calling Adaptive’s approach “a game-changer” for security training, while others question whether OpenAI’s involvement might blur the lines between AI innovation and security risk. This funding round highlights a new chapter for both OpenAI and the cybersecurity industry.

How generative AI is used to scam regular people

Adaptive Security’s platform is designed to fight fire with fire. By using AI to mimic real-world attack scenarios such as deepfake voice calls, SMS scams, and phishing emails, it trains employees to spot and stop these threats before they cause harm. This is crucial as AI tools become more accessible to criminals.
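To make the idea concrete, here is a minimal Python sketch of what an AI-driven phishing simulation could look like in practice. It is a hypothetical illustration, not Adaptive Security’s actual platform or code: the Employee record, the lure templates, and the training.example.com landing page are invented stand-ins, and a real product would draft personalized lures with a language model and track which employees click or report the message.

# Hypothetical illustration only - not Adaptive Security's actual code.
import random
from dataclasses import dataclass

@dataclass
class Employee:
    name: str
    role: str
    email: str

# A real platform would have an AI model draft lures from public context
# (job title, recent projects); static templates stand in for that here.
LURE_TEMPLATES = [
    "Hi {name}, finance flagged an urgent invoice for the {role} team. "
    "Please review it before 5 PM: {link}",
    "{name}, your single sign-on session expired. Re-authenticate now "
    "to keep access: {link}",
]

TRAINING_DOMAIN = "https://training.example.com"  # harmless landing page used for the drill

def build_simulation(target: Employee) -> dict:
    """Assemble one phishing-training email for a single employee."""
    lure = random.choice(LURE_TEMPLATES).format(
        name=target.name,
        role=target.role,
        link=f"{TRAINING_DOMAIN}/t/{target.email}",
    )
    return {"to": target.email, "subject": "Action required", "body": lure}

if __name__ == "__main__":
    print(build_simulation(Employee(name="Dana", role="engineering", email="dana@example.com")))

In a real deployment, the interesting part is what happens after delivery: who clicks, who reports the message, and what follow-up training each group receives.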

For example, hackers recently cloned the voice of Wiz CEO Assaf Rappaport in a deepfake attack aimed at tricking his employees, showing that even cybersecurity leaders are vulnerable.

OpenAI’s decision to back Adaptive Security isn’t just a business move; it’s also a response to the darker side of AI innovation. Generative AI, like OpenAI’s own ChatGPT, can be misused to create convincing fake content, making traditional security measures less effective. Ian Hathaway, a partner at the OpenAI Startup Fund, said:[2]

AI is reshaping the cybersecurity threat landscape faster than most organizations can respond.

The timing of this investment is also notable for OpenAI. The AI leader has been under fire for the ways its technology can be misused, and backing a defensive startup may be an attempt to tackle the issue directly. With $43 million in new funding, Adaptive plans to build out its platform with more realistic simulations and more sophisticated training. For cybersecurity workers, it signals a shift: AI is no longer only a menace; it is increasingly one of the chief tools in the battle against cyberattacks.

OpenAI’s role in cybersecurity

OpenAI’s venture into cybersecurity is noteworthy. Its investment in Adaptive Security shows a commitment to fighting AI-driven threats, and the sector stands to benefit. CEO Brian Long posted on X that the investment will help “in preventing sophisticated AI-powered cyberattacks,” which aligns with the industry’s need for stronger defenses. Yet OpenAI’s own technology has also been exploited by bad actors, which presents a challenge for the company.

The cybersecurity community is watching closely. OpenAI's Startup Fund, which has invested in over a dozen startups since 2021, is wading into a space where trust and responsibility are paramount.[3]

Analysts fear that OpenAI’s involvement could create conflicts of interest, since the same AI technology that powers Adaptive’s defenses also facilitates the attacks. That duality is fueling debate on X, where admiration for Adaptive’s mission mixes with doubt over OpenAI’s role.

Artificial intelligence is reshaping cybersecurity. Adaptive Security’s $43 million round from OpenAI and Andreessen Horowitz underscores how urgent the fight against AI-enabled threats has become. As attacks grow more advanced, technologies like Adaptive’s could make the difference between secure networks and costly breaches. The market is left wondering whether this partnership will transform cybersecurity or expose new risks in the AI arms race.

About the author
Gabriel E. Hall - Passionate web researcher

Gabriel E. Hall is a passionate malware researcher who has been working for 2-spyware for almost a decade.


References