Artificial intelligence is no longer just a lab experiment. It’s quietly becoming part of everyday software, helping developers write code, assisting analysts with research, and powering tools inside banks, hospitals, and tech companies. Over the last few years, large language models (LLMs) have moved from curiosity to core infrastructure for many digital products.
But while companies rushed to build smarter systems, one important piece lagged behind: security. The way AI systems behave is very different from traditional software, and that difference is forcing the cybersecurity world to rethink how protections actually work. As a result, a new discipline is emerging within the security community: AI penetration testing, often referred to as AI pentesting.
Why AI Systems Create New Security Risks
Most software behaves in predictable ways. You give it an input, the code follows a set of rules, and it produces an output. Security testing has always relied on this predictable structure.
Large language models don’t work that way.
They interpret language, guess intent, and generate responses based on probabilities rather than strict logic. Sometimes that works brilliantly. Other times, it opens doors that security teams never expected.
A few of the risks security teams are already studying include:
- Prompt injection attacks, where malicious input manipulates the model’s behavior
- Data leakage, where hidden training information appears in responses
- Model manipulation, where attackers influence AI decisions through crafted prompts
- Unsafe API actions, where an AI assistant triggers unintended system commands
These issues become even more serious when AI systems connect to databases, APIs, or automated workflows.
When AI Connects to Real Systems, the Stakes Get Higher
Many modern AI applications don’t operate alone. They often act as the interface for complex systems behind the scenes. Think about a typical AI-powered tool today. You may read corporate documents, access customer databases, launch backend services, or send requests to an external API. Security researchers point out that risk often occurs not in the model itself, but in how the model works with other systems. Even a seemingly harmless prompt may cause the AI Assistant to obtain sensitive information or execute unintended commands.
The Growing Field of AI Pentesting
To evaluate these risks, security professionals are adapting traditional penetration testing techniques to AI environments.
AI pentesting examines how language models behave when exposed to adversarial inputs, unexpected prompts, or manipulated data sources. Instead of probing network ports or software vulnerabilities, testers analyze how AI systems interpret language and how that interpretation affects downstream systems.
Among the engineers exploring this space is Nayan Goel, a Principal Application Security Engineer whose work focuses on the intersection of AI systems and modern application security.
Modern research examines what happens when large language models move from controlled environments into real-world software ecosystems. Once AI interacts with APIs, data pipelines, and automated workflows, the number of possible failure points increases quickly.
Research Is Starting to Catch Up
For a long time, most work on AI security stayed inside academic circles. Researchers studied theoretical attacks or analyzed how machine-learning systems could be manipulated.
Goel has contributed to this discussion through research on topics including federated learning for secure AI models, securing AI systems in adversarial environments, and protecting autonomous systems. Some of this work has been presented at international conferences such as IEEE and Springer, reflecting growing recognition of these challenges in both academic and industry settings.
Building Security Standards for AI Applications
As more organizations deploy AI tools, the need for common security guidelines is becoming apparent. Organizations such as OWASP have started publishing guidance specifically for generating AI systems and large language models (LLMs).
Organizations such as OWASP have started publishing guidance specifically for generative AI systems and large language models (LLMs). Goel has also contributed to community efforts focused on defining security practices for AI-driven systems, including work connected to OWASP’s agentic security initiatives.
These guidelines represent an early attempt to bring structure to a field that is evolving quickly. The goal of these projects is to help developers integrate security controls into AI applications before vulnerabilities become widespread.
Turning Research Into Real Security Tools
Beyond research frameworks, security teams also need practical ways to test AI systems.
To help address that gap, Goel’s recent work includes developing and testing methods aimed at identifying vulnerabilities such as prompt injection across AI models, an area that continues to receive attention as generative systems become more widely used. One interesting feature of this tool is its multi-agent testing approach, where different analyzer agents evaluate each other’s behavior during testing. This setup helps mimic coordinated attack strategies that might occur in real-world scenarios.
A version of this framework was presented at events such as BSides Chicago, where researchers and practitioners share approaches to evaluating the resilience of AI systems in real-world conditions.
AI Is Also Becoming Part of the Defense
While AI introduces new security risks, it may also help solve some of them. Security researchers are experimenting with machine-learning systems that monitor behavior patterns, detect suspicious activity, and automate threat detection.
Teaching Future Security Engineers
Another important part of the AI security ecosystem is education. Universities are expanding programs that combine cybersecurity with artificial intelligence, but many real-world security problems still aren’t fully covered in traditional courses.
Activities like these help bridge the gap between academic research and the practical skills engineers need in industry.
Why AI Pentesting Will Matter More in the Future
In every major technological transformation, new security challenges have arisen. Web security became indispensable when the Internet spread in the 1990s. When cloud computing expanded, organizations were forced to review infrastructure protection measures. AI seems to be in the same situation today. Large language models are built into everything from in-house tools to customer applications. As their influence grows, so does the importance of carefully testing them. AI pentesting is still a young field, but it’s gaining attention quickly. With new research, security frameworks, and testing tools emerging, the industry is starting to build the foundation needed to secure intelligent systems.
Digital Trends partners with external contributors. All contributor content is reviewed by the Digital Trends editorial staff.

