Steve Wilson, Chief AI and Product Officer at Exabeam – Interview Series

Steve Wilson is the Chief AI and Product Officer at Exabeam, where his team applies cutting-edge AI technologies to address real-world cybersecurity challenges. He founded and co-chairs the OWASP Gen AI Security Project, home of the industry-standard OWASP Top 10 for Large Language Model Applications list.
His award-winning book, The Developer’s Playbook for Large Language Model Security (O’Reilly Media), was selected as the best cutting-edge cybersecurity book by Cyber Defense Magazine.
Exabeam is a leader in intelligence and automation, providing the world’s smartest companies with the power to operate securely. By combining the scale and power of AI with the strength of its industry-leading behavioral analytics and automation, Exabeam lets organizations see security incidents more comprehensively, spot anomalies that other tools miss, and obtain faster, more accurate, and more repeatable responses. Exabeam empowers global security teams to combat cyberthreats, mitigate risk, and streamline operations.
Your new title is Exabeam’s Chief AI and Product Officer. How does this reflect the importance of AI in the continuous evolution of cybersecurity?
Cybersecurity was one of the first fields to truly embrace machine learning, and for over a decade we have used ML at the core of our detection engines to identify abnormal behaviors that humans might miss. With the advent of newer AI technologies such as agentic AI, AI has moved from important to absolutely central.
My combined role as Chief AI and Product Officer at Exabeam fully reflects this evolution. In a company dedicated to embedding AI into its products, and in an industry like cybersecurity where AI matters more every day, it makes sense to unify AI strategy and product strategy under one role. This integration ensures we are strategically aligned to deliver transformative AI-driven solutions to the security analysts and operations teams who rely on us most.
Exabeam has pioneered agentic AI in security operations. Can you explain what this means in practice and how it differs from traditional AI approaches?
Agentic AI represents a meaningful evolution beyond traditional AI approaches. It is action-oriented: able to initiate processes, analyze information, and surface insights before analysts even ask. Beyond mere data analytics, agentic AI acts as an advisor, offering strategic guidance across the SOC, steering users toward the easiest wins, and providing step-by-step direction to improve their security posture. And instead of one clumsy, general-purpose chatbot, it arrives as a package of specialized agents, each tailored to a specific persona and dataset, that integrate seamlessly into the workflows of analysts, engineers, and managers to deliver targeted, impactful help.
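To make the “package of specialized agents” idea concrete, here is a minimal Python sketch of persona-scoped agents. Every name, instruction, and data scope below is illustrative, invented for this example rather than taken from Exabeam Nova’s design.

```python
# Sketch: one agent per persona, each with its own instructions and data scope,
# rather than a single generic chatbot. All values here are hypothetical.
PERSONA_AGENTS = {
    "analyst": {
        "instructions": "Triage open alerts; surface the riskiest cases first.",
        "data_scope": ["alerts", "case_timelines"],
    },
    "engineer": {
        "instructions": "Flag broken log sources and parsing gaps.",
        "data_scope": ["ingestion_health", "parser_stats"],
    },
    "manager": {
        "instructions": "Summarize SOC posture and the easiest wins this week.",
        "data_scope": ["kpi_rollups"],
    },
}

def brief(persona: str) -> str:
    """Render the standing brief a persona's agent would operate under."""
    agent = PERSONA_AGENTS[persona]
    return f"[{persona}] {agent['instructions']} (reads: {', '.join(agent['data_scope'])})"

for persona in PERSONA_AGENTS:
    print(brief(persona))
```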
With Exabeam Nova integrating multiple AI agents into the SOC workflow, what does the future of the security analyst role look like? Is it evolving, shrinking, or becoming more specialized?
The security analyst role is certainly evolving. Analysts, security engineers, and SOC managers are all overwhelmed by data, alerts, and cases. The real shift ahead is not just saving time on mundane tasks, although agents will certainly help there, but elevating everyone’s role toward team leadership. Analysts still need strong technical skills, but they will now lead a team of agents ready to accelerate their mission, scale their decisions, and genuinely drive security posture improvements. This shift positions analysts as strategic orchestrators rather than tactical responders.
Recent data show a disconnect between executives and analysts about the productivity impact of AI. Why do you think this perception gap exists, and how can it be closed?
Recent data show a significant disconnect: 71% of executives believe AI has significantly improved productivity, yet only 22% of frontline analysts, the daily users, agree. At Exabeam, we have watched this gap grow alongside the promises made for AI in cybersecurity. It has never been easier to build a flashy AI demo, and vendors are quick to claim they solve every SOC challenge. While those demos initially dazzle executives, many fall apart in the hands of analysts. The potential exists, and there are pockets of real payoff, but there is still too much noise and too little meaningful improvement. To bridge this perception gap, executives must prioritize AI tools that genuinely empower analysts, not just impress in a presentation. Trust, and real productivity gains, will follow when AI actually improves analyst effectiveness.
AI is accelerating threat detection and response, but how do you maintain a balance between automation and human judgment in high-risk cybersecurity incidents?
AI capabilities are developing rapidly, but the large language models underlying today’s agents were originally designed for language tasks like translation, not for nuanced decision-making, game theory, or handling complex human factors. That makes humans more important than ever in cybersecurity. AI does not diminish the analyst role; it elevates it. Analysts are now team leaders, leveraging their experience and insight to guide and mentor multiple agents so that decisions remain grounded in context and nuance. Ultimately, the way to balance automation with human judgment is to build a symbiotic relationship in which AI extends human expertise rather than replacing it.
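One common way to operationalize that symbiosis is a human-in-the-loop gate: low-risk steps run automatically, while high-stakes actions are routed to an analyst for approval. The sketch below illustrates the pattern generically; the action names and risk tiers are hypothetical, not Exabeam’s implementation.

```python
from dataclasses import dataclass

# Hypothetical risk tiers for illustration only.
LOW_RISK_ACTIONS = {"enrich_alert", "tag_case", "draft_summary"}
HIGH_RISK_ACTIONS = {"disable_account", "isolate_host", "block_ip_range"}

@dataclass
class ProposedAction:
    name: str
    target: str
    rationale: str  # the agent's explanation, shown to the analyst

def execute(action: ProposedAction) -> str:
    return f"executed {action.name} on {action.target}"

def human_in_the_loop(action: ProposedAction, analyst_approves) -> str:
    """Auto-run low-risk steps; hold high-stakes ones for a human decision."""
    if action.name in LOW_RISK_ACTIONS:
        return execute(action)
    if action.name in HIGH_RISK_ACTIONS and analyst_approves(action):
        return execute(action)
    return f"held for review: {action.name} on {action.target}"

# Example: an agent proposes isolating a host; a human must confirm.
proposal = ProposedAction("isolate_host", "srv-db-02", "beaconing to a known C2 domain")
approve = lambda a: True  # stand-in for a real analyst review UI
print(human_in_the_loop(proposal, approve))
```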
How does your product strategy change when AI becomes a core design principle rather than an add-on?
At Exabeam, our product strategy treats AI as a core design principle, not a superficial add-on. We built Exabeam from the ground up to support machine learning, from log ingestion through parsing, enrichment, and normalization, filling a powerful common information model optimized specifically for feeding ML systems. High-quality structured data is not merely important to AI systems; it is their lifeblood. Today, we embed agents directly into critical workflows, avoiding generic, clumsy chatbots, and instead target the critical use cases that bring real, tangible benefits to users.
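As a rough illustration of the normalization step described above, the sketch below parses a raw log line into a stable, structured event that downstream ML models can consume. The field names and regex are invented for this example; they are not the actual schema Exabeam uses.

```python
import re
from datetime import datetime, timezone

# Illustrative parser for one log format; real pipelines chain many of these.
SSH_FAIL = re.compile(r"Failed password for (?P<user>\S+) from (?P<src_ip>\S+)")

def normalize(raw_line: str, source: str) -> dict | None:
    """Turn a raw syslog line into a normalized event with stable field names."""
    m = SSH_FAIL.search(raw_line)
    if not m:
        return None  # unparsed lines would be routed to a fallback parser
    return {
        "time": datetime.now(timezone.utc).isoformat(),
        "source": source,
        "event_type": "auth_failure",  # stable category across log formats
        "user": m["user"],
        "src_ip": m["src_ip"],
    }

event = normalize("sshd[912]: Failed password for alice from 203.0.113.7", "linux-syslog")
print(event)  # consistent fields, regardless of which product emitted the raw log
```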
With Exabeam Nova, your stated goal is to move “from assistance to autonomy.” What are the key milestones on the path to fully autonomous security operations?
The idea of fully autonomous security operations is fascinating, but premature. A completely autonomous agent turned loose across an entire domain is neither effective nor safe. AI decision-making is improving, but it has not reached human-level reliability and will not for some time. At Exabeam, our approach is not to chase full autonomy; granting agents more operating authority than can be reliably tested and verified is what my group at OWASP identifies as a core vulnerability called Excessive Agency. Instead, our goal is agents that work under the supervision of human SOC experts: capable, but carefully guided. That combination of human oversight and targeted agentic assistance is the realistic, high-impact path.
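A standard mitigation for the Excessive Agency risk mentioned above is a hard allow-list of what each agent may do, so an agent can never acquire authority it was not explicitly granted. The sketch below shows that pattern in generic form; the agent and tool names are hypothetical.

```python
# Sketch: per-agent tool allow-lists as a guard against Excessive Agency.
ALLOWED_TOOLS = {
    "triage_agent": {"search_logs", "summarize_case"},  # read-only duties
    "response_agent": {"draft_containment_plan"},       # proposes, never executes
}

def call_tool(agent: str, tool: str, **kwargs) -> None:
    """Refuse any tool call outside the agent's granted permissions."""
    if tool not in ALLOWED_TOOLS.get(agent, set()):
        raise PermissionError(f"{agent} is not permitted to call {tool}")
    print(f"{agent} -> {tool}({kwargs})")

call_tool("triage_agent", "search_logs", query="src_ip=203.0.113.7")
try:
    call_tool("triage_agent", "isolate_host", host="srv-db-02")
except PermissionError as e:
    print(e)  # blocked: triage agents never get containment rights
```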
What is the biggest challenge you face in integrating GenAI and machine learning at the scale required for real-time cybersecurity?
One of the biggest challenges in scaling GenAI and machine learning for cybersecurity is balancing speed and accuracy. GenAI alone cannot handle the sheer volume of data our high-speed ML engines process continuously; even the most advanced AI agents have a context window that is far too small for it. Instead, our recipe uses ML to distill vast amounts of data into actionable insights, which our agents then interpret and act on effectively.
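The distill-then-reason recipe can be illustrated in a few lines of Python: an aggregation step shrinks a large event stream into a digest small enough for an agent’s context window. Everything below is a simplified stand-in, not Exabeam’s pipeline.

```python
from collections import Counter

def distill(events: list[dict], top_n: int = 3) -> str:
    """Reduce a raw event stream to a compact, human-readable digest."""
    by_user = Counter(e["user"] for e in events if e["event_type"] == "auth_failure")
    lines = [f"{user}: {count} failed logins" for user, count in by_user.most_common(top_n)]
    return "Top anomalous accounts:\n" + "\n".join(lines)

# Millions of events in production; a tiny synthetic sample here.
events = [{"event_type": "auth_failure", "user": u} for u in ["alice"] * 40 + ["bob"] * 3]
digest = distill(events)
print(digest)
# The few-hundred-byte digest, not the raw event stream, is what would be
# placed in an agent's prompt for triage and recommendations.
```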
You co-created the OWASP Top 10 for LLM Applications. What inspired it, and how do you see it shaping AI security best practices?
When I launched the OWASP Top 10 for LLM Applications in early 2023, there was very little structured information about LLM and GenAI security, yet interest was extremely high. Within a few days, more than 200 volunteers joined the project, bringing diverse opinions and expertise to shape the original list. Since then, it has been read more than 100,000 times and has become the basis for international industry standards. Today, that work has expanded into the OWASP Gen AI Security Project, covering AI red teaming, securing agentic systems, and addressing offensive uses of generative AI in cybersecurity. Our community recently passed 10,000 members and continues to advance AI security practices worldwide.
Your book “The Developer’s Playbook for Large Language Model Security” won top honors. What are the most important takeaways or principles from it that every AI developer should understand when building secure applications?
The most important takeaway from The Developer’s Playbook for Large Language Model Security is simple: “With great power comes great responsibility.” While understanding traditional security concepts remains essential, developers now face a whole new set of LLM-specific challenges. This powerful technology is not a free pass; it demands proactive, thoughtful security practices. Developers must broaden their perspective from the outset, identifying and addressing these new vulnerabilities and embedding security into every step of the AI application lifecycle.
As agentic AI becomes more mainstream, how do you see the cybersecurity workforce developing over the next five years?
We are in the middle of an AI arms race. Adversaries are actively deploying AI toward their malicious goals, making cybersecurity professionals more important than ever. The next five years won’t shrink the cybersecurity workforce; they will elevate it. Professionals must embrace AI and integrate it into their teams and workflows. Security roles will shift from individual, hands-on effort toward strategic command of AI-driven agent teams that enable faster, more effective responses. This transformation lets cybersecurity professionals lead decisively in the battle against evolving threats.
Thank you for the great interview. Readers who wish to learn more should visit Exabeam.