From Pessimist to Cautious Optimist: My First AI Security Conference
- Shanni Gurkevitch
- Sep 4
- 5 min read
This is a guest post by Eytan Schulman, a Heron AI Security community member.
Early last month, I had the opportunity to attend my first conference on AI Security, joining researchers, engineers, and security leaders from organizations like Anthropic, Meta, RAND, and Palo Alto Networks. Coming from an engineering and cybersecurity background, I was curious to dive deeper into the field, meet some of the people shaping it, and figure out how much it speaks to me and where I might fit in. The conference I attended was the AI Security Forum Vegas 25, and from my perspective as a newcomer, it was a big success. The event struck a nice balance between presentations, fireside chats, and one-on-one meetings among attendees.
Prior to attending, I had only a high-level understanding of the field. I was leaning pessimistic about the future of AI, convinced that Murphy's Law would eventually catch up with LLM-based systems no matter what we did: if a system can be misused, exploited, or broken, eventually it will be. The more I read, the more I worried about how nation states, criminal groups, or rogue labs could exploit AI, from deploying offensive agentic cyber systems against critical infrastructure to accelerating bioweapon development. These are not abstract science fiction risks. They were the kinds of issues I had begun to study before the conference, and once I arrived, I found they were central topics of discussion among practitioners.
Over the years I have attended conferences that drew thousands of participants, sometimes focused on entertainment, other times on professional fields. What stood out here was twofold. The intimate size of about 300 attendees created genuine opportunities for substantive discussion, and the shared urgency that brought everyone together gave the event a unique energy. In the talks and side conversations, it was clear that the passion for solving AI security challenges was not abstract. Everyone knew the problems would be complex and multifaceted, and that solutions would require a collective effort across the industry. Topics ranged from protecting model weights and trusted execution hardware to data center observability, model evaluations for performance and security, red teaming, and securing AI labs. There were also discussions about how to build trust into agent ecosystems, including issues like agent identity, trusted execution environments, and the inevitability of prompt injection risks.
A Hallway Survey
Between sessions, I decided to run a small hallway survey. I asked 17 people a single question:
“Are you optimistic or pessimistic about the risk of an existential threat from AI within the next 10 to 15 years?”
For simplicity, I classified those who believed the risks could be managed or avoided as optimists, and those who believed catastrophe was likely as pessimists. This was never meant to be scientific. My goal was simply to take the pulse of my fellow attendees, especially given the pessimism I carried into the conference.
The results surprised me. Out of 17 people, 9 leaned optimistic and 8 leaned pessimistic. The almost perfect split suggested there is no single consensus, even among those immersed in the field. The real insights emerged when I listened to how people explained their answers.
One person described themselves as “a scientific optimist but an economic pessimist.” They were confident that technical research is progressing toward effective safety and control mechanisms, but skeptical that such mechanisms would actually be implemented. In their view, short-term business incentives might overwhelm safety priorities.
Another attendee was blunt: “The only way catastrophe will not happen is if everything goes right.” They felt the range of possible failure modes was so wide that requiring perfection made pessimism the only rational stance.
Then there were those who believed progress might only come through disruption. One person argued that the industry might need a sharp but limited failure, something serious enough to be a wake-up call but not disastrous in human terms, to shift priorities from profit to safety.
Hearing this mix of confidence, caution, and conditional hope sharpened my own reflections about where I stood.
My Own Shift
I arrived leaning pessimistic, certain that AI systems would eventually fall victim to Murphy’s Law. But as the day unfolded, I noticed my own mood shifting. The risks themselves had not shrunk in my mind. What changed was my awareness of the people working to address them.
Walking into rooms where every conversation revolved around the secure development of advanced AI was energizing. Seeing 300 people dedicate themselves to these challenges, people with different skills, backgrounds, and perspectives, gave me optimism not rooted in certainty but in community. Even if the obstacles are enormous, the fact that a growing group of exceedingly capable, motivated people is determined to face them matters. I met researchers probing technical limits, engineers focused on building safer systems, policymakers drafting governance frameworks, and security professionals trying to understand how attackers think. And the concerns were not just about today's models, but also about how to prepare for the risks that may emerge on the path toward AGI.
Closing Thoughts
I left the conference feeling both encouraged and unsettled. Encouraged, because so many talented people are working on AI safety. Unsettled, because the number of potential bad outcomes remains large, and the path to reducing them is uncertain and long.
Whether optimism or pessimism will prove more accurate is still unknown. What does seem certain is that the future of AI safety will depend not only on technical breakthroughs, but also on whether the world is willing to place safety above speed and profit. As one associate in the AI security space put it: “There are countless ways for things to fail, but only by working on the problem do we give success a chance. Doing nothing almost guarantees failure.”
I left slightly more optimistic than when I arrived, with a stronger desire to learn about the field and to find where I can contribute to making the positive outcomes a reality. For me, that meant a mental shift: AI Safety and Security stopped being a distant, abstract debate and became something I could personally take part in. I began to think seriously about how my engineering and cybersecurity background could be redirected into work that strengthens this field.
The conference made it clear how much opportunity there is for people with cybersecurity experience. Skills like threat modeling, red teaming, incident response, and adversarial thinking are not only relevant but in high demand. AI security will require practitioners who can anticipate attacks, build resilient systems, and defend critical infrastructure. For those of us coming from cybersecurity, this is a chance to bring our expertise into a new frontier where the stakes are higher and the potential for impact is real.