The annual RSAC Conference in San Francisco is a great opportunity to gauge what the cybersecurity community cares about — and what it worries about. This year, the biggest theme at the conference couldn’t have been clearer: everyone wants to know how to use, and protect themselves from, artificial intelligence.
As part of our monthly Reporters’ Notebook video series, Eric Geller, senior reporter at Cybersecurity Dive, sat down with Rob Wright, news director at Dark Reading, and Alissa Irei, senior site editor at TechTarget SearchSecurity, to discuss the biggest trends on stage and in the hallways at RSAC.
AI has been a ubiquitous presence at RSAC for the past several years, but as businesses’ AI considerations have matured, the conversations at the conference have also advanced. This year, the talk at RSAC was about the trade-offs of agentic AI solutions, the regulatory environment in the U.S. and abroad and the future of security work in an increasingly automated world.
Given the many repetitive tasks involved in running a security operations center, some security executives are bullish on AI’s potential to speed up and increase the quality of their teams’ work. AI agents can analyze and generate reports on reams of data without ever getting tired, freeing up human employees to perform more nuanced and complicated tasks. Research shows that companies are racing ahead with testing and implementing AI agents in their workflows.
At the same time, AI poses many serious cybersecurity threats. The technology is helping attackers automate ransomware attacks, speed up lateral movement and understand targets’ vulnerabilities. AI tools themselves have flaws that can expose businesses to intrusions, and AI agents are susceptible to hijacking attacks that could lead to data theft or business disruptions.
AI’s impact on the workforce also remains uncertain. Some corporate leaders are optimistic about shrinking salary budgets, while others have a hard time imagining AI agents replacing humans on complex projects, and still others are struggling to hire experienced workers in the current environment.
In addition to AI, another theme pervaded this year’s RSAC Conference: the U.S. government’s absence. For political reasons, the Trump administration blocked several agencies from attending the conference, including the Cybersecurity and Infrastructure Security Agency, whose officials ended up being unable to attend anyway because of the partial government shutdown. The administration’s abrupt withdrawal from RSAC highlighted its pattern of disinvestment in cybersecurity, a trend that has strained relationships between federal agencies and their domestic and international partners.
Dark Reading’s Rob Wright: Hi, I’m Rob Wright with Dark Reading.
TechTarget SearchSecurity’s Alissa Irei: I’m Alissa Irei with SearchSecurity.
Cybersecurity Dive’s Eric Geller: And I’m Eric Geller with Cybersecurity Dive.
DR’s Rob Wright: And we are here to talk about RSAC Conference 2026, which happened last week. You guys were there on the ground in San Francisco. I was covering it from afar, and I have my own thoughts on this, but I wanted to see what you guys thought of the show last week, what you heard, and how it stacked up against the message of the show, the theme going into the week in San Francisco, which I think stood out to all three of us. Alissa, why don’t you take it away?
TTSS’s Alissa Irei: Sure. So, the theme of the conference was community, which was an interesting and, it seems, maybe pointed choice, because obviously the acronym on everyone’s lips at the conference, and in general, is AI. So, I think the choice to underscore the importance of community was intentional: the importance of human operators and of keeping a human in the loop, or on the loop. And there’s anxiety, not just in our field but in every field, about job replacement and AI use. So, the organizers of the conference, at least, were, I think, making the point that we still need humans. Artificial intelligence is not intelligent without human operators, and for the safety of ourselves and others, we need humans involved in these processes. Eric, what was your impression of the conference on the ground versus the theme?
CD’s Eric Geller: Well, obviously, everywhere you looked there was that focus on AI, particularly on understanding the threat landscape, but also on trying to get ahead of it with new defensive solutions. That was the common theme in a lot of the sessions, even if they weren’t billed as AI talks. But for my money, the big theme I noticed was that the tagline on all the posters did say, as you said, the power of community, but there was a big part of the community missing: the federal government, which pulled out of the conference a few weeks before it began. Every year, a lot of people from the government come to both listen to what the community has to say and discuss their own plans, and this is one of the places where those conversations are the most fruitful, according to a lot of folks I spoke to both before and during the conference. So there’s some anxiety about what it means that the government may not be as interested in participating in these kinds of events as it used to be. Of course, there have been a lot of cuts at the agencies that work closely with the business community and the security researchers who make up a lot of the attendance at both RSAC and a lot of the other conferences that we cover. So, I think that was a striking contradiction that a lot of folks saw.
On the one hand, there was the emphasis on community; on the other hand, a major part of the community chose not to participate for reasons really unrelated to the show itself. Now, that’s probably a one-off. I think we’re going to see them back at future shows, potentially even back at RSAC next year. But at least in this case, you have a lot of people wondering whether it sends a broader signal. And of course, we’re looking for more information right now from the government about the cybersecurity strategy that it just put out.
Many folks said to me that RSAC would have been a perfect place to roll out the information about what that strategy means in practice. And of course, that did not happen. So, there were a lot of hallway conversations about the big void being left by some of the federal agencies that would normally be participating in, and even stewarding, some of these conversations.
DR’s Rob Wright: Yeah, that’s interesting. I know that my colleague Becky Bracken at Dark Reading had a story about how other governments, other nations, had brought their cybersecurity experts over, you know, from the EU, to discuss some of the things that were going on in their neck of the woods, but the gap was definitely noticeable. I wrote a story a few weeks ago about spyware policy, a potential shift in the policy here in the U.S., and how a lot of the spyware opponents who worked for different civil society organizations, cybersecurity researchers and vendors that specialized in this stuff were very fearful that a shift was taking place in U.S. government policy. And they noted the same thing — there was a lack of communication. There was a lack of, I guess, people still in government working on this and communicating, “All right, here’s the strategy. Here’s what’s going on, and this is the direction we’re going in.” One person told me they were just flying blind, that there wasn’t any sort of communication or cooperation with the government at this stage, and they were kind of lost at sea. A lot of people had left their positions in different agencies, and so they’re just kind of winging it now, hoping for the best, but not really sure. So, Eric, to your point, I think it’s made a major impact.
TTSS’s Alissa Irei: It’s an interesting moment, too. I feel like it’s a moment of, to state the obvious, unprecedented change, and it’s a moment that in a perfect world would see a lot of public-private partnership and cooperation, and input from the private sector on public regulations and legislation. So, it is a notable absence, and I think one that’s unlikely to ease anyone’s anxieties about AI, which are plentiful regardless of what the federal government is or isn’t doing.
DR’s Rob Wright: Yeah, yeah, my anxiety. I’ll just tell you guys what it’s like up here. It’s actually off camera, so it’s like way above my —
TTSS’s Alissa Irei: Out of frame.
DR’s Rob Wright: Yeah, it’s out of frame. But yeah, let’s talk about AI. I mean, I know from just managing all of the stories that were coming in, looking at all the sessions and covering my own sessions, that probably more than two-thirds of the sessions had some type of AI component to them or were solely focused on AI. Obviously a big focus at the show. The one thing I thought was pretty interesting, from my perspective as an outsider, just talking to people leading up to the show and also people who were there last week, was that there seemed to be a bit of a split. Maybe not a bit of a split; maybe that’s being too gentle and too kind. But a split between, I guess, what the C-level folks and the higher-ups thought of AI and what the researchers were seeing at the ground level. You had a lot of researchers saying we need more human oversight, we need to be careful with agentic AI rollouts, we need to be careful with vibe coding and coding assistants, all this stuff, and we just need more guardrails and more oversight. And then you had a lot of people on the other side. There was one person in particular, whose name I do not recall, who spoke at one of the sessions and said that human oversight — we need to get rid of it because it’s going to slow things down, and the whole point of AI is to speed things up. What were you guys seeing there or hearing?
TTSS’s Alissa Irei: It seems like on the business side, there’s so much enthusiasm for new AI use cases and experimentation, and for asking for forgiveness, not permission. And, at least from what I saw and heard, that creates a lot of opportunity for bad things to happen. Eric, I think you wrote a piece about a session on the vulnerabilities that vibe coding introduces and just the lack of oversight. So, it seems troubling, to say the least. On the flip side, to your point, Rob, about the C-level: I did go to a session with the CISO of Exabeam, who spoke about how the agentic AI they’ve deployed in their SOC autonomously and independently found a North Korean malicious insider the company had hired. It was his first day. And according to the CISO, the agentic AI flagged this person’s activity within hours, if not minutes, of his logging into his account for the first time. So, I think there are exciting examples of it working. How consistently it is working is unclear to me, and how we’re going to manage these enormous vulnerabilities that it’s introducing is terrifying, I think, to a lot of us. Eric, I’ll let you weigh in. I know you wrote about this topic.
CD’s Eric Geller: Yeah, one of the quotes that stood out to me in that panel I covered was a guy who basically said, “If AI wrote your YARA rules, you should just delete them now because they’re probably crap.” And it really speaks to this hunger for automation, and also, I think, this hunger for, frankly, profit margins. The fewer people you can pay to do this work, the more money you’re going to make, the better you’re going to look to shareholders, the more venture funding you can raise. This is really only partly about security. It’s largely about looking profitable by shedding some of that labor cost. And of course, we’ve seen what happens when you let the AI run rampant: it miscategorizes things, and it can cost you a lot of money if you let it do its thing without human supervision. And I think the theme that emerged in a lot of these AI-focused talks was not so much a balancing act, but kind of both at the same time. Yes, you want some kind of agentic solution taking those mundane tasks off the plate of your specialized, expert human. But you also want some kind of governance framework in place so that there’s a human periodically dropping in to review what’s going on. And if you’ve got an AI agent that is out of control, you’ll see the signs of that when you drop in and check on what it’s doing. If it’s mismanaging things, if it’s mislabeling things, you’re going to see evidence of that. And so I think where a lot of the conversations ended up was: yes, there’s a real reason why SOC managers especially are looking for ways to change the role of the analyst and bring AI more into the threat analysis part of the job. But at the same time, just as you need human supervisors for human workers, you’re going to need human supervisors for AI workers, because nothing, human or machine, is infallible. And particularly given the scale at which some of these companies operate, and the stakes involved in protecting their networks or leaving them defenseless, we’re talking about a lot of money that can be made or lost. So you do want a human being involved, checking the work of the AI agent.
DR’s Rob Wright: Yeah, and that makes sense to me. One of the sessions I covered last week, one of the stories I wrote, was from a Check Point session, and the researchers basically said that we spent 20 years building up all these security measures to protect our networks, shore up defenses around the endpoint and move execution to the cloud, where it’s theoretically, or I guess in practice a lot of the time, safer. And the AI coding assistants were basically punching holes through these defenses and setting security back. Literally, they said, setting security back a decade, because now it was giving attackers a route from employees’ endpoints to the crown jewels, to development environments, to really important data. And that didn’t used to be the case. So all this work that was being done for the last 10, 20 years is now just being thrown away. And the thing I think they found shocking about all of this was just how many companies were rushing to these tools without any sort of acknowledgement that, even without a vulnerability, even if you’re not exploiting a critical flaw, you’ve still created a tunnel from a simple workstation, which is probably under-protected to this day, to some really important, highly privileged parts of the network. And they were pretty surprised that people were just kind of full steam ahead with this stuff and not taking a beat to say, “Hey, is this the best idea? Do we need to do more to protect this stuff? Do we need to do more to oversee what the agents are doing and the privileges we’re giving to these vibe coding tools?” I think that surprised them, and then I was surprised myself to hear their surprise at this sort of cavalier attitude. And I don’t know — based on what I was seeing and hearing at the show, I don’t think that’s going to change anytime soon, even with all the research that’s out there about the various vulnerabilities, the various threats and the expanding attack surface that AI introduces. It doesn’t seem to me like very many organizations or very many people are going to suddenly say, “Well, we need to take a step back. We need to slow down with this.” If anything, it feels like pressure is continually mounting to make the most of your investment in AI, like, Eric, to your point, shedding costs, saving money and reducing workforces. So the concerning thing for me was just seeing that split and, I guess, that dichotomy.
TTSS’s Alissa Irei: It’s tricky, too, because on SearchSecurity we participate a lot in the discourse around security culture and the importance of security being a business enabler, not the Department of No, and aligning yourself with the business objectives, which is all true and important. On the other hand, the culture does seem, to your point, Rob, like it’s going in that direction of full steam ahead: don’t ask questions, don’t say anything that’s going to slow down the road to profits generated from AI.
DR’s Rob Wright: Yeah. It’s distressing. I guess, any closing thoughts from the show, takeaways, surprises, anything that stuck out to you other than the stuff we’ve already talked about?
CD’s Eric Geller: Well, I’ll offer one that’s sort of related to AI, which is about the CVE program. We’ve been hearing a lot of warnings about this program for almost a year. I don’t remember exactly when; I think it was April of last year when the program almost lost its government funding. In the year since then, people have been saying that this is not sustainable, and people have actually been working in Europe to create alternatives to the CVE program. There are at least two of them in operation right now, one of them run by the European Union. And in addition to the precariousness of not having a guaranteed government funding source, there’s the other problem that is really battering this program right now, which is AI, because the vulnerability reports are coming in faster than they can handle. There was a person from GitHub speaking on the panel about CVE who talked about the incredible volume of vulnerability reports submitted through their system, and a lot of them are coming from AI agents that are out there looking for vulnerabilities. A lot of them are low quality. A lot of them are basically hallucinating vulnerabilities where none exist. That is an incredible amount of work to sort through. And for a program that was already struggling to classify and label these vulnerabilities, just to get them in and out the door and give them a number, now you have AI making it even harder to deal with, because it’s really a tidal wave of reports, most of which are garbage.
This is not what this program needed at this moment, if you ask the folks involved, but it is a trend that is only going to accelerate. I think about the AI agent that jumped to the top of the HackerOne leaderboard last year in terms of reporting the most vulnerabilities. We’re not putting that genie back in the bottle. What that means for the CVE program, which is really the bedrock of everything that everybody does in cyber defense, just having that CVE number, and what it does to the program in the near future, is something that I’ll be watching very closely.
DR’s Rob Wright: I bet the AI companies love this, because they’re probably going to say, “Well, you’re going to need AI to decipher all the AI slop that’s coming in and, you know, sort through it all and find the good stuff, and not have the humans do it.”
TTSS’s Alissa Irei: That actually makes me think about an informal conversation I had with Diana Kelley, the CISO at Noma Security. She gave a talk on model collapse, and I think the theme of the talk was Idiocracy, the movie: if the models keep consuming their own content, then at some point, you know, we all become very, very stupid. Which brings us back to that theme of community and the importance of human contributions and human intelligence. I’ll also add — I’ll be the voice of optimism here — that there were moments at the conference, like the talk from the CISO of Exabeam that I mentioned earlier, where I think there are some exciting examples of AI doing what it’s supposed to in the SOC. We know that SOC analysts are overworked and overstressed, and if these AI agents can alleviate some of that burden, sift through some of that noise and bubble up the alerts that are actually actionable, that would be awesome. So, is the end of the world as we know it coming, or a new level of nirvana in the SOC? Probably somewhere in between would be my guess.
DR’s Rob Wright: I’ll try to be optimistic. I like ending on an optimistic note, so we’ll leave it there.
TTSS’s Alissa Irei: The power of community.
DR’s Rob Wright: The power of community and the power of positive thinking about AI and its future applications for cybersecurity.
TTSS’s Alissa Irei: There we go.
DR’s Rob Wright: Yeah, there we go. Thanks so much, guys. Really appreciate it.