By Byron Hawkins, CISO of Cranium AI
As I sit here with my coffee contemplating RSAC week, I can’t help but reflect on how history seems to be repeating itself. The buzz around AI today mirrors the excitement of the dotcom era, when the World Wide Web promised to revolutionize everything. Back then, every company rushed to put an “e” in front of their products and services. Today, it’s all about adding “AI” to everything.
The CISO’s Pragmatic Perspective
The parallels between the late 1990s/2000s and today are striking. I vividly recall sitting in the office of a no-nonsense Army veteran who led Information Security at a prominent brokerage firm. The company’s CEO had brashly announced an aggressive launch date for their new eCommerce platform despite only recently signing the contract with EY to build it. The CISO’s wisdom still resonates: “He has the luxury of announcing dates and getting people excited about our new product, but we have to protect it. So, focus on how the hackers will attack us and secure this damn platform.”
This pragmatism defines the CISO mindset, in my opinion. While many business leaders focus on the transformative potential of AI and agentic systems, CISOs are thinking differently: “How will our adversaries use AI against us, and what’s my strategy?”
The Gap in AI Security Implementation
The release of NIST CSF 2.0 in February 2024 marked a significant evolution in cybersecurity frameworks. With its new “Govern” function joining the established Identify, Protect, Detect, Respond, and Recover functions, the framework now provides a more comprehensive approach to cybersecurity risk management.
However, a concerning gap exists between AI adoption and proper security governance. As software providers, third parties, and internal IT teams rush to implement AI systems, they are doing so without first establishing proper governance. Based on recent industry analysis, this gap is particularly evident in several key areas of the NIST CSF.
AI Inventory Challenges
The NIST CSF 2.0 emphasizes the importance of asset identification as the foundation of any effective cybersecurity strategy. Yet many organizations struggle to maintain a comprehensive inventory of their AI systems. This includes:
- Shadow AI implementations that bypass security reviews
- Third-party AI services embedded in approved applications
- AI models with unclear lineage or documentation
- Legacy systems being retrofitted with AI capabilities
Key Question: Describe your enterprise AI inventory lifecycle process.
Governance Gaps
The newly introduced “Govern” function in NIST CSF 2.0 addresses gaps in the earlier version by integrating governance into the framework’s foundation, defining the organization’s cybersecurity risk management strategy—including roles, responsibilities, policies, and oversight.
Despite its introduction, many organizations haven’t adapted governance structures to account for AI systems risks, such as:
- Authentication and authorization integrations
- Model vulnerabilities
- Data governance (training to runtime)
- AI supply chain (third- and fourth-party)
Key Question: How does your AI Governance Council operate?
Supply Chain AI Risk Management
It’s predicted that 45% of organizations worldwide will have been attacked via software supply chain attacks by the end of this year—a threefold increase since 2021. AI systems often rely on external models, APIs, and data sources, expanding the supply chain attack surface while undermining the effectiveness of current cybersecurity defenses.
Key Question: How do you feel about the security of AI in your supply chain, and what evidence are you collecting to demonstrate your due diligence?
It’s predicted that 45% of organizations worldwide will have been attacked via software supply chain attacks by the end of this year—a threefold increase since 2021.
– Dark Reading
AI Red Team Testing Gaps
Traditional security testing approaches may not adequately address AI-specific vulnerabilities and attack paths (from front-end input interfaces to back-end resources and systems). CISOs know that shifting left will close the gap between developers and security teams. However, resource constraints, ad-hoc approaches, and inadequate platforming for testing AI systems can cause undue friction and slow the organization’s time to market.
Key Questions: What’s your perspective on automating AI Red Team testing? Does your governance council require evidence of test results?
Pragmatic Approach to AI Security
As CISOs today, we stand on the shoulders of giants from the late 90s and early 2000s—a time when security was very much an afterthought and budgets were scarce. Although we’re slightly better resourced and CEOs are more versed on the very real threats to their organizations’ cybersecurity, we must remain pragmatic and cautiously optimistic about the promises of AI-powered security solutions.
These new solutions will create new risks within our organizations, and we need an approach to ensure we stay one or two steps ahead.
Basic new technology adoption statistics tell us that the majority of CISOs are in enterprises that will not be on the bleeding edge of AI technologies. While we have time to prepare, the clock is ticking.
If you’re uncertain where to start, here’s a simple approach:
- Start with the inventory: You can’t secure what you don’t know exists. Develop an enterprise AI asset inventory (internal and supply chain) with clearly defined ownership and risk classifications.
- Cross-functional governance: Establish clear integration plans and work with your peers across the company to establish prioritized security and DevSecOps transformation goals, building security into every step of the lifecycle—from governance to action.
- Create an AI security tech strategy: In the early stages of the dotcom era, there was a patchwork of security tools, each serving different functions. The AI era is no different. CISOs should be strategic and discerning.
While we want to avoid a patchwork of solutions—and the downside of inefficiencies, increased costs, and visibility gaps—CISOs should look for platforms that provide broad visibility into AI system components (like an AI Bill of Materials) and pragmatic functionality that enables integration of red team testing, along with extensibility for ingesting run-time model operations from SIEM platforms.
- Focus on the business risk: While the goal remains the same—“don’t get popped”—we must communicate in terms the business understands. Partner with your peer leading AI initiatives and understand her priorities and blockers. Bring recommendations in terms of ROI, with estimates for time to market and cost avoidance defined by your internal measures. (Hint: Consult your peers in finance, marketing, and human resources.)
Final Thoughts
As you use RSAC events to connect with former colleagues, gain valuable insights, and make new acquaintances, maintain your clear-eyed beginnings in the profession of cybersecurity. Think of your company’s risk profile and ask which areas of your attack landscape have increased in this post–Generative AI era—and filter every pitch and offering through the lens of pragmatism.
I’d love to hear your thoughts and experiences. How is your organization addressing AI security within the NIST CSF 2.0 framework? What gaps have you identified in your current strategies? How are you addressing them?
Until next time,
Coffee with the CISO
Your questions and comments are always welcome. Join the conversation by commenting below or reach out directly.