The Case for Evidence-Based AI Third-Party Continuous Technical Assessment
Key Takeaways
- “Trust me” isn’t enough in the AI era. Traditional questionnaire-based vendor diligence and self-attestations no longer provide real assurance.
- Evidence-based, continuous assessment is the new standard. Enterprises must demand verifiable proof – secure code scans, AI Bills of Materials (AI BoMs), and automated monitoring – to truly understand what’s in the black box.
- AI risk multiplies legacy software-supply-chain weaknesses. Interconnected AI frameworks, rapid adoption, and opaque dependencies create vulnerabilities far beyond those seen in incidents like Log4j.
- Transparency drives competitive advantage. SaaS providers who embrace secure, read-only validation will accelerate enterprise sales, justify premium pricing, and earn lasting trust.
- Procurement and security leaders must lead the shift. Reward vendors that prove transparency; move beyond compliance theater to genuine, evidence-based trust.
I recently had a potential vendor refuse to let my team scan their code repository so we could automatically generate an AI Bill of Materials as part of our third-party diligence. Instead, they wanted us to read their SOC 2 report and accept their self-attestation of what AI models they were using for their AI agent. Oh, did I mention that we want to give that AI agent access to our CRM and allow it to make decisions for us? Speaking with my executive counterpart at the vendor, I felt his pushback boiled down to a “trust me” argument.
Immediately, I thought of Pat Opet’s post sounding the alarm about SaaS eroding decades of sound security practices (see post here). As the CISO of an AI and SaaS company, I understand the concern and the desire to protect the most valuable intellectual property within your enterprise, but this is a new age, and we need to open our minds to a new and better way.
The bottom line up-front:
The days of questionnaire-based diligence, vendor-scoped audits, and contractual promises are over. In the AI era, trust must be earned through verifiable evidence, not documents.
The wake-up call we’re still sleeping through
You only need to look back at Log4j to understand the fragility of our software supply chains. When Log4Shell hit the scene, shockwaves were felt in every security operations center globally (details here). The US Department of Homeland Security estimated it would take at least a decade to find and fix every vulnerable instance of the library.
But here’s what should terrify every CISO: most organizations had no idea they were using Log4j.
The vulnerability was buried deep in dependency chains – third-party code calling fourth-party libraries calling fifth-party components. SaaS vendors confidently checked “yes” to questions about their secure coding practices; they passed their SOC 2 audits; and they were still catastrophically vulnerable. It turns out, neither they nor their customers truly understood what was in the black box.
At the time, I had just become the executive responsible for third-party risk management (TPRM) at one of the world’s largest banks. That moment taught me, along with my peers across the financial services industry, that we lacked the necessary visibility into the component parts of our third-party software.
The AI Multiplier Effect: Why the Problem Is Worse Now
A vulnerability in one of the component parts of AI frameworks could make Log4j look like a minor XSS issue by comparison. Here’s why:
Scale and Speed:
AI systems are being adopted faster than any technology in history. A vulnerability in a core AI framework like PyTorch, TensorFlow, or LangChain, or in a widely used embedding model, could impact millions of applications overnight. Unlike traditional software that requires months of integration, AI capabilities are being ‘vibe coded’ (rapidly integrated with minimal testing) into applications in days through simple API calls.
Opacity and Complexity:
Simple questionnaires for AI systems provide virtually no transparency into a SaaS provider’s security practices: the IAM permissions granted to AI services, adherence to data usage agreements with foundation model providers, or how AI agents access and process customer data. Are customer records being used to fine-tune the provider’s models? Are AI agents making API calls that were never authorized by the customer?
These aren’t theoretical risks. Stanford’s 2025 AI Index Report documents a 56% surge in AI-related incidents in 2024 alone — from data breaches to algorithmic failures compromising sensitive information. We’re deploying systems faster than we can secure them (download report here).
Interconnected Dependencies:
AI applications rarely use a single model. They chain together foundation models, fine-tuned variants, embedding services, vector databases, and orchestration layers – each with its own supply chain. One compromised component can cascade through the entire stack.
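To make the sprawl concrete, here is a minimal sketch in Python of the chained dependencies behind a single AI-powered feature. Every component and supplier name below is hypothetical; the point is simply how many distinct supply chains sit behind one capability:

```python
"""Illustrative inventory of the chained dependencies behind one AI feature.

All component and supplier names are hypothetical.
"""
from dataclasses import dataclass

@dataclass
class Component:
    layer: str     # where it sits in the stack
    name: str      # hypothetical component name
    supplier: str  # hypothetical upstream party

STACK = [
    Component("foundation model", "example-llm-70b", "model lab A"),
    Component("fine-tuned variant", "example-llm-70b-support", "the SaaS vendor"),
    Component("embedding service", "example-embed-v3", "model lab B"),
    Component("vector database", "example-vectordb", "database vendor"),
    Component("orchestration layer", "example-orchestrator", "open-source project"),
]

for c in STACK:
    print(f"{c.layer:<20} {c.name:<26} <- {c.supplier}")
```

Five layers, five upstream parties – and a questionnaire sent to the SaaS vendor covers only one of them.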
Move Beyond Questionnaires
As fellow AI and SaaS providers, we need to understand that our customers have a fundamental right to know what’s in the black box – especially when that box is powered by AI. This isn’t about distrust; it’s proportionate diligence for systems that process the most sensitive data and make automated decisions affecting your business. Virtually everyone in cybersecurity agrees that we’re accelerating AI adoption while the threat landscape explodes. That acceleration also increases the attack surface, as embedded dependencies proliferate with every added feature and product. So, what are we doing about it?
The good news is that technologies exist to drive evidence-based TPRM. What’s needed is a mindset shift from “trust but verify” to “verify, then trust”.
Secure scanning of code repositories is the new standard
Independent validation of basic secure coding practices can be performed via secure, read-only methods. No human access to proprietary code is needed, and automated scans complete in minutes, making the validation minimally invasive.
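As a sketch of how lightweight this can be, the following Python snippet flags AI/ML dependencies declared in a repository’s requirements.txt. The package watchlist and function names are illustrative assumptions, not any particular scanner’s API:

```python
"""Minimal sketch of a read-only AI dependency scan.

Assumes a Python codebase with a requirements.txt; the watchlist
and function names are illustrative, not a real scanner's API.
"""
from pathlib import Path

# Illustrative watchlist of AI/ML packages worth flagging in diligence.
AI_PACKAGES = {"torch", "tensorflow", "langchain", "transformers",
               "openai", "anthropic", "sentence-transformers"}

def scan_requirements(repo_root: str) -> list[str]:
    """Return any AI/ML dependencies declared in requirements.txt."""
    findings = []
    req = Path(repo_root) / "requirements.txt"
    if not req.exists():
        return findings
    for line in req.read_text().splitlines():
        line = line.split("#")[0].strip()  # drop comments and blanks
        if not line:
            continue
        # Normalize "torch==2.1.0" or "torch>=2.0" down to "torch".
        name = line.split("==")[0].split(">=")[0].strip().lower()
        if name in AI_PACKAGES:
            findings.append(line)
    return findings

if __name__ == "__main__":
    for dep in scan_requirements("."):
        print(f"AI dependency detected: {dep}")
```

A real scanner would also walk transitive dependencies, lockfiles, and model artifacts, but the read-only principle is the same: the scan reads manifests; no human reads proprietary code.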
AI Bill of Materials (AI BoM)
Our experience has shown that machine-generated AI BoMs are 15–20% more accurate than manually generated ones. Regulatory expectations will only increase as AI is integrated into critical infrastructure (see the EU Cyber Resilience Act and US Executive Order 14028). An AI BoM lets you understand the scope of dependencies, identify the AI model families in use, and surface known vulnerabilities.
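For illustration, here is what a machine-generated AI BoM might look like, loosely modeled on the CycloneDX ML-BOM component shape. Every name, version, and supplier below is hypothetical:

```python
"""Sketch of a machine-generated AI BoM entry.

Loosely modeled on the CycloneDX ML-BOM component shape; every name,
version, and supplier below is hypothetical.
"""
import json
from datetime import datetime, timezone

def make_ai_bom(components: list[dict]) -> dict:
    """Wrap scanned components in a minimal BoM envelope."""
    return {
        "bomFormat": "CycloneDX",  # assumed target format
        "specVersion": "1.5",
        "metadata": {"timestamp": datetime.now(timezone.utc).isoformat()},
        "components": components,
    }

# Hypothetical components a repository scan might surface.
components = [
    {"type": "machine-learning-model", "name": "example-llm-70b",
     "supplier": "model lab A", "usage": "agent reasoning"},
    {"type": "library", "name": "example-orchestrator",
     "version": "0.2.1", "usage": "orchestration"},
]

print(json.dumps(make_ai_bom(components), indent=2))
```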
Continuous monitoring over point-in-time attestations
Model changes or major code updates should automatically trigger customer notifications with an updated AI BoM. Providers should also join a trusted industry community that collaborates on establishing standards for AI governance and security, and receive regular threat intelligence related to AI models.
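A minimal sketch of the monitoring loop: diff consecutive BoM snapshots and notify customers when the inventory changes. The notify() function here is a stub standing in for whatever channel (email, webhook, trust portal) a provider actually uses:

```python
"""Sketch of continuous AI BoM monitoring.

Diffs two BoM snapshots and notifies customers when the model or
dependency inventory changes; notify() is a stub for a real channel.
"""

def component_keys(bom: dict) -> set[tuple]:
    """Reduce a BoM to a comparable set of (name, version) pairs."""
    return {(c["name"], c.get("version", "n/a")) for c in bom["components"]}

def notify(message: str) -> None:
    print(f"[customer notification] {message}")  # stub channel

def diff_boms(previous: dict, current: dict) -> None:
    """Raise notifications for added or removed AI components."""
    before, after = component_keys(previous), component_keys(current)
    for name, version in sorted(after - before):
        notify(f"New AI component: {name} ({version})")
    for name, version in sorted(before - after):
        notify(f"AI component removed: {name} ({version})")

# Example: the vendor swaps its embedded model between releases.
old = {"components": [{"name": "example-llm-70b", "version": "2024-05"}]}
new = {"components": [{"name": "example-llm-80b", "version": "2024-06"}]}
diff_boms(old, new)
```

Run on every release, this turns the AI BoM from a point-in-time document into a living feed.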
What This Means for SaaS Providers: Embrace the Scrutiny
If you’re a fellow SaaS provider reading this and feeling defensive, I understand. Sharing code repository access feels invasive. But try thinking about it this way: security transparency is becoming a competitive advantage. In my experience, SaaS providers that embrace evidence-based security diligence can expect:
- Accelerated sales cycles when working with Global Procurement teams.
- Premium pricing justified by superior security.
- New enterprise customers from highly regulated industries.
- Earlier vulnerability detection, thanks to more frequent, higher-quality reviews.
The alternative is to stick with the old methods while your competitors differentiate themselves by embracing the new standard of evidence-based transparency.
The Bottom Line
In an era where 49% of organizations experienced a third-party cyber incident in the past year, customers are strengthening their diligence muscles. History has taught us that without a well-maintained inventory of software supply chain components, responses to large-scale events will be tedious, manual, and expensive.
- Companies have both the right and the responsibility to look inside the black box. Secure code repository scans, AI BoMs, and continuous monitoring aren’t invasive audits – they’re the basic diligence required for systems we entrust with our most sensitive operations.
- For my peer SaaS providers: security transparency isn’t a threat; it’s the future of trust in enterprise software. The vendors who embrace this shift will win in a market that has both the tools and intelligence to go deeper.
- For enterprise customers: your questionnaires were designed for a different era. Demand machine-generated evidence. Require transparency. Move beyond compliance theater to genuine security validation.
- For Global Procurement teams: this is an opportunity for you too. Partner with your vendors and incentivize transparency; reward those willing to embrace it with accelerated procurement cycles.
The stakes are too high, and the technologies exist to deliver higher quality assessments. Let’s start today before the next supply chain crisis forces our hand.