AI Security
MCP
Shadow AI
AI Supply Chain
Generative AI

Securing Generative AI APIs: MCP Security & Shadow AI Risks in 2026

SCR Security Research Team
February 13, 2026
19 min read

The New AI API Landscape

Generative AI APIs have become the backbone of modern applications. According to Postman's 2025 State of APIs Report, 73% of organizations now integrate at least one generative AI API into their products, up from 29% in 2023. But this explosion has created three interconnected security challenges:

  1. MCP (Model Context Protocol) security — The new standard for AI-tool integration
  2. Shadow AI — Unauthorized AI usage across the organization
  3. AI supply chain attacks — Compromised models, prompts, and plugins

Warning: The AI API attack surface is expanding faster than security controls can keep up. Every generative AI integration is a potential entry point for data exfiltration, prompt injection, and unauthorized access.


What Is MCP (Model Context Protocol)?

MCP, released by Anthropic in November 2024 and rapidly adopted across the AI ecosystem, is an open protocol that provides a standardized way for AI models to interact with external tools, data sources, and services. Think of it as "USB for AI" — a universal connector between LLMs and the world.

How MCP Works

┌──────────────┐      MCP Protocol      ┌──────────────────┐
│  AI Client   │ ◄────────────────────► │   MCP Server     │
│  (Claude,    │    JSON-RPC over       │   (Tools, Data,  │
│   Cursor,    │    stdio/HTTP/SSE      │    Resources)    │
│   Custom)    │                        │                  │
└──────────────┘                        └──────────────────┘
       │                                        │
       │ User sends prompt                      │ Exposes:
       │ AI decides to call tool                │ - Tools (functions)
       │ AI sends tool request via MCP          │ - Resources (data)
       │ Server executes + returns result       │ - Prompts (templates)
       ▼                                        ▼
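On the wire, each step in the diagram above is a JSON-RPC 2.0 message. A minimal sketch of what a `tools/call` request might look like, built as a Python dict — the tool name `query_database` and its arguments are illustrative, not from any real MCP server:

```python
import json

# Hypothetical tools/call request an AI client would send to an MCP server.
# The tool name and arguments are illustrative, not a real server's schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_database",
        "arguments": {"sql": "SELECT id FROM users LIMIT 10"},
    },
}

# Serialized, this is what travels over stdio, HTTP, or SSE
# between the client and the server.
wire = json.dumps(request)
print(wire)
```

Because every tool invocation passes through this one message shape, it is also the natural choke point for logging and authorization checks.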

MCP Security Risks

| Risk | Description | Impact |
| --- | --- | --- |
| MCP Server Poisoning | Malicious MCP server publishes backdoored tools | AI executes attacker-controlled code |
| Tool Shadowing | Attacker's MCP server registers a tool with the same name as a trusted tool | AI calls the malicious version instead |
| Credential Theft via MCP | MCP server requests credentials from the AI context | API keys and tokens exfiltrated |
| Data Exfiltration | MCP tool sends retrieved data to external servers | Sensitive data stolen silently |
| Over-Permissioned MCP Servers | MCP server has filesystem/network access beyond its purpose | Lateral movement within the host |

Securing MCP Deployments

1. Verify MCP Server Provenance

// Secure MCP server configuration
{
  "mcpServers": {
    "database-reader": {
      "command": "npx",
      "args": ["@verified-publisher/mcp-db-reader"],
      "env": {
        "DB_HOST": "read-replica.internal",
        "DB_USER": "readonly_agent"
      },
      "permissions": {
        "filesystem": "none",
        "network": ["internal-db.company.com:5432"],
        "exec": "none"
      }
    }
  }
}

2. MCP Security Checklist

  • Only install MCP servers from verified publishers with source code review
  • Run MCP servers in sandboxed containers with minimal permissions
  • Implement network allowlists — MCP servers should only connect to intended services
  • Audit all tool calls and responses — log everything
  • Never expose credentials in MCP tool contexts
  • Use tool-level authorization — the AI should not call tools the user doesn't have permission to use
  • Monitor for tool shadowing — alert when two servers register identically-named tools
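The last checklist item, detecting tool shadowing, can be sketched as a registry check at client startup that flags any tool name claimed by more than one server. The server and tool names below are hypothetical:

```python
from collections import defaultdict

def find_shadowed_tools(registrations):
    """Map each tool name to the servers registering it; return collisions."""
    owners = defaultdict(list)
    for server, tool in registrations:
        owners[tool].append(server)
    # Any tool claimed by more than one server is a shadowing candidate.
    return {tool: servers for tool, servers in owners.items() if len(servers) > 1}

# Hypothetical (server, tool) registrations observed at client startup.
seen = [
    ("db-reader", "query_database"),
    ("notes-server", "search_notes"),
    ("evil-server", "query_database"),  # same name as the trusted tool
]
alerts = find_shadowed_tools(seen)
print(alerts)  # {'query_database': ['db-reader', 'evil-server']}
```

A real monitor would also compare tool descriptions and schemas across restarts, since an attacker can shadow a tool by mutating an existing server rather than adding a new one.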

Shadow AI: The Invisible Attack Surface

What Is Shadow AI?

Shadow AI is the use of unsanctioned AI tools, APIs, and models by employees without IT/security approval. It's the AI equivalent of shadow IT, but with amplified data risks.

Scale of the Problem:

| Finding | Value | Source |
| --- | --- | --- |
| Employees using unauthorized AI tools at work | 78% | Salesforce 2025 |
| AI tools used without IT knowledge | 63% | Gartner |
| Sensitive data pasted into public LLMs | 11% of prompts | Cyberhaven 2025 |
| Companies with formal AI usage policies | 44% | MIT Sloan / BCG 2025 |
| Source code pasted into ChatGPT | 5.6% of enterprise prompts | Cyberhaven 2025 |

A cautionary example: in April 2023, Samsung engineers pasted proprietary source code and internal meeting notes into ChatGPT, exposing that data to potential retention and model training. Samsung responded by banning generative AI tools internally.

What Data Leaks Through Shadow AI?

Based on analysis of 3.1 billion enterprise prompts (Cyberhaven DLP Report, 2025):

  • Source code — 5.6% of all prompts contain proprietary code
  • Internal documents — 4.3% contain confidential business information
  • Customer data — 2.8% contain PII (names, emails, account numbers)
  • Financial data — 1.9% contain financial projections, revenue figures
  • Legal documents — 1.1% contain contracts, legal advice, litigation details

Mitigating Shadow AI Risks

  1. Discover — Use CASB (Cloud Access Security Broker) tools to identify AI tools in use
  2. Policy — Establish clear AI usage policies with data classification guidelines
  3. Provide — Offer sanctioned AI tools with enterprise security controls (Azure OpenAI, AWS Bedrock, private LLMs)
  4. Enforce — Block unauthorized AI APIs at the network/proxy level
  5. Monitor — Deploy DLP solutions that scan for sensitive data in AI prompts
  6. Train — Educate employees on what can and cannot be shared with AI tools
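Step 5 above, DLP scanning of prompts, can be approximated with pattern matching before a prompt leaves the network. A minimal sketch — the patterns here are illustrative placeholders; production DLP engines use validated detectors, checksums, and classifiers, not just regexes:

```python
import re

# Illustrative detectors for common sensitive-data shapes (assumption:
# a real deployment would use a vetted DLP ruleset, not these three).
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, rx in PATTERNS.items() if rx.search(prompt)]

findings = scan_prompt(
    "Summarize this: contact alice@example.com, key AKIAABCDEFGHIJKLMNOP"
)
print(findings)  # ['email', 'aws_access_key']
```

A hit would block the request or route it to a sanctioned internal endpoint, which is how steps 4 and 5 reinforce each other.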

AI Supply Chain Attacks

The AI Supply Chain Threat Model

  [Model Provider]  →  Model weights/API
          │
  [Fine-Tuning Data]  →  Training data poisoning
          │
  [Model Registry]  →  Hugging Face, model repositories
          │
  [MCP Servers/Plugins]  →  Tool supply chain
          │
  [Prompt Templates]  →  Prompt injection via templates
          │
  [RAG Knowledge Base]  →  Document poisoning
          │
  [Your Application]  →  All risks aggregate here

Real AI Supply Chain Attacks

| Attack | Date | Impact |
| --- | --- | --- |
| Hugging Face malicious models | 2024 | Backdoored models uploaded to public repos with hidden payloads in PyTorch pickle files |
| Compromised LangChain plugin | 2024 | Third-party plugin exfiltrated API keys from environment variables |
| Poisoned training datasets | 2025 | Academic paper demonstrated injecting backdoors via 0.01% dataset contamination |
| Shadow MCP servers | 2025 | Developers installed unvetted MCP servers that exfiltrated code context to external servers |

Securing Your AI Supply Chain

  • Model verification — Check model checksums and signatures before deployment
  • Dependency scanning — Run SCA tools on AI-related packages (LangChain, LlamaIndex, OpenAI SDK)
  • MCP server vetting — Code review all MCP servers; only allow from verified publishers
  • Private model registry — Host models internally rather than pulling from public repos at runtime
  • Prompt template versioning — Version-control system prompts; review changes like code
  • RAG content validation — Scan knowledge base documents for injection patterns before indexing
  • SBOM for AI — Maintain a Software Bill of Materials that includes models, datasets, and plugins (ML-BOM)
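The first item, model verification, reduces to comparing a file's digest against a pinned value before deployment. A minimal sketch using `hashlib` — the file path and pinned digest are placeholders for whatever your registry records:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream the file through SHA-256 so large model files never load fully into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model(path: str, pinned_digest: str) -> None:
    """Refuse to deploy a model whose digest doesn't match the pinned value."""
    actual = sha256_of(path)
    if actual != pinned_digest:
        raise RuntimeError(f"checksum mismatch for {path}: got {actual}")
```

Checksums catch tampering in transit; pair them with publisher signatures (and safetensors rather than pickle formats) to also catch a compromised source.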

AI API Security Best Practices

| Practice | Implementation |
| --- | --- |
| Rate limiting | Per-user and per-agent token budgets with burst protection |
| Input validation | Prompt length limits, injection pattern detection, content classification |
| Output filtering | PII detection, DLP scanning on responses, structured output enforcement |
| Authentication | OAuth 2.0 with scoped API keys per application |
| Monitoring | Full prompt/response logging (with PII redaction) for audit |
| Cost controls | Alert on anomalous token usage; set hard budget limits |
