
39.7% of AI interactions expose sensitive data. Is your MCP server protected?

Data from Cyberhaven, Astrix Security, and ANPD reveal the gap between agent adoption and security. Understand the risks.

Diogo Felizardo · Founder, Surf Data
February 25, 2026 · 3 min read

39.7% of enterprise AI interactions expose sensitive data.

This isn't a prediction. It's real data from 2025, according to a Cyberhaven report. And the landscape is rapidly deteriorating, especially for companies connecting AI agents to their internal data via MCP (Model Context Protocol).

The explosive growth of MCP

The MCP ecosystem grew from 100,000 to 97 million downloads per month in less than a year. What started as an Anthropic initiative quickly became an industry standard: OpenAI adopted it, Google and Microsoft joined in.

This growth is no coincidence. MCP solves a real problem: it allows AI agents to access external data and tools in a standardized way. Instead of building custom integrations for each LLM, data teams can expose their databases through a single protocol.

But the speed of adoption is far outpacing organizations' ability to implement adequate security.

The risks nobody is talking about

According to Astrix Security, 53% of MCP servers in production use static credentials. We're talking about API keys that are never rotated, stored in .env files scattered across repositories and development environments.

This means an AI agent with access to a compromised MCP server can:

  • Exfiltrate sensitive data — names, national IDs, emails, financial records
  • Execute malicious queries — if there's no blocking of destructive operations
  • Escalate privileges — using static credentials with broad database access

The problem is that most MCP servers were built to "work," not to be secure. The priority is delivery speed, and security becomes an afterthought.

LGPD and record fines

In Brazil, the situation gains an extra layer of complexity: the LGPD (General Data Protection Law). The ANPD (National Data Protection Authority) ramped up enforcement in 2025, adopting a less tolerant stance and actively seeking out violations.

Penalties can reach 2% of gross revenue, capped at R$50 million per violation (~$9 million USD). And the trend is escalating — the ANPD is increasingly attentive to AI usage with personal data.

For data teams exposing databases via MCP, the compliance risks are concrete:

  • Personal data without masking being sent to AI agents
  • No audit logs — impossible to prove who accessed what
  • Lack of granular control — all agents see all data
  • No consent or legal basis for data processing via agents

The gap between adoption and security

Research from CData shows that 85% of companies plan to implement AI agents by the end of the year. But the security infrastructure needed to support that rollout simply hasn't kept up.

The math doesn't add up:

  • On one side, pressure to adopt AI agents and generate value from data
  • On the other, security infrastructure still stuck in the pre-agent model
  • In the middle, data teams receiving demands without the right tools

This gap between agent adoption and data security is the most underestimated problem in the market right now.

How to protect your data

Regardless of the tool you use, there are essential practices for anyone exposing data via MCP:

  • Never use static credentials — implement automatic token rotation with expiration
  • Mask sensitive data — national IDs, emails, phone numbers, and names should be masked before reaching the agent
  • Block destructive operations — agents should not be able to execute DROP, DELETE, INSERT, or UPDATE
  • Implement audit logs — record every query, every agent, every result
  • Control granular access — not every agent needs to see all data
  • Limit results — row caps per response prevent mass exfiltration

At Surf Data, we built all these layers natively. Automatic PII masking, SHA-256 hashed tokens, dangerous SQL blocking, immutable audit logs, and a 100-row response limit. Everything designed for LGPD compliance from day zero.

Conclusion

If your team is giving agents access to internal data, the question isn't "if" something will go wrong. It's "when."

The window of opportunity to solve this problem is now — before the next incident, not after. Start by auditing your existing MCP servers, implement the practices listed above, and consider a managed solution with built-in security.
