AI tools are rewriting the rules of software development, but they also introduce new security risks, especially when it comes to how code, data, and system context are handled behind the scenes. For CIOs looking to accelerate development with AI, the question isn’t just “What can this tool do?” It’s also “What risk does it introduce?” 

We’ll look at core security concerns leaders should think about before approving AI tools for development. Using examples like Copilot, Claude, Cursor, Vertex, and Bedrock, we’ll show where enterprise risks emerge and what strong, secure defaults actually look like.

Summary

AI tools are reshaping development, but they also introduce hidden security risks. CIOs must look beyond productivity to understand what data these tools access, how it’s stored, and where exposure can happen. From Copilot to Claude, Cursor, Vertex, and Bedrock, this guide outlines key red flags, must-have privacy features, and what to watch for in enterprise settings.

What Data Are These Tools Touching?

Before jumping into specific protocols or policies, it’s essential to level-set: What kinds of data do AI development tools actually see? As Steve Anderson, Principal Developer at Slingshot, put it: “Anything that’s got access to your data should have some scrutiny on it.”

“The biggest concern is data leakage,” said Doug Compton, Principal Developer at Slingshot. “Are you going to be leaking your source code or your data to an LLM that your competitor can then query against?”

For most tools, what they see is what developers feed them: source code, context files, environment variables, and occasionally database content or logs. That data can include intellectual property, proprietary algorithms, or sensitive credentials. 

Slingshot’s Principal Developer Andrew Meyer added: “A lot of these tools automatically pull in related source code, so you may not even know you’re sending along a secrets file.” This potential leakage is why visibility and control matter.

Granular Privacy by Design

The ability to precisely control what data gets processed, stored, or ignored is a critical differentiator when evaluating AI tools. Tools with built-in, enforceable privacy settings give CIOs a clearer path to secure adoption.

One example is Cursor, which provides developers and admins with multiple layers of control.

  • A .cursorignore file blocks sensitive files (such as .env) from most features, although terminal or MCP tools may still access them (a sample file follows this list).
  • Full Privacy Mode stops Cursor from storing code, with enterprise tiers extending zero-retention to external LLMs.
  • Enterprise plans let admins enforce privacy settings across teams.
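To make the first bullet concrete, here is a minimal sketch of what a .cursorignore file might contain. Cursor documents the file as using gitignore-style patterns; the specific paths below are hypothetical and would need to match your own repository layout.

    # .cursorignore (gitignore-style patterns; example paths are hypothetical)
    .env
    .env.*
    secrets/
    config/credentials.json
    *.pem

Since terminal and MCP tools may still read these files, the ignore file is best treated as one layer of defense rather than a complete control.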

“Cursor especially says that they don’t store data at all, it’s only in memory temporarily until processing is done,” said Doug.

Cursor is also SOC 2 Type II certified and allows customers to request a copy of the report directly. 

This level of control over what gets processed, for how long, and under what conditions is exactly the kind of functionality CIOs should prioritize when evaluating tool readiness for enterprise environments.

Solid Defaults, But Opt-Out Required

Strong encryption and compliance certifications are table stakes for enterprise AI tools. But retention policies often reveal the real risk. When tools default to storing data or using it for training, the burden shifts to enterprise teams to lock things down.

Anthropic’s Claude Code, for example, encrypts data in transit and at rest. It offers region-specific clusters and holds both SOC 2 Type II and ISO 27001 certifications. 

But retention policies raise flags. As Doug pointed out, “They recently introduced an opt-out, which means that by default, they will train on your data unless you go in and say otherwise.” 

New consumer users are opted in by default, with their data retained for up to five years for model training, unless they actively opt out. If you do opt out, Claude retains your data for only 30 days, and it is not used for model training. 

Anthropic hasn’t hidden that opt-out setting, but it does put the burden on enterprise teams to turn off training and configure retention settings proactively. Admins on enterprise plans should enforce data deletion and turn off training use entirely.

Security Depends on Subscription

Some AI tools offer enterprise-grade security controls, but only at the highest subscription levels. This tiered approach can create uneven protections across teams and lead to accidental data exposure if not tightly managed.

GitHub Copilot is a clear example. The free and lower-tier versions allow GitHub to train on user data by default. In contrast, Copilot for Business offers enterprise controls to turn off training and improve retention transparency.

“GitHub Copilot, the free version, they do say they will train on your data,” Doug mentioned.

While GitHub has made strides toward transparency, including publishing its security controls and model training policies, the tiered approach can be confusing for CIOs trying to standardize usage across teams. Without proactive governance at the individual level, developers may inadvertently expose private code.

CIOs should ensure that enterprise licenses are enforced across teams and that access to lower-tier plans is restricted to avoid security gaps.
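For organizations standardizing on Copilot Business, one way to keep an eye on seat assignments and the org-wide suggestion policy is GitHub’s Copilot billing API. The sketch below is a hedged illustration: the org name and token are placeholders, and the endpoint paths and response fields should be verified against GitHub’s current REST API documentation before relying on them.

    import requests

    # Hypothetical org and token; supply real values from your environment.
    ORG = "example-org"
    HEADERS = {
        "Authorization": "Bearer ghp_example_token",
        "Accept": "application/vnd.github+json",
    }

    # Org-wide Copilot settings, including seat counts and the code-suggestion policy
    billing = requests.get(
        f"https://api.github.com/orgs/{ORG}/copilot/billing", headers=HEADERS
    ).json()
    print(billing)

    # Individual seat assignments, useful for spotting unmanaged usage
    seats = requests.get(
        f"https://api.github.com/orgs/{ORG}/copilot/billing/seats", headers=HEADERS
    ).json()
    print(seats.get("total_seats", 0), "Copilot seats assigned")

A periodic check like this, run from a governance pipeline, gives CIOs a simple signal that licensing and policy actually match what was approved.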

Privacy by Omission

Some tools prioritize privacy by not storing data unless explicitly told to. This model puts control in the hands of the enterprise, which can be a strength or a risk depending on how well governance is enforced.

Amazon Bedrock follows this approach. It doesn’t store prompts or completions by default; processing is ephemeral unless you configure otherwise. 

Bedrock enables customers to define retention and logging through integrations such as S3, CloudTrail, and CloudWatch. Security controls include private VPC connectivity (via AWS PrivateLink), customer-managed KMS keys, and FedRAMP High compliance in GovCloud.
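For teams wiring up that logging, a minimal boto3 sketch might look like the following. It assumes an existing S3 bucket, CloudWatch log group, and IAM role (all names and the account ID below are placeholders) and turns on Bedrock’s model invocation logging so prompts and completions are captured only in stores you control.

    import boto3

    # Placeholder region, bucket, log group, and role ARN; substitute your own.
    bedrock = boto3.client("bedrock", region_name="us-east-1")

    bedrock.put_model_invocation_logging_configuration(
        loggingConfig={
            "s3Config": {
                "bucketName": "example-bedrock-audit-logs",
                "keyPrefix": "invocations/",
            },
            "cloudWatchConfig": {
                "logGroupName": "/bedrock/invocation-logs",
                "roleArn": "arn:aws:iam::123456789012:role/ExampleBedrockLoggingRole",
            },
            "textDataDeliveryEnabled": True,
            "imageDataDeliveryEnabled": False,
            "embeddingDataDeliveryEnabled": False,
        }
    )

From there, retention is a standard S3 and CloudWatch question: lifecycle rules on the bucket and a retention period on the log group decide how long anything captured above is kept.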

Bedrock’s approach emphasizes user ownership: users define lifecycle policies for agent memories and logs, ensuring that nothing is stored unless explicitly configured.

This model is particularly appealing for CIOs with strict compliance mandates. But as they say: with great power comes great responsibility. Retention, logging, and deletion policies must be deliberately set and consistently monitored. 

Without strong internal controls, the same flexibility that enables compliance can also create blind spots.

Transparent but Complex

Some AI tools provide strong privacy and compliance options, but only if the enterprise has the expertise to configure them properly. In these cases, security is not automatic. It depends on how well teams understand and manage the platform.

Google Vertex is a strong example. It offers robust enterprise-grade security features:

  • Encryption at rest and in transit
  • Region-pinned data storage and processing
  • CMEK (Customer-Managed Encryption Keys) support, though prompt-level CMEK is still in preview
  • Full compliance envelope: ISO 27001/27017/27018, SOC 1/2/3, HIPAA, GDPR, and FedRAMP High (in Vertex AI Search and Generative AI)

Where Vertex shines is in user control. You can turn off prompt caching and configure BigQuery and Cloud Logging retention policies (these are GCP-wide capabilities, not unique to Vertex). You can use partner models without exposing prompts to third parties, though Google does log prompts briefly for abuse monitoring or grounding unless teams configure exceptions.
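As one hedged example of the CMEK support noted above, the Vertex AI Python SDK lets you set a customer-managed key once at initialization so that resources created in that session are encrypted with it. The project, region, key ring, and key names below are hypothetical, and as the feature list notes, prompt-level CMEK remains in preview.

    from google.cloud import aiplatform

    # Hypothetical project, region, and KMS key; substitute your own values.
    aiplatform.init(
        project="example-project",
        location="us-central1",
        encryption_spec_key_name=(
            "projects/example-project/locations/us-central1/"
            "keyRings/example-ring/cryptoKeys/example-key"
        ),
    )
    # Resources created after this call (datasets, models, endpoints) are
    # encrypted with the customer-managed key instead of Google-managed keys.

Centralizing the key in one init call keeps the control in one place, but it also illustrates the point below: someone has to know the setting exists and verify it is applied everywhere.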

The tradeoff is complexity. These features are powerful but require deep familiarity with the Google Cloud ecosystem. For teams without that expertise, the risk is not in the tool’s limitations, but in the potential for misconfiguration.

What CIOs Should Look for (and Watch Out For)

The team agreed on a few red flags and best practices for evaluating tools:

  • 🚩 Red Flag: No SOC 2 certification. “At this scale, that would be a major concern,” said Steve.
  • 🚩 Red Flag: Default retention of code unless explicitly opted out
  • 🚩 Red Flag: No centralized way to enforce data handling policies across dev teams
  • ✅ Green Light: Org-level controls to enforce privacy settings across dev teams
  • ✅ Green Light: Explicit .ignore functionality to prevent accidental file exposure
  • ✅ Green Light: Customer-managed retention and logging (S3, BigQuery, etc.)

Steve summarized it best: You should “encourage AI tool usage, just make sure it’s an AI tool that acts responsibly.”

Final Take: What’s Slingshot’s Advice?

Don’t just ask what AI can do; ask what risk it introduces. Choose artificial intelligence tools that default to privacy, not exposure.

Enterprise CIOs need to think beyond productivity gains. Security posture, data governance, and developer guardrails matter just as much as model quality or IDE integration. Some tools offer thoughtful defaults; others require deeper configuration and ongoing diligence.

CIOs who understand not only what a tool can do, but also how it handles data, are better equipped to lead their organizations into a secure, AI-augmented future.

Written by: Savannah Cherry

Savannah is our one-woman marketing department. She posts, writes, and creates all things Slingshot. While she may not be making software for you, she does have a minor in Computer Information Systems. We’d call her the opposite of a procrastinator: she can’t rest until all her work is done. She loves playing her Switch and meal-prepping.

Expert: Doug Compton

Born and raised in Louisville, Doug became interested in technology at 11, when he began writing computer games. What began as a hobby turned into his career. With broad interests ranging from snorkeling and science to WWII history and real estate, Doug uses his “down time” to create new technologies for mobile and web applications.

Expert: Steve Anderson

Steve is one of our AWS-certified solutions architects. Whether it’s coding, testing, deployment, support, infrastructure, or server set-up, he’s always thinking about the cloud as he builds. Steve is extremely adaptable and can pick up a project and run with it. He’s flexible and able to fill in where needed. In his spare time, he enjoys family time, the outdoors, and reading.

Expert: Andrew Meyer

Andrew is a developer who started out coding as a young kid: he’d bring home thick programming books from the library and teach himself. That passion for learning and programming would later turn into a career. From game development to SaaS applications, Andrew has run the gamut of tech, spanning the game, healthcare, marketing, and applicant-tracking industries. Andrew would describe himself as a Big Kid because he is curious about new technologies and loves to explore new ideas.

Frequently Asked Questions

What security risks do AI development tools introduce?
AI tools can expose sensitive code, credentials, and proprietary data if not properly secured. Risks include data leakage, unintended retention, and lack of centralized controls for enterprise teams.

Which AI tools offer strong privacy controls?
Tools like Cursor, Amazon Bedrock, and Google Vertex offer advanced privacy settings such as no default data retention, .ignore file support, and customer-managed encryption keys (CMEK).

Do AI coding tools retain data for training by default?
Yes. By default, tools like GitHub Copilot (free tier) and Claude opt users into data retention for training unless settings are changed. Enterprise tiers typically offer opt-out options and stricter controls.

Which security certifications should enterprise AI tools have?
Key certifications include SOC 2 Type II, ISO 27001, and FedRAMP. These demonstrate that the tool follows strict security and compliance standards for enterprise use.

How can CIOs prevent data exposure from AI development tools?
Use enterprise subscriptions with enforced org-level settings, restrict access to free tiers, and require tools with centralized policy management and explicit data retention controls.
