Where Does Your Data Go? The Risks of Uploading to AI, Google and the Cloud

Let’s be honest: we’re all glued to our phones, chasing information, content, and shortcuts that make life feel “easier.” For decades, technological shifts arrived in rolling waves (computers, then mobile phones, then smartphones), giving us enough time to adapt.

But in the past year, the AI tidal wave has crashed down all at once. Tools like ChatGPT are embedded into nearly everything, and as I use these platforms more in my own day-to-day, one question keeps nagging me: where does my information actually go, and is it protected, if at all?

The truth is that the moment you upload, paste, or share anything on a third-party platform (ChatGPT, Google, Facebook, etc.), you trade convenience for control. You hand over ownership and oversight, relying on that platform to use and safeguard your data responsibly.

For years, I buried that concern, but recently I’ve started shifting my digital habits to better protect my information.

And as your lawyer friend, I want to share both the risks and some practical tips we can all use as we navigate this evolving landscape.

TL;DR — Tips for Protecting Your Data

  1. Remove sensitive data before uploading. Names, dollar amounts, personal or confidential details should always be stripped out.

  2. Treat the internet like a public space. Just because you upload from your home or phone doesn’t make it private. If you wouldn’t shout it in a busy coffee shop, don’t paste it into ChatGPT, Gmail, or Google Drive.

  3. Upgrade your platform. If you’re handling sensitive work, use enterprise-level AI or cloud services with real data protections. Avoid free/public platforms, and steer clear of public Wi-Fi unless absolutely necessary.

The quick tips above provide some baseline protection. But each platform (AI tools, Google Drive, Dropbox, Gmail) comes with different uses, risks, and terms baked in.

Here’s how the major platforms stack up.

LLMs: Smart but Leaky

AI tools like ChatGPT, Claude, and Gemini are powerful. But unless you’re on an enterprise tier, your uploads may be:

  • Stored for review or product improvement

  • Used to train future models

  • Processed on servers outside your country

Risks: Breaching NDAs, losing control of proprietary ideas, or relying on outputs that shift with each query (LLMs generate probabilistic and often inconsistent results).

Safer Practices: Redact sensitive details, summarize instead of pasting full documents, and upgrade to enterprise tiers when handling confidential material.
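
To make “redact first” concrete, here’s a minimal Python sketch that swaps a few common patterns for placeholders before any text gets pasted into an AI tool. The patterns, labels, and sample sentence are my own illustrative assumptions, not a complete redaction tool; names and other free-form details still need a human pass.

```python
import re

# Illustrative patterns only -- names and other free-form details still need
# a human review (or a proper redaction tool) before anything gets pasted in.
PATTERNS = {
    "[EMAIL]": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "[DOLLAR_AMOUNT]": re.compile(r"\$\s?\d[\d,]*(?:\.\d{2})?"),
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Swap common sensitive patterns for placeholder labels."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

sample = "Wire $45,000.00 to Jane Doe (jane.doe@acme.com, 212-555-0187) by Friday."
print(redact(sample))
# -> Wire [DOLLAR_AMOUNT] to Jane Doe ([EMAIL], [PHONE]) by Friday.
```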

Google Drive: Convenience vs. Control

Google Drive is seamless for collaboration, but your files don’t just sit in a folder. They live on Google’s servers, governed by Google’s Terms of Service.

Risks:

  • Broad licensing language that may allow use of content for product improvement

  • Shared links spreading further than intended

  • Dependence on Google’s retention policies, not your own

Safer Practices: Encrypt or password-protect sensitive files, restrict link sharing, use two-factor authentication, and keep local backups.
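
For the “encrypt before you upload” step, here’s a minimal Python sketch using the third-party cryptography package; the file names are placeholders and the key handling is deliberately simplified. Any reputable encryption or password-protection tool gets you to the same place.

```python
# pip install cryptography
from cryptography.fernet import Fernet

# Generate a key once and keep it somewhere safe, *outside* the synced folder.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt locally before the file ever touches Drive, Dropbox, or email.
with open("contract.pdf", "rb") as f:        # placeholder file name
    encrypted = fernet.encrypt(f.read())

with open("contract.pdf.enc", "wb") as f:
    f.write(encrypted)

# Later, decrypt locally with the same key.
with open("contract.pdf.enc", "rb") as f:
    restored = fernet.decrypt(f.read())
```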

Gmail & Search: A Permanent Trail

Every email and search query leaves a footprint. Even after you hit delete, that data may persist in backups.

Risks:

  • Gmail attachments route through Google’s servers.

  • Search queries can unintentionally reveal confidential strategies.

Safer Practices: Use encrypted file transfer tools for sensitive docs, avoid putting contract language or proprietary details into search engines, and never email highly sensitive personal information.

Think of AI as an Intern

Even setting privacy aside, AI results shouldn’t be treated as gospel.

LLMs generate text probabilistically: they predict the most likely next words in a sequence, not the “right” answer. That’s why you sometimes get slightly different results for the same question. The outputs can look polished and convincing but still be incomplete or inaccurate (what people call “AI slop”).
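
To see the “probabilistic” part in miniature, here’s a toy Python sketch. It’s nothing like a real model and the word probabilities are invented, but run it a few times and the sentence changes, which is the same basic reason identical prompts can come back with different answers.

```python
import random

# Made-up next-word probabilities for the prompt "The termination clause is ..."
next_word_probs = {
    "enforceable": 0.45,
    "negotiable": 0.30,
    "unusual": 0.15,
    "ambiguous": 0.10,
}

words = list(next_word_probs)
weights = list(next_word_probs.values())

# Sampling from the distribution means each run can land on a different word.
# Real models do this over enormous vocabularies, token by token.
for _ in range(3):
    print("The termination clause is", random.choices(words, weights=weights)[0])
```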

Think of AI as a very smart intern: fast, creative, and helpful for brainstorming, simplifying, or editing, but always in need of human oversight (in its current state anyway).

What’s Safe / Not Safe to Upload

✅ Safer:

  • Redacted templates or generic text

  • Public-facing content (blogs, marketing drafts)

  • Hypotheticals (“What’s a fair termination clause?”)

❌ Not Safe:

  • Unredacted client contracts

  • Proprietary financials, employee data, health info

  • Anything you’re legally bound to keep confidential

Parting Thought

The tools we use daily are powerful, but far from private. Before uploading, run through your mental checklist and ask: If I broadcast this to the world, would it matter? If the answer is yes, pause.

Always be sure to protect your sensitive information in a way that makes sense for you, because once it’s out, there’s no pulling it back.

Want more practical, plain-language legal strategy? Subscribe here:

https://bradfordtobin.substack.com/

Disclaimer: This post is for informational purposes only and reflects the author’s personal views. It does not constitute legal advice, does not create an attorney-client relationship, and should not be relied upon as a substitute for consultation with qualified legal counsel. The content may be considered attorney advertising in some jurisdictions, including New York and Connecticut. Prior results do not guarantee a similar outcome.
