PSA: Anyone with a link can view your Granola notes by default
If you use the AI-powered note-taking app Granola, you might want to double-check your privacy settings. Though Granola says your notes are “private by default,” it makes them viewable to anyone with a link, and also uses them for internal AI training unless you opt out.
Granola describes itself as an “AI notepad for people in back-to-back meetings.” It integrates with your calendar to capture audio from your meetings, and then uses AI to generate a bulleted list of what you’ve heard, which it calls a “note.” You can edit the AI-generated notes, invite other collaborators to view them, and use Granola’s AI assistant to ask questions about your notes and review the meeting transcript they’re based on.
But in the app’s settings menu, Granola says, “By default, your notes are viewable to anyone with the link.” That means anyone on the web can see your notes if you accidentally share a link — potentially a major issue if you’re recording sensitive meetings. After testing this out for myself, I found that I could access my own note from a private window in my browser, all without signing into my Granola account. The site even tells you who the note belongs to and when it was created.
While I couldn’t view the entire transcript linked to the note, I could still view parts of it. Selecting one of the bullet points generated by Granola pulls up a quote from the transcript that the note is referring to, along with an AI-generated summary with additional context about the conversation.
On its website, Granola says “full transcript access is available to collaborators who open the same folder or note inside the Granola desktop app.” It’s not clear whether anyone with a Granola account can access your transcript, or if it’s just people you’ve shared your workspace with. Granola didn’t respond to a request for more information by the time of publication.
You can change who can view your links by opening Granola, selecting your profile in the bottom-left corner of the screen, and then choosing “Settings.” From there, navigate to the “Default link sharing” option, and change “Anyone with the link” to either “Only my company” or “Private.” If you delete your note, people with the link will no longer be able to access it.
One user on LinkedIn called attention to the public notes setting last year, saying, “these links aren’t indexed, but if you share or leak one – even accidentally – it’s public to whoever finds it.” And at least one major company has denied use of the tool to a senior executive due to security concerns, a source tells The Verge.
Additionally, Granola “may use anonymized data” to improve its AI models, according to the app’s support page. Enterprise customers are opted out of AI training by default, but people on all other plans aren’t. You can disable AI training by going to the settings menu and toggling off the “Use my data to improve models for everyone” option. The company says it doesn’t allow third-party companies, like OpenAI or Anthropic, to use your data for AI training, even when the setting is enabled.
Granola’s security page says the company stores your notes in a US-hosted Amazon Web Services private cloud, and says they are “encrypted at rest and in transit.” The company doesn’t store audio from meetings; it only saves meeting notes and transcripts, both of which it processes in the cloud.
Emma Roth / The Verge AI
https://www.theverge.com/ai-artificial-intelligence/906253/granola-note-links-ai-training-psa
