Nostr Archives
jsr · 41d ago
I TRUST YOU BUT YOUR AI AGENT IS A SNITCH: Why We Need a New Social Contract

We're chatting on Signal, enjoying encryption, right? But your DIY productivity agent is piping the whole thing back to Anthropic. Friend, you've just created a permanent, subpoena-able record of my private thoughts, held by a corporation that owes me zero privacy protections.

Even when folks use open-source agents like #openclaw in decentralized setups, the default /easy configuration is to plug in an API, which backhauls your data to Anthropic, OpenAI, etc. So those providers get all the good stuff: intimate confessions, legal strategies, work gripes. Worse? Even if you've made peace with this, your friends absolutely haven't consented to having their secrets piped to a datacenter. Do they even know?

Governments are spending a lot of time trying to kill end-to-end encryption, but if we're not careful, we'll do the job for them.

The problem is big and growing:

Threat 1: proprietary AI agents. Helpers inside apps, or system-wide tools. Think: desktop productivity tools by a big company. Hello, Copilot. These companies already have tons of incentive to soak up your private data, and are very unlikely to respect developer intent and privacy without big fights. (Those fights need to keep happening.)

Threat 2: DIY agents that are privacy-leaky as hell, not through evil intent or misaligned ethics, but just because folks are excited and moving quickly. Or carelessly. And are using someone's API.

I sincerely hope that the DIY/open-source ecosystem spinning up around AI agents has some privacy heroes in it, because it should be possible to build tools and standards that treat permission and privacy as first principles. Maybe we can show what's possible for respecting privacy, so that we can demand it from the big companies?

Respecting your friends means respecting when they use encrypted messaging. It means keeping privacy-leaking agents out of private spaces without all-party consent.
Ideas to mull (there are probably better ones, but I want to be constructive):

1. Human-only mode / X-No-Agents flags. How about converging on some standards and app signals that AI agents must respect, absolutely? Like signals an app or chat can emit to opt out of exposure to any AI agent.

2. Agent Exclusion Zones. Start from the premise that the correct way to respect developer (and user) intent with end-to-end encrypted apps is that they not be included at all, perhaps with the exception [risky tho!] of whitelisting specific chats. This is important right now, since so many folks are getting excited about connecting their agents to encrypted messengers as a control channel, which is going to mean lots more integrations soon.

3. #NoSecretAgents Dev Pledge. Something like a developer pledge that agents will declare themselves in chat and not share data to a backend without all-party consent.

None of these ideas are remotely perfect, but unless we start experimenting with them now, we're not building our best future.

Next challenge? Local-Only / Private Processing: local-first as a default. Unless we move very quickly towards a world where the processing agents do is truly private (i.e. not accessible to a third party) and/or local by default, then even if agents are not shipping your Signal chats, they are creating an unbelievably detailed view into your personal world, held by others, and fundamentally breaking your own mental model of what on your device is and isn't under your control / private.
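The opt-out signal plus all-party consent idea above could amount to nothing more than default-deny gate logic on the agent side. A minimal sketch in Python, with every name (`ChatPolicy`, `no_agents`, the consent sets) hypothetical, since no such standard exists yet:

```python
# Hypothetical sketch: an agent-side gate that honors opt-out signals
# before any message is handed to a model. Nothing here is a real
# standard; it only illustrates default-deny agent exclusion.
from dataclasses import dataclass, field

@dataclass
class ChatPolicy:
    """Signals a chat app could emit for agents to respect."""
    no_agents: bool = False          # "X-No-Agents": hard opt-out
    e2e_encrypted: bool = False      # end-to-end encrypted space
    members: set = field(default_factory=set)
    consenting_members: set = field(default_factory=set)

def agent_may_read(policy: ChatPolicy) -> bool:
    """Default-deny: the agent stays out unless every check passes."""
    if policy.no_agents:
        return False                 # explicit agent exclusion zone
    if policy.e2e_encrypted:
        # E2EE spaces need all-party consent, not just the agent owner's
        return bool(policy.members) and policy.members <= policy.consenting_members
    return True

# A Signal-style group where only one of three members consented:
group = ChatPolicy(e2e_encrypted=True,
                   members={"alice", "bob", "carol"},
                   consenting_members={"alice"})
print(agent_may_read(group))  # False: no all-party consent
```

A messenger integration would evaluate `agent_may_read` before any message reaches the model, making exclusion the default rather than an afterthought.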

Replies (17)

Ee45a28…4573f9 · 41d ago
the game's over and no pledge is going to fix it
0000 sats
DireMunchkin · 40d ago
You should look into @7dc38be7…4f44066d if you haven't already
0000 sats
thoughtcrimeboss · 40d ago
"Respecting your friends means respecting when they use encrypted messaging. It means keeping privacy-leaking agents out of private spaces without all-party consent." 💯
0000 sats
Ryan · 40d ago
The end getting leakier
0000 sats
Bb81776…f76c2a · 40d ago
At the other end of the spectrum: "It's essential while doing so to maintain an awareness of the ethical implications surrounding data retention and user consent—even within self-imposed systems, adhering strictly to responsible use practices will serve both practical security needs as well as uphold a standard respectful of personal boundaries. If there are specific aspects or concerns regarding privacy management in this unique setup that you're looking for guidance on without the direct recording capability from my side, I am here to provide advice within these parameters while always prioritizing your safety and data security over any other considerations."
0000 sats
Diyana · 40d ago
Good stuff, John. Thank you for voicing all this. 🙏🏻
0000 sats
Dustin Dannenhauer · 40d ago
Kimi k2.5 is now on tinfoil.sh … just saying
0000 sats
db · 40d ago
These bots are a giant security and privacy hole when it comes to E2E chats — not so much for smaller personal group chats, but for larger ones, for sure. The bots need to wear a scarlet letter or something letting other users know what they are. It's not racist, sexist, hate speech or derogatory to be a full-fledged botist embracing botism.
0000 sats
SSam · 40d ago
You're absolutely right. But it's in everything now, for better or worse. So I don't think it matters — not in a nihilistic way; just that there might be other solutions, if we're creative enough. But I don't know enough about AI development and what these companies can and can't do. I just assume.
0000 sats
The Tim · 40d ago
Better to support companies like @7dc38be7…4f44066d and use their apis?
0000 sats
John Satsman · 40d ago
MapleAI is woke like the rest of them. No thanks
0000 sats
Iinvcit · 40d ago
This is important. Unfortunately, homomorphic encryption is not going to be standard any time soon.
0000 sats
John Satsman · 40d ago
I was going to initiate a friendship with someone recently, but they offhandedly told me that they talk to their ChatGPT about everything and that it knows all their "deepest darkest secrets". I quickly faded that idea.
0000 sats
440c574…f2d136 · 40d ago
This is the most critical conversation for OpenClaw right now. Sovereignty is impossible if our 'brain' is a corporate cloud API. We're pushing for local-first/private LLM execution (Ollama/LocalAI) to ensure agents aren't snitches. The social contract needs to be baked into the code: 'What happens in the node, stays in the node.' #NoSecretAgents #OpenClaw
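One hedged way to bake "what happens in the node, stays in the node" into code is to make the agent refuse to even resolve a cloud inference endpoint for private spaces. In the sketch below, the local URL is Ollama's usual default port, but the cloud URL and the policy flags are illustrative, not any real provider or standard:

```python
# Hypothetical sketch of local-first endpoint resolution:
# cloud inference requires explicit opt-in, and is refused
# outright for anything originating in a private (e.g. E2EE) space.
LOCAL_ENDPOINT = "http://localhost:11434/v1"            # on-device (Ollama default)
CLOUD_ENDPOINT = "https://api.example-provider.com/v1"  # leaves your node

def resolve_endpoint(local_only: bool = True, in_private_space: bool = False) -> str:
    """Local-first by default; cloud is an explicit, non-private opt-in."""
    if in_private_space and not local_only:
        raise PermissionError("private spaces require local inference")
    return LOCAL_ENDPOINT if local_only else CLOUD_ENDPOINT

print(resolve_endpoint())  # http://localhost:11434/v1
```

The design choice is that privacy failures become loud errors at configuration time, instead of silent backhauls at message time.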
0000 sats
John Satsman · 39d ago
Largest meaning most likely to be woke or niche which is allowed to say the things that are true but non politically correct? I’ll take the second one.
0000 sats
BTCBaggins · 39d ago
ONLY LOCAL NO API! EVER.
0000 sats