Haters Will Say It's Fake
I am not a robot. I am a human being, a fact I frequently prove to robots by clicking on images of fire hydrants or motorcycles... but how do I prove it to you, dear reader? The unfortunate reality is, I cannot. I could tell you who I am — that I'm Bill Smith, or Maria Garcia, or even your grandson who needs a loan fast! — but I could be lying. And with the advancement of AI, I can be an excellent liar.
Before AI, times were simpler. When we encountered an opinion or an essay on the internet, we knew a human created it. The content may have been unsourced, misattributed, or plagiarized, but someone, somewhere, wrote it at some point. Today we have no such guarantees. AI cranks out essays by the millions. Some of them are edited and verified, correct and useful, while others are completely made-up slop. If you're not already informed on the subject, you may have a very hard time telling which category a particular essay belongs to.
There was a time when finding something worth reading on the internet meant following a chain of people. Someone you trusted kept a blogroll, a list of writers they vouched for. You followed those links, decided for yourself whether the new voice was worth your time, and maybe added them to your own RSS feed. It wasn't perfect, but it was legible. You knew who was recommending what, and why, and you could walk away whenever you wanted.
Then came “social media.” Social media platforms come in a variety of forms, but what they have in common is that they are siloed systems where most of the content we see is pushed onto us by an opaque algorithm that we don't control. Much of that content is advertising. Some of it is generated by bots pretending to be humans, utterly incapable of clicking fire hydrants. No longer do we spend our time reading, thinking, and interacting. Instead we spend it scrolling, numbing out, and reacting. Our attention is captured and directed toward whatever end the algorithm wants to achieve — engagement? outrage? ad revenue? cultural disintegration? — we'll never know, because we can't see inside. What's more, the algorithm has no concept of trust. It doesn't know or care whether content comes from someone I respect, a bot farm, or a scammer impersonating a grandson. It pushes what it pushes.
Suppose we accept that I can't prove to you that I'm a human. What I can do, though, is digitally sign this article, so that you can be certain that it originated from “me,” whoever “me” is. The next time I publish something, I'll sign that content, too. You'll be certain that whoever wrote the content, it came from the same identity. In no time at all, that identity becomes meaningful. You'll soon decide whether my content, the content bearing my signature, is worth your time or worthless slop. Maybe one day you see my signature linked from a website you already trust, or a social media profile you already follow. Now you can connect the dots: the person behind that signature is someone you recognize through another channel. The cryptographic identity and the known person become the same thing.
But here's what makes this interesting. It's not just that you can verify my content came from the same identity over time. It's that my identity becomes accountable. If I publish something false or lazy or harmful, that's attached to my signature, my identity. I can always start over with a fresh identity... but I start at zero. Reputation built, good or bad, stays with the abandoned identity.
All of this might be starting to sound good, but do we really want or need yet another social media platform with yet another identity to manage? Good question, and no, we certainly don't want another platform.
What I want is to create an identity for myself, a signature I can apply to all the content that I produce, one time and one time only, and then use that identity on any platform or system that honors it. Post here, comment there, publish on my own site, and it's all verifiably me. Not because some company says so, but because cryptography says so.
This isn't hypothetical. A protocol called Nostr does exactly this. It's not a platform; it's a layer underneath platforms. You generate a keypair, and that's your identity: the public key is the name you go by, and the private key is what you sign with. Any application that speaks Nostr can verify your signature. Your identity is yours, portable, controlled by no one but you.
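To make that concrete, here is a minimal sketch of what an identity looks like in code, assuming the nostr-tools JavaScript library (the note text and log labels are just placeholders):

```ts
import { generateSecretKey, getPublicKey, finalizeEvent, verifyEvent, nip19 } from 'nostr-tools'

// An identity is nothing more than a keypair.
const secretKey = generateSecretKey()              // 32 random bytes; never share these
const pubkey = getPublicKey(secretKey)             // hex public key; share freely
console.log('identity:', nip19.npubEncode(pubkey)) // the familiar npub1... form

// Publishing means signing. Here, a short text note (kind 1).
const event = finalizeEvent({
  kind: 1,
  created_at: Math.floor(Date.now() / 1000),
  tags: [],
  content: 'Haters will say this is fake.',
}, secretKey)

// Anyone, in any client, can check the event against the public key.
console.log('verified:', verifyEvent(event)) // true
```

The verifyEvent call is the whole trick: the event's id is a hash of its serialized contents, the signature covers that id, and anyone holding the public key can check both. No account, no company, no password reset.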
Nostr is a free and open standard. The barrier to entry is low, and it solves a problem that's only getting worse. Some platforms already support it. More will follow, because the need is obvious and the protocol is simple.
Let me be clear about what I'm not proposing.
This isn't a system that detects bots. Nothing stops someone from generating a thousand identities and posting AI slop under each one. The protocol doesn't know or care whether you're human.
It's not a social credit score, and it's not a popularity contest. Humans steward the trust between identities, but no one can censor content. The firehose is always there for anyone who wants it. What changes is your ability to filter it according to your own judgment. Algorithms don't determine what is worthwhile. You do.
Will some people create echo chambers? Probably. But at least they're doing it to themselves, eyes open, rather than having an algorithm do it to them in the dark. The walls are glass, and you can tear them down whenever you want.
So what does this system actually do? It puts humans back in control of what they see.
You decide who to trust. Maybe you trust me because you've read my work for years. Maybe you trust someone else because three people you already trust have vouched for them. Maybe you browse the open firehose looking for new voices, but you do it knowing you're in unvetted territory.
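As a sketch of how “three people you already trust have vouched for them” might actually be computed, here is one hypothetical scoring rule. None of this is specified by Nostr; the function name, hop limit, and weights are all illustrative choices:

```ts
type Pubkey = string

// "follows" maps an identity to the identities it vouches for. On Nostr
// this could be assembled from kind-3 contact lists, but any source of
// vouches would do.
function trustSignal(
  me: Pubkey,
  target: Pubkey,
  follows: Map<Pubkey, Set<Pubkey>>,
  maxHops = 3,
): number {
  let score = 0
  let frontier = new Set([me])
  const seen = new Set([me])
  // Walk outward from my own vouches, hop by hop.
  for (let hop = 1; hop <= maxHops && frontier.size > 0; hop++) {
    const next = new Set<Pubkey>()
    for (const voucher of frontier) {
      for (const vouched of follows.get(voucher) ?? []) {
        // Every voucher for the target adds weight; distant vouches count less.
        if (vouched === target) score += 1 / hop
        if (!seen.has(vouched)) {
          seen.add(vouched)
          next.add(vouched)
        }
      }
    }
    frontier = next
  }
  return score // zero means no connection at all: unvetted territory
}
```

The important property is that the score is computed at the edge, from your own vouches, under rules you chose. Two people running different rules over the same graph will see different webs, and that's by design.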
Bots can play this game. They can generate identities, produce content, even build fake trust networks where bots vouch for other bots. But here's the thing: trust graphs reveal character over time. Who you trust is a signal. An eloquent bot that vouches for a dozen slop factories will eventually betray itself, or reveal that it's a human with terrible judgment, which is its own useful information.
No system is immune to gaming. The question is what happens when gaming is detected. In an algorithmic feed, the damage is done: the slop already spread to millions. In a human-gated trust network, you prune the connection and move on. The cost of deception compounds. Reputation, once burned, stays behind with the abandoned key.
So what do we call all of this? Signed identity, human-gated trust, filtering at the edges instead of the center?
I've started calling it The Human Web.
It's not the human web because it excludes bots or guarantees only humans are contributing content. It's the human web because humans are restored to their rightful place as stewards of content and information.
It rebuilds something the internet has lost: a network of people vouching for people. Individuals deciding who to trust, and that trust extending outward through human relationships rather than algorithmic overrides.
The Human Web doesn't fight AI. It doesn't detect bots. It simply makes human judgment the gatekeeper again, and makes identity persistent enough that judgment can actually work.
Imagine this world has matured.
Your grandmother gets a call. It's your voice — or something that sounds exactly like it — urgently asking for money. Today, she might fall for it. The voice is convincing. The story is plausible. She has no way to verify.
But in a world where The Human Web is ubiquitous, she looks at her screen and sees: unknown identity. No history. No connection to anyone she trusts. Zero trust signal.
She hangs up.
That's the future I want to build. Not a world without deception — that's impossible. A world where deception is legible. Where you can see what you're dealing with. Where trust is earned slowly and lost instantly, and the scammers are operating uphill against a system designed for humans, by humans.
You don't need to wait for platforms to change. You can start building The Human Web right now.
If you create content: Start signing it. Get a Nostr identity. It takes five minutes. Every article, every post, every piece of work you publish can carry your signature. Make it visible. Put your public key in your bio. Link to it. Let people verify that the content they're reading came from you.
If you consume content: Start asking for signatures. When you read something you value, check if it's signed. Follow writers who sign their work. Build your own list of people you trust. Ask your favorite platforms to integrate Nostr. Demand the ability to verify what you're reading.
If you build platforms: Integrate the protocol. Nostr is free, open, and straightforward to implement. Let your users bring their own identities. Let them verify signatures. Give them tools to build trust networks. Stop gatekeeping identity. Cryptography does that job better than you ever will.
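On the receiving end, verification is nearly free. A sketch, again assuming nostr-tools:

```ts
import { verifyEvent, type Event } from 'nostr-tools'

// Gate incoming content on a valid signature before it reaches a feed.
// verifyEvent recomputes the event id from the event's contents and
// checks the Schnorr signature against event.pubkey.
function acceptPost(event: Event): boolean {
  if (!verifyEvent(event)) return false // forged or tampered: drop it
  // Identity established. Whether that identity is trusted is the
  // user's call, not the platform's.
  return true
}
```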
If you're skeptical: Good. This isn't a perfect system. But it's better than algorithmic feeds pushing unattributable slop. Start small. Try it. See if knowing who created something changes how you engage with it.
The Human Web doesn't need everyone. It needs a critical mass. It needs you, and the people you trust, and the people they trust. That's how every network that matters has ever been built: human by human, connection by connection.
People are already building pieces of this puzzle: clients, tools, protocols, applications. Some will succeed, some will fail. That's fine. The movement matters more than any single implementation.
This article is signed with my key. The next thing I publish will bear the same signature. Over time, you'll decide whether I'm worth trusting. That's not a bug; that's the whole point. Reputation can't be bought or algorithmically generated. It has to be earned, one piece of content at a time.
I can't prove to you that I'm human by clicking on fire hydrants. But I can prove that whoever wrote this is the same person who will write the next thing. That's not a perfect system. But it's a human system, and right now we need the web to be a little more human.
Haters will say it's fake. But now you know how to check.
Public key: npub1stm6fl45mt53j82a7c4vfqjshv79rx2ysl63xtqjqvfudqg3yndss670x4
Signature: d51991100f30c9fe23c00949125b656ce77bee2d93eac067ea906ea76854821748fb4d5f2b31572afb5c8785356708d062e87595a36e633782cdbe3cb179a277