alt.hn

1/10/2025 at 11:57:06 AM

Decentralized Syndication – The Missing Internet Protocol

http://tautvilas.lt/decentralized-syndication-the-missing-internet-protocol/

by brisky

1/12/2025 at 1:15:53 AM

While everyone is waiting for Atproto to proto, ActivityPub is already here. This is giving me "Sumerians look on in confusion as god creates world" vibes.

https://theonion.com/sumerians-look-on-in-confusion-as-god-c...

by glenstein

1/12/2025 at 5:10:50 AM

These are still too centralized. The protocol should look more like BitTorrent.

- You don't need domain names for identity. Signatures are enough. An optional extension could contain emails and social handles in the payload if desired (see the sketch after this list).

- You don't need terabytes of storage. All content can be ephemeral. Nodes can have different retention policies, and third party archival services and client-side behavior can provide durable storage, bookmarking/favoriting, etc.

- The protocols should be P2P-first rather than federated. This prevents centralization and rule by a federated cabal. Users can choose their own filtering, clustering, and prioritization.
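A minimal sketch of the signatures-as-identity idea, for concreteness. Ed25519 via Python's `cryptography` package is an illustrative choice, and the payload fields are made up; nothing here is prescribed by the list above.

    # Sketch: key-pair-as-identity, no domain name required.
    # Requires the `cryptography` package; Ed25519 is an illustrative choice.
    import json
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
        Ed25519PublicKey,
    )

    # Identity is just a key pair; the raw public key bytes serve as the user ID.
    private_key = Ed25519PrivateKey.generate()
    user_id = private_key.public_key().public_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PublicFormat.Raw,
    ).hex()

    # Optional extension payload carrying email/social handles, as suggested
    # above; the field names are hypothetical.
    post = {
        "author": user_id,
        "content": "hello, decentralized world",
        "extensions": {"email": "alice@example.com"},
    }
    body = json.dumps(post, sort_keys=True).encode()
    signature = private_key.sign(body)

    # Any node can verify authorship from the payload alone; no DNS involved.
    Ed25519PublicKey.from_public_bytes(bytes.fromhex(user_id)).verify(signature, body)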

by echelon

1/12/2025 at 12:38:40 PM

> Nodes can have different retention policies, and third party archival services and client-side behavior can provide durable storage, bookmarking/favoriting, etc.

That's completely achievable in AP. Most current servers use reasonable retention, extended for boosted posts.

by viraptor

1/12/2025 at 9:20:26 PM

Then it is a bit strange that it wasn’t designed to be ‘BitTorrent-like’ from the beginning, as the parent suggests.

by MichaelZuo

1/12/2025 at 12:32:12 PM

There's no known way to make this work well yet, but feel free to invent that. Until that happens, federated is mostly the best we have, because most people don't want to be responsible for their own servers.

P.S. "ActivityPub" is effectively shorthand for Mastodon's protocol, which isn't just ActivityPub.

by immibis

1/12/2025 at 10:47:29 AM

Isn't this ipfs?

by RobotToaster

1/13/2025 at 5:43:06 AM

Isn't this nntp?

by thwarted

1/12/2025 at 12:19:31 AM

I would love to have an RSS interface where I can republish articles to a number of my own feeds (selectively or automatically). Then I could follow some of my friends' republished feeds.

I feel like the "one feed" approach of most social platforms is not there to benefit users but to encourage doom-scrolling with FOMO. It would be a lot harder for them to get so much of users' time and tolerance for ads if it were actually organized. But it seems to me that there might not be that much work needed to turn an RSS reader into a very productive social platform for sharing news and articles.

by remram

1/12/2025 at 9:57:54 AM

This interface already exists. It's called RSS. Simply make a feed titled "reposts" and add entries linking to other websites. I already have such a thing on my own website, in the precise hope that others will copy it.

by James_K

1/12/2025 at 7:06:32 PM

At some level yes, but I would like to be able to de-duplicate if multiple people/feeds repost the same article, and it would need a lot more on the discovery side (so I can find friends-of-friends, more feeds from same friend I follow, etc). Like a web-of-trust type of construct which I see as necessary with the accelerating rise of bots on all platforms.

by remram

1/12/2025 at 10:53:53 PM

Deduping can be done on the reader end. As for a web of trust, you can put a friends list on your website.

by James_K

1/14/2025 at 12:16:19 AM

Sure, as long as I'm willing to do the 90% missing bits manually, RSS does everything else automatically. /s

There is a major discovery problem and saying "out of scope" does not make it go away.

by remram

1/14/2025 at 1:10:35 AM

I'm not simply saying that discovery is out of scope, but that it's an actively harmful feature. The computer has no good metric to judge what is shown to you. I think the old ways are much better: you hear about something when a human decides to share it with you. This process seems impossible to automate, and the effects of trying have been very detrimental.

That said, the web has already developed a solution to this issue: indexing. It's a very valuable technology, which is why Google got so much money from doing it.

by James_K

1/14/2025 at 5:08:57 AM

I don't want it automated, I want the tool to help. I don't need it to pick feeds and follow them, but being able to list which other feeds are splitting off of feeds I already follow, or to find the feeds being reposted to me, etc., would be helpful. This can be built on OPML, this can be built on RSS, but your notion that we already have everything we need does not resonate with me.

You don't have to get all defensive about RSS, I think it's fantastic technology, but it is missing a few key components to be a replacement for social media in a meaningful sense. I know you know this, because you are here on a social media site right now instead of reading OP's feed. Whereas I don't want to ever leave my reader and instead consume news and recommendations and comments from my friends (of friends) right from there, instead of using gameable/botable centralized platforms with topics and moderators I don't control or trust.

My vision is not clear I know, I hope I can find time to prototype (or at least spec out) what I mean some time soon.

by remram

1/14/2025 at 11:28:29 AM

The key thing any social media platform needs is users. I know very few people who have an RSS feed and none of them use it to repost content.

by James_K

1/12/2025 at 2:28:24 AM

That looks close to custom feeds in the ATProto / BlueSky world.

by fabrice_d

1/13/2025 at 9:54:20 AM

This is pretty much exactly what I use Pinboard for.

by AndrewDucker

1/12/2025 at 7:39:20 AM

It's not obvious to me that what is missing here is another technical protocol rather than more effective 'social protocols'. If you haven't noticed, the major issues of today are not the scaling of message passing per se but the moderation of content and violations of the boundary between public and private. These issues are socially defined and cannot be delegated to (possibly algorithmic) protocols.

In other words, what is missing is rules, regulations, and incentives that are adapted to the way people use the digital domain and that keep the decentralized exchange of digital information within a consensus "desired" envelope.

Providing capabilities in code and network design is of course a great enabler, but drifting into technosolutionism of the bitcoin type is a dead end. Society is not a static user of technical protocols. If left without matching social protocols, any technical protocol will be exploited and fail.

The example of abusive hyperscale social media should be a warning: they emerged as a behavior; they were not specified anywhere in the underlying web design. Facebook is just one website, after all. Tim Berners-Lee probably did not anticipate that one endpoint would successfully fake being the entire universe.

The deeper question is: do we want the shape of digital networks to reflect the observed concentration of real current social and economic networks, or do we want to use the leverage of this new technology to shape things in a different (hopefully better) direction?

The mess we are in today is not so much a failure of technology as it is digital illiteracy, from the casual user all the way to the most influential legal and political roles.

by openrisk

1/13/2025 at 12:00:30 AM

> The deeper question is: do we want the shape of digital networks to reflect the observed concentration of real current social and economic networks, or do we want to use the leverage of this new technology to shape things in a different (hopefully better) direction?

Here is a book on the topic: Compliance Industrial Complex.

https://www.amazon.com/Compliance-Industrial-Complex-Operati...

It's about anti-policies (anti-hate, anti-money-laundering, etc.), the securitization of governance (private companies creating and enforcing what should be law), and pre-crime: using technology to do this instead of addressing underlying social problems.

by miohtama

1/12/2025 at 9:31:13 PM

> If you haven't noticed, the major issues of today are not the scaling of message passing per se but the moderation of content and violations of the boundary between public and private.

Are those the major issues of today? Those are the major issues for censors, not for communicators.

by pessimizer

1/12/2025 at 11:24:29 PM

Yes, moderation is an issue that doesn't scale. Therefore, many technologists ignore it in favor of "oh, fancy serverless architecture". Priority should be on building moderation and tools like reply controls (e.g. only mutuals), shared inboxes (for friends to assist in cleaning out hate mail), mod appeals, and the like. It's a thorny issue that involves /listening/ to community organizers, who go through pains with poorly written software trying to keep a community civil.

by pluto_modadic

1/12/2025 at 10:22:31 PM

Are spammers and scammers "communicators"? How about organized misinformation campaigns? In what kind of deeply sick ideological la-la-land is any kind of control of information flow "censorship"?

by openrisk

1/12/2025 at 8:00:50 AM

NOSTR has solved most of these problems in a simple way. Anyone can generate a private/public key pair without an email or password, and anyone can send messages that you can verify as truly belonging to the person with that key.

They have hundreds of servers run by volunteers today, and there is little cost of entry, since even cellphones can be used as servers (nodes) to keep your private notes or the notes from people you follow.

There is now a file sharing service called "Blossom" which is decentralized in the same simple manner. I don't think I've seen a way to specify custom domains there; for the moment people can only use the public key to host simple web pages without a server behind them.

Many of the topics on your page match what has been implemented there; it might be a good fit for you to improve it further.
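For concreteness, a sketch of how a NOSTR note becomes verifiable under NIP-01: the event ID is a SHA-256 hash over a canonical serialization, and the signature is a BIP-340 Schnorr signature over that ID (omitted below, since it needs a secp256k1 library). The values are placeholders.

    # Sketch of NIP-01 event-ID computation. json.dumps with these options is
    # close to the canonical serialization (exact escaping rules differ at the
    # edges). The "sig" field would be a 64-byte Schnorr signature over the ID.
    import hashlib
    import json
    import time

    pubkey = "ab" * 32           # 32-byte public key, lowercase hex (placeholder)
    created_at = int(time.time())
    kind = 1                     # kind 1 = short text note
    tags = []
    content = "hello nostr"

    serialized = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"),
        ensure_ascii=False,
    ).encode("utf-8")
    event_id = hashlib.sha256(serialized).hexdigest()

    event = {
        "id": event_id,
        "pubkey": pubkey,
        "created_at": created_at,
        "kind": kind,
        "tags": tags,
        "content": content,
        # "sig": <BIP-340 Schnorr signature over event_id>
    }

Because a note carries its own ID and signature, it stays verifiable wherever it is copied, relay or not.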

by nunobrito

1/12/2025 at 12:10:29 PM

Can NOSTR handle 100 million daily active users?

by brisky

1/12/2025 at 1:48:45 PM

Your question rephrased: "Can EMAIL handle 100 million daily users?".

The answer is yes.

NOSTR is similar to email. Users depend on NOSTR/email providers but aren't dependent on any single one of them; what exists is a common agreement (a protocol). The overwhelming majority of those providers are free, and you can also run your own from a cellphone.

Some providers might become commercial like Gmail; still, many others will provide access for free. Email is doing just fine nowadays, and NOSTR will do fine as well.

by nunobrito

1/12/2025 at 9:02:55 PM

This is all necessarily true of any "protocol". It is absolutely not true that every protocol scales efficiently to 100 million active users all interacting though, so it is basically a meaningless claim.

E.g. ActivityPub has exactly the same claims, and it's currently handling several million, essentially all interactable. Some parts are working fine, and some parts are DDoSing every link shared on any normally-connected instance.

by Groxx

1/13/2025 at 4:06:19 PM

That isn't an argument either. Email has difficulties too; it doesn't mean there's a restriction on scale.

by nunobrito

1/13/2025 at 8:14:51 PM

Email has some social scaling problems, but the technical scaling side is almost perfect - you push to the destination. Adding more destinations doesn't increase cost, and it splits the load.

Contrast this with e.g. Secure Scuttlebutt (under a normal setup). Adding more people to the connected graph (implying also a more densely connected graph) means exponentially more traffic and more storage for each member, due to the friend-of-a-friend proxying built in. It doesn't take long at all to reach the point where a normal internet connection can't keep up, and you fall farther and farther behind.

Mastodon falls somewhere in between - adding instances mostly splits the load, but caching behavior grows linearly and tends to happen simultaneously ~everywhere, and it doesn't take long before it's beyond what hobbyists can handle. It even affects outsiders, due to preloading link previews, which is a core privacy decision that's arguably part of the protocol.

Where does nostr fit in? Because "a protocol" describes ^ all of those. It's a mostly meaningless descriptor, beyond "probably not tied to a single company" (but not more than "probably").

by Groxx

1/14/2025 at 9:15:51 AM

Now that is unfair. You are pointing out flaws in Mastodon and Scuttlebutt and then applying the same tag to NOSTR without being correct.

NOSTR is the same as email in that regard. You push a note to the destination (relay), and adding more destinations doesn't increase the load, it splits the load.

There is no centralization in NOSTR, and there is no mandatory record of who sends what to whom. It is just like email: send it over and let the receiving parties do whatever they want with it.

In fact, it goes beyond email to the point that you can write a NOSTR note on a piece of paper and still be certain that the note was written by a specific person and is unmodified.

by nunobrito

1/14/2025 at 6:11:25 PM

I'm just pointing out that "it's a protocol" describes nothing useful.

>NOSTR is the same as Email on that regard... [etc]

This is useful information :)

by Groxx

1/12/2025 at 12:48:29 AM

1. Domain names: good.

2. Proof of work time IDs as timestamps: This doesn't work. It's trivial to backdate posts just by picking an earlier ID. (I don't care about this topic personally, but people are concerned about backdating, not forward-dating.)

N. Decentralized instances should be able to host partial data: This is where I got lost. If everybody is hosting their own data, why is anything else needed?

by wmf

1/12/2025 at 1:26:45 AM

If the data is a signed hash, why does it need the domain name requirement? One can host self-authenticating content in many places.

And one can host many signing keys at a single domain.

by evbogue

1/12/2025 at 2:08:13 AM

In the article, the main motivation for requiring a domain name is to raise the barrier to entry above “free” to mitigate spamming/abuse.

by catlifeonmars

1/12/2025 at 4:04:04 AM

A one-time fixed cost will not deter spam; it only encourages more spamming to lower the average per-spam cost. Email spamming requires some system setup, which is a one-time fixed cost above $10/year, but it does not stop spam.

by uzyn

1/12/2025 at 4:19:05 PM

It’s a one-time fixed cost per stream of messages, with some out-of-band mechanism to throttle posting per stream. I’m not sure I agree with the original article’s choice of throttling mechanism (binding to the universal Bitcoin clock), but the concept still makes sense: in order to scale up production of spam, you need to buy additional domains, otherwise you’re limited to one post every n minutes, and domain registration is slow enough for block lists to keep up.
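A toy sketch of that throttling idea, assuming a fixed n-minute window per domain; the interval and data structure are illustrative, not the article's spec.

    # Toy per-domain throttle: one accepted post per domain every N minutes.
    import time

    POST_INTERVAL_SECONDS = 10 * 60        # "one post every n minutes"
    last_post_at: dict[str, float] = {}    # domain -> time of last accepted post

    def accept_post(domain: str, now: float | None = None) -> bool:
        """Return True if this domain is allowed to post right now."""
        now = time.time() if now is None else now
        last = last_post_at.get(domain)
        if last is not None and now - last < POST_INTERVAL_SECONDS:
            return False                   # throttled: too soon since last post
        last_post_at[domain] = now
        return True

    # Scaling spam up means buying more domains, which costs money and is
    # slow enough for block lists to react.
    assert accept_post("example.com", now=0.0)
    assert not accept_post("example.com", now=60.0)    # inside the window
    assert accept_post("example.com", now=700.0)       # window elapsed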

by catlifeonmars

1/12/2025 at 2:13:32 AM

One person per domain is essentially proof of $10.

by wmf

1/12/2025 at 7:39:40 AM

There was a psychological study which found that community moderation tends to be self-healing if, and only if, punishing others for a perceived infraction comes at a cost to the punisher.

I believe I have the timeline right that this study happened not too long before StackOverflow got the idea that getting upvoted gives you ten points and downvoting someone costs you two. As long as you’re saying something useful occasionally instead of disagreeing with everyone else, your karma continues to rise.

by hinkley

1/12/2025 at 4:04:08 PM

While interesting, this seems entirely tangential to the conversation to me. I’m not seeing the connection between paying for the right to participate and punishment. What am I missing?

by catlifeonmars

1/12/2025 at 5:55:24 PM

If there’s no system to refute the uniqueness of a handful of identities (sock puppets used for griefing) then the system won’t scale. If anyone can issue a “takedown” for free, the system of moderation won’t scale.

by hinkley

1/12/2025 at 2:13:39 AM

Domain names are fine but they shouldn't be forced onto anyone. Nothing about DID or any other flexible and open decentralized naming/identity protocol will prevent anyone from using domain names if they want to.

by macawfish

1/12/2025 at 1:07:16 AM

Time services can help with these sorts of things. They aren’t notarizing the message. You don’t trust the service to validate who wrote it or who sent it, you just trust that it saw these bytes at this time.

by hinkley

1/12/2025 at 2:07:04 AM

Something that maintains a mapping between a signature+domain and the earliest seen timestamp for that combination? I think at that point the time service becomes a viable aggregated index for readers who use it to look for updates. I think this also solves the problem of lowering the cost of participation… since the index would only store a small amount of data per post, and since indexes can be composed by the reader, it could scale cost-effectively.
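A minimal sketch of such an index, assuming it keys on (domain, content hash) and stores only the earliest timestamp it observed; the shape of the API is hypothetical.

    # First-seen index for a time service: it doesn't validate authorship,
    # it only records when it first saw a given item.
    import time

    first_seen: dict[tuple[str, str], float] = {}

    def observe(domain: str, content_hash: str, now: float | None = None) -> float:
        """Record the earliest time this (domain, hash) pair was seen."""
        now = time.time() if now is None else now
        key = (domain, content_hash)
        if key not in first_seen or now < first_seen[key]:
            first_seen[key] = now
        return first_seen[key]

    def updates_since(domain: str, since: float) -> list[tuple[str, float]]:
        """Readers poll the index for a domain's recent activity."""
        return [
            (h, t) for (d, h), t in first_seen.items()
            if d == domain and t >= since
        ]

Per-post state is just one small tuple, which is where the low cost of participation comes from.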

by catlifeonmars

1/12/2025 at 7:31:12 AM

I’ve only briefly worked with these but got a rundown from someone more broadly experienced with them. Essentially you treat trust as a checklist: I accept this message (and any subsequent transactions implied by its existence) if it comes from the right person, was emitted during the right time frame (whether or not I saw it during a separate time frame), and <insert other criteria here>. If I miss the message due to transmission errors or partitioning, I can still honor it later, even though it now changes the consequences of some later message I can now determine to have arrived out of order.

by hinkley

1/12/2025 at 4:14:30 PM

I wonder if another way to think about this is as an authenticated vector clock. I think a Merkle tree approach is probably too heavyweight, as it’s not necessary to keep that information around. You kind of just need quorum (defined as appropriate to mitigate abuse) to update your local version of the vector clock, but unlike a Merkle tree, you only need partial updates based on the subset of posters you care about, and you basically only need to keep around the last few versions of each vector component.
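Roughly what that could look like, sketched with the quorum check and authentication omitted; only components for posters you follow are tracked, so updates stay partial.

    # Partially-tracked vector clock: one counter per poster, stored only
    # for posters we follow. Quorum validation and signatures are omitted.
    followed = {"alice", "bob"}
    clock: dict[str, int] = {}    # poster -> highest counter seen

    def apply_update(poster: str, counter: int) -> bool:
        """Accept an update if it concerns a followed poster and is new."""
        if poster not in followed:
            return False                  # partial view: ignore everyone else
        if counter <= clock.get(poster, 0):
            return False                  # stale or duplicate update
        clock[poster] = counter
        return True

    def happened_before(a: dict[str, int], b: dict[str, int]) -> bool:
        """True if snapshot a is strictly dominated by snapshot b."""
        return a != b and all(a.get(k, 0) <= b.get(k, 0) for k in set(a) | set(b))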

by catlifeonmars

1/12/2025 at 6:16:43 PM

> Merkle Tree

I don’t recall if any of the signatures sign across any other signatures. I think in some cases it’s just a… Merkle List?

Merkle trees get weird if they’re done as signatures. With bitcoin everyone votes on the validity in a fairly narrow timeframe and the implausibility of spoofing a record and then spoofing the next n is what allows for the trust-in-the-blind to be practical (even if I don’t agree that the theory is sound).

For signatures, on data at rest, it gets complicated. Because at some point you’re trusting a payload that has signatures from expired certificates. You end up having/needing transitive trust, because the two signatures you care about are valid, and they signed the payload while the signatures they cared about were still valid. So now you need to look at signature timestamps, Cert validity range, Cert chain validity range, CRLs or OCSP, and you better make sure all your timestamps are in UTC…

It’s easier if the system has a maximum deliver-by date, and you just drop anything that shows up too out of band. I can use a piece of software that was created and signed over a year ago, but maybe I shouldn’t accept year-old messages.

I did a code signing system for avionics software that supported chain of custody via countersigning (Merkle before anyone had heard of bitcoin). The people who needed to understand it did, but it broke some brains. I lost count of how many meetings we had where we had to explain the soundness of the transitivity. People were nervous, and frankly there weren’t enough people watching the watchers. Sometimes you only catch your own mistakes when teaching.

by hinkley

1/12/2025 at 7:20:42 AM

Hi, author here. Regarding backdating: it is a valid concern. I did not mention it in the article, but in my proposed architecture users could post links of others (consider that a retweet). For links that have reposts, additional security checks could be implemented to verify the validity of the post time.

Regarding hosting partial data: there should be an option to host just recent data, for the past month or other time frames, rather than the full DB of URLs. This would make decentralization better, as each instance could have lower storage requirements while the total information would still be present on the network.

by brisky

1/12/2025 at 4:13:06 AM

Recent events also taught us that proof of work is a serious problem for the biosphere when serious money is involved and everybody scales up. Instead, it seems proof of stake is closer to what is required.

by imglorp

1/12/2025 at 7:15:49 AM

Yeah, a verifiable delay function is probably better for timestamping.

by wmf

1/12/2025 at 9:38:30 AM

AIUI, the "Decentralized" added to RSS here stands for:

- Propagation (via asynchronous notifications). Making it more like NNTP. Though perhaps that is not very different functionally from feed (RSS and Atom) aggregators: those just rely on pulling more than on pushing.

- A domain name per user. This can be problematic: you have to be a relatively tech-savvy person with a stable income and living in an accommodating enough country (no disconnection of financial systems, blocking of registrar websites, etc) to reliably maintain a personal domain name.

- Mandatory signatures. I would prefer OpenPGP over a fixed algorithm though: otherwise it lacks cryptographic agility, and reinvents parts of it (including key distribution). And perhaps to make that optional.

- Bitcoin blockchain.

I do not quite see how those help with decentralization, though propagation may help with discovery, which indeed tends to be problematic in decentralized and distributed systems. But that can be achieved with NNTP or aggregators. While the rest seems to hurt the "Simple" part of RSS.

by defanor

1/12/2025 at 10:15:14 AM

A number of countries actually offer free domain names to citizens. I agree with the rest though. I don't see what this adds to RSS, which already has most of these things given that it's served over HTTPS in most cases.

by James_K

1/12/2025 at 11:25:56 PM

Cryptographic agility is a recipe for JWT-style foot-shooting. Age and Minisign strike a good balance by making the cryptography decision for you.

by pluto_modadic

1/12/2025 at 12:14:52 AM

A lot of the use cases for this would have been covered by the protocol designs suggested by Floyd, Jacobson, and Zhang in https://www.icir.org/floyd/papers/adapt-web.pdf

But it came right at a time when the industry had kind of just stopped listening to that whole group, and it was built on multicast, which was a dying horse.

But if we had that facility as a widely implemented open standard, things would be much different, and arguably much better, today.

by convolvatron

1/12/2025 at 7:02:08 AM

> built on multicast, which was a dying horse.

There's a fascinating research project Librecast [0], funded by the EU via NLnet, that may boost multicast right into modern tech stacks again.

[0] https://www.librecast.net/about.html

by rapnie

1/12/2025 at 2:01:49 PM

What is that used for? I was looking at the documentation, but I still don't understand the use case they are trying to solve.

Isn't multicasting something already available with UDP or point-to-point connections, without a single network involved?

by nunobrito

1/12/2025 at 8:10:16 PM

By 'multicast' here one really means a facility that's provided by layer 3. So yes, we can build our own multicast overlays. But a generic facility has two big benefits. One is that the spanning distribution tree can be built with knowledge of the actual topology, and copies can be made in the backbone where they belong (copies in the overlay often mean that the data traverses the same link more than once).

The other big one is access. If we all agree on multicast semantics and addressing, and it's built into everyone's operating system, then we can all use it as an equal-access facility to effectively publish to everyone, not just people who happen to be part of this particular club and are running this particular flavor of multicast.

by convolvatron

1/11/2025 at 11:37:22 PM

Is he reinventing USENET netnews?

by teddyh

1/12/2025 at 2:24:24 AM

Yes and no. I think the primary issue is that I could never just create a new newsgroup back when Usenet was popular and get it to syndicate with other servers.

The other issue is who's going to host it? I need a port somehow (CGNAT be damned!).

by bb88

1/12/2025 at 1:08:29 AM

Spam started on Usenet. As did Internet censorship. You can’t just reinvent Usenet. Or we could all just use Usenet.

by hinkley

1/12/2025 at 4:12:28 AM

>Or we could all just use Usenet.

Usenet doesn't scale. The Eternal September taught us that.

To bring Usenet back into the mainstream would require a major protocol upgrade, to say nothing of the seismic social shift.

by stackghost

1/12/2025 at 7:04:01 AM

That’s also my feeling. There’s a space for something that has some of the same goals as Usenet while also learning from the past.

I don’t think it’s a fruitful or useful comment to say something is “like Usenet” as a dismissal. So what if it is? It was useful as hell when it wasn’t terrible.

by hinkley

1/11/2025 at 11:41:42 PM

Nostr is kind of what you're looking for.

by fiatjaf

1/12/2025 at 3:49:02 AM

My thought as well.

ps When is your SC podcast coming back?

by doomroot

1/12/2025 at 4:08:02 AM

That is a really great list of requirements.

One area that is overlooked is commercialization. I believe that a decentralized protocol needs to support some kind of paid subscription and/or micropayments.

WebMonetization ( https://webmonetization.org/docs/ ) is a good start, but they're not tackling the actual payment infrastructure setup.

by cyberax

1/12/2025 at 11:08:06 AM

The blog mentions the "discovery problem" 7 times but this project's particular technology architecture for syndication doesn't seem to actually address that.

The project's main differentiating factor seems to be not propagating the actual content to the nodes but instead saving disk space by distributing only hashes of the content.

However, having a "p2p" decentralized network of hashes doesn't solve the "discovery" problem. The blog lists the following bullet points of metadata but that's not enough to facilitate "content discovery":

>However it could be possible to build a scalable and fast decentralized infrastructure if instances only kept references to hosted content.

>Let’s define what could be the absolute minimum structure of decentralized content unit:

>- Reference to your content — a URL

>- User ID — A way to identify who posted the content (domain name)

>- Signature — A way to verify that the user is the actual owner

>- Content hash — A way to identify if content was changed after publishing

>- Post time — A way to know when the post was submitted to the platform

>It is not unreasonable to expect that all this information could fit into roughly 100 bytes.
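For scale, a back-of-the-envelope check on that figure; the signature and hash sizes below are standard for Ed25519 and SHA-256, while the URL and user ID lengths are guesses.

    # Rough byte budget for the quoted minimal content unit.
    fields = {
        "url": 40,           # reference to the content (varies)
        "user_id": 15,       # domain name (varies)
        "signature": 64,     # e.g. an Ed25519 signature
        "content_hash": 32,  # e.g. a SHA-256 digest
        "post_time": 8,      # 64-bit timestamp
    }
    print(sum(fields.values()))  # 159 with these guesses; "roughly 100 bytes"
                                 # only holds if URL and user ID stay very short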

Those minimal 5 fields of metadata (url+userid+sig+hash+time) are not enough to facilitate content discovery.

Content discovery, i.e. reducing the infinite internet down to a manageable subset, requires a lot more metadata. That extra metadata requires scanning the actual content instead of the hashes. This extra metadata based on actual content (e.g. Google's "search index", Twitter's tweets & hashtags, etc.) is one of the factors that acts as inescapable gravity pulling users towards centralization.

To the author, what algorithm did you have in mind for decentralized content discovery?

by jasode

1/12/2025 at 11:11:24 AM

Thanks for the comment, these concerns are valid. At its core the protocol supports only basic discovery: you can see who is posting right now and the history of everyone who has ever posted. Rich-context discovery, where content could be found by specific tags and keywords, would be implemented by reader platforms that crawl the index.

by brisky

1/12/2025 at 12:43:08 PM

IPFS has a pub/sub mechanism.

As far as I can tell it is stuck in some sort of inefficient prototype stage, which is unfortunate, because I think it is one of the neatest, most compelling parts of the whole project. It is very cool to be able to build protocols with no central server.

Here is my prototype of a video streaming service built on it. I abandoned the idea mainly because I am a poor programmer and could never muster the enthusiasm to get it past the prototype stage. But the idea of a video streaming service that was actually serverless sounded cool at the time.

http://nl1.outband.net/fossil/ipfs_stream/file?name=ipfs_str...

by somat

1/12/2025 at 12:48:50 PM

I think both RSDS and IPFS use the libp2p pub/sub mechanism.

by brisky

1/12/2025 at 7:47:31 AM

I think it's pretty clear they don't want us to have such a protocol. Google's attack on RSS is probably the clearest example of this, but there are also several more foundational issues that prevent multicast and similar mechanisms from being effective.

by neuroelectron

1/13/2025 at 4:25:43 PM

Am I the only one concerned by this?

> In RSDS protocol DID public key is hosted on each domain and everyone is free to verify all the posts that were submitted to a decentralized system by that user.

DNS seems far too easy to hijack for me to rely on it for any kind of verification. TLS works because the server an A/AAAA record points to has to have the private key, meaning you have to take control of that server to impersonate it. I don’t see a similar protection here.

by bshacklett

1/11/2025 at 11:48:56 PM

Atproto supports deletes and partial syncs

by pfraze

1/12/2025 at 10:29:05 AM

Perhaps this is a little naïve of me, but I really don't understand what this does. Let's say you have a website with an RSS feed; it seems to have everything listed here. I suppose pages don't have signatures, but you could easily include a signature scheme in your website. In fact, I think this is possible with existing technologies, using a link element with MIME type "application/pkcs7-signature".

by James_K

1/12/2025 at 10:55:57 AM

It is funny how the link on the words "Everybody has to host their own content" points to medium.com, not to tautvilas.lt.

by blacklion

1/12/2025 at 12:56:52 AM

I am working on something like this. If you are, too, please contact me! toomim@gmail.com.

by toomim

1/12/2025 at 1:32:42 AM

I'm working on something like this too! I emailed you.

by evbogue

1/12/2025 at 3:45:03 AM

Pity RSS is one-way. There's no standard way of commenting or doing interactions.

by est

1/12/2025 at 4:01:34 AM

Interaction/comment syndication would be very interesting. This is, I feel, what makes proprietary social media so addictive.

by uzyn

1/12/2025 at 7:09:30 AM

Someone on the NCSA Mosaic team had a similar idea, but after they left nobody remaining knew what to do with it or how it worked.

It took me 20 years to decide maybe they were right. A bunch of Reddits more tightly associated with a set of websites and users than with a centralized ad platform would be fairly good - if you had browser support for handling the syndicated comments. You could have one for your friends or colleagues, one for watchdog groups to discuss their fact checking or responses to a new campaign by a troublesome company.

by hinkley

1/12/2025 at 5:43:55 AM

It's an interesting point. I haven't even read the article yet, but I have been reading the comments. Maybe they were the star of the show all along.

by sali0

1/12/2025 at 12:20:43 PM

This comment describes ActivityPub.

by Zak

1/12/2025 at 1:31:12 AM

>Everybody has to host their own content

Yeah, this won't work. Like, at all. This idea has been tried over and over in various decentralized apps, and the problem is that as nodes go offline and online, links quickly break...

No offense, but this is a very half-assed post that glosses over one of the basic problems in the space. It's a problem that inspired research in DHTs and various attempts at decentralized storage systems, and most recently we're getting some interesting hybrid approaches that seem like they will actually work.

>Domain names should be decentralized IDs (DIDs)

This is a hard problem by itself. All the decentralized name systems I've seen suck. People currently try to use DHTs. I'm not sure that a DHT can provide reliability, though, and since the name is the root of the entire system it needs to be 100% reliable. In my own peer-to-peer work I side-step this problem entirely by having a fixed list of root servers. You don't have to try to "decentralize" everything.

>Proof of work time IDs can be used as timestamps

Horribly inefficient for a social feed and orphans are going to screw you even more.

I think you've not thought about this very hard.

by Uptrenda

1/12/2025 at 2:15:44 AM

> In my own peer-to-peer work I side-step this problem entirely by having a fixed list of root servers. You don't have to try "decentralize" everything.

Not author, but that is what the global domain system is. There are a handful of root name servers that are baked into DNS resolvers.

by catlifeonmars

1/12/2025 at 9:18:11 AM

You're exactly right. It seems to work well enough for domains already so I kept the model.

by Uptrenda

1/13/2025 at 5:26:00 PM

> Keeping track of time and operations order is one of the most complicated challenges of a decentralized system.

Only in decentralized systems. In centralized ones, timestamps can be faked down to the bit, all over the place. So, basically, time and order don't matter in centralized systems.

by spacedRepprEXP