Yes. But you have to do it the way Usenet was built, and you may not like that.
I was intelca!cem on Usenet. Modem charges (phone calls) weren't free outside the local calling zone, and places like Berkeley (ucbvax) were often "hubs" that could use university resources to move data over longer distances. Every node had all the data, so system administrators had to come up with schemes for what to delete and when. Even though it was "open," people regularly railed against "the cabal," the folks who essentially controlled the namespace: if you wanted a new comp.sys.<architecture> newsgroup, the cabal had to sign off on it, or the other nodes wouldn't accept your postings to it. That fight spawned the "alt" tree, which allowed for an affirmative choice between "constrained" posts and "free-for-all" posts. The "risky" groups distributed binaries, some of which were CSAM and some of which were pirated software. This led to a lot of nodes not carrying them (easier than filtering out the bad stuff). For history, "pathing" was something like sun!ucbvax!intelca!cem, which meant Sun sent things to ucbvax, and ucbvax would forward things to intelca. So if ucbvax decided not to "carry" a newsgroup, then no one downstream from them would get it. (This led to ISPs advertising "full" Usenet feeds in the '90s.)
So what would a "modern" Usenet look like? Consider: things that were expensive then (long distance and storage) are cheap now, and some things that were cheap then (class C subnets) are expensive now. Usenet was hosted on netnews, which was described by the Network News Transfer Protocol, or NNTP, just like the web is hosted on the Hypertext Transfer Protocol, HTTP. First and foremost, a modern Usenet needs a modern foundational transport protocol. This would be IPv6 based (lots of available addresses) and end-to-end encrypted. You could implement this with carrier grade NAT or you could use Tailscale. The latter is "easy" and a proof of concept would be easy to slap together. I think of this as engineering deliverable #1: a protocol for peer-to-peer newsgroup sharing with IPv6 addresses and end-to-end encryption.
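To make deliverable #1 a little more concrete, here is a rough Python sketch of the exchange: each peer offers the message-ids it holds and pulls only the posts it is missing, over plain TCP on the peers' IPv6 addresses, with Tailscale (WireGuard underneath) assumed to provide the end-to-end encryption. The "HAVE"/"SEND" verbs and the JSON framing are illustrative, not a spec.

    import json
    import socket

    def sync_with_peer(peer_addr, port, local_ids):
        """Offer our message-ids to a peer; return the posts we were missing."""
        with socket.socket(socket.AF_INET6, socket.SOCK_STREAM) as s:
            s.connect((peer_addr, port))          # peer_addr is an IPv6 (tailnet) address
            s.sendall(json.dumps({"verb": "HAVE", "ids": sorted(local_ids)}).encode() + b"\n")
            buf = b""
            while not buf.endswith(b"\n"):        # peer replies with one JSON line
                chunk = s.recv(65536)
                if not chunk:
                    break
                buf += chunk
        reply = json.loads(buf)                   # e.g. {"verb": "SEND", "posts": [...]}
        return [p for p in reply.get("posts", []) if p["id"] not in local_ids]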
From a cost perspective, that needs to be borne by the user. Fortunately, people already pay for their Internet access and computer(s), so really it just requires dedicating a computer to be your "node" in the hierarchy.
This gives us engineering deliverable #2: a repository of name-to-address mappings. While this could easily be provided by a bespoke DNS with its own set of root servers, the limitations and risks of third-party DNS servers are well known. In 1990 I designed a system for NIS+ which allowed clients to start from zero trust, plus an introduction to a "server" they could trust, and then elaborate that trust across a much larger network. It was implemented by a brilliant engineer (a guy named Kamal Anand) who created a cache service for keeping a credential cache of trusted servers. The advantage of such a system is that "who you trust" is something you get to decide, and you can decide not to trust servers that others trust, or to trust servers that others don't.
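To illustrate the "start from zero trust and elaborate outward" idea, here is a minimal sketch of a local credential cache. The field names and the vouching rule are my assumptions for the sketch, not how the NIS+ implementation actually worked.

    from dataclasses import dataclass

    @dataclass
    class Credential:
        handle: str      # human name for the node, e.g. "cem"
        address: str     # its IPv6 address
        pubkey: bytes    # key used to verify its signatures

    class TrustCache:
        """Name-to-address mapping answered only from servers *you* trust."""
        def __init__(self):
            self._trusted = {}                     # handle -> Credential

        def introduce(self, cred, vouched_by=None):
            # Trust grows only by introduction: if someone vouches for this
            # server, the voucher must already be in our cache.  No global root.
            if vouched_by is not None and vouched_by not in self._trusted:
                raise ValueError(f"{vouched_by} is not trusted here")
            self._trusted[cred.handle] = cred

        def resolve(self, handle):
            return self._trusted.get(handle)

        def revoke(self, handle):
            self._trusted.pop(handle, None)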
Engineering deliverable #3 would be a core "server" which exchanges information with the other servers it trusts: offering posts it has seen that they have not, and passing along posts it receives once it has them. It has functions to list posts and to work with a client to change visibility rules (read/unread, subscribed/unsubscribed, etc.). I will admit that one of the things that most impressed me when I looked at how Gmail was implemented was the idea that messages were the "general ledger" and folders, state, etc. were just labels on the transactions. This works really well because delivery can be one linear write to a single blob, with individual "messages" and their attachments addressable as content within that blob. So the indexes stay small, and the messages could be written to write-once optical media if you wanted.
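Here is roughly what that "general ledger" storage could look like: one append-only blob, with labels kept in a small side index (in memory here, for brevity) that points back into the blob by offset. The file layout is illustrative only.

    import json

    class Ledger:
        """Append-only post store; read/unread, subscriptions, etc. are labels."""
        def __init__(self, path):
            self._path = path
            self._labels = {}                      # label -> set of offsets

        def append(self, post):
            """One linear write; the offset becomes the post's permanent address."""
            line = json.dumps(post).encode() + b"\n"
            with open(self._path, "ab") as f:
                offset = f.tell()                  # append mode: positioned at end of file
                f.write(line)
            return offset

        def label(self, offset, tag):
            self._labels.setdefault(tag, set()).add(offset)

        def unlabel(self, offset, tag):
            self._labels.get(tag, set()).discard(offset)

        def read(self, offset):
            with open(self._path, "rb") as f:
                f.seek(offset)
                return json.loads(f.readline())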
Engineering deliverable #4 is a client that authenticates to "your" server from any connection point on the Internet (remember I said Tailscale makes this super easy), runs on any of your devices (phone, tablet, laptop), and lets you interact with the service: list, read, write, delete, and classify posts.
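The client can be a very thin shim, basically RPC calls to "your" server over the tailnet. Something like the sketch below, where the /groups and /posts endpoints are made up for illustration rather than any defined API.

    import json
    import urllib.request

    class NodeClient:
        """Talks to your own node, e.g. NodeClient("http://[fd7a::1234]:8119")."""
        def __init__(self, base_url):
            self.base_url = base_url.rstrip("/")

        def _call(self, method, path, body=None):
            data = json.dumps(body).encode() if body is not None else None
            req = urllib.request.Request(self.base_url + path, data=data, method=method,
                                         headers={"Content-Type": "application/json"})
            with urllib.request.urlopen(req) as resp:
                return json.loads(resp.read())

        def list_posts(self, group):
            return self._call("GET", f"/groups/{group}/posts")

        def write_post(self, group, subject, body):
            return self._call("POST", f"/groups/{group}/posts",
                              {"subject": subject, "body": body})

        def classify(self, post_id, label):
            return self._call("POST", f"/posts/{post_id}/labels", {"label": label})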
I believe the key to financial success here is that the only "cost" to host this service is equivalent to running a dozen DNS server instances. All the hardware, storage, and connectivity costs are borne by the client through systems they already pay for. As a client you could choose to trust a mapping server, or maybe trust the server your friend is using, with a choice to trust whomever that server trusts.
Finally, you would have to accept that in all likelihood you and your buddies would be the only "cool kids" on this service for the first couple of years of its existence. There's a lot of muscle memory out there for existing sites (Reddit et alia), and so a lot of "oh, I read that on neuvoUsenet, you probably didn't see it" experiences. Investing in outreach to specific groups would benefit things greatly.
The bottom line is that I consider this both a good idea and technically doable, but it is a labor of love at present; it is, by design, difficult to monetize.
Likewise, a DHT for "subreddit" or topic identity (or even a blockchain for that, since naming is one of the few things that can be solved with a blockchain, though as always there are tradeoffs) shouldn't be too hard to build, as a lot of the ingredients (Kademlia, GNS, IPFS, IPNS, etc.) already exist.
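For a flavor of how the naming piece could work: hash the topic name into a Kademlia-style key and let lookups walk toward the nodes closest to that key by XOR distance. A toy illustration, not GNS or IPNS.

    import hashlib

    def topic_key(name):
        """160-bit DHT key for a topic, e.g. topic_key("comp.sys.riscv")."""
        return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

    def xor_distance(a, b):
        """Kademlia orders nodes and keys by XOR distance to the target key."""
        return a ^ b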
I suspect the hardest part will be building the product experience around this whole system. The tech is mostly there. But if you want to enable non-technical people to use it, you have to make the system simple and clear to use, which a lot of technical people (myself included) struggle with.
> I suspect the hardest part will be building the product experience around this whole system.
I think this is exactly right. Not surprisingly, I wrote up a business plan for this and called out exactly this point. The approach I advocated in the plan was to appliancize the end node to minimize the back-end configuration. Having done this at FreeGate and NetApp, I definitely saw "install this box, turn it on, and now you have access to this service" as a huge win, both for adoption and for support.
The key then is that you install the app, or apt-get the application, and securely attach with a button press on the local network. Pick a handle to give to your friends and you're off to the races.
Most of the business plan was around the economics of an open protocol and the ability to make money selling the service installed on a ready-made platform (selling software wrapped in hardware), à la current NAS offerings. Good returns on growth, less so at saturation.
So basically you want the clients to store their data themselves? I actually thought about such a thing as well, but that would have severe performance implications, or require a DHT-like storage where stuff gets chopped up and replicated on a bunch of other clients, meaning those other clients would not only have to offer extra storage, but also experience a lot of bandwidth usage.
Yes. Although I don't see the performance issue. Random read and serial write access to a 2 TB NVMe flash drive is really, really fast. And I want to point out that back when Dejanews posted archives of the entire Usenet, several years' worth was a couple hundred megabytes of text data.
Remember that since it is only topics/people "you" the client choose to follow and/or post about, it really isn't that much data. There are likely a zillion subreddits you never read and will likely never read, so your client never needs to fetch that data. If you get a referral to some data in a separate thread in a separate "group," only that thread need be fetched/indexed; you may then choose to subscribe to that "group" and fetch both history and new posts, but still it would be small.
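In code it might look something like this, where fetch_group/fetch_thread stand in for whatever primitives the peer protocol ends up providing:

    class Subscriptions:
        """Fetch only what you follow; referenced threads are pulled on their own."""
        def __init__(self, node):
            self.node = node                 # assumed to expose fetch_group/fetch_thread
            self.groups = set()

        def follow(self, group, with_history=True):
            self.groups.add(group)
            if with_history:
                self.node.fetch_group(group)     # pull history + new posts

        def open_reference(self, group, thread_id):
            # A link into a group we don't follow: fetch just that one thread,
            # without pulling the group's history or subscribing to it.
            return self.node.fetch_thread(group, thread_id)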
As I understand it, this is what nostr does. The relays are pretty dumb and make no promises about long-term storage (I imagine it would be a few weeks to months).