The simple answer is that atproto works like the web & search engines, where the apps aggregate from the distributed accounts. So the proper analogy here would be like Yahoo going down in 1999.
Does Google Reader help you make sense of it? It’s more that each app is its own Google Reader. And indeed you were able to access the same posts via other apps during that outage.
Not the original poster but I do have some ideas. Official Bluesky clients could randomly/round-robin access 3-4 different appview servers run by different organizations instead of one centralized server. Likewise there could be 3-4 relays instead of one. Upgrades could roll across the servers so they don't all get hit by bugs immediately.
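A minimal sketch of the round-robin idea above, assuming a handful of independently operated appview endpoints (the URLs here are made up for illustration, not real services):

```typescript
// Hypothetical appview endpoints run by different organizations.
const APPVIEWS = [
  "https://appview-a.example",
  "https://appview-b.example",
  "https://appview-c.example",
];

let cursor = 0;

// Round-robin selection: each call returns the next endpoint,
// spreading load so no single appview is a point of failure.
function nextAppview(): string {
  const endpoint = APPVIEWS[cursor % APPVIEWS.length];
  cursor += 1;
  return endpoint;
}
```

A real client would also want health checks so a dead endpoint gets skipped rather than retried on every Nth request.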
This is why I'm hoping fiatjaf has a recommendation here. I have a feeling he might have a proposal that solves this, though perhaps only part of it, not all of it.
Google and MSN Search were already available at that time. Also, websites used to publish webrings, and there were IRC and forums to ask people about things.
The comparison here is to something like TCP/IP. TCP/IP never goes down. TCP/IP is a protocol, the servers may go down and cause disruption, but the protocol doesn't really have the ability to "go down".
Nostr is also a protocol. The communication on top of Nostr is pretty resilient compared to other solutions though, so that's the main highlight here.
If tens of servers go down, then some people may start noticing a bit of inconvenience. If hundreds of servers go down, then some people may need to coordinate out of band on what relays to use, but generally speaking it still works ok.
That's because TCP/IP is a protocol, not a (centralized or decentralized) server. A protocol cannot go down. It can trigger failures, it can be abused, but it cannot go down.
It's like saying "English never burns". Sure, you can't burn English but you can burn specific books, newspapers and so on.
Wasn't aware there are ~2k relays now. Has the inter-relay sharing situation improved?
When I tried it a long time ago, the idea was just a transposed Mastodon model: the client would automatically multi-post to a dozen different servers (relays), hoping the post would land in at least one relay shared between the user and their followers. That didn't seem to scale well.
Getting clients to do the right thing is like herding cats, but there has been some progress. In early 2023 Mike Dilger came up with the "gossip model" (renamed the "outbox model" for obvious reasons). Here's my write-up: https://habla.news/hodlbod/8YjqXm4SKY-TauwjOfLXS
The basic idea is that for microblogging use cases, users advertise which relays their content is stored on, and clients follow those pointers (this implies there are less-decentralized indexes holding the pointers, but it does help distribute content to aligned relays instead of blasting content everywhere).
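The outbox model described above can be sketched as follows. This is a hypothetical simplification, not real Nostr client code: the pubkeys, relay URLs, and the greedy coverage strategy are all illustrative assumptions.

```typescript
// pubkey -> relays that user advertises their content on
type RelayList = Record<string, string[]>;

const relayLists: RelayList = {
  alice: ["wss://relay-one.example", "wss://relay-two.example"],
  bob: ["wss://relay-two.example", "wss://relay-three.example"],
};

// For a set of followed pubkeys, compute a set of relay connections
// that covers everyone's advertised relays (greedy: reuse a relay
// already in the set when possible, otherwise take the user's first).
// Real clients read from several relays per user for redundancy.
function relaysToQuery(follows: string[]): string[] {
  const needed = new Set<string>();
  for (const pk of follows) {
    const list = relayLists[pk] ?? [];
    if (list.length > 0 && !list.some((r) => needed.has(r))) {
      needed.add(list[0]);
    }
  }
  return [...needed];
}
```

The point is that the client reads from where each author actually publishes, instead of every author blasting every relay.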
Also, relays aside, one key difference vs ActivityPub is that no third party owns your identity, which means you can move from one relay to another freely, which is not true on Mastodon.
Thanks! Not to be critical - more like thinking out loud - and I don't have solutions to the following myself - but that sounds like it could 1) concentrate power into the top popular relays, potentially leading to the same kind of speech issues as semi-centralized ActivityPub, and 2) not solve the need to maintain multiple firehose connections.
I've been wondering whether the multi-firehose architecture is really the way forward for decentralized, censorship-resistant microblogging. I remember the Windows Mobile clients for 2ch.net (today 5ch.net) that scraped thread deltas from a bunch of subdomains; they were plenty fast on a 128k (advertised) connection, fetching thousands of posts, in the late 2000s. So I think an RSS-style system pulling delta updates from multiple domains could work without the insanity of early Nostr, or the massive liabilities for instance operators with Mastodon, especially if those domains could be set up with relative ease.
Yeah, I don't exactly understand why you have to sign up separately with each Mastodon server, and why server operators have to be responsible for their users. It worked when it was urgently needed, which was brilliant, but the ID system had some under-baked spots.
Yeah, any time you need either an index or a caching layer you have to re-centralize one way or another. But decoupling those "services" from the data storage itself helps, and credible exit makes the gatekeepers far less powerful. An example: a few weeks ago nostr.band, one of nostr's main indexers/search services went away. Search is still somewhat impacted (evidence that we were centralized around it), but indexing (i.e. finding users' relay lists) is still covered by several other services.
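The decoupling described above (several interchangeable indexers instead of one) can be sketched as a simple fallback loop. This is an illustrative assumption about how a client might behave, not actual code from any Nostr client; the `Indexer` type is hypothetical.

```typescript
// An indexer answers "which relays does this pubkey publish to?"
type Indexer = (pubkey: string) => Promise<string[]>;

// Try each indexer in turn; losing one (like nostr.band going away)
// degrades the service instead of breaking lookups entirely.
async function findRelayList(
  pubkey: string,
  indexers: Indexer[],
): Promise<string[]> {
  for (const query of indexers) {
    try {
      const relays = await query(pubkey);
      if (relays.length > 0) return relays;
    } catch {
      // This indexer is down or erroring; fall through to the next.
    }
  }
  return []; // no indexer could answer
}
```

The same shape applies to search, caching, or any other re-centralized service layered over the data stores.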
Just want to add that the AT Protocol IETF working group has been formed, and the PLC directory independent organization and board has officially been established. I’m at the closing talk for this year’s Atmosphere Conference as I write this and it’s really an incredible community of devs.
I'm excited to see communities of developers working to build things that are meaningful and matter to regular people, which ATProto seems to have more of than some other ecosystems in decentralized-tech land. And where else could you attend an awesome workshop on "Hospicing Social Media?"
The law in the UK doesn't require any of that. It didn't even require Apple to do it. Ofcom is praising Apple for doing it even though it was not required. Social networks need to do it.
This UK law does not apply to OSes. It applies to online platforms. The author ran into this problem because using the iPhone required an Apple account, which could be used for something that the law applies to, but Apple didn't want to implement lazy verification and instead required verification up front.
That depends on whether you live in a jurisdiction that lives or dies by free speech, and whether it considers code speech[0]. Forcing you to implement age verification is effectively forcing you to say things you don't want to say, which runs counter to free speech.
Apple has to do age verification because of dumb laws, but they decided to do age verification in a dumb way.
The author tried to go along with the age verification system with five different cards and failed five times. For an account that's older than the legal age that would need to be verified in the first place, mind you.
There are many ways to do age verification, most of them bad, but that's why most companies complying with these laws use multiple methods.
nevermind the apologist. his paycheck is paid by people that have capitulated to the same bullshit. and you know what they say about people learning lessons who have a financial incentive not to.
Indeed. There has been zero political opposition to these laws. Apple isn’t going to pay the fines on our behalf, so we need to get organizing if we don’t like this.
ah, yeah; I guess organization looks like complete capitulation and then commenting on the effect elsewhere with a sturdy shrug "whatcha gonna do? we're all just so powerless". fighting the good fight.
ah, cool! great to have such a loyal ally that snark and cynicism wilts their enthusiasm to such an extent. how would we ever get rid of age verification laws without your "dropped at the first sign of someone not being nice to me" supportive commentary and shrugs?
Wasn’t that the Lit framework? It was okay. Like a slightly more irritating version of React.
I recall the property passing model being a nasty abstraction breaker. HTML attributes are all strings, so if you wanted to pass objects or functions to children you had to do that via “props” instead of “attributes.”
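The abstraction break described above comes down to serialization: an attribute can only carry a string, so rich values have to survive a stringify/parse round trip, and functions don't. A minimal sketch (the `payload` object is a made-up example, not any real component API):

```typescript
// A child component might need both data and a callback.
const payload = {
  items: [1, 2, 3],
  onSelect: (i: number) => i * 2, // callback a child would invoke
};

// An HTML attribute can only carry a string, so an object has to be
// serialized to fit through it.
const asAttribute = JSON.stringify(payload);
const roundTripped = JSON.parse(asAttribute);

// roundTripped.items survives the trip, but JSON.stringify silently
// drops function-valued keys, so roundTripped.onSelect is gone --
// which is why rich values must go through properties instead.
```

Properties sidestep this entirely because they pass live JS values by reference rather than strings.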
I also recall the tag names of web components being a pain. Always need a dash, always need to be registered.
None of these problems broke it; they just made it irritating by comparison. There wasn’t really much upside either. No real performance gain or superior feature, and you got fewer features and a smaller ecosystem.
The point of Lit is not to compete with React itself, but to build interoperable web components. If your app (Hi Beaker!) is only using one library/framework, and will only ever use one for eternity, then interoperability might not be a big concern. But if you're building components for multiple teams, mixing components from multiple teams, or ever dealing with migrations, then interoperability might be hugely important.
Even so, Lit is widely used to build very complex apps (Beaker, as you know, Photoshop, Reddit, Home Assistant, Microsoft App Store, SpaceX things, ...).
Property bindings are just as ergonomic as attributes with the .foo= syntax, and tag name declaration has rarely come up as a big friction point, especially with the declarative @customElement() decorator. The rest is indeed like a faster, less proprietary React in many ways.
Kind of? Lit does add some of the types of patterns I'm talking about, but it adds a lot more as well. I always avoided it due to the heavy use of TypeScript decorators required to get a decent DX, and the framework seemed pretty opinionated about your build system in my experience.
I also didn't often see Lit being used in a way that stuck to the idea that the DOM should be your state. That could very well be because most web devs are coming to it with a background in React or similar, but when I did see Lit used it often involved heavy use of in-memory state tracked inside of components and never making it into the DOM.
Lit is not opinionated about your build system. You can write Lit components in plain JS, going back to ES2015.
Our decorators aren't required - you can use the static properties block. If you think the DX is better with decorators... that's why we support them!
And we support TypeScript's "experimental" decorators and standard TC39 decorators, which are supported in TypeScript, Babel, esbuild, and recently SWC and probably more.
Regarding state: Lit makes it easier to write web components. How you architect those web components and where they store their state is up to you. You can stick to attributes and DOM if that's what you want. Some component sets out there make heavy use of data-only elements: something of a DSL in the DOM, like XML.
It just turns out that most developers and most apps have an easier time representing state in JS, since JS has much richer facilities for that.
Don't get me wrong, I'm a pretty big believer in interop, but in practice I've rarely run into a situation where I need to mix components from multiple frameworks. Especially because React is so dominant.
Reactivity isn’t the problem. Reactivity is one of the few things that helps reduce the complexity of state management. GUI state is just a complex thing. Frontend development doesn’t get enough cred for how deeply difficult it is.