Once upon a time, social media was called Usenet and worked offline in a dedicated client with a standard protocol. You only went online to download and send messages, but could then go offline and read them in an app of your choice.
Web2.0 discarded the protocol approach and turned your computer into a thin client that does little more than render webapps that require you to be permanently online.
If I'd wanted my user account tied to a server controlled by somebody else, I'd just use Twitter. Mastodon isn't solving any problems here.
The beauty of Nostr is that it turns the server into a dumb relay: the server controls and owns nothing, and you can replace it with another one at any time, or broadcast to multiple relays at once to begin with. The user is in full control, and everything is held together by public-key crypto.
The magic moment is importing your secret key into an alternate client and watching all your contacts, posts, and feed populate from the data stored in the relays.
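Under the hood that moment is just a NIP-01 subscription: the new client asks whatever relays it knows about for events authored by your public key. A rough sketch (the relay URLs and key are placeholders):

    // Ask several relays for everything authored by one public key (NIP-01).
    // The pubkey and relay URLs below are placeholders, not real ones.
    const pubkey = "hex-encoded-public-key";
    const relays = ["wss://relay.example.com", "wss://other.example.net"];

    for (const url of relays) {
      const ws = new WebSocket(url);
      ws.onopen = () => {
        // Request kind-1 text notes by this author; any relay that happens
        // to store them can answer, which is why their location is irrelevant.
        ws.send(JSON.stringify(["REQ", "my-feed", { authors: [pubkey], kinds: [1], limit: 50 }]));
      };
      ws.onmessage = (msg) => {
        const [type, , event] = JSON.parse(msg.data as string);
        if (type === "EVENT") console.log(event.created_at, event.content);
        if (type === "EOSE") ws.close();  // relay has sent all stored events
      };
    }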
Y'all, the more I think about ATProto, the more I think it's worth considering that it may be a VERY VERY bad idea.
Which is to say, all of the potentially intrusive human information gathering and spying that you can encounter on Twitter, but now cryptographically signable and EASIER to move from place to place.
Perhaps the ability of your mastodons and emails to lose your info might be just as much a feature as a bug.
Is it practical for an individual with a $4 VPS to spin up a Mastodon server + front-end client for their own use and have it interact with existing servers? Curious how much friction there is that leads users to end up on someone else's machine.
Mastodon is a fairly finicky beast, has a few dependencies that you also have to manage (Redis, Sidekiq, Postgres), and is surprisingly resource intensive.
So you can do it, but it’s not really designed for that use case.
It would be great if there were a full-featured single-user ActivityPub server, but last time I looked, there wasn’t really.
Ah, that makes more sense, thanks. IIRC Mastodon doesn't have much of an identity migration story, except for using domain control as proof of identity, which is kinda neat. I still like DNS and TLDs as a pay-to-play distributed namespace if you're going to have a persistent identity that can travel from server to server and change hands as property from one owner to another. (I think there is case law on domain names being property, which is somewhat unique in cyberspace.)
It's also due to browsers not doing anything useful with the additional tags: whether I use <article>, <section>, or <div> doesn't make any difference; my browser doesn't use that to generate a TOC or let me open an <article> in a new tab. Even the, in theory, incredibly useful <time> tag seems to be completely invisible to my browser, and many other potentially useful tags don't exist in the first place (e.g. <unit> would be useful).
Exactly this. I really wish browsers would use semantic html to make content more accessible for me, a sighted user! Why does my browser not give me a table of contents I can use to navigate a long page?
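Nothing stops an extension (or the browser itself) from building one today; a rough sketch of the idea, working from whatever headings the page has (selector and ids are arbitrary):

    // Build a clickable table of contents from the page's headings.
    // A browser or extension could pin this to the side of any long page.
    function buildToc(root: ParentNode = document): HTMLElement {
      const nav = document.createElement("nav");
      const list = document.createElement("ol");
      nav.append(list);
      root.querySelectorAll<HTMLElement>("h2, h3").forEach((heading, i) => {
        if (!heading.id) heading.id = `toc-heading-${i}`;  // ensure an anchor target
        const item = document.createElement("li");
        const link = document.createElement("a");
        link.href = `#${heading.id}`;
        link.textContent = heading.textContent ?? "";
        item.append(link);
        list.append(item);
      });
      return nav;
    }

    document.body.prepend(buildToc());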
I think the parent has a good point: browsers don't do anything with these tags for sighted users, who are unfortunately the majority of developers. If they were to notice benefits to using semantic tags, maybe they'd use them more often.
It’s interesting, because if you imagine sites actually using these tags to be maximally compatible with reader mode and other accessibility modes, they’re setting themselves up perfectly to be viewed without ads.
I use reader mode by default in Safari because it’s essentially the ultimate ad blocker: it throws the whole site away and just shows me the article I want to read.
But this is in opposition to what the website owners want, which is to thwart reader mode and stuff as many ads in my way as they can.
It’s possible good accessibility is antithetical to the ad-driven web. It’s no wonder sites don’t bother with it.
Reader mode seems to still work if you have a div with article text in it. I would be interested to see a comparison of what works and what doesn’t if such a reference exists though!
Reader mode is based on a whole slew of heuristics including tag names, class names, things like link-to-text-content ratio, and other stuff I can't recall. IIRC it considers semantic tag names a stronger signal than e.g. class names, so having <article> in your page isn't necessary but helps a lot.
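The link-to-text part is roughly this (thresholds made up; real extractors mix it with tag names, class names, and more):

    // Share of a node's text that sits inside links; boilerplate like nav
    // bars and footers scores high, article prose scores low.
    function linkDensity(node: HTMLElement): number {
      const textLength = (node.textContent ?? "").length;
      if (textLength === 0) return 0;
      let linkLength = 0;
      node.querySelectorAll("a").forEach(a => {
        linkLength += (a.textContent ?? "").length;
      });
      return linkLength / textLength;
    }

    function looksLikeArticleContent(node: HTMLElement): boolean {
      const semanticHint = node.closest("article, main") !== null;  // strong signal
      return semanticHint || linkDensity(node) < 0.3;               // arbitrary cutoff
    }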
Yes, I think that is what browser vendors should spend money on instead of inventing new syntax. Google Chrome still doesn't support alternate stylesheets. But I refuse to not use them simply because a rich company can't be bothered to implement decades-old standards.
Not true. Using semantic HTML and relying on its implicit ARIA roles allows the browser to construct an accurate AOM tree (Accessibility Object Model) which makes it possible for screen readers and other human interface software to create TOCs and other landmark-based navigation.
> Not true. Using semantic HTML and relying on its implicit ARIA roles allows the browser to construct an accurate AOM tree (Accessibility Object Model) which makes it possible for screen readers and other human interface software to create TOCs and other landmark-based navigation.
Sure, it allows the browser to do that. GP is complaining that even though browsers are allowed to do all that, they typically don't.
We just don't have enough tags that we can really take advantage of on a semantic or programmatic level, and that has led to other tags getting overloaded and thus losing meaning.
Why don't we just have markup for a table of contents in 2025?
That'd open a whole new can of worms. Browsers are already gargantuan pieces of software even with the relatively primitive tags we have today. We don't need to further argue with each other about what the <toc> tag should look and behave like, or deal with unforeseen edge cases and someone's special use cases, which end up requiring them to implement the whole thing with <ol>s and <li>s anyway.
Then let the edge cases use <ol> and <li>, and in some sense all those website style simplifiers that come built in with Safari will just have to deal with those edge cases. Similarly, we have a built-in date picker, and if you don't think it's good enough then build a better one.
If you want a specific behavior for <time> then write a browser plugin which e.g. converts the <time> content to your local time.
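The whole plugin for that example would be something like this (purely illustrative):

    // Rewrite every <time datetime="..."> so its visible text shows the
    // reader's local time; keep the author's original text as a tooltip.
    function localizeTimeTags(root: ParentNode = document): void {
      root.querySelectorAll<HTMLTimeElement>("time[datetime]").forEach(el => {
        const parsed = new Date(el.dateTime);
        if (Number.isNaN(parsed.getTime())) return;  // skip durations and bad values
        el.title = el.textContent ?? "";
        el.textContent = parsed.toLocaleString();    // reader's locale and timezone
      });
    }

    localizeTimeTags();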
But if you are a developer you should see value in <article> and <section> keeping your markup much much nicer which in turn should make your tests much easier to write.
The most annoying part is that the type checking exists outside the regular runtime. I constantly run into situations where the type checker is happy but the thing explodes at runtime, or the type checker keeps complaining about working code. And different type checkers will even complain about different things regularly too. It's a whole lot of work making every part of the system happy, and the result still feels extremely brittle.
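For example, in TypeScript the checker can be perfectly happy while the thing still explodes at runtime, because nothing verifies the claimed shape against the actual data:

    interface User {
      name: string;
      address: { city: string };
    }

    const raw = '{"name": "Ada"}';         // address is missing in the real data
    const user = JSON.parse(raw) as User;  // type checker: fine, you said it's a User

    console.log(user.address.city);        // runtime: TypeError, address is undefined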
> In my opinion is the most innovative way of communicating that I've seen in the last 20 years.
The sad irony of this is that it's really not all that innovative; it's just reinventing the 45-year-old Usenet with public-key crypto. Server independence was present in Usenet right from the start; that's why Dejanews/GoogleGroups could exist, and why Usenet wasn't provided by a specific server but by your ISP. The modern Internet has completely regressed in that regard, getting rid of protocol-specific clients and moving everything to the browser and HTTP, which don't allow that kind of distribution; that's why Nostr feels fresh again.
The fundamental difference is that with Mastodon, or any Fediverse service, the server still has full control over the user. It's basically no different from regular Facebook or Twitter, just with some optional federation on top that can be switched off at any time (and often is).
On Nostr the server is just a dumb relay; it controls and owns nothing. User identities are proper public key pairs. If a relay goes evil, you can just use another one, or use multiple at once to begin with, since the location of the messages is irrelevant; everything is held together by public keys.
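Concretely, a note is a self-contained signed object, so publishing it to N relays is just sending the same bytes N times. A sketch (placeholder pubkey, and signEventId() stands in for a real secp256k1 Schnorr signer):

    // Build a NIP-01 text note and broadcast it to several relays.
    async function publishNote(content: string, pubkey: string, relays: string[]) {
      const created_at = Math.floor(Date.now() / 1000);
      const kind = 1;                // plain text note
      const tags: string[][] = [];

      // The event id is the sha256 of this canonical serialization.
      const serialized = JSON.stringify([0, pubkey, created_at, kind, tags, content]);
      const digest = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(serialized));
      const id = [...new Uint8Array(digest)].map(b => b.toString(16).padStart(2, "0")).join("");

      const sig = await signEventId(id);   // placeholder for a Schnorr signature over the id
      const event = { id, pubkey, created_at, kind, tags, content, sig };

      for (const url of relays) {
        const ws = new WebSocket(url);
        ws.onopen = () => ws.send(JSON.stringify(["EVENT", event]));
        ws.onmessage = () => ws.close();   // relay acknowledges; we're done here
      }
    }

    declare function signEventId(id: string): Promise<string>;  // assumed helper, not a real API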
You are probably browsing with zoom; that seems to screw up the rendering and makes the background and text look different. It should be just black&white random pixel noise for both background and foreground; without motion the text becomes invisible, as it blends with the background.
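For what it's worth, one way such a demo can work (a guess, not necessarily what this site does): the background noise is generated once and stays put, while the text-shaped pixels get fresh noise every frame, so only the flicker reveals the letters:

    // Canvas size, font, and message are arbitrary; this is just the idea.
    const canvas = document.createElement("canvas");
    canvas.width = 400;
    canvas.height = 100;
    document.body.append(canvas);
    const ctx = canvas.getContext("2d")!;

    // Render the message once offscreen to get a pixel mask for the letters.
    const mask = document.createElement("canvas");
    mask.width = canvas.width;
    mask.height = canvas.height;
    const mctx = mask.getContext("2d")!;
    mctx.font = "bold 64px sans-serif";
    mctx.fillText("NOISE", 20, 75);
    const maskData = mctx.getImageData(0, 0, mask.width, mask.height).data;

    // Static background noise, generated once.
    const frame = ctx.createImageData(canvas.width, canvas.height);
    const background = new Uint8Array(canvas.width * canvas.height);
    for (let i = 0; i < background.length; i++) background[i] = Math.random() < 0.5 ? 0 : 255;

    function draw(): void {
      for (let i = 0; i < background.length; i++) {
        const isText = maskData[i * 4 + 3] > 0;  // alpha channel of the text mask
        // Background pixels stay fixed; text pixels get new noise each frame.
        const v = isText ? (Math.random() < 0.5 ? 0 : 255) : background[i];
        frame.data[i * 4] = frame.data[i * 4 + 1] = frame.data[i * 4 + 2] = v;
        frame.data[i * 4 + 3] = 255;
      }
      ctx.putImageData(frame, 0, 0);
      requestAnimationFrame(draw);
    }
    draw();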
The Oculus Go was discontinued in June 2020, and the shop was locked down for any further updates or new games in December 2020; that's just six months apart. They did "support" it with security updates until 2022, but it's pretty dead when no new games can be sold.
That was the promise with the original Rift, not the Quest. The Quest2 required a Facebook account from day one and never worked with an Oculus account, unlike the Quest1. They relaxed the requirement in 2022 to only require a Meta account and converted all old accounts to Meta accounts later on (and if you didn't log in to 'ok' that change, they deleted your account completely, including all the games).
If you created the account early in the Quest2's life, or hit the wrong button in the UI, your Meta account will end up linked to your Facebook account.
You might be able to unlink the Facebook account from your Meta account at https://accountscenter.meta.com/accounts, though I don't know if you can still reach the page.
AI isn't competition for Google, AI is technology. Not only is Google using AI themselves, they are pretty damn near the top of the AI game.
It's also questionable how this is relevant to Google's past crimes. It's completely hypothetical speculation about the future. Could an AI company rise and dethrone classic Google? Yeah. Could Google themselves be the AI company that does it? Probably, especially when they can continue to abuse their monopoly across multiple fields.
There is also the issue that current AI companies are still just bleeding money; none of them have figured out how to make money.