Hacker News | knowtheory's comments

I wouldn't write ATProto off as just microblogging, there are a bunch of interesting (and exciting depending on your POV) apps out there that _aren't_ microblogging apps. To name a few:

* https://stream.place

* https://tangled.org

* https://www.germnetwork.com/

* https://slices.network/

* https://smokesignal.events/

* https://www.graze.social/


I'll check them later. Thank you for the list.


There's still some more work to do to make the developer experience simple enough that it's a no-brainer for people to pick ATProto up in anger.

But there's a lot of development happening on that front, and the next 6-12 months will be super exciting to watch.

The longer story is that most people don't understand that ATProto is more than just Bluesky, and the usecases are wayyyyyy broader. That's going to take more time to play out in the market.


Absolutely. In fact I’d love for my startup to run our own atproto instance separately from Bluesky, but it still looks like quite a lift. Lmk if you have some recommendations.

Basically our thing would give that ecosystem the ability to have personal pages that can look like Patreon, YouTube, Instagram and others


Are you trying to run a parallel network, or build on top of the existing one? "run our own atproto instance separately from Bluesky" sounds like you want a fully parallel network, but that should be pretty rare to need or want, so I'm not sure that's what you actually mean. An "atproto instance" isn't exactly a thing.


I’d prefer running our own thing separate from bluesky. We’d give people something like username.page.app and they’d make posts there. If people wanna follow on bluesky they can, and we provide a username that’s just the url.

I know we can do all this by just posting to Bluesky. But I want to give usernames, host the data on our end, and I’d prefer using the protocol but not be directly associated or dependent on Bluesky.


Okay, so this sounds like you'd want to run an appview + pds. (and possibly a relay, depending on some details.) Except for one thing:

> or dependent on Bluesky.

If you want to take this to an extreme, and are uncomfortable with how did:plc has not yet moved into its own org, then you'd want to also run your own plc server, etc. The problem with doing this is:

> If people wanna follow on bluesky they can

You lose this. Because you're now not running on the main atproto system, but instead a fully parallel one of your own.

Anyway, you could start on this by running a PDS via the reference implementation here: https://github.com/bluesky-social/pds and then building your own appview (application).
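
To make that concrete, here's a rough sketch of what the write side could look like once your PDS is up, using the @atproto/api TypeScript client. The PDS hostname, handle, and password are made-up placeholders, and the exact client setup may differ a bit between versions:

    import { AtpAgent } from '@atproto/api'

    // Point the agent at your own PDS rather than bsky.social.
    // The hostname, handle, and password below are placeholders.
    const agent = new AtpAgent({ service: 'https://pds.page.app' })

    await agent.login({
      identifier: 'alice.pds.page.app',
      password: 'app-password-here',
    })

    // Write a standard app.bsky.feed.post record into the user's repo.
    // Because it's an ordinary Bluesky lexicon record, the main network
    // can index it and Bluesky users can follow along.
    await agent.com.atproto.repo.createRecord({
      repo: agent.session!.did,
      collection: 'app.bsky.feed.post',
      record: {
        $type: 'app.bsky.feed.post',
        text: 'Hello from a self-hosted PDS',
        createdAt: new Date().toISOString(),
      },
    })

(Posts written this way only reach Bluesky followers if the PDS is actually federating with the main network, i.e. a relay is crawling it.)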

You could also take a look at Blacksky's implementation https://github.com/blacksky-algorithms/rsky and if you end up using it, consider throwing them a few dollars. Alternative implementations are super important!


Thank you for the detailed answer! Totally comfortable with the did implementation. Just trying to separate from their brand and just use the standard :)

We already built our own platform independently from Bluesky, so we have a timeline, posts, and everything. I’m just trying to give our users interoperability, so that when they make a post on our platform, people can also follow on Bluesky and see it on their timeline. Am I correct to assume then that we would not require our own app view?


You're welcome, yeah then that's a lot easier.

> Am I correct to assume then that we would not require our own app view?

Well, given that you have built a platform, and you then want to interact with the atproto ecosystem, that means you'd be making your platform an appview, in a sense. An appview is just a service that reads the underlying data from the network and does something useful with it.
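
As a sketch of the "reads the underlying data" half, this is roughly what an appview's ingestion loop can look like if you lean on one of the public Jetstream endpoints (the JSON-over-WebSocket view of the firehose) instead of decoding the raw CBOR firehose yourself. The endpoint URL, the wantedCollections filter, and the event field names are my assumptions, so double-check them against the current Jetstream docs:

    import WebSocket from 'ws'

    // One of the public Jetstream endpoints (a JSON view of the firehose).
    // The hostname and collection filter are assumptions; swap in whatever
    // lexicons your application actually cares about.
    const url =
      'wss://jetstream2.us-east.bsky.network/subscribe?wantedCollections=app.bsky.feed.post'

    const ws = new WebSocket(url)

    ws.on('message', (data) => {
      const evt = JSON.parse(data.toString())

      // Commit events carry the repo owner's DID plus the record that was
      // created, updated, or deleted in their repo.
      if (evt.kind === 'commit' && evt.commit?.operation === 'create') {
        // The "does something useful with it" part: index the record in
        // your own database, attach it to a profile page, fan it out, etc.
        console.log(evt.did, evt.commit.collection, evt.commit.record?.text)
      }
    })

From there the appview is just this loop plus a database and an API in front of it, shaped however your product needs.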


There's hope for an independent but synchronized PLC directory: https://tangled.org/@microcosm.blue/Allegedly


You mean you want to host the personal repositories (PDS) for your users?


Ideally yes!


It depends how much you want to replicate. All you really need is the AppView (application view service) to aggregate the records you are interested in, serve them to your client app, and write them to people’s repos. I’ve been tinkering with the ‘personal website on AT’ idea space for a bit; tons of cool possibilities (and several people have already implemented cool AT integrations in their sites!). Happy to chat about it.
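
One small, self-contained corner of that idea space is letting a personal domain double as the atproto handle. In the handle-resolution flow, a GET to /.well-known/atproto-did on the handle's domain just needs to return the account's DID as plain text, so a sketch like this is enough (the handle map, DID, and port are all placeholders):

    import { createServer } from 'node:http'

    // Handles you issue, mapped to the DIDs of the matching repos on your PDS.
    // Both sides of this map are placeholder values.
    const handleToDid: Record<string, string> = {
      'alice.page.app': 'did:plc:aaaaaaaaaaaaaaaaaaaaaaaa',
    }

    createServer((req, res) => {
      if (req.url === '/.well-known/atproto-did') {
        const host = (req.headers.host ?? '').split(':')[0]
        const did = handleToDid[host]
        if (did) {
          res.writeHead(200, { 'Content-Type': 'text/plain' })
          res.end(did)
          return
        }
      }
      res.writeHead(404)
      res.end()
    }).listen(8080)

(DNS works too: a TXT record at _atproto.<handle> containing did=... resolves the same way, with no web server involved.)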


HMU! I’m “shokunin.” on discord, leshokunin on TG / Twitter.


I'd argue that ATProto is the next iteration of the open internet. It's what the internet looks like when accounts/identity and verifiable content attribution are built in, and nobody using the technology needs to think about any of that.

There's a space here where we can move from nobody having smartphones or hosting a digital presence -> everyone having a digital presence provided by Facebook/Instagram and iCloud/Google accounts -> accounts with something like ATProto, where it's your stuff, you get to decide where you keep it, and you get to decide who gets access to it.


Yeah, that's falling directly into Facebook's talking points. It's a web extension, anybody can inspect the source. It doesn't do what Facebook is claiming. The NYU team bends over backwards to ensure that no personally identifying information about other users gets captured.

The privacy leak that Facebook is so concerned about is actually the identity of advertisers on their platform.

https://twitter.com/issielapowsky/status/1422879438765797380


So Facebook, who just paid a 5 billion dollar fine to the FTC for allowing exactly what these researchers are doing, should adopt a policy of examining the source code of every update to any extension used for scraping data to determine whether it's allowed or not? Is that the other option?


> The privacy leak that Facebook is so concerned about is actually the identity of advertisers on their platform.

Yeah? That also seems like a completely legitimate concern.


But it's public info?

> When Facebook said Ad Observer was collecting data from users who had not authorized her to do so, the company wasn't referring to private users' accounts. It was referring to advertisers' accounts, including the names and profile pictures of public Pages that run political ads and the contents of those ads.

It's all on https://www.facebook.com/ads/library/. Scraping just lets them analyze it.


The comment that I'm replying to argued that facebook is concerned about the privacy of advertisers, and I argued that this concern is legitimate. If you don't agree that facebook is concerned about the privacy of advertisers, maybe you should reply to the comment that actually made this claim?


I don't agree with your claim. I'm arguing the concern is not legitimate.


But was that data still collected without consent?


I'd say installing an extension is a pretty big sign of consent. It's clearly named, and the first sentence of its description says exactly what it does:

> A browser extension to share data about your social feed with researchers and journalists to increase transparency.

I'd call that type of data gathering quite consensual.


You're also granting the extension access to your friends' data, given that it can see everything that you can. Your friends consented to show that data to you, but not to the extension developer. Your friends' consent is not transitive.


When I was a regular FB user, I understood that when I shared stuff with friends it might be visible to their browser extensions. But I feel your comment is sort of a misdirection, as the purpose of the browser extension was to collect information on the ads in people's feeds. Advertisers might show up in your feed, but that doesn't mean they're your friends, even if you consented to receive ads by signing up with a petition organizer or political campaign.


It's very strange to call a proposal (negative income tax) made by economists as orthodox and central to 20th century conservative politics as Milton Friedman _utopian_.

But even beyond the aspirations of a UBI/negative income tax, the real problem with any such proposal will be implementation and policy details which most UBI proponents don't talk about much if at all.

Will UBI be counted as income? How will this interact with other programs such as SNAP, healthcare subsidies, HUD housing subsidies, or any number of state operated programs? Will they be mutually exclusive?

Will existing policies or laws need to be modified in order to accommodate such a proposal?


> real problem with any such proposal will be implementation and policy details which most UBI proponents don't talk about much if at all.

They do talk about this, you just haven't been listening. Yang proposes a voluntary switch between needs-based welfare and UBI combined with a national VAT.

https://ubicalculator.com/ provides a good amount of detail about how various UBI plans will be funded.


Their crunchbase page says private for profit: https://www.crunchbase.com/organization/scite#section-overvi...


Hey Jared, in fact, if you click into the video on the Open Steno page, the stenographer & developer you're talking about is Stan Sakai, who is involved in Open Steno:

https://twitter.com/stanographer


Yes! That's him


Right, and it's still sad to say that they capitalized on that fact so incredibly poorly that the following came to pass:

https://twitter.com/jacobian/status/1012781017940316161

> The Lawrence Journal-World, where Django was created, is now a Wordpress site. http://www2.ljworld.com/news/2018/jun/26/redesign-ljworld/


Or they realized that there isn't much value in building their own CMS.


I worked at the Journal-World for around five years, starting in 2006.

Upper management at the time was interesting. A lot of people who didn't necessarily understand the internet or technology in general, but knew they didn't, and were willing to hire and trust people who did. That was the magic sauce that led to Django, and a lot of the other innovative stuff. The owner of the paper, for example, had his secretary print out his emails and bring them into his office; then he'd write his replies on a manual typewriter, and hand them back to be typed into a computer. But he'd also managed to ride the wave of first the cable TV/internet boom (by setting up a cable division) and then the web boom (by hiring a team of people to build a first-class news site and giving them more or less free rein to do it right).

And that was how you got the heyday of the Journal-World. All sorts of interesting experiments in using the web to enhance journalism, close collaboration between the newsroom and the tech team, and a ton of cool things accomplished and a bunch of industry awards, etc. I actually had a byline at one point, on a feature that's now gone because Django got retired (a data-journalism project tracking the impact of flu during the H1N1 scare).

And other news organizations were happy to pay for the software to do their own version of that. We had both hosted and on-prem versions of it, and recommendations for hiring developers and training them to work on it, and as far as I could tell they seemed pretty happy to have something that had been designed at and by a newspaper (as opposed to other news CMS products, which often have their first encounter with a journalist at the time of production deployment).

But all good things come to an end. There were some management shakeups, and a lot of the tech team (myself included) left for greener pastures. A little while after that, I heard the CMS division was being shut down and everyone in it laid off; then I heard another company had made an offer to acquire it. As far as I know, Ellington (the Django-based news CMS) is still available today as a supported commercial product. But the Journal-World no longer uses it or, to my knowledge, maintains an in-house technology team like it used to.


Oh thanks for pointing this out. I'll hit the Muckrock team up and see if we can get the text button in there.


Really? For a 6-page white paper?

Fine:

Tilde built a server monitoring daemon with Rust and it's low resource and doesn't crash. Tilde thinks the Rust community & its resources make it easier to teach to new team members.


Yes, really. It's not the length, it's the density of interesting content.


Yeah man, learning to scan a paper for interesting content is definitely a skill worth developing!


I suggested it because it'd help other people not waste the time scanning the paper for the same minuscule amount of not-that-interesting content. But at least it gave you the opportunity to contribute your interesting comment. Yeah, man!

