
First, children also have a right to free speech. It is perhaps even more important than for adults, as children are not empowered to do anything but speak.

Second, it's turn-key authoritarianism. E.g. "show me the IDs of everyone who has talked about being gay" or "show me a list of the 10,000 people who are part of <community> that's embarrassing me politically" or "which of my enemies like to watch embarrassing pornography?".

Even if you honestly do delete the data you collect today, it's trivial to flip a switch tomorrow and start keeping everything forever. Training people to accept "papers, please" with this excuse is just boiling the frog. Further, even if you never actually do keep these records long term, the simple fact that you are collecting them has a chilling effect because people understand that the risk is there and they know they are being watched.


> First, children also have a right to free speech.

Maybe I'm wrong (I haven't read all the regulations that are coming up), but the scope of these regulations is not to ban speech but rather to prevent people under a certain age from accessing a narrow subset of the websites that exist on the web. That, to me, looks like a significant difference.

As for your other two points, I can't really argue against those because they are obviously valid but also very hypothetical and so in that context sure, everything is possible I suppose.

That said, something has to be done at some point, because it's obvious that these platforms are having a profound impact on society as a whole. And I don't care about the kids specifically; I'm talking in general.


> narrow subset of the websites on the web

Under most of these laws, most websites with user-generated content qualify.

I'd be a lot more fine with it if it was just algorithms designed for addiction (defining that in law is tricky), but AFAIK a simple forum where kids can talk to each other about familial abuse or whatever would also qualify.


> but AFAIK a simple forum where kids can talk to each other about familial abuse or whatever would also qualify.

I'm currently scrolling through this list https://en.wikipedia.org/wiki/Social_media_age_verification_... and it seems to me these are primarily focused on "social media" but missing from these short summaries is how social media is defined which is obviously an important detail.

Seems to me that an "easy" solution would be to implement some sort of size cap; this way you could easily leave old-school forums out.

It would not be a perfect solution, but it's probably better than including every site with user-generated content.


> I'd be a lot more fine with it if it was just algorithms designed for addiction (defining that in law is tricky)

An alternative to playing whac-a-mole with all the innovative bad behavior companies cook up is to address the incentives directly: ads are the primary driving force behind the suck. If we are already on board with restricting speech for the greater good, that's where we should start. Options include (from most to least heavy-handed/effective):

1) Outlaw endorsing a product or service in exchange for compensation. I.e. ban ads altogether.

2) Outlaw unsolicited advertisements, including "bundling" of ads with something the recipient values. I.e. only allow ads in the form of catalogues, trade shows, industry newsletters, yellow pages. Extreme care has to be taken here to ensure only actual opt-in advertisements are allowed and to avoid a GDPR situation where marketers with a rapist mentality can endlessly nag you to opt in or make consent forms confusing/coercive.

3) Outlaw personalized advertising and the collection/use of personal information[1] for any purpose other than what is strictly necessary[2] to deliver the product or service your customer has requested. I.e. GDPR, but without a "consent" loophole.

These options are far from exhaustive and out of the three presented, only the first two are likely to have the effect of killing predatory services that aren't worth paying for.

[1] Any information about an individual or small group of individuals, regardless of whether or not that information is tied to a unique identifier (e.g. an IP address, a user ID, or a session token), and regardless of whether or not you can tie such an identifier to a flesh-and-blood person ("We don't know that 'adf0386jsdl7vcs' is Steve at so-and-so address" is not a valid excuse). Aggregate population-level statistics are usually, but not necessarily, in the clear.

[2] "Our business model is only viable if we do this" does not rise to the level of strictly necessary. "We physically can not deliver your package unless you tell us where to" does, barely.


Doesn't interning usually refer to when you only consider identical copies, as opposed to dictionary compression where you allow for concatenations? E.g.

  Interning:
  1: "foo"
  2: "bar"

  my_string = "foo" // stored as ref->1
  my_other_string = "foobarbaz" // not found & too long to get interned, stored as "foobarbaz"

  Dictionary compression:
  1: "foo"
  2: "bar"
  
  my_string = "foo" // stored as ref->1
  my_other_string = "foobarbaz" // stored as ref->1,ref->2,"baz" (or ref->1,ref->2,ref->3 and "baz" is added to the dict)
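For what it's worth, the whole-string-only behavior of interning is easy to demonstrate in Python, where sys.intern maintains exactly this kind of table of canonical copies (a sketch; what gets auto-interned varies by runtime, this assumes CPython):

```python
import sys

# Build equal strings at runtime so CPython doesn't constant-fold
# or auto-intern them behind our backs.
a = "".join(["fo", "o"])
b = "".join(["fo", "o"])
print(a == b, a is b)   # equal contents, but two distinct objects

# Interning maps both to one canonical copy. Lookups are whole-string
# only: "foobarbaz" would never be matched against "foo" + "bar".
a = sys.intern(a)
b = sys.intern(b)
print(a is b)           # now the same object
```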


  my_string = "foo" // stored as ref->1
Only if you explicitly intern the string. Interning can be expensive because

- the runtime has to check whether the string already is interned.

- you’re permanently creating the string, so if it isn’t reused later, that means your memory usage needlessly goes up.

Both get more expensive the more strings you intern.

I think interning is a hack that should very rarely, if ever, be used. It likely won't work well for large strings, as these tend to be unique, and use cases where it helps for short strings are often better handled by using enums or symbols, or by using a custom set of strings. If you do the latter, you have more control over memory usage; you can do things such as removing the least recently used strings, or ditching the entire cache when you're done needing it (prime example: parsing a large XML file with many repeated nodes).
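That "custom set of strings" is easy to sketch; assuming Python, a plain dict already gives you a disposable interning table (StringPool is a made-up name for illustration):

```python
class StringPool:
    """Deduplicates equal strings for the lifetime of one job, e.g. a
    single XML parse. Unlike a runtime's global intern table, the whole
    pool is reclaimed the moment you drop it."""

    def __init__(self):
        self._pool = {}

    def get(self, s: str) -> str:
        # Return the canonical copy, storing s as canonical if it's new.
        return self._pool.setdefault(s, s)

pool = StringPool()
tokens = "item item price item".split()  # four freshly allocated strings
assert tokens[0] is not tokens[1]        # equal, but distinct objects
canon = [pool.get(t) for t in tokens]
assert canon[0] is canon[1] is canon[3]  # duplicates now share one object
del pool                                 # done parsing: cache freed at once
```

An LRU variant would just cap the dict's size and evict the stalest entry in get(), which gives you the "remove the least recently used strings" behavior mentioned above.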


> you’re permanently creating the string, so if it isn’t reused later, that means your memory usage needlessly goes up.

Nowadays lots of runtimes will GC interned strings to avoid this. This can mean churn in the interning table, but it avoids mis-estimations bloating the process too much.


Makes me think of an iolist in erlang/elixir.


The diagram looks correct to me when I disable CSS on the page or edit its font-family to be "monospace". Seems like Geist Mono might just be borked.


What's their angle here? Courts have been over the "Is scraping websites that don't want to be scraped OK?" question plenty of times. From

> SerpApi deceptively takes content that Google licenses from others (like images that appear in Knowledge Panels, real-time data in Search features and much more), and then resells it for a fee. In doing so, it willfully disregards the rights and directives of websites and providers whose content appears in Search.

it sounds like they are somehow suing on behalf of whoever they are licensing content from, but does that even give Google standing?

I guess I'm asking if they actually are hoping to win or just going for a "the process is the punishment"+"we have more money and lawyers than you" approach.


> I don’t believe I’ve ever seen shape utilized in generated ASCII art, and I think that’s because it’s not really obvious how to consider shape when building an ASCII renderer.

Acerola worked a bit on this in 2024[1], using edge detection to layer correctly oriented |/-\ over the usual brightness-only pass. I think either technique has cases where one looks better than the other.

[1] https://www.youtube.com/watch?v=gg40RWiaHRY


I can imagine there's room for "style", here, too. Just like how traditional 2d computer art varies from having thick borders and sharp delineations between colour regions, through https://en.wikipedia.org/wiki/Chiaroscuro style that seems to achieve soft edges despite high contrast, etc.


Because root is not the ultimate authority of what goes on in the phone; the hardware is, and the hardware contains a TPM (Treacherous Platform Module). The TPM has secret cryptographic keys it never shares with anyone, neither root nor an unrooted OS. When the phone starts, the TPM checks if the OS has been modified from what the manufacturer supplies or not.

The bank's app can then ask the OS to sign documents using the TPM's secret keys, and the OS forwards such requests to the TPM. The TPM refuses such requests from modified OS but obliges requests from an unmodified OS. The bank's servers refuse to accept documents not signed by the TPM.

Root can't pretend to be a TPM and make up some secret keys to sign documents with because the TPM's signature is itself signed by Google, so the bank can tell the difference between root's signature and a treacherous signature.
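The chain of trust can be sketched in a few lines. Below is a toy Python model (textbook RSA with tiny hardcoded numbers, utterly insecure, and all names invented for illustration; real devices use hardware-backed key attestation with proper certificate chains) just to show why root can't forge its way past the bank's check:

```python
import hashlib

# Textbook RSA signing: sig = H(m)^d mod n, checked as sig^e mod n == H(m).
def h(msg: bytes, n: int) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(msg: bytes, d: int, n: int) -> int:
    return pow(h(msg, n), d, n)

def verify(msg: bytes, sig: int, e: int, n: int) -> bool:
    return pow(sig, e, n) == h(msg, n)

GOOGLE = dict(n=3233, e=17, d=2753)  # d = Google's signing secret
TPM    = dict(n=2773, e=17, d=157)   # d never leaves the hardware
tpm_pub = b"2773:17"                 # the TPM's public key, as bytes

# At manufacture, Google attests the TPM's public key.
cert = sign(tpm_pub, GOOGLE["d"], GOOGLE["n"])

# At login, an unmodified OS relays the bank's challenge to the TPM.
challenge = b"bank-nonce-1234"
doc_sig = sign(challenge, TPM["d"], TPM["n"])

def bank_accepts(pub: bytes, cert: int, doc_sig: int) -> bool:
    # Link 1: is this TPM key vouched for by Google?
    if not verify(pub, cert, GOOGLE["e"], GOOGLE["n"]):
        return False
    # Link 2: did that TPM key sign our challenge?
    n, e = (int(x) for x in pub.split(b":"))
    return verify(challenge, doc_sig, e, n)

assert bank_accepts(tpm_pub, cert, doc_sig)

# Root inventing its own "TPM" key gets nowhere: without Google's d it
# can't produce a valid cert, so link 1 fails.
ROGUE = dict(n=55, e=3, d=27)
rogue_pub = b"55:3"
forged_cert = sign(rogue_pub, ROGUE["d"], ROGUE["n"])
assert not bank_accepts(rogue_pub, forged_cert,
                        sign(challenge, ROGUE["d"], ROGUE["n"]))
```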


And is there no way to make the TPM think that the OS is unmodified?


To avoid confusion, the actual name is Trusted Platform Module.


Consider not having your browser configured with prefers-color-scheme: dark.


It looks pretty, but fails at basic usability.

After reading the top-left block of text titled "Optimizing WebKit & Safari for Speedometer 3.0", what the fuck am I supposed to read next? Am I meant to go recursively column by column, or try to scrutinize pixels to determine which of the blocks are further up than the others, skipping haphazardly left and right across the page? A visual aid: https://imgur.com/a/0wHMmBG

Columnar layout is FUNDAMENTALLY BROKEN on media that doesn't have two fixed-size axes. Web layouts leave one axis free to expand as far as necessary to fit the content, so there is no sane algorithm for laying out arbitrary content this way. Either you end up with items ordered confusingly, or you end up having to scroll up and down repeatedly across the whole damn page, which can be arbitrarily long. Either option is terrible. Don't even get me started on how poorly this interacts with "infinite scroll".


Well, not all content is meant to be read in order. A layout like this is good for content where you want to let users read in whichever order they like. So if the order is confusing you, chances are there wasn't meant to be any order at all. E.g. if you search Google Images, Google guesses some relevant order for you, but it is laid out in a dense way so you can scan with your eyes and decide which thing is most relevant for you. Whether you scan the screen left-right, top-down, randomly, bottom-up, or whatever is totally your choice.

Using such a layout for a novel or something like that would be really bad UX. But using it for an image gallery? Or a series of random articles that aren't prioritized? Why not?


> Columnar layout is FUNDAMENTALLY BROKEN on media that doesn't have two fixed-size axes.

You can use plain old CSS columns (which don't have the automated "masonry" packing behavior of this new Grid layout, they just display content sequentially) and scroll them horizontally. But horizontal scrolling is cumbersome with most input devices, so this new "packed" columnar layout is a good way of coping with the awkwardness of vertical scrolled fixed-width lanes.


> what the fuck am I supposed to read next?

What a weird comment. You read whatever you want next, ever read a newspaper? You scan it all and pick the article you are interested in, then read that. I don't understand these comments, they work perfectly well in real life (and fixed size is arbitrary, I can make a super wide or super long newspaper too, the axis size does not affect this sort of layout, see infinite scroll for example, as there is only a fixed amount of content on the screen at any given time).


> You scan it all and pick the article you are interested in

Okay. What order am I supposed to scan in so I don't lose my place and accidentally skip a block? Scanning column by column gets me cut off partial boxes that I'll have to remember to check again later, while scanning side to side forces me to keep track of each individual block I've already looked at, as opposed to a single pointer to "this is how far I've scanned". Alternatively, I can scan roughly left to right, top to bottom and just live with missing some blocks. That's not ideal either, because hopefully if you didn't think I'd like to look at all of them you wouldn't have included them on the page.

You're right that you can make a newspaper that's really inconvenient to read, but you wouldn't, because the failure case you'd end up with is CSS Grid Lanes.


This is so funny that I'm not even sure what to say. You can ask your exact questions about a newspaper but somehow 99% of people manage to read them just fine. I think it's just a you problem that you are looking for an exact algorithm on how to scan a page with multiple sizes of content; in reality, people just look over it all and keep track of what they have or haven't looked at all in their heads.


In a newspaper the answer is simple. You linearly scan the leftmost column to the bottom of the page, then the next column, then the next, and so on until you get to the end of the page. At no point do you ever need to keep track of anything other than "this is how far I've gotten" to make sure you haven't missed anything. Columnar layouts make sense in newspapers because both axes are fixed in size, so all you ever do is one long linear scan with wraparound.


If one axis is fixed, and it is in the case of grid lanes (it's not a fully pannable infinite canvas like Figma after all), you just keep reading the content that's on the current screen, then you scroll. I really don't see how it's any different to, for example as I mentioned previously, a long newspaper with many pages; each "page" is one "screen" worth, analogously. It's like infinite scroll, either vertically or horizontally, where instead of just one item in the list, you have a few. And if we're being really pedantic, Figma users do perfectly fine keeping the context of the content in their minds even in an infinitely pannable canvas. And also, generally newspaper readers do not do what you say, scanning column by column, they instead glance their eyes over all of the headlines then pick which one looks good then they read the article attached to that, it is a free form process.

So again, I will contend that this is not a problem for the average reader. I really cannot see where the difficulty you seem to say lies.


> In a newspaper the answer is simple. You linearly scan the leftmost column to the bottom of the page […]

Is this really how you think people read newspapers?


Not exactly the 19th century, but in 1980 Softsoap bought up a year's worth of manufacturing capacity of hand pump mechanisms in order to block competitors from entering the consumer market for liquid hand soap.


I didn't know about this history. Looks like it actually worked out great for them.


Seconding Fullmetal Alchemist. I hear the remake (Fullmetal Alchemist: Brotherhood) is usually regarded as the better version. More suggestions: Neon Genesis Evangelion, Death Note, Sousou no Frieren, Cowboy Bebop, Nichijou, Tengen Toppa Gurren Lagann, Bakemonogatari. There's also quite a few good movies, anything by Studio Ghibli is great, and so are Akira, Perfect Blue, and Ghost in the Shell.


Some of those aren't really going to appeal to people unfamiliar with the conventions of the genre and some of the big personalities; e.g. Evangelion is a deconstruction of the once-popular giant-robot genre and Hideaki Anno's personal couch trip rolled into one.


Brotherhood follows the plot of the source-material comic, which is regarded as having a better ending. The original series aired concurrently with the comic and had to diverge once it caught up with the ongoing chapters.

