
Such devices are a dead end, imo. I think the next generation of BMI will likely be something you can put on your head like old-time headphones, with some being even less invasive than that, perhaps mere 'pebbles' you just put on your temples or wherever else is optimal for interacting with the human brain.


>I am 100% blind, and guess what, I prefer the term blind because it is pretty descriptive and relatively short.

Have you ever listened to the TV show Avatar: The Last Airbender? There's a blind character named Toph who goofs on her sighted friends throughout the series. It's quite amusing, since it's not done in a manner that insults her for being blind; rather, her friends sometimes say things that might be insensitive, and she makes fun of them in her own way.


>"OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme LateBinding of all things." – Alan Kay

This is probably why I love programming in Cuis, Pharo, and Squeak. The ability to just write my code in small chunks as I need it. Heck, the fact that block closures exist in the language spec makes it easy to be kind of FP-like as needed, just passing along blocks where that fits best and reverting to OOP messaging where it's better suited.
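
To make that concrete, here's a rough Python analogue of the pattern (Smalltalk blocks are true first-class closures with their own literal syntax; Python lambdas are a pale imitation, but the idea carries over):

    # Passing behavior around as a value, roughly like a Smalltalk block.
    def apply_twice(block, value):
        """Call the given 'block' (any callable) twice over."""
        return block(block(value))

    double = lambda n: n * 2          # an anonymous 'block'
    print(apply_twice(double, 5))     # 20

    # ...then fall back to ordinary message-sends (method calls):
    names = ["pharo", "cuis", "squeak"]
    print(sorted(names, key=len))     # ['cuis', 'pharo', 'squeak']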


>And by replacing ownership with either de facto or de jure rental of any and all private property.

My theory is that as profits decline, due to the tendency of markets to reach equilibrium with respect to the products/services supplied, capital owners will begin to reintroduce pre-capitalist norms and practices such as landlordism, since these can sustain revenues to their liking: as a renter you won't have any ownership rights to contest their actions. It means they can raise rental rates anytime in the majority of cases, and then just evict you from or repossess what was rented. Kind of like feudalism, but without the fancy hats and titles.


The thing is that the law as written allows them to do just that. If they don't like your content on YouTube, they can punt it instantly. And it can be for ANY REASON. And that's not even including their first amendment right to refuse distributing or listing your content.


>If they don't like your content on YouTube, they can punt it instantly. And it can be for ANY REASON.

Certainly. But Section 230, at least from my reading, does not protect them for the promotion of content. I could be wrong about that; the Supreme Court will decide. Personally I'd find it delightful if the rage-engine got smashed with a legal hammer and my YouTube recommendations were as useful as they were fifteen years ago.


>Personally I'd find it delightful if the rage-engine got smashed with a legal hammer and my YouTube recommendations were as useful as they were fifteen years ago.

Why would it be safe for them to use an older recommendation system? It doesn't solve the problem: if their older system recommends a terrorism video, even if it only did so because that video came up chronologically, they're still liable.

I would think they would need to just stop allowing the general public to upload videos anymore and only permit trusted media companies and influencers (ones known to not create controversial content) to do so. Probably after being approved through a vetting process where their lawyers can look through at least some of the content first.


>Why would it be safe for them to use an older recommendation system? It doesn't solve the problem: if their older system recommends a terrorism video, even if it only did so because that video came up chronologically, they're still liable.

A system that keyword-matches isn't making recommendations; it's just keyword matching based upon the user's request. The law actually cares about intent and how things function, not just hypothetical possibilities that could occur, i.e. the law cares about what does happen and why it happens that way. So it's pointless to characterize a non-recommendation system as a recommendation system as a means of end-running an argument.


If there are 150,000 results that match your keyword, which results show up first?


If the answer to that is "results that the search engine thinks are most relevant to you", then that's probably a recommendation engine. If the answer is "results that are most recent" or even "results that many people have watched", then that probably isn't a recommendation engine.
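
A minimal sketch of the distinction in Python (toy data and hypothetical field names, obviously nothing like YouTube's actual code):

    from dataclasses import dataclass

    @dataclass
    class Video:
        title: str
        upload_day: int    # days since some epoch
        view_count: int

    videos = [Video("cat plays piano", 200, 50),
              Video("cat news roundup", 100, 5000)]
    results = [v for v in videos if "cat" in v.title]

    # Content-neutral orderings: identical for every user.
    by_recency = sorted(results, key=lambda v: v.upload_day, reverse=True)
    by_views   = sorted(results, key=lambda v: v.view_count, reverse=True)

    # A recommendation engine folds a per-user model into the ordering;
    # 'predicted_engagement' stands in for a learned model here.
    def predicted_engagement(history, v):
        return sum(term in v.title for term in history)  # toy score

    by_personal = sorted(results,
                         key=lambda v: predicted_engagement(["piano"], v),
                         reverse=True)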

You're acting like any kind of algorithm is automatically a recommendation engine that should terminate Section 230 protections, but I don't think it's that simple.


The most recent, the oldest, the closest match? That doesn't make it a recommendation system. Maybe try to read my post and make an effort to understand it rather than just responding with the first thing that comes to mind, because it is as if you have not understood my post at all and made no effort to.


>The most recent, the oldest

Do you not recognize how lousy a video sharing website this would be? Spammers are going to be constantly uploading marketing and other low-quality content with irrelevant keywords, while users who actually put work into making good quality videos will see their results pushed to the bottom quickly. How will you deal with that without implementing a system that can identify and recommend non-spam videos? Even the oldest versions of YouTube were boosting videos that got lots of likes.

>the closest match

How is deciding the "closest match" not considered a recommendation? They all have the user's keyword, what other criteria will you use?


>Do you not recognize how lousy a video sharing website this would be? Spammers are going to be constantly uploading marketing and other low-quality content with irrelevant keywords, while users who actually put work into making good quality videos will see their results pushed to the bottom quickly. How will you deal with that without implementing a system that can identify and recommend non-spam videos? Even the oldest versions of YouTube were boosting videos that got lots of likes.

Not sure why that's my problem, I'm not the one making money by promoting reactionary videos to reactionaries.

>How is deciding the "closest match" not considered a recommendation? They all have the user's keyword, what other criteria will you use?

Because it's not a recommendation; some are better matches than others, that's all. Some match the entire keyword, some just parts, some in different places... I don't understand what is difficult about this for you.


And what do you do when there are 10,000 exact keyword matches? How do you sort them? If it's newest, the entire thing is just going to be spam accounts reposting the same video(s) on any major keyword. "Top", or anything notable, is also likely to be gamed and abused, especially if you fuzz "top" sorting, because then it's not really neutral: you're deciding the order and therefore making a recommendation.


Then there might be a circumstance where it is promoting something. Your point? The law shouldn't make this illegal because then YouTube would have to have greater regard for what it surfaces? I'm not sure that's a bad thing; that's the entire point of the thread.


It obviously is a circumstance that is occurring every second of the day. Not some hypothetical 'might be'. The parent has a point.


My point wasn't the frequency of it but rather that it might be the case that some of YouTube's operations do work that way... so what? Is YouTube's convenience the point of law? No. So why does it matter?


>Not sure why that's my problem, I'm not the one making money by promoting reactionary videos to reactionaries.

The reason I think we should see it as our problem is because I think the solution companies arrive at is just to turn the internet into cable TV, where only approved media organizations are able to share content because of liability concerns.


I'm not sure why YouTube should be able to operate the service it does with the little content filtering it does. In what other industry would you be allowed to post child pornography because it's too difficult to make sure it doesn't get posted? No newspaper could take that excuse. Toys R Us couldn't say "oh jeez, we didn't realize that a corner of our store was being used by child pornographers to spread child pornography and also recruit children" and not be liable. I'm not sure why we think it's good to give an excuse to YouTube and Facebook for this and anything else anyone else would normally be liable for.


My daughter came across really bad stuff on kids' YouTube.

YouTube takes it down. Not as fast as it’s put up. But fast.

I found it irritating because I wanted to know what my daughter had been exposed to, but couldn’t. Her history linked to removed videos.

The titles were nonsensical - mostly Unicode homoglyphs.

It wasn’t child pornography, but was definitely grooming material.


>No newspaper could take that excuse. Toys R Us couldn't say "oh jeez, we didn't realize that a corner of our store was being used by child pornographers to spread child pornography and also recruit children" and not be liable. I'm not sure why we think it's good to give an excuse to YouTube and Facebook for this and anything else anyone else would normally be liable for.

I'll admit, we may even be better off as a society if communication was less "democratized." There certainly would have been a lot less covid and election misinformation out there if every rando wasn't able to have their uninformed ideas broadcasted by giant platforms.


Exactly. I understand why Section 230 is in place and what it achieved, but I do wonder what good it has actually done and whether or not we actually need it. Perhaps we don't need to break up the big tech cos, and instead just make them as liable as any other business would be. In that sense, I don't think they could afford the conglomeration they have right now.


Is the intention of the algo promotion, or matching user interest to videos? There's a big difference between saying "I want you to watch this" and "I think you want to watch this."

The latter is just sorting by additional attributes (video length, keywords in content, likelihood of clicking->watching, keywords of past content watched, ...). YouTube doesn't care what you watch... as long as they match what you want to watch to a list of videos, you stay on the site. If they don't, then you leave. The actual content of the videos doesn't matter to YouTube. In this way, the page that displays the feed is very similar to showing search engine results sorted by best match, where the keywords are pulled from your past videos.
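
To put that "feed as search over your own history" framing in code, a toy sketch (the real signals and weights are anyone's guess):

    # Toy version of a feed ranked by overlap with your watch history.
    def feed_score(candidate_tags, watched_tags):
        """Count shared tags between a candidate and past watches."""
        return len(set(candidate_tags) & set(watched_tags))

    watched = ["guitar", "live", "1985"]
    candidates = {
        "queen at live aid": ["queen", "live", "1985"],
        "pasta tutorial":    ["cooking", "pasta"],
    }
    ranked = sorted(candidates,
                    key=lambda t: feed_score(candidates[t], watched),
                    reverse=True)
    print(ranked)  # ['queen at live aid', 'pasta tutorial']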

If sorting is now promotion and prohibited by 230, then the internet is f'd. Search engines are going to be completely useless.


I am curious: which clause of 230 do you think would not cover recommendation systems?


Wouldn't that also make it incredibly difficult for a new startup to invent a better and less harmful recommendation system?


What would a non-harmful or less harmful (than what?) recommendation system look like? What's the end goal of a recommendation system?


Pick any definition you like. If recommendation systems come with existential legal risks for a small company, then only the biggest companies can afford to run them.

Or think of it this way: How is Mastodon supposed to take on larger social networks without recommending people to follow? Should every Mastodon server operator be legally liable for recommending someone harmful?


My definition would be that they're all bad and there is no good use for them, because the end results are harmful: "more spend/engagement" I view the same way I would "more smoking". Any algorithm or curation, excluding perhaps sorting by latest or "most views" or something similar, would be a recommendation as far as I'm concerned.

But I'm just not sure how or why online platforms get to have their cake and eat it too. If the NYT publishes a story that eating Tide Pods is healthy and encourages kids and parents to do so, they get sued. If Facebook creates an algorithm that causes the same or similar to happen, they get a free pass. They either have to be a public speech platform where anyone can say anything as long as it isn't literally breaking the law, or they have to follow the same rules as other entities that curate content. If you want to say "why not both?" then that's fine, but you have to apply that to all entities, not just online content.


You don't use the algorithmically generated and ranked HN homepage? You scroll through pages and pages of every new submission?

If you said something libelous about me on HN, I can sue HN for publishing and promoting the comment?

A platform that "allows everything that isn't breaking the law" is a platform that is 99% spam.
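
(For what it's worth, here's the widely cited approximation of how the HN front page is ranked; the production version reportedly adds penalties and tweaks on top, so treat this as a sketch, not gospel:)

    # Commonly cited approximation of HN front-page ranking.
    def hn_rank(points, age_hours, gravity=1.8):
        return (points - 1) / (age_hours + 2) ** gravity

    print(hn_rank(100, 1))    # fresh 100-point story: ~13.7
    print(hn_rank(100, 24))   # same story a day later: ~0.28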


I don't think there should be blanket immunity for anything simply because an algorithm did it. Let's just imagine there wasn't. Imagine that you could, in principle, sue a website over their promotion of illegal content. I would think on a fact-specific basis, HN would have a very good defence against liability even absent blanket immunity.

You could imagine the kind of elements that might matter for a fact pattern that would emerge from a deposition: revenue and size of website, proportion of website revenue directed towards moderation, percentage of requests that identify illegal material that are responded to, manner of response, tools provided to users, the types of content actually hosted on the site, nature of the algorithm itself, discussions that were had internally about access to harmful content. HN is a text-based website (which also mitigates the harm claim), it gets maybe in the orbit of a few hundred submissions a day, the vast majority of possible harm is when a submission is connected to a topic likely to cause legal issues, and in my experience such topics are typically flagged within a few minutes and removed quickly. There's no mechanism to directly communicate with users, there is no mechanism to follow users, there's no mechanism to change what you see on the front page based on what you clicked before. Everyone is exposed to the same content.

By contrast, thinking about the companies that are actually the target of these lawsuits: I was at the TASM (Terrorism and Social Media) 2022 conference -- some of my research is adjacent to this, but I've never done any work on terrorism and my social media work involves alt-tech stuff, not the big social media platforms -- where the keynotes were harm policy leads for Europe for Twitter, Facebook, and YouTube. All of them made it clear that their position was that it is incumbent on academics and government agencies to identify harmful content and work with social media, because every region has its own unique content challenges and it's not possible for tech companies to handle those at a global scale.

A question was asked of the panel that went something like "Meta was, as it admits, instrumental in spreading the violence-inciting messages that drove the anti-Rohingya pogroms in Myanmar. The defense is that Meta wasn't prepared for, or staffed for, detecting issues in a single small market, and things snowballed quickly. You could hire 100 full-time content people in that country for the salary of a single person sitting on the panel or a single SWE, so how could resource constraints be the issue?" and the answer was "We're already devoting enough resources to this problem."

I think that's an incredibly shitty answer to the question, and I think a deposition could surface exactly this kind of logic; to me that would be a fact pattern that supports liability. I hope they get their pants sued off in every jurisdiction that allows it. It's clear an aversion to staffing is a huge part of the issue.

So from my perspective, I think the typical process for resolving civil liability + reasonable assumptions about how courts should interpret fact patterns is likely to get to outcomes I'm plenty happy with.

(In the two cases in front of SCOTUS right now, it's hard to see how the victims, who have basically no evidence connecting the perpetrators of the violence to the social media services used, would win: the argument seems to be of the form "some terrorist killed my family member, and some other terrorists got radicalized online, ipso facto social media is liable". I don't think that'd be a winning case with or without s.230)


> If you said something libelous about me on HN, I can sue HN for publishing and promoting the comment?

Why is it different when a "newspaper" does it, if HN is curating the content algorithmically? I'm honestly not sure what the difference is.


If all that's allowed is "latest" or "most views", I will keep uploading my content to your platform and bot-voting/-viewing it to keep it at the top of everyone's feed.


These are solvable problems - for example, requiring registration before posting. But I'm not moved at all by the technical problem, because the technical problem isn't what is in question; the algorithmic promotion and curation of content is.

Either way I think we're going to see a big swing back to authoritative sources because the very technical problems you mention will be taken advantage of by new tools and so the already meaningless content will not even be generated by humans. The Internet in the sense of "publishing content" will be meaningless and unprofitable [1].

[1] Obviously there will exist use cases where this is not the case


'Registration', what does that mean exactly? Only people with government-validated IDs are allowed to post on the internet in the US? This sounds strangely in conflict with both the First Amendment and the historical use of anonymous materials that is part of our national identity.

Really everything that you're saying doesn't have shit to do with authoritative sources, but authoritarian sources. If you're a big nice identified company, or you're a member of "the party", you get to post permitted information. If you're not, well, better learn how to post those pics of your senator pulling some crap on the darknet.


> 'Registration', what does that mean exactly? Only people with government-validated IDs are allowed to post on the internet in the US?

You can just register anonymously like you do on HN. Though for social media sites and the like, "verified human" seems like not just a good idea but ultimately the direction we'll go.

> Really everything that you're saying doesn't have shit to do with authoritative sources, but authoritarian sources.

You are really jumping the gun here so I'm not going to respond to your points here since I wasn't making those.


What would it look like? It would look like a configurable search system with preloading of some choices; a rough sketch in code follows the list below.

You're watching Tie Your Mother Down, Queen, Rock in Rio 1985. Would you like to see (select as many as you want):

More videos by or about Queen

More videos from Rock in Rio 1985

More videos from 1985

More videos about Mothers

More videos tagged Live Concert

More videos tagged Progressive Rock

More videos tagged Rio de Janeiro
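
A minimal sketch of that user-driven facet selection (hypothetical field names, not any real API); the system only filters on what the user explicitly ticked, it never guesses:

    # The user ticks facets; the system filters, never guesses.
    current = {"artist": "Queen", "event": "Rock in Rio", "year": 1985}
    chosen = {"artist", "year"}   # "more by Queen" + "more from 1985"

    library = [
        {"artist": "Queen", "event": "Live Aid", "year": 1985,
         "title": "Hammer to Fall (Live Aid)"},
        {"artist": "Queen", "event": "Wembley", "year": 1986,
         "title": "A Kind of Magic (Wembley)"},
    ]

    matches = [v["title"] for v in library
               if all(v[f] == current[f] for f in chosen)]
    print(matches)  # ['Hammer to Fall (Live Aid)']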


Mothers like mothers in porn? Mothers giving birth? Grandmothers? What about animal mothers? What about fathers in videos with mothers? Are the mothers giving parenting or medical advice? Who is policing the filters and deciding what constitutes a video about mothers?

And either way, why bother? Just don't recommend content; there's no point except to drive engagement, which is fundamentally no different from what Facebook (or whoever) is doing.


I'm not sure. If the incumbents were barred from doing a thing they have currently invested a lot of time, money, and effort into, or such a thing was made legally riskier, I think that would open the door to competition.


Sure. The fact you can't knowingly do business with known criminals and stolen money also makes it harder to start a new bank.


Google knowingly promoted extremist videos? Or did they take them down when made aware?


Knowingly: https://www.propublica.org/article/youtube-promised-to-label...

> YouTube decided against labeling 22 channels identified by ProPublica, but it's not entirely clear why.

https://www.propublica.org/article/how-china-uses-youtube-an...

> YouTube said the clips did not violate its community guidelines.

> The warehouse accounts on YouTube have attracted more than 480,000 views in total. People on YouTube, TikTok and other platforms have cited the testimonials to argue that all is well in Xinjiang — and received hundreds of thousands of additional views.

https://themarkup.org/google-the-giant/2021/04/08/google-you...

> [YouTube] even suggested videos for campaigns with terms that it clearly finds problematic, such as “great replacement.” YouTube slaps Wikipedia boxes on videos about the “the great replacement,” noting that it’s “a white nationalist far-right conspiracy theory.”

> Some of the hundreds of millions of videos that the company suggested for ad placements related to these hate terms contained overt racism and bigotry, including multiple videos featuring re-posted content from the neo-Nazi podcast The Daily Shoah, whose official channel was suspended by YouTube in 2019 for hate speech. Google’s top video suggestions for these hate terms returned many news videos and some anti-hate content—but also dozens of videos from channels that researchers labeled as espousing hate or White nationalist views.

> Even after [Google spokesperson Christopher Lawton] made that statement, 14 of the hate terms on our list—about one in six of them—remained available to search for videos for ad placements on Google Ads, including the anti-Black meme “we wuz kangz”; the neo-Nazi appropriated symbol “black sun”; “red ice tv,” a White nationalist media outlet that YouTube banned from its platform in 2019; and the White nationalist slogans “you will not replace us” and “diversity is a code word for anti-white.”


What does that have to do with the affirmative act they undertake of promoting certain materials? That's the issue - not that they punt things, but that they promote things, and that promoting things isn't the same as just hosting third-party uploaded content. They take that third-party content and show it to people to generate interest and advertising revenue. That's not the same thing as blindly hosting.


> The thing is that the law as written allows them to do just that. If they don't like your content on YouTube, they can punt it instantly. And it can be for ANY REASON. And that's not even including their first amendment right to refuse distributing or listing your content.

You're not wrong, but in addition to the leeway afforded to the rich and powerful by "the law", there is also substantial leeway afforded to every individual under "reality", and one option available is that it is technically possible to behave however one likes, including in a manner that is not compliant with "the law" or "the social contract", neither of which I or most anyone else was consulted on, despite living in a country governed by "democracy".

Interestingly, it seems like it is those who are classically "less intelligent" who are most likely to realize that this powerful exploit exists, with buffoonery like January 6, anti-vaxx, and shooting power stations with an off-the-shelf rifle being prime examples of this.

I sometimes wonder if, like corporations or most any other organization on the planet, it might be prudent to review our governmental and legal standard operating procedures from time to time to ensure they are working as intended (underlying, actual intent (as opposed to proclaimed intent) being another matter that more than a few people are starting to become rather dangerously curious about).


Perhaps it would be useful to separate these functionalities into two categories: User-initiated (searches) and passive (sidebar garbage, play next video trash, etc).

Giving the user the ability to search doesn't mean you're curating content with a recommendation engine.


Search is absolutely a recommendation engine. It's sorting by relevancy, whether that's keywords, similar videos, or play next.


If you mean, why is quantum mechanics probabilistic? That's because it's the best model we have to handle many of the momentary variations in physical systems. But I'll say that determinism could still be true even if we can't make predictions. But equally, I believe that determinism can be false even if the observable universe seems well ordered. Chaos doesn't just mean random things happening without a cause that could be found. It just means that not all chains of causality can link backwards and forwards perfectly in time (i.e. some chains of causality may be emergent at best).


I think the fact that the oddest approaches work isn't exactly a measure of how the particles actually behave in physics. I think it proves that math isn't the language of the universe, no matter how much mathematicians and physicists want to say it is. It just proves we're good at modeling, but not good enough to actually know what we're seeing/measuring (a natural limit to our knowledge). I don't know why this position is considered controversial or out of the mainstream when it seems to be the logical answer.


I don't see why it's logical to conclude that difficulties modeling hard problems means we can't model them using math. These difficulties are not uncommon in the history of science, and we've eventually solved them all before with math, and we should expect such problems to become more and more difficult as the low hanging fruit has been plucked. I see literally no reason to jump to the conclusion that the core problem is trying to use math at all.


>I don't see why it's logical to conclude that difficulties modeling hard problems means we can't model them using math.

That's not what I'm stating, though. I'm stating that the math involved is a model but it isn't ever going to be identical to the thing being modeled. Meaning that math isn't the "language of nature", as we don't have a direct means to truly comprehend it (direct realism as a whole was discarded by philosophers a long time ago).

>I see literally no reason to jump to the conclusion that the core problem is trying to use math at all.

Again, that's not what I said, please read my post again.


> I'm stating that the math involved is a model but it isn't ever going to be identical to the thing being modeled.

I don't know what "identical" means in this context. Either a mathematical model can capture all of the information in the system, or it can't. We know that we can reproduce a function on a long enough timeline just by observing its outputs via Solomonoff Induction.

The only escape hatch here is if reality has incomputable features. There's no evidence of this at this time. That's why it's confusing that you would go from "we have persistent hard problems" to "mathematical models can't exactly correspond to reality".


>That's why it's confusing that you would go from "we have persistent hard problems" to "mathematical models can't exactly correspond to reality".

That's not even what I said. Go back and read it again. Take none of your assumptions into it, just read it as is.


That is what you said. You literally said, "I think it proves that math isn't the language of the universe".


I think it's more of a social norm than a legal right, since it would be on shareholders to pursue a lawsuit to prove that the board of directors at a company is sabotaging the business. Even then, it would depend on the given strategies for achieving profit both short and long term. That's not to say corporate boards have zero incentive to maximize shareholder value, since the boards are often shareholders themselves (whether minority or majority owners), but if this is a problem then it can't truly be solved by papering over it or mythologizing it.


I think the second mover has a better chance to succeed, even when the second mover is the first mover themselves (Apple had many ideas for touchscreen devices, but most flopped until the 2000s). Iterating on a good-enough device or platform will always give you more success than iterating on a false premise (e.g. the metaverse).


It just demonstrates how patents are a form of rent-seeking. You own the product of your labor, but you should not be able to assume you own the accidental products related to your labor (e.g. tools bought from a toolsmith, which you then use to set up your own smithy).


In no way was that depiction accurate. Indeed, if an innocent farmer were, by no fault of his own, found liable for contamination, he could properly sue the others for contaminating his land. For this reason, Monsanto always said it would never sue just for accidental contamination, and never did.


There is no way that analogy holds in this case. The farmer didn't cultivate their own RoundUp to spray, among other problems with it.


What if the farmer was trying to clear the land with Roundup and the accidental GMO crops prevented him from doing so? Could he sue for damages?


Perhaps! Though in this case it seems he engaged in the behavior over an extended period of time and did not consider the patented crops a problem.


No, if he didn't save seeds from crop plants on that land. That saving of seeds after spraying was the incriminating act.


How does the doctrine of first sale not apply here?

I could make up some convoluted argument but… the neighbor conducted the first sale so this should have extinguished any further patent claim. Of course, IANAL.


He is not reselling individual seeds. He is manufacturing new seeds. He is not allowed to do that, and particularly not allowed to do that with intent.


From Wikipedia:

>A patent is a type of intellectual property that gives its owner the legal right to exclude others from making, using, or selling an invention for a limited period of time in exchange for publishing an enabling disclosure of the invention.

In this case, letting the plant reproduce arguably falls under "making" (i.e. manufacturing) the patented thing (the genetically modified plant).



