
As I said on one of the other threads...

26 March 2019. The day the Internet died.

(at least in Europe)

But of course -- "the Internet interprets censorship as damage and routes around it" -- so what we're likely to see is a massive spike in people streaming video over encrypted tunnels into other countries.

That'd be interesting. It'd render GeoIP rather moot, among other things. I suspect the EU and Member States' response would be either "VPNs are banned" or "no service catering for EU users may talk through a VPN endpoint".



What's weird to me is: Europe isn't exactly a content powerhouse. Why are they so concerned with copyright protections?


Because politicians are stupid. If you look at the pros and cons of this law, Europe has nothing to gain. Not for the people, not for the companies.

This is a law that mainly serves the big copyright holders, and secondarily impacts the big tech multinationals (read: US companies) less than the smaller ones.

It makes no sense at all. Especially since all member states will have their own implementation: "Does our filter comply with Belgian law? Also with Luxembourg's? And what about Slovenia's?"

It's a big farce that can only be approved by total morons who don't even bother to listen to people who actually know what they're talking about.


The point about the complexity is a very funny one. Those people in Brussels are almost exclusively humanities graduates who haven't built even a doghouse in their lifetimes.

They see overcomplexity not as a problem, but as a source of pride and a major bragging point. It is actually a massive clash of cultures even though they come from the same place as the people they are trying to govern.


This isn't a law that serves multinationals; quite the opposite.

The proportionality requirement in the text of Art. 13 is more onerous to larger corporations. If you're a tiny blog with a banner ad or two, you're not getting slapped off the internet for having a comments field, because it isn't proportional to require cost and complexity increases of multiple orders of magnitude to police your comments section. Unless someone comes up with Compliance.ly & Co. which does the work for you at a price-point that is reasonable, in which case we've just opened up a new industry which hopefully results in Content ID going the way of the Dodo.

After some litigation occurs in which the boundaries of proportionality are set, we'll be in a better position to analyze the impact of this law.


Reading comments like this, it feels like no ambitious startup in Europe can grow into one of the large companies anymore. A startup now has less than 3 years to add this content filtering, which, provided as a service or not, is going to cost €€€.

Do you think Spotify would be able to grow if it was created on March 27 2019 instead of 2008?

A successful content-filtering-as-a-service (Compliance.ly & Co. in your example), assuming it gets adopted by all major websites, seems like it would shift the problem to an even bigger gatekeeper than YouTube. How is this a good thing?
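To make concrete what such a filter service would even have to do (everything here is a hypothetical sketch, not any real system): the crudest possible upload filter is an exact-hash match against a registry of claimed works, and even that only catches byte-identical re-uploads.

```python
# Hypothetical sketch of the crudest possible "upload filter":
# exact SHA-256 matching against a registry of claimed works.
# Real filters like Content ID use perceptual fingerprints;
# this toy version can't even survive a one-byte change.
import hashlib


class NaiveUploadFilter:
    def __init__(self):
        self.registry = {}  # hex digest -> rightsholder name

    def register_work(self, content: bytes, rightsholder: str) -> None:
        """Record a claimed work by its exact content hash."""
        self.registry[hashlib.sha256(content).hexdigest()] = rightsholder

    def check_upload(self, content: bytes):
        """Return the claiming rightsholder, or None if no match."""
        return self.registry.get(hashlib.sha256(content).hexdigest())


flt = NaiveUploadFilter()
flt.register_work(b"licensed track bytes", "Example Records")
match = flt.check_upload(b"licensed track bytes")          # exact copy: caught
miss = flt.check_upload(b"licensed track bytes, trimmed")  # trivially altered: missed
```

The gap between this and a filter that survives re-encoding, cropping, or pitch-shifting is exactly where the €€€ goes, and exactly what only the biggest players have built so far.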


What? Spotify has no uploading features; all their content comes from license holders.


Strictly speaking that’s not quite true. A user can upload playlist covers and a text description for that playlist. Both the image and text could fall under copyright.

In 2013/2014 Ministry of Sound sued Spotify over not removing playlists based on Ministry compilations, created by Spotify’s users. Ministry claimed that its compilations qualified for copyright protection due to the selection and arrangement involved. [1] [2]

[1] - https://www.theguardian.com/technology/2014/feb/27/spotify-m... [2] - https://www.theguardian.com/technology/2013/sep/04/ministry-...


All the content on YouTube is nominally licensed too. But what happens when someone submits someone else's music without permission?


I could publish someone else's music as my own, on Spotify.


>Because now a startup has less than 3 years to add this content filtering, which provided as a service or not, is going to cost €€€.

Not really? This isn't a flat 'you need to pay 10k a yr regardless of your size' imposition. Proportionality is important.

The articles, as written, are interesting because they already mention a ton of the balancing considerations. All of those are completely absent in these conversations.

Do you know why that's an issue? Because sometime soon people are going to start getting bullshit copyright trolling demand letters, and all this furor about how the internet is dead is going to convince them to close up shop or cave instead of saying 'nah, serve me your originating documents, this is a bogus claim'.

And that's how the internet will die.

>Do you think Spotify would be able to grow if it was created on March 27 2019 instead of 2008?

If the competitive landscape was the same? Yes. In fact, Spotify's arc is exactly what this law is attempting to encourage. As they grew, they became a quasi licensing clearinghouse instead of another Napster or Limewire. That's the entire point.

>how is this a good thing?

Because you don't end up with 1 compliance service, and you can litigate against the compliance service if they're inappropriately killing your content creation business. As it stands now, if you try to fight YouTube or the content delivery pipeline itself on the basis of their filters, you die. That's not necessarily the case if there's a healthy competitive filter ecosystem. Whether or not we get to that point is another question, though.


> Not really? This isn't a flat 'you need to pay 10k a yr regardless of your size' imposition. Proportionality is important.

The problem is the proportionality requirements are poorly designed. It would be one thing if requirements increased solely with revenue, but increasing with time or user count is purely destructive.

Plenty of small services will hit the time limit before they're big, and then the costs destroy them before they have a chance to be. And the fact that that's likely to happen will keep many people from even trying to begin with.

And user count doesn't mean anything if the profit per user is low. Many side projects have a million users; that doesn't mean they're making any money that could be spent on filters -- many of them are lucky to even cover their own hosting costs.

> Do you know why that's an issue? Because sometime soon people are going to start getting bullshit copyright trolling demand letters, and all this furor about how the internet is dead is going to convince them to close up shop or cave instead of saying 'nah, serve me your originating documents, this is a bogus claim'.

That's a different problem. If there were real penalties for making false copyright claims then there wouldn't be so many fraudulent demand letters. I don't think as many people would be objecting to "copyright reform" if it did that.


>The problem is the proportionality requirements are poorly designed.

I don't think this is the issue. The requirements aren't set out in detail, and will largely be fleshed out by the courts. This is where the reality of Art. 13 will be set - in the rulings which follow.

Also, elements in a test don't react linearly in court judgements. Scaling from 100 users to 200 isn't going to suddenly mean that it's proportional for you to implement Content ID from scratch or that an applicable fine doubles.

The mental calculus I see here just doesn't take into account how courts work.

>That's a different problem. If there were real penalties for making false copyright claims then there wouldn't be so many fraudulent demand letters. I don't think as many people would be objecting to "copyright reform" if it did that.

I think most people can agree that the cut and dry abuse of copyright and copyright-adjacent systems should be penalized. But it is. Just not at the scale of individual content producers. If someone tried to extort you by placing false copystrikes on your work and you had proof, you would have a few torts or more general omnibus civil code provisions to use in most jurisdictions. But the cost and hassle of doing so might be higher than your expected return.

Justice doesn't scale linearly, which is a very, very big problem -- but not one that's unique to the Art 11/13 debate.


> The requirements aren't set out in detail, and will largely be fleshed out by the courts. This is where the reality of Art. 13 will be set - in the rulings which follow.

But that's part of the problem. It means a service you operate today is subject to a law that will be decided on tomorrow. So you either make the conservative choice, which is onerously expensive and may put you out of business immediately, or you risk being the case of first impression where the more cost effective choice you made is decided to be insufficient, and that too puts you out of business -- but only after you've dedicated years of your life to it.

> Also, elements in a test don't react linearly in court judgements. Scaling from 100 users to 200 isn't going to suddenly mean that it's proportional for you to implement Content ID from scratch or that an applicable fine doubles.

Users don't scale linearly either. Things have network effects. Side projects get posted to HN or similar and go from hundreds of users to hundreds of thousands in the course of an afternoon.

And again, just because you have a lot of users doesn't mean you make a lot of money. Your project may have had a million users for a decade, but if the revenue from those users is only just covering your hosting costs as it is, now you're out of business.

> I think most people can agree that the cut and dry abuse of copyright and copyright-adjacent systems should be penalized. But it is. Just not at the scale of individual content producers. If someone tried to extort you by placing false copystrikes on your work and you had proof, you would have a few torts or more general omnibus civil code provisions to use in most jurisdictions. But the cost and hassle of doing so might be higher than your expected return.

Which means that it isn't, because then nobody does that, and there is no penalty in practice for continuing to do it. And the solution to that is quite straightforward -- make the penalty for a false claim sufficiently large, and the process for enforcing it sufficiently simple, that it justifies the victim spending the time to enforce it.

Moreover, even the existing penalties are quite useless because the biggest problem isn't overtly fraudulent claims, it's the extremely high volume of false positives the claimants have no real incentive to reduce.


>But that's part of the problem.

No, it isn't. Tech changes rapidly, and legislation quite simply isn't going to be able to encode a specific contextual mutating standard. Law isn't wrong to offload that analysis to an institution that is in the thick of it, with access to expert testimony and amicus information to inform it. You WANT the EFF and other advocates being able to weigh in on how the balancing factors should work and you want the courts to listen.

>Side projects get posted to HN or similar and go from hundreds of users to hundreds of thousands in the course of an afternoon.

Yes, and then 95% of those go back down to pre-spike levels of interest. If they're the odd exception with a massive sustained uptick for a service which promoted copyright-protected works, they can think about licensing and formalizing their processes to protect all stakeholders now that they're a success.

Just because Napster was once small doesn't mean their business model was going to be exempt from attention forever.

> And the solution to that is quite straight forward -- make the penalty for a false claim sufficiently large, and the process for having it enforced sufficiently simple, that it justifies the victim in spending that amount of time to enforce the penalty.

That's not simple. Courts do not afford less due process to larger penalties. The cost is in the complexity; who owns the rights, what did they know about their claim, how easy was the mistake to make, etc. Proving this to a court that has no starting knowledge of what's going on requires money to compile information, prepare briefs, etc.

We like to believe there's no Kolmogorov complexity associated with getting justice, but getting justice requires translating reality into consensus at some level of fidelity. That process is EXPENSIVE.

>the biggest problem isn't overtly fraudulent claims, it's the extremely high volume of false positives the claimants have no real incentive to reduce

Maybe on YouTube that's the case, but that's more an issue with us having a system of private algorithmic arbitration, which is a separate issue. The courts are too expensive to follow up on individual claims, and the only alternative is for content holders to sue YouTube for big $$$ through content collectives (the threat of which is why we are where we are).


> Tech changes rapidly, and legislation quite simply isn't going to be able to encode a specific contextual mutating standard. Law isn't wrong to offload that analysis to an institution that is in the thick of it, with access to expert testimony and amicus information to inform it. You WANT the EFF and other advocates being able to weigh in on how the balancing factors should work and you want the courts to listen.

That is separate from the problem that the "new law" created by the court is being imposed ex post facto on actions you've already taken.

It means you don't know what the law actually is yet while you're trying to comply with it. That kind of uncertainty leads people to make overly conservative choices that render beneficial projects uneconomical, or to give up entirely, because it's not worth investing years of your life in something the courts may unexpectedly blow apart.

And if you want someone to take input from the EFF et al then why should we wait until it's already in court instead of doing that in the legislature before passing a bad law to begin with?

> Yes, and then 95% of those go back down to pre-spike levels of interest.

But the fact that they did have a million users for twelve months may get them hauled into court.

> If they're the odd exception with a massive sustained uptick for a service which promoted copyright-protected works, they can think about licensing and formalizing their processes to protect all stakeholders now that they're a success.

Again, you're assuming that success comes with popularity. If you're losing money on every user you can't make it up on volume.

There are projects operated by individuals with a large number of users that operate at a net loss. If you say to those people that they have to implement Content ID because they have too many users, those projects are dead.

And the projects that actually are successful would have high revenue, so the only projects ensnared by a user count limit but not a revenue limit are the ones that are barely making it as it is.

> Courts do not afford less due process to larger penalties. The cost is in the complexity; who owns the rights, what did they know about their claim, how easy was the mistake to make, etc. Proving this to a court that has no starting knowledge of what's going on requires money to compile information, prepare briefs, etc.

Yes, exactly, so if that process is used then the penalty would need to be sufficient to justify the victim in going through that process.

But now let me ask you this. How is it that we're willing to impose a prior restraint without going through that process but not a penalty for false claims?


>It means you don't know what the law actually is yet when you're trying to comply with it.

Yes, this happens in all industries that have cases being litigated all the time. In some instances, areas of settled law are completely upended by new rulings that change the status quo and force people to spend money on complying with the new state of affairs.

Yes, it sucks, but this is business as normal. The tension between certainty and flexibility in the law is a longstanding one.

You want these elements decided at the court level because these elements change, and legislation needs to be good law for a looooong time, whereas a shitty ruling can be blown up in months (sometimes in days).

>But the fact that they did have a million users for twelve months may get them hauled into court.

If they had a million users on a platform that shares and promotes other people's copyrighted works without a license, I'd sure hope they figured out their IP strategy.

> If you say to those people that they have to implement Content ID because they have too many users, those projects are dead.

Why would they need to implement Content ID...? That's the nuclear option in the field.

Do you think a blog's comment section needs filtering unless it becomes a common vector for sharing copyrighted material? It doesn't.

The objective isn't to nuke small companies - it is to strike a fair balance between distribution and content creation. No one wants distribution dead.


> Yes, this happens in all industries that have cases being litigated all the time. In some instances, areas of settled law are completely upended by new rulings that change the status quo and force people to spend money on complying with the new state of affairs.

And court decisions that make major changes like that are rare, exactly because they result in widespread burdensome changes to existing behavior that would have been less burdensome if what was required had been better specified to begin with.

If you pass a law that requires such a court decision to happen before anybody knows how to comply with the law, what is anyone supposed to do in the meantime?

Especially when many of the questions are obvious, not bothering to answer them is just punting because they know the answers will be problematic.

> If they had a million users on a platform that shares and promotes other people's copyrighted works without a license, I'd sure hope they figured out their IP strategy.

Everything with user generated content is "a platform that shares and promotes other people's copyrighted works" and they're intended to be licensed from the user/creator. That the platform has no good way to know when what the user uploads is unlicensed is the whole problem.

And if they didn't have some way to do that when they were small then they don't have it when they first become big either. If you need a solution before you have a million users then you need a solution before you have a million users -- and then we're imposing the same burden on the little guy as on Google, if the little guy ever hopes to become Google without promptly getting sued into the ground.

I also reiterate that user count is unrelated to resource level. An individual can operate a platform with a million users and make no profit from it, but impose a laborious content filtering requirement and that platform is gone.

That is presumably the sort of thing they're trying to protect with language about non-profits, but this is where the ambiguity bites us again. If an individual operates a forum as a labor of love where the ads break even with the hosting costs, is that non-profit or not? What if some years there is a "profit" of $200/year? An individual who doesn't want to be bankrupted by lawsuits is not going to enjoy rolling the dice there.

> Why would they need to implement Content ID...?

We don't know what they would need.

> Do you think a blog's comment section needs filtering unless it becomes a common vector for sharing copyrighted material?

Are blog comments not copyrighted material?

How is the platform supposed to know what is being shared there without reading it all?

> The objective isn't to nuke small companies - it is to strike a fair balance between distribution and content creation. No one wants distribution dead.

The objective of DMCA 1201 wasn't to keep farmers from repairing their tractors.

The issue is the divergence between their stated objective and what they did.


Getting dragged through courts is going to kill numerous startups regardless of how legally right they are, because the investors will drop them and they'll go bankrupt.


> Not really? This isn't a flat 'you need to pay 10k a yr regardless of your size' imposition. Proportionality is important.

In practice, it will all be up to the judge:

1. Was your AI filter adequate to properly filter the content?

2. If not, how high can the fine be?

There is 1 easy solution to all of this: incorporate outside of the EU.


There's another independent criterion that will cause lots of trouble/legal uncertainty:

1b. Regardless of (1), can you prove you made "best efforts" to acquire licenses for the content that was later found on your platform.

It's not specified who you should be seeking deals with, how you're supposed to know ahead of time what a user will upload, how you're supposed to identify the true rightsholders of an uploaded work, etc.

That criterion must even be fulfilled when you're less than 3 years old, by the way!


You are forgetting that parody is legal. So the AI will have to understand the difference, which even humans often can't.


>In practice, it will all be up to the judge

That's the case for any piece of legislation.

The test isn't 'if your AI was good enough'. For the majority of people the most important part is: 'is it proportional to even use AI at your size?'

To which the answer is no.

If you're running a stream or a YouTube channel of self-created content, the cost of moving dramatically exceeds the total legal risk you're eating by staying put.


The problem for streamers is not the legal part, it's the filtering part.


Let's be precise then. Streamers are already getting abused by Content ID.

How does the EU legislation change how that works? It already exists.

Edit: Content ID already covers the requirements of Art. 13 under any reasonable reading of the legislation. Things aren't going to get worse because of the legislation. They'll get worse because of pressure from their content partners and because they refuse to spend on human support. Why spend when you can do nothing instead?

Your speculation doesn't make legal or business sense.


Since YouTube itself can now be sued, they will lean towards a stricter filter with more false positives. If you think Content ID is bad, this will be way worse, because letting through copyrighted material can be more costly than blocking new content.

But hey, if you are outside of the EU, no problem. So guess what streamers will do.

This is not rocket science you know. This is just simple cause and consequence.

Stricter filters for EU citizens. And hey, maybe if we are lucky, YouTube decides the EU isn't worth the effort anymore and uses the block filter.
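The cause and consequence can be sketched with a toy expected-cost model (all numbers made up for illustration): once the platform itself bears liability, the cost of a false negative explodes, and the cost-minimizing strategy flips from "block nothing" to "block everything doubtful".

```python
# Toy model with hypothetical numbers: a platform picks a block rate
# to minimize expected cost from false negatives (infringing content
# let through) and false positives (legitimate uploads blocked).
def expected_cost(block_rate, fn_cost, fp_cost, infringing_share=0.05):
    false_negatives = infringing_share * (1 - block_rate)  # missed infringement
    false_positives = (1 - infringing_share) * block_rate  # wrongly blocked
    return false_negatives * fn_cost + false_positives * fp_cost


def optimal_block_rate(fn_cost, fp_cost):
    # Brute-force search over block rates 0%..100%.
    return min(range(101), key=lambda b: expected_cost(b / 100, fn_cost, fp_cost)) / 100


# Safe-harbor world: letting infringement through is cheap for the platform.
before = optimal_block_rate(fn_cost=1, fp_cost=10)   # -> 0.0 (block nothing)
# Direct-liability world: a missed upload can mean a lawsuit.
after = optimal_block_rate(fn_cost=500, fp_cost=10)  # -> 1.0 (block everything)
```

The model is deliberately crude, but the flip from 0% to 100% blocking shows why shifting liability onto the platform biases filters toward over-blocking, whatever the legislation's stated intent.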


The problem is that there's absolutely nothing in there that explains how to balance anything. There's nothing in favor of moderate regulation.

Also: https://torrentfreak.com/german-data-privacy-commissioner-so...


I agree that there's an obvious risk here, but this is a burden for the courts to bear.

The concern over data-use at filtering service companies is new to me and interesting but substantially mitigated if they are compliant with GDPR. I haven't seen this argument before, so I'll have to take a look. Thanks!


> If you're a tiny blog with a banner ad or two

I'm sure everyone is dreaming of having a "tiny blog"</irony>

Meanwhile in the real world, the European streamers and content creators, who make a living from their content, are looking on how to escape the EU so their content doesn't get filtered out.


In the real world, people are being fed misinformation about what's going on by people who didn't read the actual text of the article. That's the point.

I did. I've followed every public draft of the language as it's developed.

The article does not do what people are claiming it does. The internet is not dead. Small content creators are not being wiped out. The big tech giants are not creating yet another regulatory moat.

There are plenty of real problems with Article 13 that deserve discussion and elaboration so that when the first cases come out, they get decided properly, but this isn't a nuclear bomb that blows up the net and makes it a corporate-only zone.


> I did. I've followed every public draft of the language as it's developed

You clearly didn't.

From the text itself: "for less than three years and which have an annual turnover below EUR 10 million"

Do you see the "and" there? This means that ANY business older than 3 years NEEDS to comply with filters.

I read the text, because it directly impacts my platform. The solution is: start a foreign corporation.

Your comments here, and in your other posts where you think that streamers have "legal" problems, clearly indicate that you have completely no clue what you are talking about.

Small content creators will be filtered out, and small platforms will need to comply with all the different laws of each EU country. This is crazy.


>You clearly didn't.

I did. I wrote at length about it in the previous thread, and provided links to the language of the articles as well as the elements that were ignored.

You need to read ALL of the language to understand how the proportionality requirement impacts the scope delimitation requirement you're listing.

If you don't do that, you end up with a broken understanding of how the gears fit together.

The legislation does have holes in it, but they aren't that 'small content creators will be filtered out'. People aren't going to litigate against small content creators in the first place. They're going to get smacked by Content ID, which is already ruining livelihoods, but which is a completely separate issue from the EU legislation.




I keep on seeing you say how you're better informed than most people in this thread but I've yet to see you make any concrete points drawing from the actual text of the law.


Sorry, I'm confused. Did you not notice that almost every post of mine is referring to specific limiting provisions - that everyone else is ignoring in creating their doomsday scenarios - in the text?


>The article does not do what people are claiming it does.

It's about the implications, how it relates to the status quo online, and how the digital economy works. What they're trying to enforce is just irrational and goes against the natural flow of things. They're nuts.


A lot of European media see their content stolen and re-uploaded by anonymous users on YT, FB, etc. FB in particular has not responded to this, and thus content creators lose a lot of views and money.

This video sums it up nicely: https://www.youtube.com/watch?v=t7tA3NNKF0Q


>and thus content creators lose a lot of views and money //

I think this sort of reasoning is largely fallacious. Just because people view your stuff doesn't mean that if you're successful in locking it down that they'll then pay to view it.

I feel the media companies know this and that's one reason they demand ever increasing copyright terms - to avoid older content eating in to current profits.


This isn't even about paying. People watching your YouTube videos on FB instead is already lost revenue, since none of the ad revenue FB collects goes to you. It goes only to the person who stole your videos.

And by definition this can be counted as a loss, since the viewing itself is the revenue generator.


This isn't even true with the copyright claim system, where the claimant gets all profits from YouTube even if their content was only featured for 5 seconds in a 30-minute video.


Just to add: Just because people view your stuff for free doesn't mean that won't entice them to pay for it later, or that they haven't already paid for it.


However, the fact that there is a problem does not legitimize doing something regardless of what that something is.

I haven't seen any support for the articles which actually shows the effects of the policy will be good, rather than arguments saying "it's meant to be good" -- a fallacy that affects many policies which later end up having adverse effects.

But ultimately bureaucrats are happy whenever there is an excuse to increase bureaucratic power.

Edit: spelling

Further edit:

For the particular point you're making, to justify the EU policy you have to at least show 1) that those media outlets would receive all the traffic those FB posts generated if the FB posts didn't exist in the first place, and 2) that this outweighs the costs of abuse of the policy (claims over fair use, e.g. the YouTube copyright system) and of content that simply will not get reshared -- even when it's fair use and links to the source material -- out of fear of triggering the safeguard mechanisms.


Oh I agree that this law is shit, and a huge overreach. Like shooting a mosquito with a cannon.

I was just trying to put in perspective WHY the politicians feel the need to do this. It's mostly backlash against Facebook for years of content stealing.

YouTube and its Content ID system are actually what this law wants to introduce everywhere. While not perfect, it's still better than Facebook, which seems to be lawless on copyright.


I work at the European Parliament, and in 3 years of debate about the law not a single person or organisation has brought up people putting content from other platforms on FB as something that this supposedly addresses.

In fact, it's all about the music industry wanting higher licensing payments from YouTube: At least as much per play as e.g. Apple Music pays. They call the fact that they're not getting that today the "value gap" – THAT'S the undisputed reason/justification for this law (just google the term).

(Facebook, by the way, also has a content filter: https://www.facebook.com/help/publisher/330407020882707)


That may or may not be true, but Europe is more of a content powerhouse than an internet/tech powerhouse. Their content is a bigger money maker than their tech. So it is in their interest to punish tech and protect content.

It's also why China has such lax IP laws. They are more of a manufacturing powerhouse than an IP powerhouse ( for now at least ) so they have little to gain with stringent IP laws. When their IP portfolio increases, you can bet that their government would be all about IP protection.

And going back even further, we had some of the laxest IP laws in the western world during the 1800s because we had so little IP to protect. Which allowed our businesses to take a ton of IP from IP-rich Britain and Europe.

It's greed and selfishness.


This whole stupidity started when Google refused to pay European newspapers for their articles showing up in Google News.

Google could have stopped all this by immediately kicking every European newspaper off every Google service they have, and reinstating them only after they filled out and submitted a form allowing Google to use their content without any pay.

Instead Google only threatened to do this, and European newspapers thought they had some power.

The only power they have is making and breaking European politicians, hence the current mess.


> Google could have stopped all this by immediately kicking every European newspaper off every Google service they have, and reinstating them only after they filled out and submitted a form allowing Google to use their content without any pay.

They did exactly that when Germany introduced the "Leistungsschutzrecht", which was pushed and lobbied for by all the major German publishers. Needless to say, they all agreed to offer their snippets for free when Google presented them with their options.


I can't then imagine why on earth they think Google or Facebook will start paying them now.


The causality goes the other way, I believe

We are no content powerhouse precisely because we are so concerned about all these bureaucratic things. Instead of just distributing better content more efficiently, we prefer to make it illegal to be better than the status quo.

Can't speak for all of Europe, but Internet-related legislation here in Germany has been a disaster since the mid-1990s.


Which is especially interesting. Germany has probably the most investment in tech, and it has a very small movie industry, a nearly non-existent music industry, and hardly any internationally known writers.

Germany also probably has the strongest tech industry in Europe. Or at least as strong as France and the UK, it seems.


> Why are [European politicians] so concerned with copyright protections?

It's election year. Since the big publishers are all for the reform, any politicians opposing it must fear for bad press.


Big publishers aren't the only ones who can generate bad press. We need to make sure that nobody who voted for this gets re-elected.


> Why are they so concerned with copyright protections?

Did this legislation get through because of nasty lobbying? Or was it brought in to stem the tide of American tech companies destroying more European businesses by hiding taxes and dodging copyright?

> Europe isn't exactly a content powerhouse.

Europe has plenty of 'content powerhouse' companies. They just don't wear their nationality on their chest when they sell to the US.


They can have the best intentions in the world, but shooting yourself in the foot is never a good idea even if the intention is to kill the insect nest below your feet.


It might have something to do with their concern for privacy protections.

Strengthening privacy protection makes the most popular model for sites to pay for content creation and operating costs--selling information about their visitors to advertisers--much less effective.

Maybe as part of that they want to make it more viable for sites to switch to a direct selling of content model?


This is what I think it is. News websites want to turn to paywalls now that GDPR is going to decimate targeted advertising once the lawsuits start coming down, but to do that they need to:

1: prevent any free news website from linking to their paywalled site and paraphrasing/quoting the whole thing (most people won't pay for the original news source when they can read practically the same thing on a free website). Article 11 prevents this from happening without compensation to the original source.

2: prevent any single user who has access to the paywalled site from posting the entire article onto websites like Hacker News and Reddit, which I see happen all the time (that, and outline/archive links). Article 13 enforces this with automated filters; if the filters fail, the news website can just sue the platform and get compensated that way.


It does not matter where the content is produced; what matters is where it is consumed, and Europe is a content-consumption powerhouse.

This law's intent is to prevent unlicensed content from being available to European consumers. That will probably mostly work, with the usual caveats and unintended consequences we all know about here on HN.


It's easier for copyright owners in Europe to paint themselves to politicians as small and weak, needing protection from laws to avoid being eaten by the US giants.


> But of course -- "the Internet interprets censorship as damage and routes around it"

A friendly reminder that this was said about TCP/IP. It does not apply to the application layer (WWW), neither in theory nor in practice.


> The day the Internet died.

No. The day the "Upload Other People's Work" Internet died.

> what we're likely to see is a massive spike in people streaming video over encrypted tunnels

Or just creating their own content. Wouldn't that be awesome?


All the people I know who create their own content are utterly pissed off about this, because it totally screws them over when it comes to sharing their content. It means that every platform they use will be forced to either shut down or instate odious, false-positive-prone, likely career-ruining content filtering. The only folks it's good for are the large corporate content factories who are effectively almost totally exempt from dealing with this mess.


You miss the point. It doesn't just filter other people's work, everything has to pass through filters that do not exist and are impossible to create for anyone smaller than Google.

It will either be the end of any kind of user participation on the European internet, or everything that happens has to pass through Google's filter. Neither are good options for internet freedom.

Note that Google's YouTube filter already has a tendency to block people's own content when it resembles content of the big copyright holders. For example: someone playing a piece by Bach on the piano when Sony has also released a recording of that piece. YouTube will flag it, Sony is fine with that, and small content creators don't have the resources to fight it.

That situation will get a lot worse.


You need only look at the rampant abuse of Content ID to see what the likely result will be...

"Sorry, the video you uploaded 'Me playing Beethoven on the piano' contains BEETHOVEN'S 5TH SYMPHONY by BMG-EMI-XYZ Music Corp. You cannot upload this video."


Is that illegal though (wrongly blocking a video)? If it isn’t, YT can just shrug it off.


Wrongly blocking is not illegal. They may invoke the clause that says the platform owner can reject anything even without explanation, which is always buried somewhere in their T&C. But more importantly - even if wrongful blocking was illegal - if the ways to shut something down "just in case it might be illegal" are super fast and automatic, but the ways to challenge that outcome are super slow, then people will just give up in that process. Not all, but enough to matter.


It is not illegal for a filter to wrongly block something. I think the Pirate Party tried to insert such a clause in the past.


Small time content creators have no way to challenge this. YT's terms and conditions probably allow YT to block it for any reason at all, and as long as you lack the resources to sue them, they can just shrug it off.

Measures like this only serve Big Content. And badly, in my opinion.


There is no consequence for filing unsubstantiated claims.

There are consequences for failing to honour substantiated ones.


It's probably fraud, but yeah, still no consequences for large media corp execs who are personally responsible for that fraud.


The DMCA has a similar provision (from memory it's something like "filing a false claim is considered perjury").

I've never, ever heard of a single charge being filed under that clause -- but I've heard of tons of instances of DMCA being abused. On this statement, I'd love to be proven wrong!


> A statement that the information in the notification is accurate, and under penalty of perjury, that the complaining party is authorized to act on behalf of the owner of an exclusive right that is allegedly infringed.

So basically useless. They claim to be acting "on behalf of the owner of an exclusive right that is allegedly infringed" and they are, even though the allegation is completely without merit.


Don't use platforms then. Host it yourself.


I think it is completely absurd to expect every content creator, with their diverse talents, to all _also_ possess the expertise needed to host their own content.

Reminds me of the Dropbox launch thread here on HN a decade ago where some sysadmin chimes in with "but this is so easy for the layman to do themselves with FTP and [other technologies laypeople have never heard of]" (not an actual quote).


But does it really have to be that difficult? It's absolutely possible to build a system where you can deploy a docker container with a single click and host your own content on that. Add federation into the mix, and you'll get some kind of meta-Youtube or -Bandcamp, comprised of thousands of individual instances, which do not even need to be on the same hosting service.
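For what it's worth, something close to this already exists: PeerTube is a federated, self-hostable video platform that can run from a single compose file. This is only a sketch of what that looks like; the image tag, env vars, ports, and paths below are illustrative from memory, not copied from PeerTube's docs, so check the project's own documentation before deploying.

```yaml
# Hypothetical docker-compose.yml for a self-hosted, federated video instance.
# All values here are illustrative placeholders.
services:
  peertube:
    image: chocobozzz/peertube:production-bookworm
    environment:
      PEERTUBE_WEBSERVER_HOSTNAME: video.example.org  # your domain
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./data:/data   # videos, thumbnails, config persist here
```

Once a few thousand such instances federate over ActivityPub, you get roughly the "meta-Youtube" described above, with no single hosting provider as a choke point.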

The blogosphere was similar to that, before everyone gave up and went to Facebook.


If the law were structured to make Dropbox illegal, then that particular neckbeard's observation would have been much more insightful. It seems likely that individual users are going to have to "get technical" in response to creeping authoritarianism from the EU and other powerful, unaccountable actors.


You have to host somewhere!!! Ever gotten an abuse email from the data center holding your physical hardware? I have. My tiny forum that has 50 participants max actually has a filter to prevent anyone from using the word "Elton".


It's simple, just get a static IP and host it on your home internet. Unless your ISP decides to shut you down. Then just setup your own ISP and datacenter and get your own peering agreements!

/s


> You have to host somewhere!!! Ever gotten an abuse email from the data center holding your physical hardware?

No

This law does nothing to change that in any case. Get a (US law) DMCA takedown, and ignore it, job done.


No, this law changes nothing. But also hosting your own changes nothing. You can ignore a DMCA takedown but maybe your host won't. And from my experience, hosts don't care that much -- they'd rather toss you than deal with legal issues.


Use a host that ignores the DMCA -- why would a bad American law apply to the majority of the internet?

Now that Americans are realising that other countries exist and make laws like the DMCA, maybe they'll stop doing it.


This would be an invalid flag. BMG-EMI-XYZ Music Corp does not own this work; they could only own a certain recording of a certain orchestra playing this piece. And we know that detecting that certain recording via music matching does not work; only checking a strong hash of it would work, which would be trivial to circumvent with a single bit-flip.
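To make the bit-flip point concrete, here is a quick sketch of why exact (hash-based) matching is so easy to defeat: flipping one bit anywhere in a file produces a completely different cryptographic hash. The byte string is just a stand-in for real media data.

```python
# Sketch: a single flipped bit defeats exact-hash content matching.
import hashlib

original = bytearray(b"...some copyrighted recording bytes...")
tampered = bytearray(original)
tampered[0] ^= 0x01  # flip one bit in the first byte

h1 = hashlib.sha256(bytes(original)).hexdigest()
h2 = hashlib.sha256(bytes(tampered)).hexdigest()
print(h1 == h2)  # False: the hashes no longer match at all
```

The upload is audibly identical, but any blocklist keyed on exact hashes never sees a match.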


> BMG-EMI-XYZ Music Corp does not own this work.

Hasn't stopped big companies from making false claims before. After all they are the ones responding (and likely rejecting) the appeal of the uploader. See: https://arstechnica.com/tech-policy/2018/09/sorry-sony-music...

> And we know that detecting that certain recording via music matching does not work, only checking the strong hash of it would work. Which would be trivial to circumvent by a single bit-flip.

So you're saying that even Google hasn't made upload filters work reliably? Who can if not the company behind Youtube?


Google's filters might have a success rate of 10%. We know that Facebook was not able to detect all instances of the New Zealand shooting live recording, which is a trivial instance of exact matching.

They would need to match all EU copyrighted work, and there isn't even a database of EU copyrighted work, because our copyright law works differently than in the US. There's no exact OCR or proper fuzzy matching of video or audio possible; maybe with success rates of 60%. This is too risky for a big content provider, especially when dealing with an entity that has no idea what it's talking about (the EU parliament).


Content ID uses some kind of fuzzy matching... a "single bit flip" (as you put it) isn't enough to confuse it.

Yes it's true XYZ Music Corp would only own that performance (as it's Beethoven and the piece is long out of copyright). The problem is, the automatic filter is a fuzzy matcher: it compares the upload against every other performance of Beethoven's 5th it's been programmed to recognise.

Let's say our uploader has been learning from one of those performances. Their performance will sound very similar to another pianist's -- at least to the fuzzy-matcher.

And therein lies the problem: the uploader's piece is clearly their own copyright, but the magic upload filter can't tell the difference.
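A toy sketch of that failure mode, with a made-up note sequence standing in for an acoustic fingerprint and a hypothetical threshold (real systems like Content ID use far more elaborate audio features, but the principle is the same):

```python
# Toy illustration: a fuzzy matcher flags two independent performances
# of the same public-domain piece, because both copied Beethoven.
from difflib import SequenceMatcher

# Opening motif of Beethoven's 5th as two different pianists might play it;
# same notes, slightly different articulation artifacts.
label_recording = ["G", "G", "G", "Eb", "rest", "F", "F", "F", "D"]
your_upload     = ["G", "G", "G", "Eb", "F", "F", "F", "D"]

def similarity(a, b):
    """Fuzzy match score in [0, 1]."""
    return SequenceMatcher(None, a, b).ratio()

THRESHOLD = 0.75  # hypothetical claim threshold
score = similarity(label_recording, your_upload)
print(score > THRESHOLD)  # True: flagged, though neither copied the other
```

The matcher is working exactly as designed; it simply has no concept of "who holds the copyright in this particular performance".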

It's like uploading a silent theatre production (let's say some kind of homage to silent films) and the upload being flagged for violating the copyright in 4'33".


Does it matter when the average person has little to no recourse in the matter? Does it matter when e.g. YT will _punish_ you if you decide to contest it and lose per their arbitrary process?


Just have a look at what was passed today:

> Article 17/9: Where rightholders request to have access to their specific works or other subject matter disabled or those works or other subject matter removed, they shall duly justify the reasons for their requests. Complaints submitted under the mechanism provided for in the first subparagraph shall be processed without undue delay, and decisions to disable access to or remove uploaded content shall be subject to human review. Member States shall also ensure that out-of-court redress mechanisms are available for the settlement of disputes. Such mechanisms shall enable disputes to be settled impartially and shall not deprive the user of the legal protection afforded by national law, without prejudice to the rights of users to have recourse to efficient judicial remedies. In particular, Member States shall ensure that users have access to a court or another relevant judicial authority to assert the use of an exception or limitation to copyright and related rights.

So this also encourages appealing in court against the current, very opaque content-upload policies. Certainly this is not a strict improvement on the current situation (where you can be arbitrarily banned), but it is definitely progress compared to today, where platforms just act as they see fit.

> Article 17/7: The cooperation between online content-sharing service providers and rightholders shall not result in the prevention of the availability of works or other subject matter uploaded by users, which do not infringe copyright and related rights, including where such works or other subject matter are covered by an exception or limitation.

So overblocking will be costly as well, if enough suitable laws are signed into effect and people start complaining. And this really puts large-scale commercial sites (remember, non-profits are exempt) in a tough spot: they either share revenue with content creators/their organisations (which are mostly s*, but could be changed...) or they employ even more moderators (remember the small paragraph where removal decisions are subject to human review ;)) – which all severely limits the current exploitation of the internet as a big chunk of empty space, where the strongest strongman grabs the biggest slice and employs an army of user-slaves.

> Article 17/10: For the purpose of the stakeholder dialogues, users' organisations shall have access to adequate information from online content-sharing service providers on the functioning of their practices with regard to paragraph 4.

I guess already today a lot of people would like to know how Content ID blocks their content, but Google can't and won't say (because it would show their dirty secrets...).

=> IMO: all in all, for the average person, the internet might develop back to where it was 20 years ago, with select content providers and quite a large proportion of actual people hosting fun stuff (and moderating their own boards...). If people are as IT-literate as they claim to be (although I doubt that for a large percentage of the Fortnite-playing #saveyourinternet people), we might as well enter a real golden age of the internet.


The problem is the balance of power in this equation.

You call your lawyer and ask them to sue (as an example) Google.

I expect the response would be something to the effect of "are you mad, rich or both? Because this is going to take a long time and be very expensive."


So you are basically saying that Google stands above the law? Well, then I think the copyright reform is the least of your problems.


At which point you mark this as a false flag on YouTube, and YouTube tells you to go to court over it.


How many small-time Patreon-funded content creators have the time and money to take on a major media company in court?

Just because you could doesn't mean it's feasible from a financial point of view.


Court fees are proportional to the amount in dispute, something like 10%. At least in Germany, only that has to be paid up front; paying the lawyers comes later. So the cost should be within one's income, and in cases where it isn't, procedures are in place to excuse the plaintiff. Filing should be relatively straightforward, save for any catch-22s that require a lawyer, which I, not being a lawyer, don't know about, and which even a lawyer could not foresee with certainty right now.

A real problem would be the usually long wait.

However, taking into account several more circumstances, either side might not be keen on a court case, and would thus try to avoid it. That hinges on morals and technical details.

The problem with copyright's blurry edges around the originality threshold hasn't changed, at least. The Olympics organisation is famous for suing, and losing often enough, over its trademarks, for example.

> take on a major media company in court

In court or outside? And why the media companies? Laws can be repealed by supreme courts on constitutional grounds. That's an even bigger judicial hurdle to consider. If lobbying or legislative orders are involved, it would be a superset of the problem, as the court is to an extent bound by the lawgiver's interpretation of the law, disregarding any side effects that are implementation-specific. That's the undefined behaviour of the law. The service nulled all your bits after you passed ownership? The content wasn't registered initially and you assumed it was licensed to null? Ohohoho, none of those side effects were mandated.


You don't have to take YouTube to court; you have to take the other side to court, the one which maliciously struck your video.

So yes, it's still an invalid flag, but if you want your video up again, you have to sue somebody who is probably in another country.


And do what with it? Most aggregators won't host your content for you for fear of lawsuits, and if you host it yourself, nobody will find your dinky website because nobody is going to pay you to link to it.


The problem is not uploading illegal content. The problem is that this legislation is technically impossible to fulfill: no foolproof content-violation check mechanism exists, and none will. This would need strong AI, and a framework for recognizing and handling "works" that doesn't exist. It is impossible without a court order.

Therefore every platform provider visible in the EU (like Wikipedia, Facebook, YouTube, every blog, newspaper comment sections, ...) needs to stop accepting user content, because they cannot guarantee that copyright violations will not occur. Violations cannot all be filtered or detected. Think e.g. of song lyrics in images. Will you OCR every image for a work? There does exist a foolproof method to bypass AI; it's called a captcha. Even if you install comment or upload submission queues with manual labor ("manual filtering"), you cannot rule out copyright violations; only courts can decide that.

The politicians might have thought of a GEMA-like index storing hashes of protected content in some form or another, which could be distributed to certain content providers, but this doesn't change the law, which is much broader and not fulfillable. Thus Web 2.0 is dead.

If I were Facebook, I would rather ignore said new laws and go to court over it. The existing framework is good enough, and the best way to handle copyright violations.



Ooh, nice one. That's an even better example than Beethoven.

Even better still: there's a song which consists of 4 minutes and 33 seconds of silence. That's it - silence.

"Your latest video upload contains 5 seconds of stunned silence, which has been identified as an extract of 4'33". This extract is copyrighted. Your video has been deleted."


The creative elements of Cage's work are that it's performed as a piece by an orchestra, and presented as a work (e.g. a specific length, sold as other works are). Your any-[other]-length-of-silence doesn't infringe. If it did, then Cage's piece would lack the artistic distinctiveness needed to make it a work for copyright purposes.

Just having a urinal doesn't infringe on Duchamp's "Fountain", not even if it's the same model, only if it is presented as artwork does it become a copy of Duchamp's "work".


You know that, the lawyers know that, but how do you expect the content filtering mechanism to know that???

True, for 4'33'' there is a simple rule that they probably follow - ignore silence :). But for Fountain (if it ever came up) it's hard to imagine that the difference between a protected copy and a non-protected similar image could really be automatically discovered.


This is it. A human being can identify the context. Taking Fountain as an example -- a photo of a urinal on a building merchant's website is an illustration of the product they're selling, not an infringement.

But the filter doesn't know about context, it just correlates two images... and you get "Comparison with copyrighted work 'Fountain', 75% match".

75% > 0%, so the filter says "non".


Isn't this:

> decisions to disable access to or remove uploaded content shall be subject to human review

intended to handle those cases? I'm not saying that it will be adequate.


> Or just creating their own content. Wouldn't that be awesome?

People on YouTube are creating content, lots of it. Will they still create it when some filter keeps blocking them?

If you want pirated stuff, just download torrents. They won't disappear with this new law.

The only thing that will appear are filters.


A couple that might be affected by the lack of a fair use exemption in Article 13:

- use of unlicensed samples in music. Goodbye, Soundcloud rap and EDM music scenes!

- use of images and video clips in memes. Goodbye, Tumblr and Reddit!


Those sites aren't going to shut down; they will just leave Europe.


> Or just creating their own content. Wouldn't that be awesome?

Until Disney/Comcast/Weyland-Yutani decides that they own your original content. Or the content-ID'ing algorithm generates a false positive. Just think a little bit about how all of this will be implemented.


Yeah, I am really curious what approach content-hosting companies will take.

Will they analyze each video to determine whether it is legal or illegal, checking everything... or just implement a simple, fast and cheap filter that blocks most of the content, with no way to appeal the ruling, just like YouTube is doing now...


I hope they block the entire EU. And yes, I live in the EU.

That way they would need to shamefully roll back this law, and we could be sure they don't try to pull off such a farce in the (near) future.


Or we post a huge load of original content at the EU MEPs, making sure we the people own the copyrights. Making their digital life a hell...


You could potentially email MEPs copyrighted work, and they would be obligated to filter their emails.


While I do agree with your point on this, I do not think YouTube's Content ID is cheap at all.


I seriously doubt Google would licence it to potential competitors either.



