I think you are overestimating the capabilities, and potential "savings", of P2P here. For one, OpenConnect already does what you're suggesting at the "city" level since it's sitting in ISP DCs. Second, there are minimal gains, if any, in the last mile via P2P, considering the numerous types of devices streaming. How do you handle TVs that are already memory constrained by the streaming app? Do mobile devices constantly upload and eat into your bandwidth cap? What if you're streaming from a neighbor who turns off their computer? Do you need to refetch data from some distant host? Is that really a better experience? You also run into privacy/security concerns. How do you reconcile hosts that _cannot_ leverage P2P? Do you now need to support a "P2P" mode and a legacy/vintage non-P2P mode? This doesn't sound good for the end user.
As mentioned in my previous comment, there are a lot of things that would have to be solved and deployed in order for P2P to be 100% feasible. I didn't expect to receive a list of things to solve right now!
But you do bring up good points, as the current infrastructure (everywhere) is not set up for P2P. In most modern countries (sans the US), ISP networks are actually pretty good and cheap, and work fine for P2P. Otherwise there are other ways of distributing as well; mesh networks are one.
All these questions you are outlining are definitely solvable though, just like the questions that arose when we built our current centralized infrastructure. The problem is that P2P networks are nowhere near as well funded as centralized infrastructure, which leads to fewer people working on actually solving these problems.
> What if you’re streaming from a neighbor who turns off their computer?
The default is to download 100% from a CDN somewhere, and P2P that falls back to that same default can't be a net negative beyond sub-0.1% extra communication overhead. Don't want to use your upload bandwidth? That's fine, you're simply stuck using the same CDN network that's currently overloaded.
So none of the above are actual issues; a device doesn't need to stay connected or have the full movie to be useful for P2P. If I start watching some movie while you're also watching that same movie, you can stream whatever is in your buffer to me, and that's a net gain in terms of backhaul capacity. I don't need to depend on anything from you other than what you already sent me. I can then download from the service the bits you don't have and stream them to you.
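To make the fallback concrete, here's a rough sketch of what "prefer a peer's buffer, fall back to the CDN" looks like. This is made-up Python with invented names (`PeerSource`, `CdnSource`, `fetch_segment`), nothing to do with Netflix's actual client, just the shape of the behavior:

```python
# Rough sketch of the hybrid idea: peers can only serve what they already
# buffered, and the CDN stays the guaranteed fallback. All names invented.
from typing import Optional


class CdnSource:
    """Always has every segment; represents the existing CDN path."""
    def fetch_segment(self, index: int) -> bytes:
        return f"segment-{index}-from-cdn".encode()


class PeerSource:
    """A neighbor that can only serve whatever is already in its buffer."""
    def __init__(self, buffered: set[int]):
        self.buffered = buffered

    def fetch_segment(self, index: int) -> Optional[bytes]:
        if index in self.buffered:
            return f"segment-{index}-from-peer".encode()
        return None  # peer left or never had this piece; caller falls back


def fill_buffer(want: range, peers: list[PeerSource], cdn: CdnSource) -> dict[int, bytes]:
    """Prefer peers, but a peer disappearing mid-stream only costs one retried request."""
    out: dict[int, bytes] = {}
    for i in want:
        data = None
        for peer in peers:
            data = peer.fetch_segment(i)
            if data is not None:
                break
        out[i] = data if data is not None else cdn.fetch_segment(i)
    return out


if __name__ == "__main__":
    neighbor = PeerSource(buffered={0, 1, 2})  # they are a few minutes ahead of us
    print(fill_buffer(range(0, 5), [neighbor], CdnSource()))
```

The worst case in that loop is identical to today: every segment comes from the CDN.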
You're grossly oversimplifying the complexity involved with "streaming" a video, specifically in the context of a service like Disney+ or Netflix. Additionally, how do you "find" that content near you? How do you ensure it is available? How do you ensure it is correct? Who owns it? How do you ensure copyright/DRM/licenses are respected? What do you do when it disappears? The gains are incredibly slim, if they exist at all, for the engineering challenges it introduces.
You also confuse "CDN somewhere" with an OpenConnect box literally inside your ISP's DC. It is probably faster to get it this way than it would be to P2P it from your neighbor since, at the end of the day, that P2P traffic _must_ go through your ISP and their ISP. It will not, by definition, peer at the local hub. You are _NOT_ on a local network with your building/neighbors/etc. Your traffic routes to your ISP and then out to anywhere else, even if that somewhere else is on the other side of your wall.
Even if it was possible. Even if it was slightly faster. Do you really think studio execs are going to be okay with customers hosting/serving their content off their machines? Even _IF_ this was "technically" a good idea, this is a non-starter from the business standpoint.
First, OpenConnect is just a CDN run by Netflix; they could call it the bunny protocol, it's just a name. But they don't have unlimited boxes at every ISP, and in almost every case you can get P2P connections between specific users with lower network overhead than those users connecting to one of the ISP's data centers.
Anyway, the CDN knows which connections are on the network because they are what's connecting to it. Segregating based on large-scale network architecture is a solved problem; if you're confused, read up on how CDNs work. What happens inside each ISP can then be managed either via automation based on ping times etc., or via ISP-specific rules.
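If it helps, here's roughly what that peer selection amounts to, sketched in Python with invented field names; the CDN edge already knows each connected client's ASN from its IP and has an RTT measurement, so pairing users inside the same ISP is bookkeeping, not magic:

```python
# Hypothetical sketch: only offer peers inside the same ISP (same ASN),
# closest first by measured RTT. Field names are made up for illustration.
from dataclasses import dataclass


@dataclass
class Client:
    client_id: str
    asn: int        # the ISP's autonomous system number, known at the CDN edge
    rtt_ms: float   # measured ping to the CDN node


def candidate_peers(me: Client, connected: list[Client], limit: int = 5) -> list[Client]:
    """Only offer peers on the same ISP, lowest RTT first."""
    same_isp = [c for c in connected if c.asn == me.asn and c.client_id != me.client_id]
    return sorted(same_isp, key=lambda c: c.rtt_ms)[:limit]


if __name__ == "__main__":
    pool = [
        Client("a", asn=2119, rtt_ms=8.0),
        Client("b", asn=2119, rtt_ms=3.0),
        Client("c", asn=3301, rtt_ms=2.0),  # different ISP, never offered
    ]
    viewer = Client("me", asn=2119, rtt_ms=5.0)
    print([c.client_id for c in candidate_peers(viewer, pool)])
```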
In terms of P2P it's trivial to include 99% of the data for a movie but not enough data to actually play the movie. It's codec specific, but that's not a problem when you're designing the service. Ensuring the correct users are part of the network is just the basic authentication at the CDN node; that's what keeps the list of active users.
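A toy illustration of that "most of the data, but not playable on its own" split is below. Where the cut actually happens would be codec/DRM specific; the fixed byte split and the token check here are stand-ins I made up:

```python
# Loose illustration: peers swap the bulk of each segment, while a small
# per-segment piece (a stand-in for codec headers / keys) is only handed
# out by the CDN node after authentication. The split is invented.
SECRET_PREFIX_BYTES = 64  # assumption: first N bytes make the rest useless alone


def split_segment(segment: bytes) -> tuple[bytes, bytes]:
    """Return (cdn_only_part, peer_shareable_part)."""
    return segment[:SECRET_PREFIX_BYTES], segment[SECRET_PREFIX_BYTES:]


def cdn_release_prefix(session_token: str, segment_index: int, prefixes: dict[int, bytes]) -> bytes:
    """Only authenticated subscribers get the part that makes segments playable."""
    if not session_token.startswith("valid-"):  # stand-in for real auth
        raise PermissionError("not an active subscriber")
    return prefixes[segment_index]


if __name__ == "__main__":
    seg = bytes(range(256)) * 4
    prefix, bulk = split_segment(seg)      # bulk can come from any peer
    store = {0: prefix}                    # prefix only ever comes from the CDN
    rebuilt = cdn_release_prefix("valid-abc123", 0, store) + bulk
    assert rebuilt == seg
```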
As to data validation, the basic BitTorrent protocol handles most of what you're concerned about. Clients have long been able to stream movies with minimal buffering by simply prioritizing traffic. Improving on that baseline is possible since you're running the service, not just accepting random connections, and you want to be able to switch resolutions on the fly, but that's really not a big deal.
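For reference, the validation plus the streaming tweak is roughly the following; the trusted manifest ships a hash per piece, anything a peer sends gets checked before use, and piece selection stays near the playhead instead of rarest-first. Piece size and the window number are made up:

```python
# BitTorrent-style per-piece validation plus a near-playhead request order.
import hashlib

PIECE_SIZE = 1 << 18  # 256 KiB, a common BitTorrent piece size


def piece_hashes(content: bytes) -> list[bytes]:
    """What the trusted manifest would ship: one SHA-1 per piece."""
    return [hashlib.sha1(content[i:i + PIECE_SIZE]).digest()
            for i in range(0, len(content), PIECE_SIZE)]


def verify_piece(index: int, data: bytes, hashes: list[bytes]) -> bool:
    """Reject anything a peer sends that doesn't match the manifest."""
    return hashlib.sha1(data).digest() == hashes[index]


def next_pieces_to_request(have: set[int], playhead: int, total: int, window: int = 8) -> list[int]:
    """Streaming = mostly sequential pieces near the playhead, not rarest-first."""
    return [i for i in range(playhead, min(playhead + window, total)) if i not in have]


if __name__ == "__main__":
    movie = b"x" * (PIECE_SIZE * 3 + 100)
    manifest = piece_hashes(movie)
    good = movie[:PIECE_SIZE]
    tampered = b"y" * PIECE_SIZE
    print(verify_piece(0, good, manifest), verify_piece(0, tampered, manifest))
    print(next_pieces_to_request(have={0}, playhead=0, total=len(manifest)))
```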
PS: And yes, some Netflix content deals would create issues. But that's irrelevant to their own content, and it's just another point to negotiate when licensing, much like allowing content on a CDN in the first place.
> First, OpenConnect is just a CDN run by Netflix; they could call it the bunny protocol, it's just a name. But they don't have unlimited boxes at every ISP, and in almost every case you can get P2P connections between specific users with lower network overhead than those users connecting to one of the ISP's data centers.
They have boxes in a lot of ISPs.
If "P2P" requires you to transit your last mile to your ISP's POP and then back down to another user, and Netflix requires you to transit to the ISP POP and back out again... has P2P gained you much? In most cases downstream throughput is much higher as well, making the in-ISP cache box far better for most.
P2P has its place, but it's hard to argue it's better for video distribution.
Many ISPs have significantly more, and largely unused, bandwidth between users than across the overall network. This is often done for simple redundancy, as you want a minimum of two upload links, if not more. However, it's much simpler to run a wire between two different tiny grey buildings in a neighborhood than to run a much longer wire to another section of your core network. Ideally that's just a backup for your backup, but properly configured routers will still use it for local traffic.
Another common case is that if you want X bandwidth from A to B, you round up to hardware rated for some number more than X. This can result in network topologies that seem very odd on the surface.
PS: I think you're misreading what I am saying; this is not pure P2P, it's very much a hybrid model. Further, Netflix was seriously considering it for a while in 2014, but stuck with a simpler model.
I don't understand your first paragraph. If I torrent a movie, how does the torrent client know any of the things you mentioned? The answer is that the protocol does, along with some tags as to which movie the file represents, like Radarr and Sonarr use. This would be the same, just on a locked-down streaming client, as in the sketch below.
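Something like this (field names invented, not any real service's API) is all the "knowing" the client has to do: it fetches a manifest from the service mapping a title to the exact content hash it should accept, the same idea as a .torrent's info hash, and refuses anything that doesn't match, whether it arrived from a peer or the CDN:

```python
# Sketch: a locked-down client resolves a title through the service's manifest
# and only plays data whose hash matches. All names are made up.
import hashlib
import json


def manifest_for(title_id: str) -> dict:
    """Stand-in for an authenticated API call to the streaming service."""
    return {
        "title_id": title_id,
        "resolution": "1080p",
        "content_sha256": hashlib.sha256(b"the actual encoded video").hexdigest(),
    }


def accept_download(title_id: str, downloaded: bytes) -> bool:
    """Play only data that hash-matches the manifest, from peer or CDN alike."""
    expected = manifest_for(title_id)["content_sha256"]
    return hashlib.sha256(downloaded).hexdigest() == expected


if __name__ == "__main__":
    print(json.dumps(manifest_for("tt0111161"), indent=2))
    print(accept_download("tt0111161", b"the actual encoded video"))
    print(accept_download("tt0111161", b"something a random peer made up"))
```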