I am surprised the author did not mention or use Software Defined Networking (SDN), OpenFlow, P4 (a programming language for programmable switches), or the Mininet simulator. Has he skipped the scientific literature, even though he is a computer science sophomore?
I programmed and built one of the very first ISP hardware and software systems in 1987-1997, when we connected the first submarine link between the US and Europe in Amsterdam.
Google switched 50% of the internet that they owned to SDN and OpenFlow in 2012 [1]. I'm sure they have progressed to P4 and more recent SDN controllers since then. They built the Google Fiber ISP [5] with SDN. Cloudflare also used SDN when I last checked. A majority of the internet has moved to SDN (there are many versions).
The author built his simulation on legacy systems mostly from the Telecom world, an alternate reality distinct from the real internet and the access providers we call ISPs. Telecom systems are about surveillance and monetizing the free internet.
You can query the US ISPs on the NANOG mailing list; there are similar forums for the European, Asian and other ISPs on other continents. Beware that those are biased toward Telecom and Tier 1 network operators, and less toward ISP access providers.
I do not think we should continue with the current implementation of the internet. I think we should start deploying the true internet (decentralized, peer to peer) standard and expand it to the Enernet standards of the near future: every building a router (switch), with fiber optic and electricity cables to its peers, its closest neighbors. If every building has peer connections, then you are connected all the way to the internet exchanges without need for Tech Bros, Government, Telecom, ISP or Tier 1 network oligopolies. True internet [3], true Enernet [4].
Well, OpenFlow is pretty much dead: too inflexible, too slow. The whole control/user plane split is an attempt by the classical router vendors to keep their proprietary boxes. It adds complexity, as it requires synchronizing the state of some controller with some data-plane box.
P4 was a great idea, but there's not much hardware that supports it.
fd.io / VPP is an impressive stack for software-only routing. Like all software-only solutions, it suffers from high power consumption and packet-rate variability. At today's packet rates, you always have to ask 'how many CPU instructions/cycles are required to perform this or that function per packet'.
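To make the cycles-per-packet question concrete, here is a rough back-of-the-envelope calculation. The line rate, frame size, wire overhead and clock speed are illustrative assumptions, not figures from the comment:

```python
# CPU budget for software routing, back of the envelope.
# Assumptions (illustrative): 10 Gbit/s line rate, minimum-size
# 64-byte Ethernet frames plus 20 bytes of on-wire overhead
# (preamble + inter-frame gap), a single 3 GHz core.

LINE_RATE_BPS = 10e9      # 10 Gbit/s
FRAME_BYTES = 64 + 20     # minimum frame + wire overhead
CPU_HZ = 3e9              # 3 GHz core

packets_per_second = LINE_RATE_BPS / (FRAME_BYTES * 8)
cycles_per_packet = CPU_HZ / packets_per_second

print(f"{packets_per_second / 1e6:.2f} Mpps")    # ~14.88 Mpps
print(f"{cycles_per_packet:.0f} cycles/packet")  # ~202 cycles
```

Roughly 200 cycles per packet at worst-case packet size on one core: every cache miss or branch mispredict in the forwarding path eats a visible chunk of that budget, which is why this question dominates software data-plane design.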
Thank you for the context. I did start out with Mininet, then moved to Containernet and then to containerlab. Mininet could not model the subscriber session lifecycle the way I wanted. P4/OpenFlow are on the radar; thanks for the pointer.
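For readers who have not seen containerlab, a minimal topology file looks roughly like this. The lab name, node names and container images here are hypothetical placeholders, not the author's actual setup:

```yaml
# Minimal containerlab topology sketch (hypothetical names and images).
name: bng-lab
topology:
  nodes:
    bng:                       # the gateway/router under test
      kind: linux
      image: bng-sim:latest    # placeholder image
    subscriber:                # one simulated subscriber host
      kind: linux
      image: alpine:3
  links:
    - endpoints: ["bng:eth1", "subscriber:eth1"]
```

Deployed with `containerlab deploy -t bng-lab.clab.yml`; because the nodes are ordinary containers, you can run real session-management daemons in them, which is what Mininet's lightweight hosts make awkward.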
We only need solar energy at one dollar cent or euro cent per kilowatt-hour (it will get much cheaper still!) and a few batteries for the convenience of using electricity when the sun does not shine.
In the north and south you need more solar panels in winter than in summer, by a factor of 50. But that pays itself back in summer, when you have a squanderable abundance of free and clean energy. We can spend that surplus energy on purifying drinking water, melting iron ore or aluminum [5], melting reusable plastics or purifying silicon ingots.
Storing surplus heat or cold in the ground is another luxury, because it is more expensive than one-cent solar running a heat pump.
Wind and hydro are also more costly than solar, so they are another luxury, with worse environmental costs than pure solar cells.
We need to build the Enernet, a peer-to-peer electricity net and internet between all buildings with power routers, for around 100 dollars per building. You buy and sell your house's surplus solar electricity to the neighborhood, where it can be stored in car batteries. See my Fiberhood white paper [2].
I don't understand the wood argument. Isn't it widely accepted that we need to do burns to manage forests? Wood is a short-term carbon cycle: it releases carbon when it burns, but frees up space to capture it right after. When people live on rural plots and trees fall, should they burn them for heat (and lessen the need for other energy sources) or let them decompose and cause the same thing? It's not the same as extracting deeply embedded carbon sources that won't make it to the atmosphere if untouched (fossil fuels).
Wood and plant burning requires a longer, more nuanced answer than the Hacker News format allows. Humanity must not cut forests or grow plants unnecessarily. If you must use wood to build a house (there are better and cheaper materials in terms of energy and climate-changing emissions; see for example Amory Lovins's book Reinventing Fire or his lectures on YouTube), then first grow those trees in a place that has no natural forest. And then do not burn the wood after you demolish the house. Do not use wood from forests; humans should let the forest manage itself.
The same goes for clearing the underbrush of Mediterranean and hotter-climate forests to prevent forest fires. If humanity had not managed those forests (grazing animals, building roads, harvesting) in the first place, then there would have been no buildup of excess material that sustains wildfires beyond their natural rate.
The trick to forest management is to allow or create small, frequent burns that clean up dry, overgrown understories. Nature does this without our help, and some species even depend on it. If we interfere with this, eventually there's a big fire instead that levels the place.
Morphle here. Thanks for mentioning our work [1]. We still seek funders and students we can teach.
We build on the shoulders of the generation that built Smalltalk but has just retired. There is a huge amount of documentation, science papers and talks on how to implement it all.
We are starting to implement the final step: an autonomous European secure operating system running a hierarchy of virtual machines (message-passing parallel bytecode Smalltalk, Lisp, Erlang) and QEMU VMs, with a modern GUI, in less than 30,000 lines of source code that can be fully understood by a single person.
We improved our hardware to a manycore European Morphle Engine processor architecture that can run microcode, bytecodes, x86, ARM, RISC-V and other QEMU-supported processors faster than the native chips.
We have some funding from Ukrainian drone innovators that need cheap computer chips manufactured in Europe, not controlled by the US or China.
We hope the European Autonomy movement away from US Big Tech clouds, operating systems, surveillance and chips with political kill switches and backdoors built in will fund our operating system and app software.
The University of Potsdam (near Berlin, Germany) and the Hasso Plattner Institute [1] have been actively teaching and researching Squeak Smalltalk for decades. The same goes for Buenos Aires and several other places. Science papers every month for five decades, under many names besides Smalltalk. Weekly online conferences and presentations.
Alan aimed this lecture at this particular audience, the computer science (programming) students at the University of Illinois, where they programmed the second browser, the second broken wheel, 20 years after Alan and Dan had shown them how to do it better.
Dan Ingalls implemented most of Alan Kay's invention of the personal computer; in the following demos he shows how to fix the web browser's broken wheel a bit.
The Lively Kernel would be another way to fix HTML but retain the web. Two demos say it all:
> The Lively Kernel would be another way to fix HTML but retain the web.
The Web is not HTML (and it's not JavaScript). It's URLs. It's a machine-readable graph of clickable references on cross-linked Works Cited pages. It's certainly not Smalltalk-over-the-Internet, and it's not trying to be (at least it wasn't when TBL created it).
The biggest problem facing the Web in the 90s and still today is that everyone who saw it then hallucinated TBL describing an SRI-/PARC-style application platform because that's what they wanted it to be—including people like Alan Kay—who then perversely go on to criticize it for being so unaligned with that vision.
It is both surprising and unsurprising (given this reaction) that the industry managed to make it all the way through the 90s without Wikipedia showing up until after the crash.
I will not invest any time in improving badly designed software. You can't fix a broken wheel. Your HN newsreader app tries to improve the broken wheel. The least you could have done is make the comment edit field WYSIWYG and modeless: see what the text will look like while you type, not after you click update or when you click edit in the thread reader.
Your code is just a very limited web browser. The web browsers and HTML are a very broken wheel. Alan Kay, the inventor of personal computing, explains why: https://youtu.be/FvmTSpJU-Xc?t=961
Alan aimed this lecture at this audience, the computer science (programming) students at the University of Illinois, where they programmed this broken wheel 20 years after Alan had shown them how to do it better.
Paul Graham should not have based HN (Hacker News) on the web and HTML but on WYSIWYG; then you would not have had to fix it with your app.
The Lively Kernel would be another way to fix HTML but retain the web. Two demos say it all:
Dan Ingalls implemented most of Alan Kay's invention of the personal computer; in these demos he shows how to fix the web browser's broken wheel a bit. Their Squeak, Etoys and Croquet fixed it completely:
Warning! Badly broken user interface, I wouldn't trust these programmers to get the end-to-end encryption right.
On the second screen of the app there is already an infuriating bug: they ask for your work email, because then you go higher in priority on their invite-only waiting list. So you type in your email again and again and again, alternating between all your emails, but you keep returning to the form asking for your work email. You check those emails to see if they sent you something to activate your account, but nothing. Exasperated, you try the only other button: sign up with a private email instead. That seems to work, because you leave the infinite loop. But then: zilch, nada, nothing.
Had no problem finding and downloading it from the App Store; then again, it's been ten hours since you posted, so maybe it has only just popped up in the last couple of hours for people in the Netherlands.
[1] OpenFlow @ Google - Urs Hoelzle, https://www.youtube.com/watch?v=VLHJUfgxEO4
[2] The Future of Networking, and the Past of Protocols - Scott Shenker, https://www.youtube.com/watch?v=YHeyuD89n1Y
[3] Fiberhood White Paper https://www.researchgate.net/profile/Merik-Voswinkel/publica...
[4] Enernet - Bob Metcalfe https://www.youtube.com/watch?v=axfsqdpHVFU
[5] Google Fiber built "Fiberhoods", but my own Enernet ISP Fiberhood had trademarked that name earlier, in 2011.