It's possible to shake off OpenVPN and TLS without shaking off use of tun/tap. I use a tun/tap based "VPN" (overlay) and I quite like it. One could even add NaCl to it if so inclined.
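For the curious, the tun side is genuinely small in userspace. A minimal Python sketch (assuming Linux, root or CAP_NET_ADMIN, and the usual constants from linux/if_tun.h), with the NaCl layer and forwarding left as comments:

    # Attach to a tun device and read raw IP packets in userspace.
    import fcntl, os, struct

    TUNSETIFF = 0x400454ca   # ioctl: bind this fd to an interface
    IFF_TUN   = 0x0001       # IP packets, no Ethernet framing
    IFF_NO_PI = 0x1000       # no packet-info header on read/write

    tun = os.open("/dev/net/tun", os.O_RDWR)
    fcntl.ioctl(tun, TUNSETIFF, struct.pack("16sH", b"tun0", IFF_TUN | IFF_NO_PI))

    while True:
        pkt = os.read(tun, 2048)   # one IP packet per read
        # ...seal with NaCl and ship over UDP to the other side...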
I sometimes think Apple removed /dev/tap from iOS because it provided too much potential freedom.
Not every user needs in-kernel performance for their daily routine but there's certainly an argument that every user could use a decent "VPN" that could run on all their computers.
Alas, OpenVPN mindshare is rather strong, to the detriment of existing or future userspace alternatives.
I know that capstone, keystone and unicorn are marketed to the "security community", but these projects are some of the most promising I've seen for programming in general, e.g., for the few people who still might want to write small, simple (and fast) programs that run on multiple architectures.
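A taste of why, using their Python bindings (the addresses and instructions below are just for illustration): keystone assembles, capstone disassembles, unicorn emulates, and the same flow ports across every architecture they support.

    from capstone import Cs, CS_ARCH_X86, CS_MODE_64
    from keystone import Ks, KS_ARCH_X86, KS_MODE_64
    from unicorn import Uc, UC_ARCH_X86, UC_MODE_64
    from unicorn.x86_const import UC_X86_REG_RAX

    # keystone: text -> machine code
    encoding, _ = Ks(KS_ARCH_X86, KS_MODE_64).asm("inc rax; inc rax")
    code = bytes(encoding)

    # capstone: machine code -> text
    for insn in Cs(CS_ARCH_X86, CS_MODE_64).disasm(code, 0x1000):
        print(hex(insn.address), insn.mnemonic, insn.op_str)

    # unicorn: run it without real hardware
    mu = Uc(UC_ARCH_X86, UC_MODE_64)
    mu.mem_map(0x1000, 0x1000)
    mu.mem_write(0x1000, code)
    mu.reg_write(UC_X86_REG_RAX, 5)
    mu.emu_start(0x1000, 0x1000 + len(code))
    print(mu.reg_read(UC_X86_REG_RAX))   # 7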
Silly question: Historically, did anyone ever attempt to create the APL primitives as separate utilities? The crazy idea that keeps recurring in my mind is that one could have a UNIX userland made of APL primitives. k's mmap approach, avoiding I/O, is preferable. But even if one used a named pipe or UNIX socket to do IPC, perhaps it could still be fast enough to be useful or fun. Feel free to dismiss this idea, but please kindly explain why it would not work.
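To make the question concrete, here is a toy sketch of the pipe variant in Python -- the utility names and the newline-delimited wire format are invented for illustration, and rank/shape/encoding are exactly the open questions:

    # Each "primitive" is a filter over newline-delimited integers.
    # Imagined usage:  aplprim iota 5 | aplprim addscan   ->  0 1 3 6 10
    import sys

    def iota(n):                     # index generator (0-origin, as in k's !5)
        for i in range(int(n)):
            print(i)

    def addscan():                   # APL's +\ as a stream filter
        total = 0
        for line in sys.stdin:
            total += int(line)       # integers only, in this toy
            print(total)

    if __name__ == "__main__":
        iota(sys.argv[2]) if sys.argv[1] == "iota" else addscan()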
As jxy noted, APL2 actually was used that way in the mainframe days (I had the pleasure of using it, and it WAS a pleasure compared to any other facility offered on that system).
However, it's hard to replicate the unix userland as K primitives. Unix pipes work well because every participant streams along implicitly; by contrast, one of the subtle and often invisible truths of APL and K programming is that everything sort of assumes the length of every vector/list/axis/dictionary is accessible. That assumption could be removed, it's just that AFAIK it never has been -- and the implications of removing it would likely make APL or K less concise. Streaming in APL and K tends to be explicit.
K and APL are ridiculously useful, and once you've actually grokked the idea, it's hard to look at conventional programming languages (Java, C, C#, even Python) and not feel disgusted at the needless verbosity and fragile building blocks. Some people use them as their shell, and have no issue delegating to the "standard" unix userland. In fact, an older version of K IIRC had its "create directory" operation implemented as {system "mkdir ",quote x} or something like that. It might not look that way to an outsider, but extreme pragmatism is strong with the K community.
Except my lack of creativity using k at this point.
Thanks for this.
As a k noob, I wonder: is thinking of solutions using iteration and control structures a bad habit, at least until I have command of the rest of the language?
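To make the question concrete (Python standing in, so both styles are readable to non-k folks; the k fragment in the comment is one idiomatic spelling):

    xs = [3, 1, 4, 1, 5, 9, 2, 6]

    # loop-and-branch habit
    total = 0
    for x in xs:
        if x > 2:
            total += x

    # whole-array habit: filter, then reduce -- k's  +/x@&x>2
    total2 = sum(x for x in xs if x > 2)
    assert total == total2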
Currently I'm using recordio to save my sessions; crude, but it works well enough.
The problem is that a unix pipe can block, so you can see it as lazy partial evaluation: grep a HUGEFILE|head returns as soon as 10 lines are found. In APL that semantics is not built in; things like ⍴↑↓ can be lazy, but that is an internal optimization rather than part of the language specification.
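In Python terms (as an analogy, not a claim about any APL implementation), the pipe behaves like a generator chain:

    from itertools import islice

    def grep(pattern, lines):
        return (l for l in lines if pattern in l)   # lazy, like the pipe

    with open("HUGEFILE") as f:                  # name from the example above
        for line in islice(grep("a", f), 10):    # head: stop after 10 matches
            print(line, end="")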
In some ways, IBM APL2 [1] was trying to achieve that with all of its APs (auxiliary processors, APL's own way to do IPC) and built-in workspaces. You can use APL as your login shell, since it really has everything a normal shell/CLI provides and more. Mainframe APL had all the account management capabilities for people to log in remotely through their phone lines, at almost the same time as Unix systems started to flourish.
Unix, and later the Linux/BSDs, took off largely because they were open source and easy to duplicate and deploy (a C compiler everywhere), but APL had its corporate baggage and remained a niche (very much like kdb+ wouldn't go open/libre, and IBM is still charging a fortune for APL2).
Sybil attacks do not work in small, member-formed communities where the members already know each other.
If a system forces all users to be part of some large, Borg-like, distributed hash table, or ledger, then by my definition it's not "fully decentralized".
Indeed, if you don't plan on writing distributed systems that work for more than a few people, you don't need to worry about Sybil attacks. However, the nice thing about the internet is that it connects billions of people, so here we are.
I think there's a lot of historical evidence over the last few thousand years that people naturally form small communities, or at least small groups within large communities.
Today, people can, in theory, choose from among billions of peers to form these small groups. And the groups can, if they so choose, connect with each other via a network of networks.
This internet "connects billions of people". True. But your company's LAN probably does not connect that many.
If a user started creating numerous fake identities on the LAN, then it's likely she would be detected.
Is it possible to create distributed "LANs" over the internet?
(rhetorical question)
Another commenter questioned why a distributed Web needs "lack of trust".
People in small groups can and do trust each other. No computers are needed to make this happen.
Fortunately the two approaches are not mutually exclusive. There are no rules about how the "distributed Web" must be constructed. As the old saying goes, there's more than one way to do it.
I think this more and more every year. The amount of technological obsolescence is growing with each passing day - and some of it for no good reason. Getting rid of analog jacks that feed directly into analog devices (like headphones) is just stupid. Putting DACs and amplifier circuitry inside cables and headphones for the purpose of listening to high quality audio is just stupid.
AWS is fine as long as... you do not want to control it with shell scripts alone, without a large scripting language.
(No Perl, Python, Ruby, Go, etc.)
I tried this when I first experimented with AWS after I read the story behind it, i.e., the directive Bezos allegedly gave to disparate groups within Amazon to make their data stores accessible to each other.
The AWS documentation claimed everything could be controlled via HTTP. Great. I know HTTP. Sign me up.
I have no trouble interacting with servers via HTTP using the Bourne shell and UNIX utilities, without using large scripting languages. I have been doing so for many years.
But after a few hours trying to get AWS to work using UNIX it was such a PITA I gave up. And I do not give up easily.
It turned out there were small errors in the documentation, so even if one followed their specification to the letter, things still would not work.
The Amazon developers in the help forums would just say use the Java programs they had written.
Of course AMZN had a "web interface" from Day 1. But I have little interest in another hosting company with a web GUI.
At the time all Amazon offered for anyone interested in the command line was Java. Installing OpenJDK and a hefty set of "Java command line tools" just to send HTTP requests? This did not inspire confidence.
Then came Python. Everyone loves AWS. How can anyone criticize it?
I concluded that if AWS was well-designed (in keeping with Bezos's alleged directive) then it would be possible to interact with it without having to use a large scripting language and various libraries.
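For a sense of what "just HTTP" entails, here is only the key-derivation step of the request signing AWS settled on (Signature Version 4; the early API used a simpler scheme, and every value below is fake):

    import hashlib, hmac

    def h(key, msg):
        return hmac.new(key, msg.encode(), hashlib.sha256).digest()

    def signing_key(secret, date, region, service):
        # four chained HMACs before a single request can be signed
        k = h(("AWS4" + secret).encode(), date)
        k = h(k, region)
        k = h(k, service)
        return h(k, "aws4_request")

    key = signing_key("FAKESECRETKEY", "20150830", "us-east-1", "s3")
    string_to_sign = ("AWS4-HMAC-SHA256\n20150830T123600Z\n"
                      "20150830/us-east-1/s3/aws4_request\n"
                      + hashlib.sha256(b"<canonical request>").hexdigest())
    signature = hmac.new(key, string_to_sign.encode(), hashlib.sha256).hexdigest()

Each of those HMACs can be reproduced with openssl dgst in a Bourne shell, but get one byte of the canonical request wrong and the server returns an opaque 403.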
I guess I am either too stupid or I set the bar too high.
AWS, as I understood it back then (before the massive growth), is a wonderful idea but I am not sure the implementation was/is as wonderful as the idea.