[SDA] Haha, are you kidding me? Golang is a million times better than C++! It's way more efficient and powerful, and it's so much easier to write clean and maintainable code. Plus, Golang has built-in concurrency support and garbage collection, which makes it much easier to write scalable and high-performance applications. C++ is just outdated and clunky, and nobody wants to deal with its ridiculous syntax and endless memory management issues. Golang is the future, my friend!
If true, it looks like the most un-Apple product since the iPhone. It will probably be years until they officially showcase this product. The potential backlash from the market is enormous.
The argument that we were too poor to generate waste somehow doesn't resonate with me. The amount of toxic pollution going into rivers and groundwater was much higher than in developed countries.
The industrial sector got virtually no environmental oversight; all that mattered was beating the West in the Cold War.
I used to drink a lot of Kool-Aid from Erlang. It is true that they got the concurrency model right. They embraced the actor model, messages between processes are copied over for a good reason, etc. However, I don't think that is enough to call it a day.
Compared to modern languages, Erlang lacks a lot. Elixir is trying to fill those gaps, but how many layers of abstraction can you add to an ecosystem before it is unusable? With Go and Rust available, Erlang/Elixir looks like a very good tool, but for a very limited pool of use cases: routing/filtering/messaging.
I fundamentally disagree with your premise, despite seeing how you came to that conclusion. Elixir/Erlang are particularly optimized for the operations you're speaking about, but Elixir is very Lisp-ey under the hood. Macros are a game changer. Combine that with a strong standard library (much of which delegates down to Erlang calls anyway), the pure developer joy that comes from coding in Elixir in both the small and the large, and the increased debuggability that comes from highly readable, functional code.
But the real power comes from the BEAM. Turns out modern servers map very strongly to phone switches of the past, and the distributed system primitives given by the BEAM keep on ticking, 30 years later. Modeling a web server as a single process per request, the supervision model, and the power of preemptive scheduling is something I don't see in other languages, at least as explicitly. Preemptive scheduling is really a wonderful thing, and I don't think Go or Rust provide this. Please correct me if I'm wrong. This is to say nothing of the observability, hot code reloads, or any of the more fundamental parts of the BEAM that you wind up needing in practice.
I'll be frank: I think Go is an unnecessarily verbose language. I don't like reading it, and any time I've had to write it, I have not enjoyed it. I find Go's concurrency model worse than Erlang's, despite the two being similar at first glance. GenServers are a much better abstraction to me than goroutines and friends. If it weren't for Rob Pike and the marketing of Google, I don't think it would be nearly as popular as it is. Rust's type system is great, and the borrow checker is a fantastic addition to type systems, especially in that class of language, but I have no use for Rust in my daily life. It is on my short list of languages to become more familiar with, though.
"Modeling a web server as a single process per request, the supervision model, and the power of preemptive scheduling is something I don't see in other languages, at least as explicitly."
That's how most production websites of the past 20 years have been built, but these services are pushed up to the OS level rather than the language level. Apache, PHP, CGI, and everything built on that ecosystem used a process-per-request model. The OS provided preemptive scheduling. If you were doing anything in production you'd use a tool like supervisord or monit to automatically monitor the health & liveness of your server process and restart it if it crashes. The OS process model restricts most crashes to just the one request, anyway.
There was a time in the early-mid 2000s when this model gave way to event-driven (epoll, libevent, etc.) servers and more complicated threading models like SEDA, but the need for much of that disappeared with NPTL and the O(1) scheduler for Linux, though process-creation overhead still discourages some people from using this model. Many Java servers are quite happy using a thread-per-request or thread-pool model with no shared state between threads, though, which is semantically identical but with better efficiency and weaker security/reliability guarantees.
Now, there continues to be a big debate over whether the OS or the programming language is the proper place for concurrency & isolation. That's not going to be resolved anytime soon, and I've flipped back and forth on it a few times. The OS can generate better security & robustness guarantees because you know that different processes do not share memory; the language can often be more efficient because it operates at a finer granularity than the page and has more knowledge about the invariants in the program itself. One of the interesting things about BEAM (and to a lesser extent, the JVM) is that it duplicates a lot of services that are traditionally provided by the OS or independent programs running within the OS. In some ways this is a good thing (batteries included!), but in other ways it can be frustratingly limited.
I think you're right that this will flip back and forth; but the key difference in my mind between the process per request model of Apache and friends, and the process per connection model of Erlang is that in Erlang, I can do a million connections/processes per machine, and that would be very unfeasible with Apache.
Both approaches _do_ give me a very straightforward programming environment for isolated processes, although the isolation guarantees are smaller in Erlang. I'd like to think it's easier to break the isolation for cross process communication with Erlang, but that's probably debatable.
In my mind, the Erlang model is validated by the Apache model, but it adds scale in a way that doesn't require a mental flip to event-driven programming (although BEAM itself is certainly handling I/O through event loops with kqueue or epoll or what have you underneath).
For #1, recent versions of Linux will happily let you create threads or processes with 4K stacks now. They also don't actually allocate the memory for the whole process, they just map pages, and then the page fault is what assigns a physical page to a virtual address, so if you never touch a memory location it doesn't exist in RAM. For #2, new processes get COWed from their parent and can inherit file descriptors as well, so all the read-only data (executables, static data, MMapped files, etc.) is essentially free. #3 is a legitimate reason why language-based solutions are faster (they don't have to flush the whole TLB on context-switch, and know exactly which registers they're using), but mostly affects speed rather than concurrent connections.
in Erlang, I can do a million connections/processes per machine, and that would be very unfeasible with Apache.
A very niche use case, and even more so in the context of serving HTTP requests, where the JVM/Go/C#/Rust and even Node.js will smoke Erlang because it can't compete on raw performance.
One reason I occasionally look in on DragonflyBSD is that its implementation of lightweight kernel threads seems like a compelling approach to addressing some of those trade-offs.
Well, I guess we diverge in our views (besides affection for Go) in that I see the adoption of Go for Docker (initial release 2013) and K8S (2015) as merit-based choices. Go was made public in 2009.
> Programming languages are products, and get used because of the eco-systems they carry along, bullet point features are usually secondary to that.
K8S was initially developed in Java, the decision to switch to Go came later and they are still fighting the language, including having to maintain their own generics workaround.
It's absolutely relevant to the point that "we wouldn't keep using it if it wasn't an effective language" (modulo any disagreements about what "effective" means!). Many languages are heavily used due to network effects (popularity, marketing, community) and platform effects, not solely on technical merit. JavaScript and C come immediately to mind as examples of the platform effect on language selection. (The fact that modern JS transpilers exist merely papers over JS' dominant footing in the Web space.)
I maintain that it is a non-sequitur, if not patronizing, to state the obvious facts about software language ecosystems. My perception remains that Go sufficiently delighted a critical mass of developers, who then proceeded to create said ecosystem. Mere marketing cannot engender a vibrant community.
Please see my first post in this thread. As mentioned, I do agree that sans Rob Pike, Ken Thompson, and the Google host, the language would likely have languished in semi-obscurity. But if it were an entirely flea-ridden dog, no amount of marketing would have afforded it the mind share it possesses.
Yeah I could have omitted that line, but I do still think there's truth in it. If it weren't from a large company writing a ton of tooling in it (Kubernetes in particular), I think adoption would be significantly lower. It would be nonzero, and I don't mean to suggest it would be zero, but it would not be in the "top popularity class" in my opinion were it to not have that marketing arm behind it. I also think it's more optimized to Google's developers (read: huge army of disparate technical levels) than small/medium or even some larger shops. It's great that Kubernetes can be written in it, and that's a point in favor of it. But that doesn't make it a great language.
What marketing? The only time I hear that is from people mad that Go is popular when they don't like it; I've never seen marketing from Google toward Go. The language is popular because it's powerful yet very simple to onboard, and the standard library and documentation are good. That's why it's popular, not because of Google.
You mention Kubernetes, but forgot all the other widely used projects that are not from Google: Docker, Grafana, etcd, all the HashiCorp tools (Terraform, Packer, Consul, ...), Prometheus, InfluxDB, Hugo, CockroachDB, etc.
Rust offers no specific scheduling; only type system affordances for describing important properties around parallelism and concurrency. The standard library gives an API for the OS’s threads, and soon, an API for defining cooperative tasks. Being a low-level language, you can implement any sort of primitives you want. There’s an actor library built on top of said tasks, for example.
The thing is, the BEAM model doesn't have a bright future, because it can be replaced by Kubernetes, which is language-neutral; almost all the features BEAM provides are better done in k8s (HA, deployment, etc.).
As for hot code reload, I've never seen why you would need it, since you can use blue/green or canary/rolling deployment. The only reason I see is to keep some state in your app, which I think is a terrible idea.
Two other things:
- deploying an Erlang/Elixir app is difficult (even with distillery...)
> As for hot code reload, I've never seen why you would need it, since you can use blue/green or canary/rolling deployment. The only reason I see is to keep some state in your app, which I think is a terrible idea.
Most applications at least have connection state, at the least a TCP connection. It is at minimum disruptive to disconnect a million clients and have them reconnect. Certainly, your service environment needs to be able to handle this anyway [1] in case of node failure, but if you do a rolling restart of frontends, many active clients will have to reconnect multiple times which adds load to your servers as well as your clients. Actually disconnecting users cleanly takes time too, so a full restart deploy will take a lot longer than a hot code reload, unless you have enough spare capacity to deploy a full new system, and move users in one step, and then kill the old system.
Certainly, hot loading can introduce more failure modes, but most of those are things you already need to think about in a distributed system -- just not usually within a single node; ex: what happens if a call from the old version hits the new version.
[1] There are some techniques to provide TCP handling, but I'd be surprised to hear if anyone is using them at a large scale.
It depends on what you mean by state; I was talking about internal state in the application. Your example is about network state like WebSockets, not REST APIs (what 99.9% of people use). Even with that, it's easy to roll out new connections with canary deployment, and with a load balancer in front you can replace old instances with new ones with no disruption and drain the old instances.
Even if the connection is cut, in your client logic you should have a proper reconnection mechanism.
Hot code reload is imo a bad practice and should be avoided.
Hot code reload is imo an enabling practice, and should be done everywhere possible. Restart to reload may be useful or practically required for some deployments, and it's sort of a test of cold start, but it's so disruptive and time consuming. I've done deploys both ways, and time to remove host from load balancer, wait for server to drain, then add back is time I won't get back. You can do a lot more deploys in a day when the deploy takes seconds; which means you can deploy smaller changes, and confirm as you go.
If it's disruptive and time consuming, it means you're not using the right process/tools. If your CI/CD pipeline is properly set up (and it's actually easy to do), you don't have to do anything.
I'm not sure if you can totally replace the lightweight BEAM processes with k8s equivalents. Sure if throwing more resources to scale more horizontally is not a top concern for you, then it probably doesn't matter much. But BEAM does make things much more efficient and less costly in general.
Also, message-passing and the actor model is not a particular design focus of k8s compared to BEAM.
Have you tried edeliver? It makes use of distillery and I find it easy to deploy with. I guess it all boils down to your server architecture, but you should give it a try someday.
> Compared to modern languages, Erlang lacks a lot.
The "a lot" part is emphasized. It has to be something massive? Classes? Objects?
With the same token one can say most other languages lack a lot as well. That "lot" would be fault tolerance. Notice how most operating systems today use isolated processes instead of sharing memory like Windows 3 or DOS did. There is a reason for that. When the word processing application crashed, it would take down the calculator and media player with it. So modern operating systems have isolated concurrency units. And so do languages built on BEAM VM.
And of course you could still spawn OS processes and run a language that uses shared memory between its concurrency units (threads, goroutines, callback chains). But you can't spawn too many. Or even worse, everything has to run in a container, so now you have a container framework as another layer. And the question then becomes "how many layers of abstraction can you add to an ecosystem before it is unusable?" ;-)
> Elixir is trying to fill those gaps, but how many layers of abstraction
Elixir is not built on top of Erlang the language. It compiles to the same BEAM VM bytecode, and the intermediate representation layers between the running bytecode and the language syntax were already there. They didn't add another layer on "top", so to speak.
> They embraced the actor model,
Don't think so. The creators at the time had no idea about the actor model. They embraced fault tolerance, concurrency and low response time most of all.
> That "lot" would be fault tolerance. Notice how most operating systems today use isolated processes instead of sharing memory like Windows 3 or DOS did.
Rust has full memory safety for concurrent code (if you restrict yourself to the Safe Rust subset), unlike other commonly-used languages such as C/C++, Go, JVM/CLR-hosted languages etc. This provides a far more general model of concurrency than something like the Actor model; it can also express other patterns very naturally, such as immutability (like Haskell), channels/CSP (like Go), plain old synchronized access etc. Of course Actors can be expressed too, as can isolated processes ala Erlang (that's what exclusive mutability is for!) but you're never restricted by the model.
Good point. Rust does have good memory safety in regard to concurrency and checks it at compile time. I'd say it's probably the most exciting development that happened in programming languages in the last 10 years or even more.
But I also think Rust has a steeper learning curve and is too low-level for many cases. It wants the user to think long and hard about lifetimes, memory layout, ownership, whether to use threads (can you spawn 1M threads easily?), whether a crashed thread can be supervised and restarted, or whether to use futures (promises?) instead. Those are useful things to think about and might make sense when writing a storage layer, a fast proxy, or a network driver, but that's too many low-level choices to weigh when, say, I want to add an item to a shopping cart.
> Elixir is not built on top of Erlang the language. It compiles to the same BEAM VM bytecode.
Actually it is, at a semantic level. Elixir source is reduced to Erlang's abstract syntax tree format, which is represented in Erlang terms. The same Erlang compiler is used to generate BEAM code for both languages. This isn't just a detail - the ramifications of using Erlang semantics permeates the language. But not to the language's detriment in any way.
You're right. I thought it went straight to the core representation like LFE does, but I checked and see it translates everything to an Erlang AST. It's just that the Erlang AST allows representations that are not necessarily valid in Erlang (variables don't have to start with an uppercase letter, rebinding is allowed, etc.).
The Actor Model is a formalization of what needs to be done for IoT and many-core computers. The ideas were circulating widely before work began on Erlang even if the engineers did not read the literature.
The problem is that Erlang and its BEAM-based descendants still seem to be the only languages that actually do get concurrency completely right. Lots of languages have some form of an actor model, sure (whether as libraries or - like Pony - baked into the language), but all of them seem to rely either on OS threads per process (which are obscenely heavy) or green threads / coroutines (which lack preemption) (if you happen to know of any other languages/runtimes that offer lightweight preemptive concurrent actors/processes, let me know).
Until that happens, BEAM is unfortunately a hard dependency on getting the concurrency model fully "right".
(Actually a meta-reply to several comments.) Erlang got a lot right a long time before anyone else, but when that happens, it is a strong temptation to assume that everything about that early success is fundamental to the solution and anyone lacking it can't possibly succeed. But you don't really have the evidence for that, because for a long time, you had only one data point, and you can't draw lines (let alone n-dimensional hyperplanes) through one point meaningfully.
The evidence says that modern people mostly don't care about distribution, don't care about stop-the-world GC in a very large percentage of cases (especially as stop-the-world time keeps getting shorter), and don't need Erlang-style introspection very often. I know how useful the latter in particular is, because I also used Erlang for a long time, and I used it a lot. But again, what happens, especially when you've got the only solution, is that the one solution selected early gets worked into the system very deeply and looks very fundamental, but that doesn't mean it's the only viable way to do it. So when running an Erlang system, I needed a lot of introspection to keep it running. But when I run non-Erlang systems, I do not simply collapse into a heap and wail that I don't have introspection and watch helplessly as the system collapses around me; I solve the problem in other ways. Entire communities of people are also solving the problem, sharing their solutions, and refining them as they go, just as Erlang did with its solutions.
The Erlang community has basically been complaining for the whole nearly 10 years of Go's 1.0+ existence that it doesn't have every single Erlang feature, but it was never about having every single feature that Erlang has. Erlang is a local optimum, and while I think it's a very interesting and educational one (and I mean that, quite sincerely and with great respect; anyone designing a new language today ought to at least look at Erlang and anyone designing a language with a concurrency story ought to study it for sure), I'm not even remotely convinced it's a global optimum. To get to any other optimum does mean that you'll have to take some steps back down the hill, but if all you look at are the steps down the Erlang slope but not the steps up some other slope, you won't get the whole story.
(I would call out the type system in particular as something deeply sub-optimal. I understand where it came from. I understand why someone in 1986, with 33 fewer collective years of experience than we have now, would say that the easiest way to have total isolation is to use total immutability, and the simplest way to ensure types can't go out of sync is to not have types. But it crunked up the language to have immutability within a process (that is, "A = 1, A = 2" could have totally worked like any other language without breaking Erlang itself; have separate operators for what = does today and an operator for "unconditional match&bind" and everything works fine), when all it needed was to ensure that messages can't contain references/pointers but can only copy concrete values. And it doesn't solve the problem of preventing things from going out of sync to simply not have types, because you still have types in all of the bad ways (if two nodes have two different implementations of {dict, ...}, you still lose in all the same ways); you just don't get types in any of the good ways. It was a superb, excellent stab at the problem in 1986, and again, I mean that with great respect for the achievement. But in the cold harsh world of engineering reality, it is one of the huge openings for later languages to exploit, and they are. There are others, too; this is in my opinion one of the biggest, but not the only one.)
The type system is far from the static type system we get in many other languages nowadays. Though I would say from a productivity/code maintenance perspective I haven't found it to be a problem yet. It's very hard to introduce bugs in a functional language unlike in many other dynamic languages. If you mean efficiency and dynamic typing being a hindrance to AOT then yeah this is one big sore point of Erlang, I agree, though in the majority of cases the system can still function well even with this hindrance.
Syntax-wise Elixir already allows you to reassign a variable so the "A = 1, A = 2" example you mentioned is moot for the developer productivity point of view (though I understand that it's still valid from an efficiency standpoint, since Elixir actually just creates new variables with different suffixes under the hood).
>don't care about stop-the-world GC in a very large percent of the cases
Stop-the-world GC is a pain to deal with in tons of domains. Games, communications, control, various system-level tools. Just because a lot of web developers don't care about those doesn't mean the domains are small.
One of the most balanced, insightful and respectful critiques I've read on the topic. Brightened up my day reading it. Thanks.
I've stated previously that I'm an Erlang fan, for several of the reasons you've highlighted. I similarly don't believe it's a "global maximum".
Perhaps the most saddening observation is the number of languages that have come after Erlang - intended for server-type loads - that haven't learned from and built on its strengths in concurrency and fault tolerance.
I remember a separate discussion between Joe Armstrong and Alan Kay[0] where Kay posed the wonderful question (paraphrasing): "what comes next that builds on Erlang?"
That's a tempting prospect. My personal wish list would include 1st class dataflow semantics and a contemporary static type system and compiler that's as practically useful as Elm's today.
The key point is to build on what's been proven to work well, not throw it away and have to re-learn all the mistakes from scratch again.
Actually from my dabblings with Erlang I never felt the need to deal with Elixir, then again I was a big Prolog fan during university days (yes I do know that the resemblance is only superficial).
Not to be contrary but I’d call it more than superficial, the inventors of Erlang were using Prolog before they created Erlang. Also, IIRC the first version of what would become Erlang was actually a Prolog variant.
However, the region of mutual exclusion can have holes so that:

* activities can be suspended and later resumed
* other activities can use the region of mutual exclusion while a message is being processed by another Actor
For example, a readers/writer scheduler for a database must be processing multiple activities concurrently, which is very difficult to implement in Erlang.
It is like complaining that Prolog got logic programming right but that comparing to “modern languages” (whatever your own definition of that is: Rust? JS? Haskell? Swift?) it lacks a lot. Prolog, as Erlang, was never designed to be a general-purpose language.
> Compared to modern languages, Erlang lacks a lot.
Could you elaborate on that? What exactly does erlang as a language (tooling/libraries/frameworks aside) lack in your opinion? What do those modern language have that erlang doesn't?
Go and Rust are for a different situation. While they both have good concurrency capabilities (from using Go; from reading about Rust), neither has OTP. Erlang isn't just a language; it's basically a distributed operating system (nodes on the same or different computers). It's been a while since I've used Go, and I haven't used Rust for anything beyond small toy stuff to get a feel for it, but does either have, built in, the distribution capability of Erlang?
Many of the systems I build do require routing/filtering/messaging and I have yet to find a more pleasant environment to work in than Erlang. I can agree that the ecosystem is a bit lacking if you want to build a quick web application, but out of curiosity, what are you referring to when you say that Erlang lacks a lot compared to modern languages?
How do you feel about Phoenix (and, by extension, Elixir) as something that might scratch the 'quick web application' itch?
As a web developer, I've been quite happy learning Elixir/Phoenix in the past year, and learning Erlang has been on my list, so I'm very interested in hearing from people with more Erlang experience when it comes to 'web stuff'.
I must admit that I have not (yet) played with Elixir. It looks very exciting though!
I don't do much "web stuff" with Erlang. The closest I go is probably HTTP-based control interfaces for other services (routing, validate input, do something in the system and return a response).
I usually turn to Python and Django when I want to create web stuff. Generic views, the forms API, DRF, the admin, and an ORM that integrates well with all of the aforementioned are godsends if you just want to get something online quickly.
I still have an old Django project that I wrote for a customer back in 2007. Besides upgrading Django twice a year and updating the UI a little, it has been running without problems ever since. And I still find my way around the code instantly.
I have yet to try something that comes close to being as convenient to work with.
I love Phoenix, but creating forms in that framework feels needlessly complex.
The Formex library is outdated and doesn't seem to work on Phoenix 1.4, but it makes form creation so much better.
With the Phoenix form library you have to add code in several files (schema, context, controller, view, and template). I can never remember all of it correctly, so I have to refer to the PragProg textbook all the time. Formex requires less.
I don't believe this is some sort of deal breaker, but it is important enough. There are tons of web apps that require forms, CRUD operations, and admin-like features to insert data via the web.
I don't see a point here. Elixir doesn't actually fix all that much, and other modern languages still haven't managed to get the concurrency model right. So the answer is that we need new languages, not Go and Rust: languages with high-performance actor model runtimes and AOT compilers, which, while we're at it, address the modern security problems of speculative CPUs, 3rd-party packages, etc.
I wonder what you see that Elixir doesn't fix. Syntax-wise, Elixir definitely makes writing the code a much much more pleasant and enjoyable experience. And I would say the package ecosystem has been great and really promising so far. I guess the biggest complaint against BEAM would still be its performance and lack of easy AOT compilation, as you wrote there. However, the dynamic nature of Erlang/Elixir makes the development process much easier and more rapid in many cases. Also, compared with most of the dynamic languages that I've used so far, it's amazingly easy to maintain and bug-free. Of course you can always pine for the one perfect language, but PL design is always a process of making tradeoffs, and IMO Erlang/Elixir already satisfies the vast majority of use cases while being extremely enjoyable to work with. There are also some attempts to develop statically typed languages targeting the BEAM though the message passing between actors makes it not that easy.
I don't think Ada has anything like the Rust borrow-checker! A feature-set comparable to SPARK will need to wait until a better characterization of Rust Unsafe code is achieved, but in the long term it is absolutely a goal to be able to write Rust code that carries "contracts" for its use of unsafe features, and/or "proofs" that the code meets some specification.
Ecosystem and various BEAM details aside, Erlang has tons of great stuff, but it is often difficult to use, which limits it hugely and makes it feel "old", etc.
Simplifying, modernizing, and exposing all of the tracing and debugging and releasing and other goodies inside of BEAM would be huge.
The fact that Rust and Python are quite high in the rankings, makes me hopeful about the future of programming again.
The fact that remote work is not growing is a bit concerning, but if people from the valley want to overpay - it is none of my business. I hope eventually the market will correct it.
Not OC, but for me it's just a real wonder why it's not better embraced at this stage of the game. We have all the tools to accommodate collaboration within remote teams, and in (most) places the broadband to handle it. Add to this the continued funneling of companies into metro areas where COL is high (NYC, SF, Seattle), so people may find themselves forced into longer commute times just to attain a better COL situation.
I personally am commuting close to 2 hours each way, 4 hours total, because the job market is much stronger in NYC than in my immediate (30-45 minute) area. If a job opens up around here, the salaries are almost 30-40% lower than in NYC, despite our COL still being high.
We have tools to accommodate collaboration but I believe something gets lost when you remove those water cooler conversations and physical interaction.
In regards to your commute, wouldn't taking a 40% lower salary be worth getting back 4 hours every day? I guess it depends on the math, but unless you're being compensated for the commute (in which case it's just work on the train/metro), that seems like an incredible sacrifice for a bigger paycheck. At what point is your time worth it?
That watercooler talk getting lost is absolutely a pain point with remote work, but there are mitigations and benefits that offset it in my opinion.
One option is to re-create that talk. At a past job we would often all fuck around before/after standup calls for a bit, but sometimes that fucking around became work and solved problems. Other times we would just call each other either one-on-one or in small groups just to shoot the shit, and that can recreate that same feeling.
A big part of that is getting over the idea that "calls" are somehow different from walking over to someone's desk. Most of the time you wouldn't hesitate to walk over to a coworker and start chatting, but most people hesitate to call someone on Slack. At a past job that hesitation wasn't there because the culture embraced it, and suddenly we had our watercooler talk back, just over video calls.
One remote-first company I interviewed with told me all their daily calls were video because they wanted to make sure people still felt human. And I think it's such a crucial piece in all this, because people have become accustomed to audio-only, which only further creates a sense of loneliness.
Yeah. I'm remote at a mostly colocated company, and I'm always trying to get my face on other people's screens, whether that's in a stand-up, screen-sharing off the bat during demos, or even just over-commenting in group chats to get at least my name out there.
It's worth it though, and when I'm on-site I get people talking to me that I've never met who recognize me from somewhere I was on a screen.
That is the most obvious problem with developers, thinking that you can solve people problems with tools.
So you are right, something is lost. Skype, Jira, and Slack will not convey the feeling that there is another person on the other side of the line. You will not see coworkers getting sick or going through hardships in life. If you see someone in person, you can tell he had a bad night's sleep and that's why he is upset. You get the watercooler conversation about how the other guy's wife is annoying... You don't care, it shouldn't affect the quality of the work, but no one is a robot. Via electronic communication everything seems so perfect... Then you expect people to be perfect, and they expect you to be perfect, and then you get upset, but really you just have the flu and cannot focus...
By tools I meant for collaborating, not replacing humans. I'm not some robot who just wants to sit head-down because I'm terrible at socializing. I do enjoy being in the office to see faces, but I've also realized this is not a very strong argument for every company being so resistant to remote work.
And as I stated, you can replace this in other ways. Because you're home, you may end up at a gym, a coffee shop, or on group bike rides more often. That gives you a different form of social interaction in the day-to-day to replace "water cooler" talk.
I think "water cooler" talk is overstated because many companies will have different channels to discuss these kinds of things. You no doubt lose the face-to-face human interaction, but there are other ways to account for this. Getting out and socializing with people via hobbies can help greatly here.
As for the commute, I've worked out a flexible work schedule that lets me be home a few days, but as I stated, I'm in a higher-COL state (NJ). So a 30-40% reduction in pay ends up being a fairly big change in QOL. I've learned to make use of the time on the train by engaging in things I enjoy (video games, podcasts), which may be harder at home. Moving closer also doesn't help because (a) it gets even more expensive and (b) QOL drops due to higher-density areas, which makes it harder to ride bikes, garden, etc.
We've accepted the choice we make to be where we are because of what we get from our life outside of work, but it doesn't mean I can't hope for better remote possibilities.
It's not overstated. The only people I've heard say this are the ones that prefer remote work. You're definitely more productive around where the action is happening than in a house with kids screaming.
First off, forgive me if I don't take your word for it, but I just saw an article recently saying that remote workers are more productive on average. Whether that study was just an anomaly or there's some other explanation like "only productive programmers can manage working remotely", it would hint that your hot take represents the world as you imagine it, not necessarily the world as it is.
Secondly there's something like a "no true Scotsman" vibe about the implication that you can't trust people who prefer remote work to comment on their productivity like you can people who are onsite. That may not be the right fallacy, but there's a fallacy in there somewhere.
Thirdly, nobody should be working with kids crying; God invented doors for just this reason. If you can't make a quieter space at home than you can at work, then either you're pulling in some serious perks on the job with a private, sound-insulated office, or you don't have the fundamental amenities to work at home. It's not the nature of remote work that's holding you back, it's showing up to work unprepared.
My time is absolutely valuable and I make it abundantly clear on any phone screens for new roles. I've established my own set hours within the office to accommodate the long commute without impeding my life more than it has to. But the reality of going from 100k -> 60k in a high COL state is quite a change. It's entirely possible, we've been there, but it def requires you to readjust. And as the cost of certain goods keep rising it's not as easy as it was for us 3-5 years ago.
At my last gig, "water cooler" conversations were discouraged because Slack conversations were archived and searchable. No need to wonder what someone said last week, last month, or last year. It's all there and ready to read.
It was kind of stupid to come into work when this rule is in effect.
I've found some of the 'water cooler' talk could easily foster gossip and cliques, leading to people "in the know" getting tapped for special projects and promotions simply because they 'fit in' with a particular manager better, skills be damned. The 'smoke break' phenomenon too: if your boss smokes, figure out a way to get out there and spend time. That's their water cooler time, and the spoils go to the other smokers.
Of course it's not 100% that way, but my own experience has seen it play out a few times that way.
That's definitely a part of it. Much easier to hold people accountable when you physically interact with them. If your manager or owner is abusing this, then I imagine working for them remote would involve a webcam and a key logger. Good managers usually find ways to motivate their workers, but it is definitely harder when you never get to physically interact.
For true remote with employees at their homes, security failures become far more likely.
There is a middle ground. One can open up small offices in small cities. For example, instead of 1000 people in a city of 5,000,000 people, it could be 20 people in 50 cities that have 100,000 people. Each site gets a VPN connection, with the hardware physically secured in a commercial building.
The COL goes way down. The commute becomes tiny for most people. I'm in that situation, and my commute is 3 minutes if I use a car.
Not OP but due to a lack of investment in public transportation in NA over the previous decades, many people spend a few hours a day commuting and in many cases this is done in private vehicles that pollute. For many in our profession there is no need to go to an office to work.
Only if you consider churning out code to be your work; I for one prefer to work in an office for a number of reasons. Not everyone's a robot that doesn't require social contact; not everyone has a good work-life balance allowing them to not have to be in the office to fulfill their needs.
Interesting, I haven't put much thought into the environmental impact of resistance to remote work. I would disagree with your second point however, I believe that you lose something when work is performed remote, regardless of your role. That benefit certainly isn't worth destroying the planet.
I (re)started my consultancy in part so I could work from home. My commute had gone from 2 hours a day to 3 hours a day with the rise of traffic in the Toronto area.
I meet with clients a couple times a month and I am on the phone, slack or webex/ringcentral/google hangouts etc.. a few hours a week.
I am _sooooo_ much more productive now (with fewer distractions) that I work on average 500 hours a year less than I used to. That doesn't even begin to factor in the quality of life issues. I now drive 8000km per year versus over 20,000km per year.
Some jobs, some roles, yes, you need to physically go somewhere. For me, for what I do? There is no benefit.
First of all, congratulations! Always scary to (re)start over, and I'm glad it's worked out for you.
I'm a big believer in serendipity. To use a machine learning example, your algorithm needs to have some temperature.[1] Sometimes you want to sacrifice your queen in order to checkmate your opponent in 5 moves. In this situation, that might mean sacrificing on productivity during a project in order to meet with the client more frequently in person, allowing you to develop a long lasting relationship.
I'm sure that things come up in those physical meetings that don't come up during phone calls or Slack for a variety of reasons. In my opinion, if you met with clients a dozen times a month, instead of a couple, you would not be as productive, but you would drastically improve your relationship with the clients.
Loyalty is a currency like any other; it can be earned and spent. You can't quantify the value of a relationship the same way you can quantify productivity, but your improved empathy and sympathy toward your clients' problems will improve your performance. You might also find that your clients trust you more, and give you more freedom and time to find solutions to their problems. Finally, developing relationships will pay off in your professional and personal life down the road, long after you have finished working on the current project.
Of course, if you're optimizing for work-life balance, or spending less time on the road, then this can all be ignored. Take the necessary steps to achieve your desired lifestyle. If your goal is to promote growth, build infrastructure, and deliver value, I believe you lose something by going remote.
Personally I think the best system is a combination of Monday and/or Friday remote, with the rest of the days in a physical location. This allows employees to enjoy parts of the remote lifestyle, while still keeping many of the benefits of meeting in person.
Not OP, but I'd like the world to be moving toward a paradigm where physical location is not a significant factor in career / pay / advancement. It makes things more "meritocratic," and it puts pressure on some of these big tech hubs to keep their costs of living competitive.
But I don't want to see it if the market doesn't justify it. I think today there is something beneficial to having people working together in an office, but I'd prefer to be proven wrong by some new remote-work management style (or something). And software development is one of the more ideal use cases for remote work, so if it's not expanding for us, it's less likely to expand in other industries.
Python is deeply troubling. It is a regression from FORTRAN and COBOL. Long ago, we invented compile-time type checking. The benefits for software quality have been enormous. There isn't really a downside here, as there would be with the performance loss of garbage collection or bounds checking. Shaking out lots of bugs before even attempting to test the software is a wonderful advance that we made half a century ago. Python's incompatibility with compile-time optimization is also horrifying. The situation is so extreme that you can't even make a decent-performing JIT.
You're awfully unfamiliar with the history of computer science if you truly believe that dynamic typing is some sort of recent invention or that static typing is some panacea that solves all of your problems.
Both systems of type checking have existed essentially as long as programming languages have existed, and, sadly, so have the endless comparisons and flamewars.
Python was already strongly typed long before Python 3.
Python does not have static typing as a built-in part of the language. It has had the ability to annotate arguments and returns of functions/methods since 3.0, a standard-library module containing helpful code to use this for type hints since 3.5, and the ability to annotate variables since 3.6. Annotations are a completely optional feature and no built-in part of Python will check these annotations or analyze code for correctness in advance of execution; there are third-party tools to do this, if you want it.
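A minimal Python 3 sketch of that point: annotations are recorded as plain metadata, and the interpreter itself never enforces them. Only an external checker such as mypy would flag the mismatched call at the end.

```python
# Annotations are stored on the function object but not enforced by CPython.
def add(a: int, b: int) -> int:
    return a + b

# The annotations are just inspectable metadata:
print(add.__annotations__)
# {'a': <class 'int'>, 'b': <class 'int'>, 'return': <class 'int'>}

# CPython happily runs this despite violating the annotations;
# `mypy` (a third-party tool) would report the type error in advance.
print(add("x", "y"))  # -> xy
```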
Also, Python most certainly does compile -- the CPython interpreter is a virtual machine which runs bytecode, and Python source code is compiled to bytecode for that VM. There isn't a requirement to run a completely separate standalone Python compiler ahead-of-time to generate the bytecode (if bytecode isn't available, Python will compile source to bytecode on a per-module basis as those modules are loaded), but that doesn't mean it isn't compiled.
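You can watch that compilation step with the standard-library `dis` module, or trigger it directly with the built-in `compile()`; a small, self-contained illustration:

```python
import dis

def square(x):
    return x * x

# CPython already compiled this function to bytecode when it was defined;
# dis.dis shows the VM instructions that actually execute.
dis.dis(square)

# compile() exposes the same source-to-bytecode step for arbitrary source;
# the resulting code object can then be run by the VM via eval().
code = compile("1 + 2", "<string>", "eval")
print(eval(code))  # -> 3
```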
As to "strong" typing:
"Strong" typing is a term that's only vaguely defined, but most commonly refers to whether a language will implicitly coerce/cast values of incompatible types in order to make an operation succeed. Consider this code:
a = 1
b = "2"
c = a + b
In a strongly-typed language this is an error¹. Depending on other aspects of the language, it may be a compile-time error or it may be a runtime error, but the important thing is that the third line of that sample will never successfully execute. In a weakly-typed language, the third line could execute, and would assign a value of either 3 (if the string is coerced to number) or "12" (if the number is coerced to string). And in fact, in Python the third line above raises a runtime TypeError, since str and int are incompatible types for the "+" operator to work with.
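This is easy to verify in a Python REPL: the mixed-type addition raises at runtime, and any coercion has to be spelled out explicitly, which is exactly what "strong typing" means here.

```python
a = 1
b = "2"

# Python refuses to coerce: mixed-type "+" raises TypeError at runtime.
try:
    c = a + b
except TypeError as exc:
    print("TypeError:", exc)

# Coercion must be explicit, and the result depends on which way you convert:
print(a + int(b))   # -> 3
print(str(a) + b)   # -> 12
```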
Static typing refers to a situation where both names and values have types, and where all attempts at binding must involve names and values of compatible types. For example, in Java:
int a = 3;
The name "a" is declared to be of type int, and the value 3 is of type int, so the binding of the value to the name succeeds. Attempting to bind a non-int value to the name "a" would fail. In a dynamically-typed language, only values have types, and the type of a value does not restrict which names it can be bound to.
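A tiny Python illustration of that last point: the name carries no type of its own, so it can be freely rebound to values of different types.

```python
# Only the value has a type; the name "a" does not restrict what it binds to.
a = 3
print(type(a).__name__)  # -> int

a = "three"              # legal in a dynamically-typed language
print(type(a).__name__)  # -> str
```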
You can remember this easily by considering why the name "static" is used: it's because you can perform checks of name/value types and bindings statically, without needing to run the code to determine types. In some languages (like Java) this is accomplished by requiring all names to be explicitly annotated with their types; in others the types will usually be inferred automatically from usage, with the option to annotate when desired or to resolve ambiguity.
--
¹ Yes, yes, I know someone on HN is going to suffer a terrible career-ending injury from how fast his knee jerked at that and possibly from breaking his wrists in his rush to post "Well actually there may be a type defined somewhere that's a union of string and number, so how dare you say that's an error when you don't know if someone might be using such a type!" My advice is not to be the type of person who suffers severe injuries due to such an obsessive need to nit-pick, because reasonable/charitable readers will correctly understand the example with no difficulty.