The biggest win by far, IMHO, is a performant alternative to async frameworks for IO-intensive applications.
Not using an async framework tremendously simplifies your code.
Code complexity has an enormous impact on everything else, especially at the business level. First and foremost, the cost of development and the time to market rise. And then comes the cost of maintenance.
If you want to run concurrent io requests, don't you still need to use an async framework and all the fork/join logic? I suppose the slow and naive way is now easier to let limp along as you won't run out of threads?
JVM itself would do async I/O as needed using whatever API the underlying OS provides. Your own I/O code would be straightforward and synchronous. Nothing will run out of threads.
The key point is concurrent calls. If you're calling out to multiple microservices or what have you (the case where you would actually hit 1M threads in the first place) you'll want to do that in a concurrent way with futures or fork/join and not in a synchronous style.
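For what it's worth, a rough sketch of that kind of fan-out with plain futures running on virtual threads (JDK 21); the fetch method and the service names below are made-up placeholders for blocking calls:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class FanOut {
        // Stand-in for a blocking call to some downstream service.
        static String fetch(String service) throws InterruptedException {
            Thread.sleep(100); // simulate network latency
            return "response from " + service;
        }

        public static void main(String[] args) throws Exception {
            // One cheap virtual thread per task; blocking inside each task is fine.
            try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
                Future<String> users = executor.submit(() -> fetch("users"));
                Future<String> orders = executor.submit(() -> fetch("orders"));
                Future<String> inventory = executor.submit(() -> fetch("inventory"));

                // Joining is just a blocking get(); the calling code stays linear.
                System.out.println(users.get());
                System.out.println(orders.get());
                System.out.println(inventory.get());
            } // close() waits for submitted tasks before returning
        }
    }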
One could argue that automatic promise wrapping and an await keyword end up giving you cleaner, simpler code (despite the coloring).
The reason I dislike async/await is that the code doesn't clearly express what it actually does. I also dislike asynchronicity built directly into the language. It's just one more piece of context you have to keep track of, and the more context you need in your head to understand the code, the greater the potential for bugs, imo. It is good that Java forces one to explicitly write out their intentions.
I really like the approach Java has chosen: keep the language itself as dumb as possible, but add all the niceties to the standard library and runtime.
They added the structured concurrency package because you still need some kind of async framework for concurrency.
Maybe the confusion is just the term "framework" as opposed to API? I don't mean a third party library is needed. The built in JDK alone is sufficient but you're still juggling promises/futures and the like.
The complexity of an async lib/framework such as projectreactor stems from the fact that your code stops being linear. And so does the dataflow.
This is the core of the issue. You trade simplicity for performance.
Using futures you can still write simple linear code.
The problem is that thread-based execution doesn't scale well when the threads are mostly waiting for IO, such as an HTTP request.
That is, it was a problem until virtual threads arrived.
You don't really need anything; they just make forking and subsequent joining + exception handling easier. You can just use the decades-old Thread API over virtual threads as-is if you wish, but this "structured concurrency" concept, similar to goto vs structured control flow, will make it much more productive and easier to reason about.
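To illustrate, here is a minimal sketch using the StructuredTaskScope API that ships as a preview in JDK 21 (the exact shape has shifted between preview releases, and fetch is again just a placeholder for a blocking call):

    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.StructuredTaskScope;

    public class StructuredFanOut {
        static String fetch(String service) throws InterruptedException {
            Thread.sleep(100); // placeholder for a blocking network call
            return "response from " + service;
        }

        static String handle() throws ExecutionException, InterruptedException {
            // Subtasks run on virtual threads; if one fails the other is cancelled,
            // and neither can outlive the scope; that is the "structured" part.
            try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
                var users = scope.fork(() -> fetch("users"));
                var orders = scope.fork(() -> fetch("orders"));

                scope.join().throwIfFailed(); // wait for both, rethrow the first failure
                return users.get() + " | " + orders.get();
            }
        }

        public static void main(String[] args) throws Exception {
            System.out.println(handle()); // requires --enable-preview on JDK 21
        }
    }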
Again, my point is that you often won't want to be blocking. You'll still want to run multiple async calls concurrently, and to do that the syntax is not the familiar synchronous style. You still need to use more complex apis.
Because Java did not have a good async model for decades, most Java apps are thread-intensive (thread-per-client) and there's an enormous amount of code that can't easily be refactored into async. Light-weight threads help save all that thread-per-client code from the ash pile of history.
But in general it's best to write async code from the get-go because that forces the programmer to compress application state rather than inefficiently smear parts of it on the stack. Better state compression (because smaller stacks) -> smaller memory footprint per-client (or whatever) -> faster (because of higher cache hit ratios because of less cache thrashing because fewer memory accesses).
We are made of meat. It’s rarely worthwhile to sacrifice developer effort to conserve hardware, and if you do manage to break even today, you probably won’t in the future. Async Java was pretty painful and only penciled out because the cost of a million native threads was just ludicrously higher than everything else.
While this is definitely good news, we need to be very careful. Why? Those new fancy threads are stored on the heap and need to be garbage collected. If you buy into the ads and believe that you can create millions of threads for free, then, well, it is not gonna work in production.
Before getting too excited, I advise watching Tomasz Nurkiewicz's lecture on the subject - https://www.youtube.com/watch?v=n_XRUljffu0 - it explains the trade-offs here.
1. Platform threads place a heavier burden on the GC. It's true that virtual threads are allocated on the heap, but platform threads are GC roots, which is worse. The GC easily deals with a gazillion heap objects; it's rather unhappy with lots of roots. The number of heap objects that virtual threads occupy is roughly the same as the number of heap objects that async code allocates anyway.
2. The Jetty experiment measured the wrong thing as they misunderstood the origin of the "million thread" scenario. What happens in a real application is that you have some number of threads with deep stacks servicing incoming requests -- say 50K concurrent sessions -- and then each of those fans out to, say, 19 micro services in parallel, each of those outgoing requests is done on a virtual thread with a very small stack, and that's how you get to 1M threads. I.e. when you have a high number of threads, only a small minority of them (5% in this example) have a deep stack.
3. I don't think anyone would claim anything is a silver bullet. All virtual threads do is let a server service the same throughput as asynchronous code does, but the code is much simpler and it is observable, i.e. easily debuggable and profilable, something that async code can't do.
Regarding #1, wouldn't the stacks of the lightweight threads have to root any object on them? Otherwise the GC would free objects out from under the virtual thread, right?
I could imagine that by having fewer physical threads running, the stop-the-world part of garbage collection could suspend the runtime more quickly. That could reduce the effect of GC-pauses.
Virtual thread stacks reference the objects that local variables on the stack reference, but they are not themselves GC roots. GC roots are special objects that the GC starts its scan of the heap from, and they tend to be particularly costly, at least for most of OpenJDK's GCs. Virtual threads are just ordinary heap objects that can reference other objects.
> I could imagine that by having fewer physical threads running, the stop-the-world part of garbage collection could suspend the runtime more quickly. That could reduce the effect of GC-pauses.
Precisely. Although it's worth mentioning that while that's true for G1, ZGC does not stop-the-world when scanning roots, including platform thread stacks (https://openjdk.org/jeps/376).
As long as virtual threads retain an entrypoint of control flow (e.g. return point from an I/O call), they will also be GC roots. They might not be very deep but they are GC roots.
When a call returns, locals and parameters back up the stack will be expected to be live. Since there's no way in general to create a reference to a stack using JVM instructions (unlike .NET), the stack of every live thread must be a GC root.
If you want some more detail, when a virtual thread is in the runnable state, it is reachable from the scheduler (which itself is a Java object, and not a GC root); when it is blocked on a lock or IO, then the lock object or the IO mechanism must retain a reference to it, or there would be no way to unblock it. The thread object has a reference to the stack, which is a heap object (actually, it could be made up of several heap objects).
A thread that is not strongly reachable can provably no longer make progress -- it must be blocked but there's no way to unblock it -- and will be collected even if it has not terminated. It may live forever in our hearts, but not in the heap.
Interesting to read. It's a technical distinction with a primarily implementation-level difference, which I don't yet understand (i.e. have not taken the time to read up on yet), but I infer from the fragment that I did read that there is some degree of semi-magical hoop jumping going on to make the CPU stack live in a Java heap object to which a reference can be taken in Java code.
Objects are obviously rooted for blocked virtual threads that may resume - a formal understanding of them being GC roots - but the implementation appears to be by taking a reference to the heap object containing the stack at the moment of being blocked, presumably by a JVM native method or similar.
> Objects are obviously rooted for blocked virtual threads that may resume
If by "rooted" you mean reachable in the object graph when starting the traversal from the roots, then yes. If a blocked thread isn't reachable, there is no way to call its unpark method that resumes it.
> the heap object containing the stack at the moment of being blocked, presumably by a JVM native method or similar.
Yes, we implemented virtual threads on top of continuations that, in turn, are implemented inside the VM. Their stacks are reified as heap objects.
They have to be scanned in every collection and G1 scans them in a stop-the-world pause. Other references may not be scanned at all in most collections (partial), and when they are, G1 scans them concurrently. They're less of a problem with ZGC.
> Those new fancy threads are stored on the heap and need to be garbage collected. If you buy into the ads and believe that you can create millions of threads for free, then, well, it is not gonna work in production.
A typical request easily creates hundreds of objects that need to be garbage collected. Adding a single thread object on top of that means absolutely nothing. And how much garbage do you think reactive frameworks create? I can tell you it is a lot more.
> Before getting too excited, I advise watching Tomasz Nurkiewicz's lecture on the subject
Don't waste your time. He is pretty clueless. I remember him complaining about lack of backpressure and composability. But this comes pretty much out of the box with loom.
The same way synchronous code always has: by blocking. Blocking queues with or without buffers offer sophisticated forms of backpressure (with different levels of "slack"). Backpressure in synchronous code is a problem that was solved long, long ago. It only required special handling in asynchronous code.
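A minimal sketch of that idea: a bounded BlockingQueue between a fast producer and a slow consumer is the whole backpressure mechanism (the item type, capacity and counts here are arbitrary):

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class BackpressureDemo {
        public static void main(String[] args) throws InterruptedException {
            // A bounded queue with capacity 16; that bound is the backpressure.
            BlockingQueue<String> queue = new ArrayBlockingQueue<>(16);
            int total = 1_000;

            Thread producer = Thread.startVirtualThread(() -> {
                try {
                    for (int i = 0; i < total; i++) {
                        queue.put("item-" + i); // blocks whenever the consumer is 16 items behind
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            Thread consumer = Thread.startVirtualThread(() -> {
                try {
                    for (int i = 0; i < total; i++) {
                        queue.take();    // blocks when there is nothing to consume
                        Thread.sleep(1); // pretend processing is slower than producing
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            producer.join();
            consumer.join();
        }
    }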
Based on that I found this reddit discussion where Ron Pressler (Loom project) commented:
"The posts read like they’re leading to a negative conclusion, but at the end of part 2 it turns out that the current Loom prototype does perform as well as async in their experiments ..."
You can even use a million native threads without an issue on Linux. But unless they are mostly idle you probably won't be able to serve a million concurrent clients, whether you use threads or not.
Yes, there's no silver bullet. Writing async code from the beginning is a much better approach to reducing memory footprint. But fibers/virtual threads/whatever-you-call-them will help scale many thread-per-client apps w/o having to pay for a rewrite as async code. That is worth a lot because the size of the thread-per-client Java codebases is enormous.
Async code is the wrong fit for Java, and it complicates everything tremendously. There also is no reason to believe async code should consume any less (or more, for that matter) memory than Loom-style concurrency.
If anything, I would assume that, after a few rounds of optimization, Loom will be significantly better at managing memory than a Java async framework.
Yes, async is a bad fit for Java. That's a problem with Java, not a problem with my statement.
> There also is no reason to believe async code should consume any less (or more, for that matter) memory than Loom-style concurrency.
I'm not familiar with Loom. I was referring to async vs. threaded. Async code does make the programmer make state explicit in objects rather than partially implicit as local variable values on the stack, and this is more compact than smearing part of the state on the stack.
That's pretty confusing without some more context, yeah. Not the best article in this regard.
> In fact, in very early Java versions, the JVM threads were multiplexed onto OS threads (also known as platform threads), in what were referred to as green threads because those earliest JVM implementations actually used only a single platform thread.
> However, this single platform thread practice died away around the Java 1.2 and Java 1.3 era (and slightly earlier on Sun’s Solaris OS). Modern Java versions running on mainstream OSs instead implement the rule that one Java thread equals exactly one OS thread.
The JVM was originally designed to run as a single thread because of a bunch of factors that were relevant at the time:
It was originally designed to run on embedded-ish platforms without any real OS. And in such an environment it makes perfect sense to do threading at the VM level (also, implementing it that way is not that hard for a bytecode interpreter; as an additional bonus you don't have to think about issues like concurrent accesses to internal structures of the VM and stopping the world for GC).
The time when Java was designed more or less overlaps the time when the first "mainstream" operating systems with OS-level threads (i.e. Windows NT and Solaris) were also designed, so it could not exactly assume that the underlying target supports OS-level threads. For client platforms you had Classic MacOS and 16-bit Windows, both with multitasking models where the concept of a thread does not really make sense, and Windows 95 with the NT-derived Win32 API and real threads. In the server space you had various Unix flavors that might or might not have OS-level threads, but those that had threads had mutually incompatible APIs, and then you had "Network Operating Systems" (e.g. Netware) that were marketed in a way that presented the absence of real multitasking as a "performance benefit".
In this 90's environment, a typical large application that was intended to be portable included a somewhat extensive platform abstraction library that more often than not included an implementation of something similar to green threads (with POSIX standardizing ucontext_t and friends as a portable-ish API to build such a thing on). You can probably find remnants of such a layer in Firefox code to this day (and probably in other large, originally proprietary software packages that were later open-sourced).
I found the StackOverflow accepted answer to be a very clear explanation; it summarizes and re-iterates what matters: "With Virtual Threads, multiple virtual threads can run on multiple native threads (n:m mapping)"
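A minimal sketch of what that n:m mapping means in practice: the JVM multiplexes all of the parked virtual threads below onto a small pool of carrier OS threads (the one-million count is only for illustration):

    import java.time.Duration;
    import java.util.ArrayList;
    import java.util.List;

    public class ManySleepers {
        public static void main(String[] args) throws InterruptedException {
            List<Thread> threads = new ArrayList<>();
            for (int i = 0; i < 1_000_000; i++) {
                // Each virtual thread parks while sleeping, freeing its carrier OS thread
                // so the same small pool of carriers can run the other 999,999.
                threads.add(Thread.ofVirtual().start(() -> {
                    try {
                        Thread.sleep(Duration.ofSeconds(1));
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }));
            }
            for (Thread t : threads) {
                t.join();
            }
            System.out.println("finished " + threads.size() + " virtual threads");
        }
    }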
I really hope this is the end of complicated reactive frameworks! I love the old blocking spring controller paradigm, thread local and so on. It makes things much easier. Never liked webflux, so complicated and hard to debug. Simple things become a project!
I could not agree more. And that's coming from someone that converted to almost completely async/reactive/flux-y code. Reactive frameworks, almost by necessity, force developers to write less maintainable and readable code (objective fact, not an opinion). Lightweight threading is a significantly better approach to the same problem space.
NIO solved the problems with threads on the network in Java 1.5 (a seminal cornerstone release that also brought the concurrency package and rewrote the entire JVM memory model), but only in 1.7 did the epoll solution become stable.
2004 -> 2011, 7 years!
Now they hopefully can work around the kernel for file descriptors (network and disk) saving 30% CPU globally on all Java servers.
A while back I saw an implementation of a network stack in user space... unfortunately the author has a beef with Java and the Java version was written quite poorly; but despite that, all versions were significantly faster than the kernel (go figure).
Really what you are proposing is that server Operating Systems and hardware should have a "general NIC" for mundane shared tasks, and a dedicated NIC for handing over to a process and saying GLHF.
> Virtual threads offer a more efficient alternative to platform threads, allowing developers to handle a large number of tasks with significantly lower overhead.
they should have just called them "Tasks" leaving the already overloaded term "virtual" out of the conversation.
We went through a lot of iterations on what they should be called. They were originally called Fibers, but since they were extremely thread like it was decided they should be actual threads, so they should be some type of thread. Once that decision was made the question then becomes which adjective to add to Thread, and things like LightWeight, Small, etc. were ruled out as smaller or more lightweight threads might be introduced in the future.
IMHO virtual is the perfect word. The JVM's entire schtick is abstracting (virtual) hardware (machine). A simple API pertaining to intent, that can be materialized depending on OS and hardware.
Calls are virtual, though under the hood the compiler may inline them.
Memory is managed, though through analysis some allocations will go on the stack.
Threads are now (no full grandfathering, but they tried) virtual, and the runtime may now optimize them differently on different OSes and hardware.
JavaFX is not really a part of Java and isn't actually shipped with most builds of OpenJDK. Azul Zulu is the only build that I know of where you can get FX built in.
So with this, the last thing Go had going for it over Java is gone, right?
Java has an obviously better type system, while Java originated the billion dollar mistake, Java also at this point has much better practices around handling nulls than Go, Java's jars are more portable than go's binaries, Java's GC performs better, Java has a more mature ecosystem and more libraries... Java has better IDE support and comparable compile times.
I guess at this point the only major difference is that you can teach a 3 year old to write Go more easily, so ChatGPT produces correct Go more easily than Java, and the dependency management story differs a little (though I don't think you can really call a winner or loser on that one, it's just different)
As a language java's not that bad at this point. But it still has cultural issues.
Every java project I encounter tends to be overengineered, has tons of useless boilerplate code and ends up throwing mile-long stack traces as a result. Also, somehow maven manages to be even more unreliable than npm as a package manager. I still frequently encounter situations which seem to only get resolved by throwing away my .m2 folder.
Furthermore, since the language has evolved so much in recent years, it's hard for newcomers to know what the best practices are. There are at least six different ways to iterate over map values. Which one should I pick?
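For what it's worth, the two idioms you will probably see recommended most often today look roughly like this:

    import java.util.Map;

    public class MapIteration {
        public static void main(String[] args) {
            Map<String, Integer> scores = Map.of("alice", 3, "bob", 5);

            // If you only need the values, loop over values():
            for (int score : scores.values()) {
                System.out.println(score);
            }

            // If you need keys and values together, loop over entrySet():
            for (Map.Entry<String, Integer> entry : scores.entrySet()) {
                System.out.println(entry.getKey() + " -> " + entry.getValue());
            }
        }
    }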
> I still frequently encounter situations which seem to only get resolved by throwing away my .m2 folder.
I have my own issues with Maven, but never in my 16 years doing Java have I done that. What the heck leads you to believe doing that may fix something at all? .m2 is just a cache, it has pretty much zero impact on whether your stuff will build unless you had installed things there that are not available on a configured repository or something similar, which would of course be your mistake not the tool's.
Depending on what is in your cache, Maven will satisfy dependencies differently. You can go out of your way to nail down your dependencies (analogous to a Javascript lock file) but it goes against the grain of how Maven is designed. Maven encourages pretty sloppy dependency resolution by default, and that's what you'll see if you look inside the artifacts you depend on.
There may be other issues as well, because this was an issue that affected Maven throughout my years of using it, in contrast to SBT, which in my experience had this issue much less often (though it definitely did from time to time.)
Can probably happen like it can happen with any dependency management system - if you set wildcard versions (or no version at all?), different systems can have different versions of the dependency cached. Eventually this can lead to surprise build failures.
I stopped working with Java not because the language or the ecosystem. I stopped working with Java because Java developers and their culture of over-engineering everything defending it as "clean code" and "good practice".
But this seems a taboo topic in a culture infested with "good practice gurus".
Theoretically, choosing a JVM language with a good culture seems like a very attractive option. Scala has its own cultural issues, but maybe Kotlin gives you most of the nice parts of Scala without attracting the type astronauts? Maybe Clojure is an option as well, if you're willing to give up static typing? I'm leery of a language with the full power of Lisp macros, but from a distance, the Clojure culture seems to be very pragmatic.
This seems pretty uncontroversial in the Java space.
I think most Java developers will wince at clean code. Clean Code made sense in contrast to the pervading gang-of-four norms that preceded it, but I don't think many people would recommend the style today.
In my experience, there's a non-trivial number of "Java developers" stuck in the Clean Code or Design Pattern All The Things modes of development. They want every project to look like a J2EE demo.
Those types of Java devs can use a lot of interesting sounding terminology but they overengineer everything and commit ludicrous amounts of code that doesn't tackle the problems at hand.
Is your program really complete if you haven't implemented all of the design patterns in the GoF book? How will it even work without an AbstractFooFactoryFactory? How else can you make stack traces unreadable to the point of being useless?
If it's snapshots causing issues that clearing .m2 is fixing, try adding `-U` to your mvn execution instead, that forces snapshots to be updated.
Do you run `mvn install` for your local project under development, or any of its locally built dependent modules? Are all of those at a snapshot version? If not, I think you'd need to delete it from .m2 (or clear all of .m2) to get updates to it into .m2, since even for the local cache I don't think maven will override a cached version during install.
Gradle is a better build tool from a purely architectural point of view (though it is quite disliked, I believe mostly due to the ultra-complex Android build system being built on top of it and giving it a bad name), but the Maven repo system is absolutely better than most contemporary repositories, as proven by its comparatively rare supply-chain attacks.
It's not like the kotlin DSL comes without problems. Configuration time is most of the time about doubled compared to Groovy, type hinting and completion often end up being slower _and_ worse than Groovy, and if you're lucky, your IDE suddenly can't find a random extension in your classpath and your entire build.gradle.kts or plugin becomes a red, squiggly mess.
Unfortunately, Gradle is the best build tool I've used for complex systems. And it still fucking sucks.
I dislike it because of the ultra-complex build system that over-ambitious devops engineers ended up building in my shop. Our build system was the hardest part of our code base to reason about, which is pretty screwed up.
Well, that's on them - had they been developers in any other language they could have made the same mess there. I have seen my fair share of random custom Maven plugins as well, which have more bugs than features.
Pre-java 8 (Java.old) and post-Java.8 (Java++) are philosophically two different languages.
Now, 21 is coming and while the changes aren't quite as in your face, the change in philosophy is just as radical.
Java would be better regarded if less time was spent on "look at this cool new feature" and more spent on "this is how you should use Java now". Or even, "here's how to safely modernise your codebase".
Venkat Subramaniam does some good talks on this but I would prefer if the Java language team were leading it.
By the "billion dollar mistake", are you referring to null references [1]? But null references were introduced in 1965 in Algol, by Tony Hoare. They long predate Java.
I don't think it's about the existence of null references, but about how Java uses them. Null references can be useful; Kotlin, for example, has made them useful with nullable types.
Using nulls in Java is mostly a choice on the part of developers; even if you can't migrate to a JVM-targeted language like Scala, you can still adopt practices like null objects [0].
My experience in small companies tells me that the younger generation didn't choose Go because Go is objectively better than Java, but because they actively hate Java for being out of fashion.
I think Java is an ugly language riddled with roundabout ways of implementing modern features, but I can't fathom why anyone would pick Go over Java. It's slightly worse in almost every way other than writing shell scripts or massively parallel workloads from scratch.
In my experience, Java's problem is the same as C++'s when Java began to take over: you usually encounter it in projects stuck at ancient runtime versions, with ancient libraries, full of code nobody dares to touch in case something collapses. The same will happen to Go, and it'll happen to whatever popular language will displace Go as the trendy language to learn; it's just a consequence of keeping around legacy code and not spending the time and money on migrating as soon as possible.
I wrote Java code for 20+ years and I can tell you exactly why I prefer Go: it produces native binaries.
I mean, it’s also less verbose, easier to start a new project, faster to startup, has far fewer configuration knobs, has native dependency management, is far easier to build CI/CD for, compiles more quickly, has very few NPEs gotchas, has value types, and avoids idiomatic boilerplate.
Go is far from perfect, but it has clear advantages over Java and I would not go back.
But the thing I love the most is the lack of a JRE. If I build for a target, it runs there.
The Go language maintainers' refusal to add list operations / comprehensions, and having to type them out by hand every time, is something I hate every day while I'm getting paid to write it.
Go generics already shipped. I wasn't around for Java 4 days so I don't really know when the main operations shipped but one language maintainer was categorically against it so I wouldn't hold high hopes.
Go generics are still in their infancy. Java generics enabled a lot of functionality that came later. But java generics and particularly type erasure are nothing to write home about, and Go maintainers were also said to be against them, so who knows if comprehensions or other features might pop up one day.
Well, I listed a number of things I prefer about Go, and the verbosity I was talking about was the classic ButtonFactoryFactory and other naming classics. It’s true however that I ended up throwing that stuff away from my Java code and started to enjoy life again.
But I can say honestly that I prefer Go’s error handling, which I find tends to result in errors which are actionable. I think it takes a lot more effort up front to get Java exceptions to work rationally.
If I could change Java then the thing I’d do is I’d make it necessary to declare all exceptions in a method, but get rid of caught exceptions. So you can see what’s coming, even if you don’t have to deal with it.
In terms of stream operations, well, I write a lot of typescript lately and just like in Java, I find that I end up having to fall back to regular loops quite often. I’m not convinced that stream operations are useful in as many use cases as people would like. For example, the moment something can throw an exception, things are going to get gnarly.
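A small sketch of that gnarliness: a checked exception inside a stream pipeline forces wrap-and-unwrap ceremony that a plain loop would not need (the file names here are placeholders):

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;

    public class StreamsVsCheckedExceptions {
        public static void main(String[] args) {
            List<Path> paths = List.of(Path.of("a.txt"), Path.of("b.txt"));

            // Files.readString throws IOException, which the map lambda cannot declare,
            // so the checked exception has to be wrapped just to keep the pipeline going.
            try {
                List<String> contents = paths.stream()
                        .map(p -> {
                            try {
                                return Files.readString(p);
                            } catch (IOException e) {
                                throw new UncheckedIOException(e); // smuggle it out of the lambda
                            }
                        })
                        .toList();
                System.out.println(contents);
            } catch (UncheckedIOException e) {
                System.err.println("failed to read a file: " + e.getCause().getMessage());
            }
            // The equivalent for loop handles the IOException directly, with no wrapping.
        }
    }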
> If I could change Java then the thing I’d do is I’d make it necessary to declare all exceptions in a method, but get rid of ~~caught~~ checked exceptions
FactoryFactory is (or was) a well known parody of the multiple levels of abstraction Java developers were encouraged to make when performing even simple tasks. This started with an insistence that we should use getters and setters for field access, and it basically went downhill from there.
This kind of nonsense was endemic to the Java engineering culture while I was working in it.
Perhaps the worst example I saw was during the fluent API craze where someone in my team replaced a constructor call for a Button with five lines of fluent Builder pattern gibberish.
The point of mentioning FactoryFactory was to poke fun at the culture surrounding Java, which ended up being one of the reasons I stopped using it, because idiomatic java stopped making sense to me.
Last time I used GraalVM, there were huge holes in what language features were supported, compiling dynamic code was hit-or-miss, and also its cross-compilation story was not good (requires you to compile it on the target architecture). Perhaps my information is out-of-date now?
Yeah I find the argument for Graal to be very disingenuous. It’s certainly not an out of the box solution; I’d say it’s just another Java technology stack we would need to learn. And there are so many of those.
So I guess I also like Go’s batteries-included standard library.
A lot of people don't know what's best, e.g. beginners. A lot of the Go and Node crowd take it on as a mission to fame by stepping on Java. It's worse when a lot of these personalities don't even code, except to make YouTube videos or blogs.
It is about iteration and interpretation for me. For Java, there is more to consider, there is more to build, there is more to run, and there is more to move around. With Go, things tend to be faster and smaller. It is not feature soup... yet.
Instant compilation, single binary deploys, integrated linter, extensive standard library, and less magical performance. It's way fewer decisions and way less ceremony before you start writing actual code.
And, IMHO, duck typed systems are better than the Java style OOP.
I see Java/Kotlin as a secret weapon for startups. Too often I read about startups that struggle with immature libraries, smaller ecosystems and reinventing basic functionality. Problems that they would not have if they chose a mature technology.
These blog posts mention a company that has to write its own database library, auth services or other basic functionality. Sometimes they fix so many issues with the core language that they become close partners with the core developers of the programming language. I can't help but wonder if the competition also reads the post and just smiles before they go back to actually solving a business problem. To me it's a symptom of choosing the wrong stack, even though working on non-business-related problems may be more rewarding to the individual.
> Think of the history of data access strategies to come out of Microsoft. ODBC, RDO, DAO, ADO, OLEDB, now ADO.NET—All New! Are these technological imperatives? The result of an incompetent design group that needs to reinvent data access every goddamn year? (That’s probably it, actually.) But the end result is just cover fire. The competition has no choice but to spend all their time porting and keeping up, time that they can’t spend writing new features.
It's true Java still smells like "corporate" and "slow". And when I say "slow", I don't mean the runtime speed, but the company speed ;)
Until not long ago I was maintaining a bunch of Sun|Oracle|J9 Java 6 + JBoss 4.2.3 + AIX|Linux + PPC|x86_64 ... That was not fun, and the task of moving it to more modern platforms was insurmountable for us. No wonder Azul has support for Java 6 until 2027.
But it's true the platform has changed greatly in every way: The VM, the ecosystem... are so different now. And although as a language, it still feels strange to me (I just use Python), now that I manage a fleet of modern Java and Scala microservices, I don't get scared when I hear the name "Java" :)
It's not. Java programmers I've seen don't use the latest idioms; they use pre-8.0 Enterprise Edition-ware because that's what they've been taught. It's a hideous sight and it has a lot of inertia. That's why I prefer kotlin.
It’s all relative, though. One of the biggest complaints people have about the JS ecosystem is the lack of legacy: people do move from Promises to async quickly, when React switches to hooks everyone switches quickly, etc etc. For better or worse you can’t say the same about Java.
> people do move from Promises to async quickly, when React switches to hooks everyone switches quickly, etc etc
On Reddit or HN? Definitely not in the real world. Plenty of projects, and even recently created ones, don't always use these new ways of working. Even on Reddit, when I see a new project getting posted it doesn't have this trait you mention.
And lots of companies are still on Node 14/16 or even older :( only forced to move by e.g. AWS lambda runtime requirements...
And Node developers are still using ExpressJs and promote it all the time. Guess when Express 4 was released? 5 has been in beta forever with no updates. Yet, most promote it and ignore the other options. Lack of legacy? This thing is 10+ years old. It's barely maintained... what's the difference?
I was going to say, Java is fine, it's the architectures and mountains of code spread across loosely coupled architectures that does it for me. I wouldn't mind a modern day Java project, as long as it's free of the 20+ year old dogmatic practices. I'd have to unlearn a lot of those myself, probably.
Well yeah...? That's a big part of why I mostly use Java, so I don't have to constantly figure out basic stuff in whatever language/framework that's hot this week. The language is just a tool. It's not an ends in itself.
I can spend that time building things instead. If I have an idea I can implement it. Downstream dependencies are rock solid, and the language changes at a manageable pace.
Like I did some stuff in Python the other week, and every other line I wrote I had to stop for several minutes to figure out basic syntax stuff. Just a pain in the ass. Like I probably could take the time and freshen up on Python and be up to speed in a few weeks, but that's a few weeks I'm not moving toward my goals.
Can you compile so that JRE is bundled with the program?
I'm a C# developer and I also have a kind of resistance to Java - having to install the JRE, and now the minefield Oracle made with the JRE. I'm hoping not only that I don't have to code in it, but that I don't have to use ANY Java program, just to avoid the runtime or having to choose between different versions of it.
And then there are the .jar files, and how you execute them on the command line is different (java -jar); maybe it is simple, but it is different from a plain executable file.
You can use jlink to create a custom runtime you can distribute with your application, so that your users don't need to download a JRE/JDK. You'll still need to run this with the java command.
You can use also jpackage to create an executable file you can just double-click (.exe on Windows, whatever on mac and linux).
Yes, in fact there hasn't even been a standalone JRE for quite some years now.
Also, Oracle being a minefield is just bullshit - they are the ones that open-sourced the platform completely, to the point that their paid version is only marginally different, and OpenJDK is the reference implementation. They are surprisingly good stewards of the language.
> Employee for Java SE Universal Subscription: is defined as (i) all of Your full-time, part-time, temporary employees, and (ii) all of the full-time employees, part-time employees and temporary employees of Your agents, contractors, outsourcers, and consultants that support Your internal business operations. The quantity of the licenses required is determined by the number of Employees and not just the actual number of employees that use the Programs.
https://www.oracle.com/us/corporate/pricing/price-lists/java...
Excuse me, but isn't that a minefield?
Another reason I don't want to use Java: now I have to understand what Java SE is, whether the runtime falls under it or only the development tools, how my users use the software, and whether we now have to license every user that won't even use that program, and even any people that interact with our business. Pure madness.
Are you using Photoshop without a license as well or how is that relevant? This is about Oracle’s JDK you specifically have to install and have a paid license to (actually, they also provide a free version if you stay on the latest LTS release at all times), and is meant mostly for governments and such.
Will you uninstall Linux because Red Hat has a paid support version as well?
Do you have to buy a license for a regular Linux kernel? No. The exact same is true for OpenJDK. Just download any build, for example one that is packaged by your distro, or there is sdkman for developers to let you quickly choose from multiple vendors and any version.
> To run your Java 8 application, a user needs the Java SE 8 Runtime Environment, which is available from Oracle under the Oracle Technology Network License Agreement for Oracle Java SE, which is free for personal use, development, testing, prototyping and some other important use cases covered in this FAQ
How is this free? Of course I don't need to pay for the Linux kernel. Of course some products feature paid support. But how does the quoted text square with the claim that the Oracle JRE is free?
I was going to say to the guy who decided there must be no Oracle runtime at our company (some software doesn't work without it; I don't know whether a workaround has been found) - hey, maybe Java SE is just the support/patches stuff and maybe we can use the runtime? Until I stumbled on that text - free for personal use, etc...
You're confusing Oracle's JDK and JRE with OpenJDK and its JRE. Oracle takes the OpenJDK, recompiles it, whitelabels it, and licenses it under their own license. OpenJDK, which is where all the development occurs, is true open source.
If you want to use the Oracle runtime you need to pay Oracle. But the code itself is open source and you can instead use the Azul, Amazon, Red Hat, BellSoft, etc.. runtimes.
Finally got to the answer. So it IS paid from one particular vendor. I understand there are free options. But that makes it a mine in a field if you are not knowledgeable enough. It took going this deep into the thread to really get an acknowledgement that there is a big red O' mine in there.
And there is one piece that won't run without the big red O... :(
> But that makes it a mine in a field if you are not knowledgeable enough.
That seems like looking for a problem where one does not exist to be honest.
The peer comment regarding RedHat is spot on. Yes you can purchase a Linux distro from RedHat and pay lots of money.
That doesn't mean anyone will argue with a straight face that you can't run Linux for free!
It's the exact same scenario with Java. You could pay Oracle for the Oracle JDK if for some reason you really want to, but approximately nobody does that.
The vast majority of Java is open source (OpenJDK). You can get builds from Amazon (Corretto), Microsoft, Red Hat, SAP, ... that do not have the stupid licensing requirements of Oracle.
> they are the ones that open-sourced the platform completely
While I agree with the bullshit claim, Sun was the company that open-sourced Java. That itself goes back to IBM blocking the community process in order to force an Apache-licensed Java implementation; Sun releasing OpenJDK made most people happy without killing its embedded cash cow.
completely is the important word — there were plenty of paid-only tools back then that were part of OracleJDK but not OpenJDK. Oracle made them open-source.
This is just the public perception. Being a full time Qt dev feels more and more like using Oracle software, where the software itself is FOSS and free to use, but the company behind it makes it look like it is not, with all the negative press.
If you wish to avoid the JRE, you can use GraalVM and compile to native AOT executables. It just takes a few minutes to download and play around with. (Even timed it with another Java disbeliever here on HN.)
You can, you just have to have a config file that lists files that might get reflected upon.
This is available for some of the more common libraries (there is definitely more work to do here), and you can also use an agent: run your code on a regular JVM and it will collect the runtime accesses and create that config file for you.
That isn't a general behaviour. You can get a <5 MB binary size for modest apps. You likely have one or two "bad" actors contributing to the binary size. Try the GraalVM dashboard to identify which modules are bloating up the binary.
"Use GraalVM Dashboard to Optimize the Size of a Native Executable"
A single executable is useful for the end user. It is not very relevant on servers or for development.
On the other hand, portability and the debugging experience with jars are vastly better. Just consider that a developer on a Mac with Apple silicon cannot use the same executable as an AMD/Intel server, while jars are CPU-independent.
I think you can with 3rd party tools, as with Python, but why would you? If your app is literally a single executable with no additional resources, maybe it makes sense, but otherwise, just bundle the JVM like jetbrains does.
And as you mostly use Java on the backend, you're probably running Linux, where a free and open JRE is packaged, so just target that and don't worry about it ever.
Assuming you have separate java services, do you really want to bundle a different "minimalistic jvm" on every container, or use containers that share the same base layers?
If you use 200x images based on, say, `FROM docker.io/library/eclipse-temurin:17-jdk`, you only use 230MB + all other JAR/overlay layer sizes.
Yeah. Java isn't particularly sexy. But it gets the job done. I'd even settle for Java 7 still, it was more than enough to get the job done, especially when you are a startup trying to get an MVP out the door that can still be refactored and maintained somewhat easily when you bring in more skilled developers.
You can tell when a programming language conversation has become toxic when negative qualities are assigned to the people who choose (or apparently don't choose) someone's favorite language.
In this case, it's an "irrational fear" of Java to have preferences beyond Java.
It's not just startups where this happens. In older companies you have older "leaders" that claim to have been hit by Java 5-10 years ago so don't trust it. Yet they've never tried it again since.
> Irrational Fear of Java is one of the most confusing things among startup stage companies.
This, so much. It's exhausting to watch startups shooting themselves in the foot by not using Java where it's the best fit, simply because it's not hip.
Golden rule for startups: spend your innovation capital on your product, not on trendy (unproven, immature) implementation technologies. Use boring (aka mature, high-performance, best-tooling) technology.
Startup-stage companies work on smaller codebases and primarily need to go as fast as possible from nothing to a working prototype, without needing many advanced features... Java is a good language, but the community around it, whether it's experienced developers or libraries, is built for enterprises, who work on huge codebases and whose primary concerns are security, performance, scaling, testing, all of which are secondary in most startup environments and usually only start to matter years after the startup codebase is created...
> security, performance, scaling, testing, all of which are secondary in most startup environments
I've done quite a few startups by now and I can say that in all of them the initial quick-n-dirty MVP codebase lasted well into the era where things like "security, performance, scaling, testing" became a priority.
Unless you are absolutely committed to throwing away the MVP (and I don't see how you could because as soon as sales team sees it they start selling it and you won't have time to throw it away), your best early move is to use technologies that will take you to the long haul because you will be stuck with that initial code for a very long time.
(This doesn't mean architecting the whole system for scale you won't hit in years, mind you. Just to use technologies that will make the transition to a mature engineering team easier. Such as Java, given the topic of this conversation.)
In my experience the zero-to-something stage lasts way longer than people think, and the cost of using immature and badly suited stacks like TypeScript/etc bites way sooner than people think. They just keep pushing through it, telling themselves this is faster and that everyone else is doing it so it can't be wrong.
Case in point: I built a new service recently in Kotlin on the JVM. It's a SQL translation gateway that maps an internal representation of a simplified relational query model onto other databases, so you can connect your own MySQL, PostgreSQL, Oracle, SQL Server, etc. Because Java has such a deep ecosystem in this area, I was able to leverage JDBC and jOOQ to deliver support for all the required database types in only a few days of work, with very clean code and full test coverage that tests compatibility with all the target databases.
Imagine trying to do that in Typescript. You don't have a standardised interface to database drivers so you would need to first choose that abstraction and implement it yourself + adapters for each driver you need to support (i.e re-inventing JDBC). Then you need to handle the different SQL dialects as they all have different quoting rules, different LIMIT/OFFSET syntax, different supported LIKE/ILIKE/etc, there is no library for handling this in Typescript, especially not with support for commercial DBs like Oracle, SQL Server and -definitely- not for more esoteric stuff like HANA and Informix, etc. Just writing a dialect agnostic query builder abstraction is weeks of work, actually handling all the dialects and emulating anything that isn't native for your targets is much longer.
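For a sense of what that standardised driver interface buys you, here is a minimal plain-JDBC sketch (the connection URL, credentials and query are placeholders); jOOQ then layers dialect-aware SQL generation on top of exactly this kind of code:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class JdbcSketch {
        public static void main(String[] args) throws SQLException {
            // The same code talks to PostgreSQL, MySQL, Oracle, SQL Server, ...;
            // only the URL and the driver jar on the classpath change.
            String url = "jdbc:postgresql://localhost:5432/appdb"; // placeholder
            try (Connection conn = DriverManager.getConnection(url, "app", "secret");
                 PreparedStatement stmt = conn.prepareStatement(
                         "SELECT id, name FROM customers WHERE region = ?")) {
                stmt.setString(1, "EMEA");
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getLong("id") + " " + rs.getString("name"));
                    }
                }
            }
        }
    }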
So you might say, "but that is just libraries"... however that -is- the entire point. When you buy the JVM you are primarily buying the ecosystem of libraries. The very nice runtime/GC/languages are really just icing. Same way that when you buy Python you are really buying numpy/pandas/PyTorch/etc; the language itself isn't the important part.
Or you could say this is just a problem uniquely suited for JVM and you wouldn't be wrong but isn't that also the point? Use the right tool for the job. JVM is to business logic and solving business problems what Python is to data science and machine learning.
The big epiphany I had recently was that when I build on JVM I don't write much code. I make a design, assemble some libraries in the correct way, iterate on it by refactoring until the abstraction is clean and then turning it on. There is very little mechanical coding involved because everything fits together very cleanly with only business logic being surfaced in actual "code".
I feel more like an architect, less like a code monkey and I can ship results way faster because I don't have to work on code that doesn't directly solve the problem. i.e there are much less yaks to shave.
Fun fact: JDBC was one of the (if not the most important) blocking APIs that people tried to convert or redesign / reimplement as async, and failed miserably.
Now, with the advent of virtual threads this should no longer be a problem! JDBC can block all day long, and your app will still scale without a problem.
Most business applications today have a web UI; using nodejs/JS/TS simplifies things by having to deal with only one language and ecosystem. Most business apps today stick to only one database and maybe an additional NoSQL database. What you did is very unusual for most apps and is something only huge companies would do, like SAP or big banks.
Irrational fear is actually what you see on the Java front: people doing mental gymnastics to avoid touching anything non-Java. Cargo-culting. I've witnessed Java devs proposing to change an existing Go project's build system to Maven because "that's what they know well". Straight insanity.
People who avoid Java don't do it because of some stigma, but because they have been burned before. Remember that a typical Java project is not on the latest version and uses a mix of dependencies that makes using newer features impossible. Then there is no point in updating; you might as well rewrite it. That is the reality.
There are legitimate reasons to choose Go over Java, starting with the build system and the amount of effort needed to produce a single binary. I have used Java for 20 years and Go for ~2. I don't actively hate Java, but I value my time, especially the time that was wasted on Maven + co.
I've used Java for over 20 years and managed to almost completely avoid Maven.
As with npm - I think automatic dependency management at the library level pulls in far too much of the world - most of which your code doesn't actually depend on because the one function you need in lib A, doesn't actually require lib B, and therefore lib B dependencies C&D etc etc etc.
Madness.
If you do dependency management at the source level ( using the compiler.... ) then life is generally much simpler ( reflection being the only aspect you need to manage ).
The key thing to remember is that the transitive nightmare of dependencies that you let Maven manage is largely created by the way Maven works!
So if you don't do it that way, you just need to manage the dependencies yourself, and they are much, much simpler!
In a bit more detail - at the top of your source code is:
import someorg.somepackage.SomeClass;
If you have the source code for someorg.somepackage in your sourcepath/classpath (and the source of any transitive dependencies) then the compiler will magically find all the transitive dependencies for SomeClass at the class level at compile time. [1]
This results in the minimal number of classes you have to ship and as a result the minimal number of transitive dependencies.
Now your build tool/script ( whatever tool you use ) will need to bring in those dependencies from a versions repo somewhere - but that can be your own source code repo ( vendoring I think it's called ).
Yes - you need to keep that build config yourself - but frankly that's time well spent, as it results in you having proper control over your dependencies, and they don't balloon out of control.
Occasions where some random third party has added dependency D to C, which is brought in via an A->B->C chain, and then you get some library version clash, are much, much rarer as a result.
That's not to say I don't occasionally use jar files in the classpath - but that's typically only if that library is self contained, single focus and small - and again you can put that in your build script.
In my view, Maven magic is part of the problem, not the solution - and to sum up why: it hides the cost of the dependencies. If people had to manually add the chain of true dependencies when they decided to use an Apache utility class, they might think twice about whether that chain of dependencies is well designed or not.
[1] With the reflection proviso I mentioned above.
I understand that you mastered producing a single binary with your favorite build tool. In my experience, if a language does not have an official way to build projects it does not end well. Go, Rust and Zig are prime examples that a language should not be separated from its build system.
I would like to point out that the language compiler in Rust (rustc) is certainly separated from its modular build system (cargo).
Also, how so wonderfully and utterly balanced of you to give an old-school Maven pom.xml file with the maven-shade-plugin and exclusion lists, but omit the Cargo.toml and go.mod files.
Also, it's as simple in Java as:

build.gradle:

    apply plugin: 'java'

command:

    gradle build
Can you spot the difference ?
EDIT: Ok, you were asking about creating a single all-in-one fat jar? Add the below to your build.gradle:
build.gradle:

    plugins {
        id 'com.github.johnrengelman.shadow' version '8.1.1'
    }
> utterly balanced of you to give an old-school Maven pom.xml
Sorry, I started using Java a long time ago; I'm not sure what the "recommended" way of doing things is today.
I guess I just need to know which random Github project is the new hotness today that does something that the other languages do out of the box. Thanks for supporting my point of view.
Again, to produce a single binary you need to know/care much less with Zig, Go and Rust than with Java.
The younger generations of coders are indeed more susceptible to hype, but if you go around dismissing things that are hyped, you're going to miss out on a lot of good things. I've got 15+ years of Java experience, and I think Go is a nice language and has some real strengths. From what I can see, the primary difference with Java isn't the build system or the binaries, it's the decision to strongly favor simplicity and readability over expressiveness. The payoff for this is that you can jump into any Go codebase and figure out what's going on fairly quickly. You don't need to pull down the source, fire up an IDE and start drawing object diagrams, you can figure out what it's doing by skimming the source on Github. I find this very useful.
I'm dismissing it because it's dumb, not because it's hyped. Rust also has a lot of hype and is a much better language. You don't have to like Clojure or Algol or APL or Haskell but these languages at least have some intellectual merit. Golang on the other hand... I'm at a loss for words for how deliberately bad it is.
This is really some shameful discourse. It's somehow worse than the usual fare of "nobody but beginners/idiots would choose Javascript if they had other options."
Google is a patchwork family where the siblings constantly fight each other for their parents' favor. All the teams are competing for promotions and money that you get through "impact". Go and Dart are just like all the other Google products that get released with much fanfare and then quickly forgotten once their creators no longer need them for their next brag sheet.
Have you seen the two languages, like.. ever? I’m sorry for the harsh language, but Go is literally much much more verbose than java, and not just recent Java that has since improved, even Java 8.
Citation needed; first off, verbosity is not the problem, verbosity is NEVER the problem; any significant codebase will end up with millions of LOC (and tokens) regardless of language choice.
Second, Java's infamous for having VeryLongClassNameFactoryBeans on the one hand, and deep class hierarchies on the other, with new classes being declared for everything.
Third, it depends. You can set up an HTTP server in two or three lines in Go without 3rd-party dependencies. This [1] is the shortest Java version I've found, and it contains a lot more magic.
I'm biased in favor of Go, but verbosity is not the issue there.
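For comparison, the JDK does ship a bare-bones built-in server (com.sun.net.httpserver, no third-party dependencies); a minimal sketch looks roughly like this:

    import com.sun.net.httpserver.HttpServer;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    public class TinyServer {
        public static void main(String[] args) throws IOException {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/", exchange -> {
                byte[] body = "hello".getBytes(StandardCharsets.UTF_8);
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body);
                }
            });
            server.start(); // keeps serving on a background thread
        }
    }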
No, but nobody's going to use it. They're gonna Spring Boot all the things anyway, no matter how small the task, because "you might need it in the future". CRUD API with 4 endpoints? Spring Boot! Logging server? Spring Boot! That's where the magic is.
Whereas the brave Go developer is going to rewrite a half baked implementation of Spring Boot for every service that ends up growing a little bit more than expected, all on the default go server. Weeeeeeee
"Second, Java's infamous for having VeryLongClassNameFactoryBeans on the one hand, and deep class hierarchies on the other, with new classes being declared for everything."
I have always wondered why Java is hammered for this while Apple is celebrated for this.
> I have always wondered why Java is hammered for this while Apple is celebrated for this.
Indeed.
And it's good to remember that there is nothing in the Java language forcing one to name things verbosely, that's on the developer's personal choice. Just like on every other language.
We were talking about PL features, and now you bring up a style of writing said language (which by the way comes from very different times and is no longer the norm)?
Composition over inheritance has been a mantra in Java circles for a very long time now. The important difference is for example that Go can’t have proper error handling no matter how good the developer is, due to the language not having a proper abstraction over that.
That's a feature not a bug. Go intentionally forgoes abstractions of that sort. I personally prefer well written Java exception handling, but often Java exception handling code is a bolted on afterthought. Go's approach is to make error handling something you have to think about for virtually every line of code, rather than a superstructure built around the code. This can feel slow and painful to write, but it guides the developer toward considering each error case individually, potentially resulting in more nuanced responses to errors. It also makes reading the code dead simple.
Seems to me like it does work, based on many successful projects using it. There's nothing that forces you to handle the error at the place of origin, you can pass it on like you do with exceptions. You just can't bubble it up multiple levels.
Well technically, you could do async and green threads / co-routines / promises / futures etc. (all the same thing technically, it's all callbacks under the hood) on the JVM for many years. Frameworks like Spring, vert.x, and others have supported this for ages. The Loom project just makes it a bit easier to use and provides some JVM level support and optimization for it. With other JVM languages (Scala, Kotlin, Clojure, etc.) this was already quite easy so it's less dramatic if you were already using those languages. My Kotlin co-routines will be using Loom threads under the hood pretty soon but it's not going to massively change how I use them or what I do with them. Or make a huge difference in performance. My code isn't CPU bottlenecked, typically. Like the vast majority of server software (which is typically IO or memory bottlenecked).
Go of course has simplicity as its main advantage. That doesn't go away of course. But it's a double-edged sword and it can be a bit overly verbose / limited for some stuff.
Maybe more interesting is the trend towards native compilation in the Java world. Graal is a pretty big deal. And with languages like Kotlin, there is also Kotlin Native and Wasm as a relatively new option. That's the other advantage Go has had that is slowly becoming less relevant: fast startup times and a simpler runtime (i.e. a statically compiled binary that you start).
ChatGPT is fluent in many languages. I've been generating some usable Kotlin code with it, for example. I'm sure it does Java just fine as well.
I would say that Go's error handling is a billion dollar mistake as well.
The JVM is very impressive and a great thing to build upon. The Java standard library is vast and the developers actually care about it (unlike the Python folks, who gave up on having a sane HTTP client library built-in and instead defer to the third-party requests library).
The Java language is not as great, lacking some quality-of-life features (properties and operator overloading, for example). It often suffers from design-by-committee (e.g. https://openjdk.org/jeps/430 ).
And then you get to the web frameworks. The most popular one is Spring, and it's a pain, with abuse of reflection and magic, bad docs, and other issues.
Depends if they are talking about Spring the framework, or Spring Boot, the "conventions bootstrapper"
Neither of them has bad docs; maybe some sub-project might, but compared to RoR (sorry, never used Django), Spring's project docs are magnificent. Their problem could be navigation for someone new to it, or in the case of Spring Framework, just too many concepts.
So, Spring Framework is basically "make everything configurable, extensible, etc."; its code is full of old Java patterns for generic stuff (the infamous "AbstractSingletonProxyBeanFactory", yeah, not great), to not only have a very full-featured dependency injection, but to allow stuff like AOP and setting up via XML and many other things. This is the base of Spring, so you will have to kinda deal with its concepts if you hit an issue.
Ruby on Rails was a reaction to things like Spring; its philosophy is "convention over configuration", in contrast to Spring's unparalleled configuration options. And so Spring Boot was born as a reaction to Rails.
Spring Boot is at its core two things:
* an annotation engine for Spring configuration, instead of using all those hellish factories or XML
* sane defaults for Spring Framework
There were also many other projects under its umbrella to streamline, along those two lines, other components of the Spring ecosystem, from HTTP APIs to repository patterns.
Ok so reflection and magic:
* Both are full of reflection. That's how it keeps being very generic at its core.
* Magic is an issue of Boot: you just put some annotation and it will do who knows what. Still way less magic than Rails, as at least you see the annotations and can search for them in the docs. Framework isn't really "magic", as you have to explicitly configure everything in either XML or through its classes. (Spring Data does have weird magic, worse than Rails: generating full SQL queries from method names? C'mon; see the sketch just below.)
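A minimal sketch of the kind of derived query meant here, assuming Spring Data JPA (jakarta.persistence) and a made-up User entity; the query is generated entirely from the method name:

    import jakarta.persistence.Entity;
    import jakarta.persistence.Id;
    import java.util.List;
    import org.springframework.data.jpa.repository.JpaRepository;

    // Hypothetical entity, for illustration only.
    @Entity
    class User {
        @Id Long id;
        String lastName;
    }

    // Spring Data parses the method name at startup and derives roughly
    // "select u from User u where u.lastName = ?1" for you.
    interface UserRepository extends JpaRepository<User, Long> {
        List<User> findByLastName(String lastName);
    }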
Nowadays, modern Java (server side) either uses Spring Boot (because it easily supports most kinds of infra you might use, from persistence to messaging), something specific to the use case (Quarkus makes small images and quick startup easy), or no DI framework at all, since most servers nowadays are fine without one and the language has plenty of features so you don't need a container managing injected objects. I actually see no value now in "minimal DI frameworks", because manual wiring is not actually hard except when you have a circular dependency, which you should not have anyway. (The codebases I had the best time working with were either on Boot or had no managed DI.)
Spring Boot and Framework new releases actually require Java 17, with the objective of starting to clean up their codebase from old patterns that were more or less required in the old days.
> Both are full of reflection. That's how it keeps being very generic at its core.
Reflection is not a feature, but a kludge - to conceal how non-expressive the core language is. Add annotations to that, and you might start asking yourself, what value exactly does Java's static type system add here, compared to a dynamic language (say JavaScript)? Most of your errors will remain undiscovered until runtime, so why bother?
Things are changing for the better with Java, of course, with the speed of a glacier, but I don't think we'll ever be able to get rid of Spring proxies, or annotations or reflection.
Unless your language is Coq or Idris, its type system will still be way way less expressive than what reflection can do — java’s type system is quite good, it only misses HKTs, and even where static types are insufficient, it has proper runtime types and error handling, so runtime exploration of dynamic data in a safe way is absolutely possible.
New features make a lot of reflection use cases unnecessary.
Spring already bumped base to java 17, and the plan is to improve the internals, but changing public APIs is a different story.
Still, other options appear every day so you don't have to use Spring. But yeah, glacial speed; with that, though, comes a low churn rate and old stuff needing very little maintenance.
You know there’s official Spring docs, right? You’re complaining that Googling for something shows links to random blogs and Stackoverflow questions. That’s not exactly a phenomenon unique to Spring.
> I would say that Go's error handling is a billion dollar mistake as well.
I think you'll need to substantiate that one with some solid evidence. I don't see anything wrong with Go's error handling that cannot be explained by developer choice.
Having your business code riddled with error handling code is just bad design, and will inevitably result in not properly handling certain cases simply because the developer would prefer to write the actual business logic and can’t always stop doing that.
Exceptions (especially checked ones) allow for as fine- or coarse-grained error handling as needed. Don't get me wrong, Java's checked exceptions are not without problems, but compared to Go almost everything is better in this regard.
The linked article has zero references to golang, so it's unclear if the java community has other golang features they're planning on buying into.
This change improves Java/JVM capabilities and that's great. Java is a great language and it's nice that it is shaking off the stagnation/perception of stagnation it has acquired over the years. Golang also has really great capabilities and use cases. Having written tons of code in dozens of languages it seems to be folly to expect one language to rule them all. The features that you don't have are almost as important as the features you do have.
If Java is looking for additional golang cherrypicks I would love to see their start times improve (short-lived Java processes are painful, Graal doesn't cover all uses). FFI even in Panama is much more boilerplate than cgo. Go's deployment story is so much cleaner and more straightforward than Java's. The amount of engineering hours saved through language/toolchain standardized formatting is just monumental. Those are just 3 things at random; there are a bunch of other great ideas to steal.
I also wouldn't throw any shade at Golang's vibrant library ecosystem. There are just so many great active projects written in pure go that make spinning up new services a joy.
golang doesn't really have other features that Java can take from it, since Java is so rich, other than perhaps embedding. But it's been the case that the Java development process has been doing an excellent job at taking a better approach than what exists in ad hoc developed languages, so it wouldn't surprise me if they came up with a superior alternative.
Java's GC has better throughput, because Go's GC specifically optimises for latency. This may or may not be the right tradeoff for you specifically, but it is a perfectly reasonable tradeoff.
Jars are more portable at the cost of requiring a JVM installed on the target, whereas Go's statically linked binaries and great cross-compilation make portability mostly moot for server applications.
Also, with Java 21, the JVM brings runtime support for virtual threads, at the VM and standard library level, but there is no real language support, whereas Go has actual support with syntactic sugar around goroutines, channels, and select statements.
Tooling and the sheer number of libraries are definitely Java's biggest draws, but it's not enough to justify choosing Java over Go across the board.
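To make the "no language support" point concrete, a rough sketch (Java 21): a virtual thread is just a library call on the existing Thread API, with no dedicated syntax the way a `go` statement is in Go:

    public class VirtualThreadSketch {
        public static void main(String[] args) throws InterruptedException {
            // Created through the ordinary Thread API; no new keywords involved.
            Thread t = Thread.ofVirtual().name("worker").start(() -> {
                System.out.println("running on " + Thread.currentThread());
            });
            t.join();
        }
    }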
As for GC - Java is superior in nearly all respects. But you are right that common programming libs+paradigms in Java produce too much garbage. This has changed with lean and memory-sensitive frameworks in Java nowadays - like Quarkus https://quarkus.io/
> So with this, the last thing Go had going for it over Java is gone, right?
Go still has composition over inheritance, which is vastly more flexible. Go favors explicit (verbose copy-paste, redundancy, use stdlib first, libraries second) over implicit (magic annotations, frameworks everywhere), and I like when I can understand what's happening without holding 10 files in my brain context. Go's memory usage doesn't trigger the OOM killer every 10 minutes. Go favors copy-pasting because what you copy is short and understandable, and function names don't have 10 words in them.
It's always going to be a personal choice, and even though virtual threads bring java closer to where go is, it's still not there for me.
It's not so much part of the language but it's an idiom that is said by one of its authors (https://youtube.com/watch?v=PAAkCSZUG1c&t=9m28s). "Code reuse" always has some non-compressible context: another import with a line in go.mod, another repo and dependency, which doc is going to be on another page/site, etc... It makes sense for big dependencies but not for small ones; the threshold where "small" becomes "big" being of course subjective.
As someone who recently developed in Java (for developing a Jenkins plugin) after not having done so for a long time (~15 years), I feel that Java nowadays is technologically quite interesting with many interesting libraries and tooling. IDE code navigation and debugging support is excellent.
At the same time, the ecosystem can feel messy and overwhelming. There are multiple @Nonnull annotation libraries and it's not clear what the difference is. There are many different testing libraries (some TDD, some BDD) but it seems everybody picks a different one. There are multiple advanced concurrency libraries and concepts, but again not much consensus and everybody uses a different one. Lots of deprecations all over the place.
It's even worse in the Jenkins-specific ecosystem where nearly every plugin uses some deprecated aspect of either Java or Jenkins, and hunting for what the "right" way is supposed to be is a bit of a daunting task.
There are like 20 different JDKs, not sure why.
On the other hand, startup time and memory usage still seems to be a big issue. Starting Jenkins takes forever even with JVM in client mode and with bytecode verification disabled. And when started it uses a huge amount of memory. To date I've never seen one non-hello-world Java app that isn't like that.
Hi there ;) Knowing you, I'm sure you've got it all figured out by now but for any others who are reading:
- Nonnull annotations: you can pick any. Tooling doesn't care and tends to just accept any annotation with a name like @Nullable or @Nonnull regardless of namespace. There was an attempt to standardize this years ago which failed for reasons that are hard to understand from the outside, something to do with lack of agreement on the exact semantics and use cases. No, me neither. Or just write in Kotlin where it's all integrated with the type system and the compiler understands / hides from you the Java annotation mess.
- Testing libraries: it's been quite a long time since I encountered anything except JUnit 4 or 5, but they interoperate so that's no big deal. JUnit 5 is great and very standard so you can't go wrong by picking it.
- Concurrency libraries/concepts: yes this is a problem but it's one found in other ecosystems, and it's one the new virtual threads are designed to solve. The hope is (and I guess we'll see) that everyone can forget about coroutines and reactive programming now, at least where performance matters, and go back to writing old fashioned ordinary threaded code. I guess they'll stick around in Jetpack Compose GUI programming.
- Lots of different JDKs. That is indeed new and unfortunate. They don't actually vary much, mostly in how long major versions remain supported. Amazon is a sort of default choice unless you're doing GUI work, in which case JetBrains Runtime has lots of desktop specific patches.
- Startup time/memory usage. There are simple things that can be done to improve this (see AppCDS). Some of it is cultural however, the people who write CI servers probably don't care about startup time. IntelliJ for example starts pretty fast in the latest versions, because they care to optimize it. For memory usage, be aware that the JVM will default to using most of your free system RAM even if it doesn't need to. It figures hey, the RAM is free, so why waste CPU time and energy on garbage collecting if I don't have to. If you planned to use the RAM for something else, or conclude that this is the "natural" level of RAM usage, it can be annoying however. You can give it a limit or in recent versions set a flag that'll cause it to regularly run GC when the app is idle, to give back memory to the OS.
Because Java is an industry standard, just like C, C++, Ada, JavaScript,... so many vendors want to provide their own take on JIT, AOT, and GC implementations, especially for niche areas like hard real-time embedded deployments, where no other GC-based language competes with Java, at least at the same deployment scale.
C# already had all the advantages of Go (more or less) and more, yet Go is still growing. This has nothing to do with the technical capabilities of the languages.
If you target the classic .NET Framework on Windows, you could just ship a single folder for a long time. The .NET Framework is shipped with Windows (though not necessarily at the latest version), and the deployment strategy for your average desktop/console app is just "copy the entire bin folder".
That's true. Single-file deployments were also supported in .NET Framework, though a bit obscure: you registered a callback for the assembly loader and explicitly bundled every dependency as a resource in the build process.
C# doesn’t really have an equivalent for Go’s channels syntax though.
Also, Go doesn’t have inheritance, but does have interface forwarding and structural (as opposed to nominative) type-system - those both very significant factors that influence final program design.
Being not super familiar with Go, I think C#'s async/await is similar, isn't it? If you want more complex operations, you might want to look into System.IO.Pipelines.
Nah, totally different than channels. And Goroutines are proper managed threads with their own stacks.
As an aside, pipelines are terribly unergonomic. The public APIs are not fully developed and something simple, like IDK creating an actual processing pipeline, is funky as all hell. Creating a pipe wrapper feels dirty.
The buffer management is cool though, and sequence seems like it should have just been made a first-class slice type.
.NET’s Pipeline type is not intended for processing-pipelines (surely that would be Windows Workflow Foundation and SSIS?) - it’s meant to be an alternative (and performance-optimised) API for reading and writing to IO streams without faffing around with the differences between MemoryStream, FileStream, and NetworkStream - like if you’re implementing your own network protocol server and client.
The main problem it seems to solve is processing a text or byte range from a stream (be it an endless SSE/WebSocket stream of messages, or just from a huge (multigigabyte+) file on disk - in a non-blocking/async manner.
What does the company that owns Java offer? Their Java IDE is https://en.wikipedia.org/wiki/JDeveloper and nobody uses that. The latest release came out in 2019. Everyone is using IntelliJ IDEA or perhaps Eclipse.
What does the Python core community offer? IDLE is ugly, is barely an IDE, and didn’t have line numbers until a few years ago. Everyone is using PyCharm or VSCode.
MAUI is just another workload of the dotnet ecosystem. There is no technical reason for it not to work on Linux outside of someone actually doing so. This has nothing to do with C# at all.
...but the .NET runtime has a higher memory floor. It's a lot easier to write very small apps in Go. New versions are working on stripping in AOT builds but it's still very much a work in progress
Yet with Go you don't need to worry about comparing Java Stacks, changing your tooling, testing them etc, the binaries "just work" out the box in a near perfect condition.
Not really. We saw overall performance speed up when we limited the number of system threads for goroutines to 1. This was with Docker and Kubernetes (albeit several years ago). This was configured, if memory serves, by an environment variable.
So no, it's not as simple and perfect in my opinion.
Performance increasing when setting `GOMAXPROCS=1` sounds like an (interesting) edge case for the scheduler. If you ever encounter this again, it would be great to file an upstream issue about this. Go has plenty of built in observability tools (I imagine runtime/trace would be good here) so it'd be easy to get the developers the data they'd need.
I believe we tried, actually (this was at ZEIT, before it was Vercel, back when we used k8s to also deploy all of our user docker deployments - which of course wasn't the best idea but it worked for the time being).
I regrettably don't remember the outcome though, and don't use either of them anymore to even test them readily, especially not under the same load.
I suppose it depends on what you're writing. When I read "wide range of platforms" I involuntarily said "who cares" out loud. For those of us (fortunately or unfortunately) working in distributed systems, there is one target environment and maybe a different local environment depending on your setup.
Likewise for "worse" ecosystem. It really depends on your context.
Edit: A closing thought: One thing I find enjoyable about working with Go is that it is tuned for my particular context. I also enjoy writing Java (and more specifically Kotlin) when that context changes (e.g. a desktop application).
How does Java have a better type system? Java's generics are unsound, while Go's generics are sound. Java's generics do type erasure, while Go does not. Java's type system is not unified, it does not have a top type (an int is not an Object, int vs. Integer etc.).
There is a Manning book about Go: https://www.manning.com/books/100-go-mistakes-and-how-to-avo... . And these are not rare mistakes. Everyone makes them and some of those mistakes are repeated again and again in every Go project. (Esp the for-loop ones). I have found programming in Go needing the kind of alert, defensive mindset I adopt for C++ which is quite exhausting. Not so much for Java.
However, to be honest, Rust is probably the only language where you can relax your "defect-analysis" mind thread while coding - with the exception of async Rust.
Anecdotally, but I feel like Java's type system allows me to write code that gives me more guarantees about the soundness and correctness of my codebase.
One example I can think of is Java's annotations vs Go's closest alternative, struct tags [0], which are just strings added to struct fields that specific libraries can act on; at best these are only checked at runtime, the compiler or type system will not help you with those.
No. Both type erasure and monomorphization have always been legitimate ways to compile generic code. IIRC, the issue with Java is that type erasure is sometimes a leaky abstraction.
What exactly do you mean by "leaky" here? Can you give an example?
My main issue with non-erased generics is that they actually break reasoning wrt. parametric polymorphism as soon as you allow for pattern matching or even just .isInstanceOf on type parameters. Suddenly a function which takes a List<A> can do entirely different things depending on what A is... and that takes away a powerful reasoning tool.
It’s leaky due to reflection, but type erasure is not a problem in itself. .NET did away with type erasure, but it had a cost in terms of language ecosystem of the platform - not erasing List<A> into List will bake the variance of List into the runtime, and thus other languages have to use the same model.
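A tiny sketch of where erasure does leak through, via reflection and instanceof:

    import java.util.ArrayList;
    import java.util.List;

    public class ErasureLeak {
        public static void main(String[] args) {
            List<String> strings = new ArrayList<>();
            List<Integer> ints = new ArrayList<>();

            // The type arguments are erased: both lists share the same runtime class.
            System.out.println(strings.getClass() == ints.getClass()); // prints true

            // And the corresponding runtime check is simply not expressible:
            // if (strings instanceof List<String>) { }  // does not compile
        }
    }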
Are you writing everything with a so-called superoptimizer (because mind you, not even your hand-written assembly will be the fastest, hell, C++/Rust will likely beat it in the general case)? Because it has always been a tradeoff between programmer’s sanity-correctness-maintainability-productivity-performance, at the least. Java does very well on most of it, including performance — it will JIT compile code comparable to C, the reason for the performance discrepancy between the two is memory layout, which may or may not matter for the problem at hand.
In my experience, most (especially business) applications are absolutely not going through millions of data entries doing some local calculations over them, that part is delegated to a database. They either do some IO over them, or have much smaller element sizes.
Java has escape hatches even for handling some of the “millions-of-elements in a hot loop”, but for an AAA game engine it might not be the best choice, but for anything else? It is more than fair game.
Well I regularly do juggle millions of objects in Java. I'd estimate most Java application code is probably running at about a fifth the speed it could be. Modern computers are incredibly fast, but most of that speed is wasted copying data between representations and garbage collecting Stream API debris and boxed integers. Moreover, most Java applications using a database for storage do so in a dumb way that doesn't make good use of either the database or Java.
Well, as I mentioned there are escape hatches, but those come at a price, usually in maintainability, e.g. using SoA, or an external heap.
If you do have a million objects and it is indeed a performance bottleneck (as shown by the profiler) then it might be worthwhile to pursue these solutions, or even implement them in another language. No one claims that Java is for everything, but it is for most things.
A big part of why native interop is rare is that it's slow and awful and generally not worth it.
I don't expect a lot of code to actually use these features, but the code that does will be the low level stuff that lets the higher level stuff really perform.
FFI makes cross-platform usage more questionable (have a look at python or node breaking a build on windows vs linux, this is very very rare in case of Java builds), and Java is more than fast enough for most use cases that it simply doesn’t need FFI for speed (like Python for example).
Also, this is a huge advantage of the system, you don’t shallowly depend on a ton of C libraries, you can be confident that your whole application to the last bit is properly abstracted (plus can be debugged and observed with the same great tooling).
Eh, Java leaves quite a lot of performance on the table compared to C or Fortran. It's not as bad as Python, but if you really want to go fast, it's not quite able to compete with fine tuned native code. The Vector API might help a bit, but I don't think it will go all the way.
We'll see how far value objects and the vector API go. The lack of operator overloading makes writing scientific/numerical code a bit awkward, though this could be mitigated by using Kotlin instead.
This is a pretty terrible comparison, suitable for those X-vs-Y websites. While on paper Go and Java have a lot in common, in practice their philosophies differ enough to produce quite different languages.
I've used both Java and Go and I can tell you, there is no right or wrong, but they differ a lot in their ecosystem, tooling, and design patterns. I don't see myself going back to Java just because of virtual threads.
I mean, having a nice feature does not remove all the other pain points. On your last point, go mod is way better than anything available in Java. They're still printing 600-page books about Maven.
Java will always be a mess of layers of abstraction and magical / heavy frameworks.
What's the modern startup time of JARs? I haven't used Java in a long time, but long ago, the relatively slow startup time was a dealbreaker for using Java to write small tools.
Depends what the JARs do of course. Some desirable libraries are unfortunately slow. PicoCLI for argument parsing is one. It's got every feature you might ever want, but, it adds a couple hundred msec to startup.
There is a HotSpot feature called AppCDS. It improves startup time by about 30% in my experiments. However you have to turn it on. Just running `java -jar` won't do it.
Then there is GraalVM Native Image. It can compile JARs to native code ahead of time. They start as fast or faster than the equivalent C program, so it's the big hammer for CLI tools. However, it can't cross compile so you have to compile for the target system on the target system. There can also be compatibility issues with some libraries, though that's getting better rapidly.
So those are the options. Still, even without those extra features, startup time is usually good enough.
Doesn't it generate all the additional code at compile time? I didn't notice it slowing down my startup times; it's mostly every other lib (which you usually have, as what's the point of a hello-world CLI app).
I've timed it, it's very slow because it uses reflection to generate the model. The annotation processor doesn't let you avoid this step unfortunately. Native images do.
But that step only has to run as part of building it. It doesn’t have to run at every invocation. How did you try to run it?
From the documentation:
> The picocli-codegen module includes an annotation processor that can build a model from the picocli annotations at compile time rather than at runtime.
> Enabling this annotation processor in your project is optional, but strongly recommended. Use this if you’re interested in
I use it but you have misunderstood what it does. It builds a model in memory and then you can use that to do other tasks at build time. It doesn't persist the model in a form that the app itself can use to start up faster.
Just tested on an M1 Macbook Pro and it took 148ms
I have a tool I created that creates a git branch based off a Jira ticket number. If I don't supply a ticket number then it errors out straight away, so I think most of that 148ms is the JVM starting up.
On a $50/mo Azure VM running Windows Server I’m able to run a jar via java.exe that prints a string to stdout and exits - in about 160ms end-to-end in a warm-ish environment (Prefetch, IO caching, etc)
Edit: Now it’s down to 80ms - I guess at this scale it’s hard to pin down.
It loads classes lazily, so your program can start up at `main` very fast, and you only pay for what you use. This might cause some application feature to start up a bit slower on first run, but the byte code format is very compact and can be parsed in a single pass, so I don’t think it would be significantly slower than ordinary machine code loading.
> It loads classes lazily, so your program can start up at `main` very fast, and you only pay for what you use
Right, but large Java programs use huge numbers of classes. They may load thousands of classes just during start-up.
> the byte code format is very compact and can be parsed in a single pass, so I don’t think it would be significantly slower than ordinary machine code loading
That's what I was wondering about. There used to be a flag in OpenJDK to disable bytecode verification to improve start performance, but it was deprecated in JDK 13 [0] and I think it's since been removed entirely. Its removal is understandable but I still wonder about the performance impact.
- compiling to a single binary (I guess jpackage fixes this)
- saner / less elaborate / more ergonomic interfaces to implement and use e.g. compare `io.Reader` (https://go.dev/tour/methods/21) to the Java equivalent
Several commercial JDK options have offered compiling to single binary for 20 years, even if out of reach for common folks. GraalVM and OpenJ9 now make it available as free beer as well.
Is that the modern Java equivalent of the single-method interface in Go? I can see its an abstract class - what do you do if you want to implement both Reader and Writer?
However, there's lots of code which just takes a Reader/Writer and then you're SOL.
(In general, the I/O interface/class hierarchy in Java is a bit of a mess because a lot of it was retrofitted to maintain binary compatibility. The equivalent stuff in Guava seems a bit saner, but of course not directly usable with all the 3rd party libraries you might want -- it has shims for the most part, though.)
I started with Java and I will most likely never go back. I mainly use Go now.
People want to learn/write Go. It's hard to find anyone that wants to learn/write Java. People probably prefer Kotlin over Java today. It's a much better language and you can keep using JVM libraries.
Go is very simple language that is just as powerful as Java, if not more powerful due to not having to rely on the JVM. That's why it's so popular as a replacement for Java.
How are Jars more portable than Go binaries when you need Java installed to run Jars?
Java obviously has a more mature ecosystem with more libraries due to age, but Go you don't need many libraries to begin with. The Go std lib will give you almost everything you need in most cases.
> Go is very simple language that is just as powerful as Java
I don’t think so, Go is very low on expressivity. Generics help, but I don’t see anything like JOOQ for go, just as an example. Also, no real alternative to Java’s stream api, which can at times make code much more readable than the 4 nested for loops with 4 different exits.
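A toy sketch of what is meant here (all names made up): one stream pipeline instead of nested loops with multiple exits:

    import java.util.List;

    public class StreamSketch {
        record Order(String customer, List<String> items) {}

        // Filter, flatten and deduplicate without nested for loops and break/continue.
        static List<String> itemsFor(List<Order> orders, String customer) {
            return orders.stream()
                    .filter(o -> o.customer().equals(customer))
                    .flatMap(o -> o.items().stream())
                    .distinct()
                    .sorted()
                    .toList();
        }
    }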
In 2010, I think arguing against relying on the JVM would have been an acceptable gripe. However, it is 2023, and almost all of your stated issues have been (IMHO quite elegantly) addressed WITH the added benefit (or maybe detriment?) of backwards compatibility in modern versions of Java and the _incredible_ engineering behind the JVM.
I would actually argue the inverse. Between the two languages, Go would be my second choice precisely because it does NOT have the JVM. Even though I have used GraalVM for AOT, with Docker / containerd, I would take the JVM any day. It's just night & day when operating something in production.
That being said, Go still has a lighter resource footprint but I found Go to be a better Python alternative than Java.
Here are some IMHO acceptable gripes with Java:
- Java represents strings using UTF-16 (although there are optimizations introduced in Java9+ already to use LATIN1 / ascii encoding if you don't need to use Unicode)
- Java makes it difficult to have steady state memory consumption (by design)
- Java's escape analysis is primitive
- Java is missing value types (coming soon!)
I think Go is great, but IMHO modern Java is just better at building backend systems.
This really depends on what you're doing. Having many libraries creates a lot of weird potential. You can absolutely subsist on less, but you can do more with more.
A while back I found myself wanting to render MathML into an image. So I dug up jeuclid which is absolutely antique and as far as I can tell not actively maintained. My project flat out would have ended there if I had to build my own math rendering engine to proceed.
Well yeah. You've still got jars though if you want architecture independence. Java's not really aiming at the desktop market though. It's a server language first and foremost. You're most likely setting up a machine (or VM or container) to cater to the Java code you're running.
But to be realistic, most Java code either runs in application servers (as WARs) or in containers, and in the latter case, even though your java jars are architecture independent, docker just isn't.
It’s not hard to find people who want to write Java. I do it all the time. We have been hiring and writing Java code for fifteen years and have not seen a decline in the interest.
I'm not talking about people that have been writing Java for years. I'm talking about new programmers. If you asked a new programmer out of college which job they would rather have, one programming Go applications or one programming Java, I would bet 90% of them would pick Go.
Where I am it's hard to find quality people period. If anything targeting Kotlin or Scala developers usually yields slightly better developers but also further restricts an already small candidate pool.
It would be great to have a language that is a love child of Erlang and Rust. All I need is: repl, message passing, actor model, lightweight-threads and strong types. :)
I think all of the things you point out are probably true. But I still prefer Go.
And I'm probably not the kind of person you imagine. I'm in my 50s. I used Java for ~20 years. I spent a lot of time developing efficient programming practices in Java and teaching those to both junior developers and people with decades of experience.
I'm also not someone who takes a language switch lightly. It takes much, much longer to become reasonably competent in a programming language and I think that if you are going to choose a "workhorse language", you kind of need to see it as a decade long investment. You don't learn a language in a year.
I switched to Go mostly because it is a nicer language for what I do (servers). But I can fully understand why someone would flee Java. Java has a lot of baggage in the shape of legacy code. When people talk about languages they tend to talk about them as if we all live in a fantasy world where we can write new programs from scratch all the time. But that isn't reality for most programmers. Most programmers aren't starting projects from scratch - they work on stuff that has already existed for a while.
Java has been around for a long time. Which means that every version of the language, every fad, every architectural trend, everything that has happened in almost 3 decades of Java exists at the same time in codebases that are alive and kicking. When developers get a job, this is what they are faced with.
The same is true for C++. Yes, the most recent language spec is a nicer language than what you had 10 or 20 years ago. But that probably isn't the language you get to use when you join a company and start to work on their product.
The reason people choose different languages when they do start from scratch (do a startup) probably isn't entirely rational. I suspect a lot of people avoid languages where they have encountered large legacy codebases full of frustrating code that is done in an out-of-fashion way.
I chose to switch to Go, despite having 20 years invested in Java, because Go has the features I need in a language and isn't cluttered with much of what I don't need. It has nicer tooling that just works better for everyday things. It also has a healthier approach to how you design stuff: it isn't dominated by large frameworks. I don't have to teach Go programmers minimalism and have "experienced" programmers throw tantrums because they feel threatened when having to re-learn how to program Java.
To be honest, I shudder a bit when thinking about going back to Java. It has so much stuff to deal with when reading other people's code. It doesn't feel like it is worth my time. I can't go back to that. My time is too valuable to me.
(I probably should point out that I did decide that I had better find an alternative to Java when Oracle showed itself to be untrustworthy and litigious. But it took years to find both the opportunity to leave Java and the language to leave Java for)
That's a culture issue, and it's changing rapidly. It was also common due to previous limitations in the language.
Regarding the "com.lol.myapp", I actually think this is a good thing for package management: it avoids naming issues with packages from different vendors and forks. In the code itself you should only see these on the import lines at the top of the file, which isn't really a big deal.
I think most people's issues with Java come from old legacy codebases that had over-the-top patterns like that due to limitations in the language and culture. Nowadays, with a proper conventions guide, Java can be quite clean. Sure, it won't ever be as clean as something new like Kotlin, as it tries hard to maintain backwards compatibility, so old ugly stuff will remain in the language (even if you don't use it, you might see old code that does), and new designs are restricted by what already exists. But it still has its advantages over new shiny things like Kotlin: the new pattern matching, compile times, and compatibility (Kotlin's "100% compatibility" doesn't actually cover everything, and compatibility is important for a Big Co with loads of teams and loads of internal and external dependencies and tools).
> That's a culture issue, and it's changing rapidly.
Define rapidly. It's now been two decades since I was first told "yes, that's a culture thing, but everyone knows it's crazy, and it's on the way out".
E.g., this masterpiece from Benji Smith [0], originally on the old "Joel On Software" forum, is from 2005.
Java will always have Java syntax, though. And its runtime will remain rather heavy compared to language implementations that compile down to machine code. Go's benefits will probably always be there.
Java can be taught to 2-year olds so Go seems to have a slight disadvantage there. ;)
Very subjective, but having Java syntax is an advantage compared to goddamn Go. Like, I have seen my fair share of languages and can generally "read" a new language's code sample just fine, but Go was honestly quite hard the first time, with its stupid type and "receiver" notation and so on. And I have honestly no problem with Haskell/Lisp/C/etc. notation at all.
Languages are very complex; you will not get a good picture of their overall strengths and weaknesses if you limit yourself to a very zoomed-in view, like comparing a few features of the languages themselves.
As an example, consider the digital camera market vs the cameras in smartphones.
The digital camera is a clear winner! So many features. The quality of photos taken is objectively better in every metric too.
And yet the digital camera market is dying, completely killed by smartphones.
But why, the features are clearly better, right?!
Because we're looking at it wrong, we're tunnel-visioned on the "camera" part. We need to look at what actually matters.
Thankfully, that's simple to answer and is the same for every product in existence. All that matters is the user experience.
Nobody wants extra features on their camera, nobody wants a camera in the first place, nobody wants to take photos either. What people actually want is to preserve the moment of their first-born child taking their first steps. The technology is irrelevant as long as it lets the user do what they really want.
Why would I want a digital camera when my smartphone has a decent enough one that is effectively free and always available? A low-quality camera on you is infinitely more valuable than the professional camera at home.
Programming languages are no different, just that "users" in this case are programmers, which seems to be confusing to some.
What is the developer experience of using Java vs Go? That's the real question you need to answer to get to the bottom of this.
Start a new project and write some code in both languages, and compare the experiences. Be wary of biases: if you have pre-existing experience in one of the languages it's going to cloud your judgement (the curse of knowledge). Pay attention to aspects of good design: how many pointless decisions do you have to get through before actually shipping code?
Go eliminates whole areas of pain points:
- Dependency management? Go modules.
- Tests? Built-in.
- Code formatting? Built-in autoformatter.
- Compilation time? As fast as it gets.
- Distribution? Single binary.
- Writing code? Minimal ceremony, just make a function.
- Performance? The idiomatic code you write will naturally perform well due to value types, explicit pointers, and a culture of straightforward code with no needless indirection. If you need to optimize, the profiling tooling is great and the optimizations themselves are straightforward due to intuitive language and GC semantics: just reduce allocations.
- GC? A single implementation, good enough for 99% of cases. Two whole knobs available if you really need to tune it. Performs well due to the language not getting in its way.
- Standard library? Excellent, with a good balance between batteries-included and bloat. The built-in HTTP server is suitable for 80%+ of workloads.
- Concurrency? Core to the language: syntactically supported green threads. The entire language and its ecosystem are built with it in mind.
- Linting? Community-made linter runner with a curated list of good linters.
- Found a bug in a library? No problem, the library is written in straightforward Go, the same flavor of Go you've been writing. It's of course autoformatted as well.
There you go, a single go-to solution to each problem. The language's designers have gotten all the pointless details out of the way for you. You get to focus on writing code.
Now compare the above points with Java.
Go was designed for developers, with the same philosophy Steve Jobs designed Apple products for users. Java wasn't. Simple as.
Well a go binary only runs on the platform it was compiled for. A jar file runs on any platform that has a working JVM. I.e. most commonly used operating systems and CPU architectures you can name.
Of course whether that matters to you is another thing. Most people package either of those up using docker and run them on generic linux hosts.
> As soon as you are using Spring you aren't actually coding Java anymore
Can you expand on that. I'm pretty sure its still Java. There are some additional annotations that help with autowiring and object reuse, but probably affect the code less than Lombok.
> For the use cases I work on I'd prefer Go for its simplicity
Why not "enjoy" the simplicity of assembly then? (Not trying to be sarcastic, I've just always felt that this logic is flawed. Especially since Java is a very simple language, with very few concepts. If you don't like it, you absolutely don't have to use metaprogramming like Spring.)
What Go has going is that it's faster and more pleasant to develop in. Yet another bullet-point-feature bolted on only makes the Java ecosystem jungle worse in that aspect.
I find your other points a bit dubious. Java's type system suffers from extreme verbosity and little soundness, and exceptions have not been a successful error-handling story. I don't think you can call a jar that needs a system installation of some specific jre portable - at all. The ecosystem may be huge, but also nightmare in terms of interoperability and support of newer features. IDE click-driven-development is a crutch for the extreme verbosity and complexity of patterns in Java. Compile times are much worse in my limited experience.
When talking about languages a lot of people talk about languages as if all programmers deal with is starting new projects from scratch and being able to use the latest version of the language, using current practices. For most programmers this isn't the case. Most programmers work on projects that were not started by them, and have been around for a while. Or software that is built on, or has to be closely integrated with, legacy code.
Based on your writing, you sound like either a junior engineer, or someone who has had very little responsibility for making resource allocation and strategic decisions.
Are you kidding me? If it wasn't for large enterprise corporations Java would be long dead by now. Ever since Oracle took over, Java has seen almost no improvements. The logical comparison is Java to C#, and C# has seen _a lot_ of improvement over the last 7 years or so. It just so happens that Java is sometimes unavoidable for Android development where Kotlin is not (yet) possible.
Java is, like its large corporate users, stuck in the past.
Version numbers are extremely arbitrary, and Java has a new number every 6 months.
If you compare Java 6 (2006) and Java 20 (2023), and then look at C# 3.0 (2007) versus C# 11 (2022), C# has gained many more features and improvements than Java did over the years.
Gaining features without bounds is not a positive in case of a language. C# is a very good language, but they really do copy C++ in adding everything under the Sun to it, and the complexity of managing it all can easily crumble under even good developers.
Java may be the other side of it, but I think it is a safer bet.
Including copying Java features like tiered compilation, default interface methods, compiler plugins, and AOT compilation, being able to run on UNIX systems, and failing to get a phone OS written in C#.
Anyone that misses C# on the JVM can use Kotlin or Scala.
And best of all, due to Microsoft's lack of investment on VS4Mac and VSCode versus VS proper, the best .NET IDE outside Windows runs on Java/Kotlin.
Some features were introduced in Java before they appeared in C#. But many C# features are yet to be seen in Java.
> best .NET IDE outside Windows runs on Java/Kotlin.
I would argue “in the world”. And it does not matter in the slightest to a user. The best Python IDE is PyCharm, also written in Java/Kotlin. I don’t know which PHP IDE is the best, but I’m pretty sure none of them are written in PHP, and the same probably applies to Ruby.
The top comment in this thread [1] highlights (potential) problems with virtual threads, referring to this PDF [2]. Does anyone know if these actually manifest in the way they are implemented?
Your second link is from 2018, and would indeed need a case study for Loom. For example, the concerns about thread-local storage are addressed, and the base overhead is understood (as in: don't use them for CPU-bound workloads, they'll fare better on IO-bound workloads).
Also, the case studies in that paper are platform-level, whereas I believe the language/runtime has to be involved, since a runtime has more information when resuming at a suspension point. The paper even acknowledges Go as a successful implementation, although with the C-compatibility call overhead caveat. In the Java world, the vast majority of programs stay in the Java language. So I'd say that paper would list Loom as the best implementation (and maybe revise their recommendation; it'd be useful to have that author's opinion of Loom in 2023).
This is relevant to environments where the user code is statically compiled into native code that depends on some thin runtime library and more or less directly interacts with C code. In case of full blown VM many of these problems are not as significant as the internal thread state representation is non-native anyway and you can dynamically instrument pretty much whatever you want (one issue are system calls and system library functions that do not have non-blocking equivalent, but that could be handled by either having separate OS-level thread for such things or simply ignoring the issue, with real implementations doing some mix of these two approaches).
I haven't touched Java since school and never worked in it professionally and that's been well over a decade. Is there a resource that people recommend that gives a good introduction to modern Java? Preferably succinct but doesn't need to be.
The Horstmann 2-volume series is probably the best books for someone learning Java, period. I wished there was a book like this for other languages: both comprehensive and up-to-date. For C++, I guess the closest would be Stroustrup, but even as the creator of the language, Stroustrup's books just aren't as comfortable to read as Horstmann's.
Every recursion can be converted to code that uses only a single stack. Tail calls can be easily eliminated at compile time automatically as well, that’s why it only needs compile-time support — I don’t specifically know what Scala/etc do, but the mentioned trampoline is basically just a function pointer one can jump to, accumulating results in some non-stack data structure if needed.
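A minimal sketch of the trampoline idea in plain Java (all names made up for illustration): each step returns either a result or a thunk for the next step, and a loop drives it, so the call stack never grows:

    import java.util.function.Supplier;

    public class TrampolineSketch {
        // A step either carries the final result or a thunk producing the next step.
        interface Step<T> {
            boolean isDone();
            T result();       // only meaningful when isDone() is true
            Step<T> next();   // only meaningful when isDone() is false

            static <T> Step<T> done(T value) {
                return new Step<T>() {
                    public boolean isDone() { return true; }
                    public T result() { return value; }
                    public Step<T> next() { throw new IllegalStateException(); }
                };
            }

            static <T> Step<T> more(Supplier<Step<T>> thunk) {
                return new Step<T>() {
                    public boolean isDone() { return false; }
                    public T result() { throw new IllegalStateException(); }
                    public Step<T> next() { return thunk.get(); }
                };
            }
        }

        // The loop replaces the call stack: each "recursive call" is one iteration.
        static <T> T run(Step<T> step) {
            while (!step.isDone()) {
                step = step.next();
            }
            return step.result();
        }

        // Written as a plain recursive method this would overflow the stack for large n.
        static Step<Long> sumTo(long n, long acc) {
            return n == 0 ? Step.done(acc) : Step.more(() -> sumTo(n - 1, acc + n));
        }

        public static void main(String[] args) {
            System.out.println(run(sumTo(5_000_000L, 0L))); // 12500002500000
        }
    }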
I'm not positive about this but I believe the virtual threads can yield at syscalls (i.e. IO calls). I don't think GUI application code is littered with these syscalls where a virtual thread can naturally yield so you get non-blocking for "free". Someone please correct me if I'm mistaken.
Under .NET's async/await model, switching also generally only occurs when blocking IO is initiated (or blocking on a timer, etc). You can of course add your own yield points, but it is not frequently necessary. That is mostly limited to CPU/memory-bound operations, as any IO-bound operations would have the potential to yield.
However, something important to remember is that UI frameworks generally require most or all UI interaction to occur on one logical thread. This is because making all the UI code safe for calling from multiple threads would require tons of work, and likely require one massive lock that serializes most UI component method calls, or tons of smaller locks. And that is just for the minimum of safety. UI work has tons of implicit state.
User level logic would also likely need additional locking, since even if each method call to a control is thread safe, if you want to perform some kind of conditional action that does not already have some dedicated method, that would pose a problem without some form of synchronization code.
One advantage of the async-await approach in UI scenarios (where continuations always get run on the UI Thread) is that you know that between two awaits, no other code will be running on the UI thread, so can avoid synchronization code. You know if you read a value from a control and then conditional call a method on it, you don't need to worry about racing with another thread. Now whenever an "await" occurs, potentially arbitrary other UI code may have run in the interim, so you may need to reverify the state of things. But this is still much simpler to handle than possibly being preempted at any time, or having another thread literally changing things in parallel.
Green thread based approaches like Virtual Threads do avoid much of the complexity of async-await, but they often do lose those sorts of advantages. Even if the green thread approach has guarantees about the yield points (e.g. only cooperative yields which only occur when certain specific methods are called), you still lose the ability to locally reason about where those yield points may occur, since any function/method call could potentially call one of those yielding functions, or call something that calls one of them, etc. The only way to regain that info is to "color" the functions again, which was the whole thing people are trying to avoid with green threads in the first place.
Not familiar with .NET, virtual threads in Java are an alternative to async await in other languages. The approach Java took is similar to green threads in go.
TPL is a library to create Tasks and run them. They can be scheduled in different ways, on (native) thread pools for example. They are commonly used together with async/await. They are very similar to promises in JavaScript.
I read about the approach without async/await, but honestly I don't really get it. It seems very dangerous to me to give up control over scheduling. But sure, async/await is completely viral and it is everywhere now. Most functions tend to be async.
> virtual threads in Java are an alternative to async await in other languages
Not really. Async/Await is mostly about syntax. Transform callback-hell into something that looks like linear code. The underlying threading model doesn't really matter for that.
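As a rough sketch of the two styles, using the JDK's java.net.http client (just an illustration, not tied to any particular framework); on a virtual thread the blocking version parks only the virtual thread, not the carrier OS thread:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.concurrent.CompletableFuture;

    public class TwoStyles {
        static final HttpClient CLIENT = HttpClient.newHttpClient();

        // Future/callback style: behaviour is expressed via composition methods.
        static CompletableFuture<Integer> bodyLengthAsync(String url) {
            HttpRequest request = HttpRequest.newBuilder(URI.create(url)).build();
            return CLIENT.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                    .thenApply(HttpResponse::body)
                    .thenApply(String::length);
        }

        // Plain blocking style: reads top to bottom, like the linear code async/await emulates.
        static int bodyLength(String url) throws Exception {
            HttpRequest request = HttpRequest.newBuilder(URI.create(url)).build();
            HttpResponse<String> response =
                    CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
            return response.body().length();
        }
    }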
What are these things? It sounds like time-sliced sharing that offers only concurrency and not parallelism. In other words, it's a userland construct and not a kernel thread. Sounds more like Go's goroutines, but do we really need another name for them?
Typically ThreadLocals are used whenever something is costly to initialize per-request. The rule of thumb is usually: you might want to use some heavy object without locking, so you stick into a ThreadLocal.
However, if suddenly you have a million threads then this optimization doesn't work anymore. Sure, the concept of ThreadLocal still works, but in practice you'll end up creating a million of these heavy objects - something you wanted to avoid!
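The classic shape of that pattern, as a sketch: an expensive, non-thread-safe object cached per thread so it can be reused without locking. With a small platform-thread pool that means a handful of instances; with a million virtual threads it can mean up to a million.

    import java.text.SimpleDateFormat;
    import java.util.Date;

    public class ThreadLocalCache {
        // One SimpleDateFormat per thread: it is not thread-safe, and constructing
        // it on every call is considered too costly, so it is cached per thread.
        private static final ThreadLocal<SimpleDateFormat> FORMAT =
                ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));

        static String format(Date date) {
            return FORMAT.get().format(date);
        }
    }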
Expensive to initialize doesn’t imply large. And wanting to avoid expensive initialization (runtime) is orthogonal to wanting to avoid a larger memory footprint. So I don’t quite buy your argument.
Your question was why the advice was given to avoid ThreadLocals. This is the primary reason. It's not necessarily related to avoiding a larger memory footprint.
For what it's worth, there actually is a separate JEP (I believe this: https://openjdk.org/jeps/429 ) for a new, scope-based solution that promises much better performance.
The main benefit of virtual threads it that they release their carrier thread when they call (non native) blocking code, so that other virtual threads can be executed by the same carrier.
I don't think that's the case for C# Tasks, which also appear to be a higher-level construct.
Do they? I recall ObjC getting a lot of criticism over their long method names.
But really, who cares? Who the hell types out the whole method in this day and age? Type the first few chars, up arrow, down arrow, tab. Jump around with vi bindings. I have not written a full line of code in well over a decade.
Although Apple deserves it since their IDE really blows compared to say Idea.
Given this is something you're probably going to find at most a handful of times in an entire code base, I don't think that's necessarily bad.
Java API stuff hasn't ever been a huge contributor to verbose code. Yeah I guess some of java.io's stuff isn't great, but the worst AbstractFactoryDelegateImplFacadeProviderVisitor gore was always in the application code.
Conceptually, these are all variations of the same thing. The difference is in how you use them and how much boilerplate you need to use them.
Funnily enough, the first versions of Java did not support OS threads and only had green threads for a while. Supporting real threads was a big deal at the time as it allowed you to use more than 1 processor. Of course, processors were single core at the time and most computers only had one of those anyway. Java 1.1 laid the foundations for proper multi threading and green thread support was eventually removed with Java 1.3. With Java 1.5 we got the java.util.concurrent package which enabled doing more complicated things with locks and other synchronization primitives that were a bit less primitive & brittle than using the synchronized & volatile keywords. That includes implementing green threads on top of real threads. Which is what frameworks like vert.x and others have been doing for ages.
So, in a way we're coming full circle here with virtual threads re-using the thread API, which in turn reused the original green thread APIs in Java 1.0.
That leaves off basically the whole point of it: JVM-native blocking calls don't have to actually block; the runtime can issue an OS-native async call, run other work on the carrier thread in the meantime, and resume the virtual thread upon completion.
Since Java uses very little FFI, it will benefit greatly from this automagical “no more blockingness”, of course only when there is other work available in the meantime. Servers are the best fit for that.
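A minimal sketch of what that looks like from user code (the host and request are placeholders). The read below looks blocking, but on a virtual thread the JDK parks the virtual thread and lets the carrier run something else:

import java.io.IOException;
import java.net.Socket;

class BlockingLookingIo {
    public static void main(String[] args) throws InterruptedException {
        Thread vt = Thread.ofVirtual().start(() -> {
            try (Socket socket = new Socket("example.com", 80)) {
                socket.getOutputStream()
                      .write("GET / HTTP/1.0\r\nHost: example.com\r\n\r\n".getBytes());
                // "Blocks" only the virtual thread; the carrier OS thread is released.
                byte[] body = socket.getInputStream().readAllBytes();
                System.out.println(body.length + " bytes read");
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        vt.join();
    }
}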
I guess the idea is that you don't have to write tasks explicitly - if you have a sequence of actions, you can just write it as regular code, and the runtime will automatically insert points where your code might be suspended waiting for IO or other threads.
Don't be confused by the example, which semantically just shows a thread pool. It is hard to give an example in code, as virtual threads are ideally purely an implementation detail and otherwise just look like normal threads.
In a typical thread pool, if you schedule more long-running tasks than you have worker threads, you either have to queue tasks while waiting for a free worker or you have to spawn new workers. If your tasks are CPU-bound and you do not care about latency/fairness, queuing is what you want. If your tasks might do blocking operations (so queuing would underuse your CPU), or you simply want more fairness than plain FIFO execution, you can spawn new threads instead; but OS threads are costly to spawn and schedule beyond a certain number. Virtual threads are simply cheaper threads: the scheduling is done in userspace, and the JVM can in theory be smart about it.
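For instance, a rough sketch (Java 21+) of 10,000 tasks that mostly wait, each on its own cheap virtual thread rather than a pooled OS worker:

import java.time.Duration;
import java.util.concurrent.Executors;

class ManyWaitingTasks {
    public static void main(String[] args) {
        // One virtual thread per task; no fixed worker count, no task queue to tune.
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                executor.submit(() -> {
                    Thread.sleep(Duration.ofSeconds(1)); // parks the virtual thread only
                    return null;
                });
            }
        } // close() waits for all submitted tasks to complete
    }
}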
Oh, so JVM virtual threads will suspend execution and yield without cooperation, just like an OS scheduler? That's a big difference from "just a task scheduler", which simply queues tasks and distributes N tasks over M threads.
Well, yeah. Haskell is a research language, while Java's stated design philosophy from day one has been to be conservative about new features, adding them judiciously after they've proven useful in other languages.
So how does that make a difference to my point? The title is literally false, as multiple other languages have already done it; Haskell has in fact rewritten the underpinnings at least once, the feature has been there so long.
The fact that X exists doesn't mean the "era of X" has started yet; you have to have the exponential adoption curve. The "Internet Age" started several years after the Internet was created.
And clearly, adding this feature to Java is going to be like putting a web browser in Windows 95.
Are you sure? All the discussion I can find online makes it seem to me like TPL and friends are just executing tasks on thread pools until completion. (see e.g., https://github.com/dotnet/runtime/issues/50796 for some discussion)
I don't think this is the same thing. As far as I can tell, the task abstraction is a thread pool where you can submit operations and get back futures. If a task blocks indefinitely, the underlying thread-pool OS worker thread will be blocked, and the thread pool either has to run with fewer resources or spawn a new worker. Virtual threads are an M:N abstraction: blocking on a virtual thread will not block the underlying OS thread.
.NET might indeed have a virtual thread abstraction, and if it does you could of course implement the Task abstraction on top of either virtual threads or OS threads, but what you linked to is not proof that it does.
That looks similar to Java FutureTasks + Executors which is a very different concept from virtual threads.
Virtual threads mean that a blocking thread can yield to any other non blocking thread seamlessly and with very little overhead. .NET Tasks cannot do this as far as I can tell.
Oh interesting, that's very cool, I didn't realize Java was doing that. That's a different axis than M:N though (cooperative versus preemptive) and you could definitely write a preemptive async runtime for Rust (rtic comes to mind). But the async-std and tokio runtimes are certainly cooperative.
(As a note, cooperative scheduling also requires a runtime - Rust might not "have a runtime" by default but you need to opt into one to use async.)
Kotlin's are stackless (as in, emulated by the compiler); Java's are stackful (as in, a chunk of stack is copied into the heap along with a program resume point).
This means Kotlin's functions can become colored [1], which is a problem.
That said, I'm fully expecting Kotlin to react and pass this feature on to its users. Maybe after figuring out how to deal with Android.
It's not about future/promise return types. Consider:
int someFunc() {
    doA();
    doB();
    byte[] data = readFromSocket();
    return doC(data);
}

void callerFunc() {
    doX();
    System.out.println(someFunc());
    doY();
}
When we invoke callerFunc() on a virtual thread, it executes until the potentially blocking socket read, then creates a callback- or future-like object containing doC(), System.out.println() and doY() - such an object is called a "continuation" (see "continuation passing style"). The socket read is then initiated in an async way, with the continuation registered as a callback to be invoked upon completion, and the native OS thread is freed to do any other work.
In the JavaScript approach we would need to manually mark some places `async` and use `await`:
@async
int someFunc() {
    doA();
    doB();
    byte[] data = await readFromSocket();
    return doC(data);
}

@async
void callerFunc() {
    doX();
    System.out.println(await someFunc());
    doY();
}
So async / await is a poor man's continuation passing style.
The need to differentiate between 'async' and non-'async' code makes it more difficult to refactor, or to mix code between the 'async' and non-'async' domains.
Some people argue that it's better to have the explicit distinction. I personally don't see the benefit.
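With virtual threads there is no separate async flavor to refactor into: the same blocking-looking callerFunc() from above is reused as-is, and you just choose to run it on a virtual thread (Java 21+):

// No annotation, no await - just run the ordinary method on a virtual thread.
Thread.startVirtualThread(() -> callerFunc());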
Your examples are not functionally equivalent. In the first example the callers of both functions are blocked, in the second the callers are not blocked.
That's the point of Java's virtual threads: basically nothing* will block. The JVM can simply replace a blocking user IO call with an async one under the hood and schedule another virtual thread to run in the meantime. When the IO is ready, the suspended thread can be continued.
In your example the thread is not blocked but the callers are blocked.
There is more to async/await than simply keeping OS threads unblocked: it also provides a mechanism for keeping callers unblocked and for synchronizing async contexts (parallel or concurrent).
Is Loom addressing this need? Otherwise it's goroutines (or any green-threading solution) without channels and select, and the ecosystem will fracture around solutions to address the boilerplate and future-chaining pains.
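For the fork/join and error-propagation part, the structured concurrency API being previewed alongside virtual threads is the intended answer (there is no direct equivalent of channels/select). A rough sketch against the JDK 21 preview API; names and packages have shifted between JDK versions, and findUser/fetchOrder are placeholders:

import java.util.concurrent.StructuredTaskScope;

class FanOut {
    record Response(String user, Integer order) {}

    static Response handle() throws Exception {
        try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
            var user  = scope.fork(() -> findUser());   // placeholder blocking call
            var order = scope.fork(() -> fetchOrder()); // placeholder blocking call
            scope.join()            // wait for both subtasks
                 .throwIfFailed();  // propagate the first failure, cancelling the sibling
            return new Response(user.get(), order.get());
        }
    }

    static String  findUser()   { return "alice"; }
    static Integer fetchOrder() { return 42; }
}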
Java has allowed callers to avoid blocking, using Futures or callbacks, for a long time, just as Promises and callbacks were available in JavaScript before async/await:
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

var executor = Executors.newVirtualThreadPerTaskExecutor();
// or, for old Java:
// var executor = Executors.newSingleThreadExecutor();
Future<Integer> f = executor.submit(() -> someFunc());
What is relevant in the new Java virtual threads and in JavaScript async/await is the possibility of writing simple synchronous code with performance similar to callback-based asynchronous code.
But that arguably doesn't use this feature to its fullest. The most naive, easiest-to-comprehend way to do concurrency is to start multiple threads calculating something and simply wait for all of them to return; at that point you are free to use the calculated results as they are. This is also the most idiomatic way to use Java virtual threads (with a try-with-resources block), as sketched below.
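Something like this (a sketch; fetchA, fetchB and use are placeholders, and error handling is elided into a throws clause):

import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class ForkAndWait {
    static void run() throws Exception {
        // One virtual thread per subtask; the try-with-resources block closes the
        // executor and waits for any outstanding tasks before continuing.
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            Future<String>  a = executor.submit(() -> fetchA()); // placeholder I/O call
            Future<Integer> b = executor.submit(() -> fetchB()); // placeholder I/O call
            use(a.get(), b.get()); // block (cheaply) until both results are ready
        }
    }

    static String  fetchA() { return "a"; }   // stand-ins for real blocking calls
    static Integer fetchB() { return 42; }
    static void use(String a, int b) { System.out.println(a + b); }
}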
Every time something shows up about anything related to Java, the discussion turns into a flame war about which languages are better or worse than Java. Why can't we just discuss the article at hand?
In this case, I’d love to hear more from experienced Java developers, with existing code-bases, who have tested these new virtual threads out.
A lot of work still needs to be done on the libraries and other tools before it's useful for end users. We've spent years migrating to "reactive frameworks" or being stuck in "legacy".
Virtual threads aren't just something you simply "enable". You do need to adapt existing codebases somewhat, e.g. their use of synchronization; see the sketch below.
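One example of that kind of adaptation (a sketch; slowSharedWork is a placeholder): in current JDKs a virtual thread that blocks for a long time inside a synchronized block pins its carrier thread, so such sections are often reworked to use a ReentrantLock, which does not pin:

import java.util.concurrent.locks.ReentrantLock;

class Adapted {
    private final ReentrantLock lock = new ReentrantLock();

    void update() {
        lock.lock();              // a virtual thread blocking here releases its carrier
        try {
            slowSharedWork();     // placeholder for the guarded, possibly blocking work
        } finally {
            lock.unlock();
        }
    }

    private void slowSharedWork() { /* ... */ }
}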
The future is likely a mix of reactive and virtual threads, each where appropriate. Virtual threads are still very good for short-lived tasks.
The application that we are working on uses thousands of platform threads today (split over a handful of application servers). It's a humongous banking system. I have been working on performance-related improvements for years, and I am now curious whether these new virtual threads might be beneficial for us in some places. I need to read up on them a bit more.
> Platform threads are a one-to-one wrapper over operating system threads, while virtual threads are lightweight implementations provided by the JDK that can run many virtual threads on the same OS thread. Virtual threads offer a more efficient alternative to platform threads, allowing developers to handle a large number of tasks with significantly lower overhead.
> The JDK can now run up to 10,000 concurrent virtual threads on a small number of operating system (OS) threads, as little as one
OK, but why bother with virtual threads if the JVM could just magically decide to run all my virtual threads on one thread? I guess "efficient" in this context doesn't mean "fast". I want my code to run on all available cores, and not be hobbled by a JVM that decided to hate me today.
> OK, but why bother with virtual threads if the JVM could just magically decide to run all my virtual threads on one thread? I guess "efficient" in this context doesn't mean "fast". I want my code to run on all available cores, and not be hobbled by a JVM that decided to hate me today.
If you have 10,000 threads blocked on a sleep (or I/O), then there's no reason to run them on more than one core. In fact, they won't usually be running at all.
This is the use case. It's less about giving the VM a new way to 'hate you today', and more about telling the VM when it can save resources and share system threads.
Edit: there is some potential for an unexpected downside, to the extent that this introduces a new scheduler for the virtual threads. If every thread is an OS thread, then it's the OS scheduler that controls when they run. With virtual threads, I assume the scheduling policy (deciding when virtual threads get time on OS threads) is controlled by a JVM scheduler that may or may not be as good at making the choices you'd like. But it's probably best to assume it won't summarily drop everything on a single OS thread and call it a day.