A compact overview of JDK 21’s “frozen” feature list (vived.io)
160 points by mooreds on June 11, 2023 | 186 comments




I wrote about how Record Patterns along with Pattern Matching for Switch can be used to write things like Tree-Rewrite rules in AST analyzers/program optimizers efficiently if anyone is curious:

This is how Spark's optimizer Catalyst works in Scala

https://gavinray97.github.io/blog/what-good-are-record-patte...

Kind of wild to believe this is valid modern Java:

        return switch (expr) {
            // x + 0 = x
            case Add(Var(var name), Const(var value)) when value == 0 -> new Var(name);
            // ... further rewrite rules ...
            default -> expr;
        };
These two are my favorite new JDK features by miles, along with Sealed Types.


Do you know if these are just Java features, or if they're backed by new JVM bytecode intrinsics to ensure that these destructuring matches are able to be performed efficiently? (Think e.g. Erlang's recent "fused checks", where it can figure out whether something is an integer and whether it's non-negative using a single abstract-machine instruction.)

That is, for languages that already have features like this (e.g. Scala), will those languages be getting any benefits as a byproduct of Java getting these features?


The implementation uses invokedynamic (which has been available for ages) and dispatches to java.lang.runtime.SwitchBootstraps#typeSwitch(…):

https://docs.oracle.com/en/java/javase/20/docs/api/java.base......)

The concrete implementation in doTypeSwitch currently says “Dumbest possible strategy”:

https://github.com/openjdk/jdk/blob/master/src/java.base/sha...

There is no @IntrinsicCandidate, so I don't expect Hotspot to special-case any of this yet. The machinery described so far merely computes an array index, and there is a subsequent lookupswitch opcode that selects the appropriate case body to run.

I'd be somewhat surprised if Hotspot could unroll the checking loop and eliminate the subsequent switch. So the Java implementation looks rather bad from a performance point of view (but it's obviously designed for compact bytecode and future optimization). Scala could certainly do the same (the magic of invokedynamic is that it's not magic), and probably much better in many cases, even on current Hotspot.
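
For readers who want a feel for what that machinery amounts to, here is a hedged sketch in plain Java (the helper name and case types are invented for illustration; the real logic is built by the bootstrap at link time): the invokedynamic call site bound to SwitchBootstraps.typeSwitch yields an index, and an ordinary switch over that index is where the lookupswitch mentioned above comes in.

    // Hedged sketch: typeSwitchIndex is a hypothetical stand-in for the
    // invokedynamic call site bound to SwitchBootstraps.typeSwitch. The shape
    // is the same: compute an index, then switch on it (lookupswitch in bytecode).
    public class TypeSwitchSketch {
        static int typeSwitchIndex(Object target, int restartIndex) {
            Class<?>[] caseTypes = { String.class, Integer.class };
            if (target == null) return -1;           // null gets a dedicated index
            for (int i = restartIndex; i < caseTypes.length; i++) {
                if (caseTypes[i].isInstance(target)) return i;
            }
            return caseTypes.length;                 // no match: falls to default
        }

        static String describe(Object o) {
            return switch (typeSwitchIndex(o, 0)) {
                case -1 -> "null";
                case 0  -> "a String: " + o;
                case 1  -> "an Integer: " + o;
                default -> "something else";
            };
        }

        public static void main(String[] args) {
            System.out.println(describe("hi"));   // a String: hi
            System.out.println(describe(42));     // an Integer: 42
            System.out.println(describe(3.14));   // something else
        }
    }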


FWIW, there's an open PR and discussion around the code for this:

https://github.com/openjdk/jdk/pull/9779


Haha, it's already merged. I should have run “git pull” before writing my comment. The “Dumbest possible strategy” comment is already gone, and appropriately so.

This is one reason why invokedynamic is so interesting: it is possible to switch implementations without recompiling everything.


If you want a deep-dive into the bytecode, there's a great article at:

https://medium.com/@nataliiadziubenko/java-20-pattern-matchi...

The tl;dr = no new intrinsics (for now), it uses table-switches and invokedynamic


I don't think you need any byte code intrinsics for pattern matching. Special cases aside, there is no generic efficient way to describe them. Java byte code is badly designed anyway so it's up to the VM compiler to do the heavy lifting.


I don't think it's that badly designed. JVM bytecode is not much more than AST serialization, a post-order traversal of expression trees. It's designed for verification - every control flow needs the same typed stack - rather than speed of interpretation, though it's easy to write an interpreter. It's zipped, so it doesn't need to be optimized for space.
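
A small illustration of the post-order point (hedged; exact output varies by javac version and offsets are omitted): the bytecode for a tiny arithmetic method is just the operands pushed and the operators applied in post-order.

    // Roughly what javap -c shows for this method: a post-order walk of
    // the expression tree a + (b * 2).
    //
    //   iload_0    // push a
    //   iload_1    // push b
    //   iconst_2   // push 2
    //   imul       // b * 2
    //   iadd       // a + (b * 2)
    //   ireturn
    class PostOrder {
        static int f(int a, int b) {
            return a + b * 2;
        }
    }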

I didn't like how it did exceptions; that seemed a bit too much like its own little mini VM: JSR/RET etc., return addresses on the operand stack. But that's gone now.


This is actually a core concept of the Java runtime. The compilers and what not “know” about idiomatic Java. The trivial example is things like the getter/setter pattern so prevalent in Java.

But the compiler doesn’t treat them as anything special, it’s just a method. But the runtime is well aware of this pattern, and what the resulting, straightforward byte code looks like, and can make optimization decisions from there.

So where, perhaps, you may want to try and write “clever” code to bend the compiler to your will, Java promotes “just write Java, no reason for the code to be tricky, let the compiler and VM implementations be tricky”.
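
A minimal sketch of that point (the class is invented for illustration, not from the comment above): at the language level a getter is just a method, but HotSpot's JIT typically recognizes and inlines such trivial accessors down to a plain field load.

    // Nothing special at the language level: no keyword, no annotation.
    // The runtime sees a tiny, monomorphic method and can inline it away.
    final class Point {
        private final int x;

        Point(int x) { this.x = x; }

        int getX() { return x; }  // idiomatic getter; typically inlined by the JIT
    }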


I think getters/setters are more tricky than just reading and writing the fields.

But all sorts of libraries expect get/set, so we often end up generating them.


It's just too bad they could not come up with an extended enum-as-tagged-union syntax. Using sealed classes for that pattern works but is still messy to write. It feels like writing enums with classes back in JDK 1.4 days.


I think "sealed interface" with implementations used by "records" gets you pretty close though right? This is usually what I do, something like:

    sealed interface Tree {
        record Leaf(int value) implements Tree {}
        record Node(Tree left, Tree right) implements Tree {}
    }

    var tree = new Tree.Node(new Tree.Leaf(1), new Tree.Leaf(2));


This is what I was talking about. With pattern matching I now expect this exact pattern will be 98% of usage of `sealed` anything. It would be nice to have some sugar like

    tagged-union Tree {
        Leaf(int value),
        Node(Tree left, Tree right)
    }
I used `tagged-union` because for backward compat I think using `enum` would not be possible but really it could be anything.


While I agree that some syntax sugar would be welcome here (AFAIK the best you can currently do is

  sealed interface Tree permits Tree.Leaf, Tree.Node {
    record Leaf(int value) extends Tree {}
    record Node(..) extends Tree {}
  }
), it is not too terrible. Also note that Rust really mixed up the nomenclature here with its enums; these were previously known in pretty much every language simply as ADTs. It makes sense for these to not have identity, while Java enum constants are always single instances.


You want `implements Tree` here or this won't compile.


I'm impressed with how Java is shaping up. With records, pattern matching, destructuring, and virtual threads all arrived or arriving, what advantages do Kotlin and Scala bring?


It's the small things that make kotlin awesome. Not microscopic things like removing semicolons, but the left-to-right of .as?, .let and friends that allow simple things to remain simple instead of littering the code with almost-single-use names. Give those trivial intermediates a name when you think the name will be helpful, not when the syntax is unhappy without. Those aren't astronaut level language features to treat someone's Haskell-envy, but simple things that just happen to add up really well.


Kotlin's stdlib, with immutable types and lots of good extension functions on lists and the like, is hard to beat. Especially with the nice lambda syntax, it makes it a joy to write functional code compared to streams, where you have to call separate functions by wrapping instead of chaining.

Operator overloading can be misused, but for certain things it makes stuff much prettier as well.


Scala stdlib is considerably better than the one from Kotlin, one detail I dislike about Kotlin collections is that `map` invocations always result in a List.


Kotlin's standard collections are "immutable collections" only in the sense that they are read-only. Adding an element to a Kotlin immutable set copies the whole thing,

https://github.com/JetBrains/kotlin/blob/924c28507067cbfbf78...

This is not that different from Java's unmodifiable collections. In contrast, the Scala immutable Set supports effectively constant-time addition,

https://docs.scala-lang.org/overviews/collections-2.13/perfo...

This isn't just an ivory tower concern either. Because Kotlin immutables aren't actually immutable (they just don't expose mutation interfaces), if I get a ref to an "immutable" Set, I can't modify it, but I also can't rely on it not changing, because it may still be backed by a mutable one.
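
A quick Java illustration of that distinction, using plain JDK collections (class and variable names are just for the example): an unmodifiable view rejects mutation through the view, but it still observes changes made through the underlying collection, which is exactly the "read-only, not immutable" situation described above.

    import java.util.*;

    // Read-only view vs. truly immutable: the view can't be mutated, but the
    // backing set can, and the view sees the change.
    public class UnmodifiableViewDemo {
        public static void main(String[] args) {
            Set<String> backing = new HashSet<>(Set.of("a", "b"));
            Set<String> view = Collections.unmodifiableSet(backing);

            // view.add("c");          // would throw UnsupportedOperationException
            backing.add("c");          // the underlying set can still change...
            System.out.println(view);  // ...and the "read-only" view observes it
        }
    }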


Did kotlin ever add an equivalent of parallel streams?


Yes, via co-routines and flows. And with some convenient extension functions you can easily deal with Java APIs that take or return streams. The structured concurrency proposal in JDK 21 takes some steps in the same direction. That's basically the higher level stuff that will make Loom more usable.

In general the theme of that release seems to cherry pick a lot of stuff that Kotlin and Scala have been supporting for some time. That's a good thing. Java developers have been missing out on a lot of good stuff. There's quite a bit more of that of course where this came from.


Kotlin provides modern features in Android development, where the Java side is stuck on ancient Java 6.

Scala was an academic experiment in how to nicely marry the object-oriented world with the functional programming paradigm; it got some hype because Java development was crawling along like a snail. I am not sure this experiment was successful after all, though.


With Scala 3 a lot of the idiosyncrasies of Scala 1/2 got fixed. (If you can get over some naming.) What it mostly brings for me is showing that composition beats inheritance, and that classic OO's habit of shoving the real world into an inheritance tree is not the way to go.

Also, it got Java to speed up development, together with the move to cloud-native.

From a language enthusiast's point of view I'm still curious why Kotlin, and now these features in Java, get so much traction while most of them have been available for 20 years in Scala. Was it marketing? Was it people? Was it symbol soup? Who knows?


As someone who is only tangentially familiar with jvm functional languages and hasn't written java in years, "symbol soup" almost certainly has a lot of the blame.

Java is simple. There were few operators, and really only one way to call a function. All function calls require parentheses. Conventions are available for pretty much any problem you can think of.

In functional land, you don't need to bother with calling a function a factory, or adapter, or whatever overwrought GoF pattern was maladapted. Despite being verbose and convoluted, Java was all basically the same things- classes and methods and a few annotations, and easy to Google concepts.

Enter symbol soup. Now your conventions aren't named; you have to recognize them from experience. That creates writer's anxiety: even if I understand what I am reading, if I need to start from scratch I don't know that I'm organizing things right. There are multiple symbols that apply, combine, or call functions; googling symbols is hard; asking questions in meatspace is hard without a screen to show the unfamiliar syntax; and that's all table stakes before you get into understanding performance implications, code organization, maintainability, etc.

If I, hypothetical developer, don't know these things and don't have someone to hold my hand, but I DO know java, what's the point of learning the functional language?

This is the same problem F# had. Over time, C# kept getting all the good bits from F# without the baggage. Java has taken the longer road to get there, but today, if I were to pick one, the value proposition of the functional language appears to be on a trajectory of disappearing.


The problem I always have with OO (or imperative) languages picking up functional features is that the constraints of functional languages are, to me, among the biggest wins.

When I know that any library I invoke is literally incapable of changing the data I pass to it, that's tremendously freeing.


As a Java developer for 15 years and a language enthusiast myself, I'm curious as well. Because I definitely don't see many places where I would use pattern matching in my code, or even records. Recently I wrote like 10k LoC and I used a record once: in test code, to return a pair of values.

Maybe I'll replace some `if`-s with pattern matching, just like I use `instanceof` pattern matching today, but that's an absolutely minor thing which is hardly worth mentioning.

I would even dare to say that lambdas and `var`-s for me are questionable features.

Actually, as I grow older, I appreciate the ascetic nature of Go, and sometimes I think that using Java 1.4 might actually be preferable for many codebases.

I understand that for code which manipulates trees, like compilers with their ASTs, pattern matching might be a godsend. The thing is... 99.99% of developers don't write that code, so optimizing the language for it is a strange goal.

Sometimes I think that some language features are driven by hype, even with Java.

I'm always happy about runtime improvements, though. And Java delivers on that front, so I can tolerate pattern matching and streams if I can get virtual threads and struct values.


> Because I definitely don't see many places where I would use pattern matching in my code or even records. Recently I wrote like 10k LoC and I used record once: in test code to return pair of values.

Without knowing your application domain or seeing the code, it's hard to guess why. But one reason is that pattern matching and union/algebraic types tend to go hand in hand. So depending on how your data domain was modeled, it is normal not to find yourself needing to pattern match as much.

In general, it's normal to find oneself "not needing" a feature that one is not used to; professionals are very good at structuring things in a way that works best with the current tooling.

> I understand that for code which manipulates trees like compilers with their ASTs, pattern matching might be god send

Very true, but I can assure you, pattern matching (combined with records and ADTs) is a very useful tool in general computation as well.


As they say, "the determined Real Programmer can write FORTRAN programs in any language."

ADTs and pattern matching go hand in hand, and they - along with immutable structures - are the cornerstones of functional programming. They provide you very strong compile-time guarantees in a very elegant and readable form. I like to write code this way, and out of 10k LoC I would estimate at least half is ADTs and pattern matching for me. Maybe more.

Too bad that although Java's ADTs are coming along nicely, pattern matching is in its infancy (due to limitations inherent in Java). This is the single language feature I miss the most in Java and Kotlin, coming from Scala.
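
To make the compile-time guarantee concrete, here is a small Java 21 sketch (the types are invented for the example): with a sealed hierarchy, a switch over it needs no default, and adding a new subtype turns every non-exhaustive switch into a compile error.

    // A tiny ADT as a sealed interface plus records, consumed exhaustively.
    sealed interface Shape permits Circle, Square {}
    record Circle(double radius) implements Shape {}
    record Square(double side) implements Shape {}

    class Area {
        static double of(Shape s) {
            return switch (s) {
                case Circle(double r)    -> Math.PI * r * r;
                case Square(double side) -> side * side;
                // no default: the compiler knows the hierarchy is sealed, so a
                // new Shape subtype would make this switch fail to compile
            };
        }
    }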


Android isn't stuck on Java 6 and many of the changes are in the compiler not the runtime anyway.


It’s basically stuck there unless you go look up a list of caveats of what has been selectively implemented.


And then you have to consider devices you want to support.


> Kotlin provides modern features in the Android development Java land stuck on the ancient Java 6.

These days you can use Java 8 features with no issues, and Kotlin isn't really solving the fundamental problem of runtime version on target devices — it's just spitting out Java 8 bytecode by default. Android build system already _desugars_ newer Java language features for older runtimes, and they could do the same thing for all the new fancy features. I guess they choose not to because Kotlin is already there.

That said, I do believe Kotlin is still more concise, has better nullability handling, and has exciting upcoming features of its own, like compiler plugins.


Google has been forced to update to Java 11 LTS, and Android 14 brings Java 17 LTS.

It turns out that Kotlin being able to consume Java libraries is worthless if Android can only use jurassic libraries.


It is a common misconception. I use Java 17 on Android just fine. The only limitation is that you can't use features that rely on methods and classes introduced after Java 11, for example records. For bridging the gap between 6 and 11 Google provides a "desugaring" library.


So records in Java aren't syntactic sugar over an immutable class? They appear in Bytecode?


I'm not sure how they appear in bytecode but they extend java.lang.Record under the hood, which is a new class that the desugaring library doesn't have.

Switch expressions and all the pattern-matching stuff though — that's all implemented entirely in the compiler. I use it on Android quite extensively with no issues.


Until Java finally includes real nullability guarantees in its language (and its standard library) I'll stick to Kotlin when I can.

These improvements are still nice for when you're stuck dealing with Java code, but in my experience getting projects to run on the latest version of Java isn't very easy with various dependencies all needing support first.


According to the newest design discussion, that might be arriving with project Valhalla.

https://mail.openjdk.org/pipermail/valhalla-spec-observers/2...

But even if they do, I'm guessing that's still going to be a long ways away.


FWIW, it is not difficult to set up NullAway: https://github.com/uber/NullAway


It's also not nearly the same as having null checks be a first-class part of the language.


Yes, it does offend the language-hipster instinct in me too to have to add 8 lines of Gradle to turn on a build plugin.


Java has tons of nice additions (Lombok, NullAway, Manifold) but that's not part of the language itself. When you bind yourself to libraries like these, you're stuck waiting for them to update whenever a new Java release comes out. That can take months or years, and sometimes a library just stops getting updated at all.

If Java were to include NullAway in the standard language, which it clearly could if it wanted to, I would consider it to have feature parity with other modern languages.


And then have to deal with the fact that the libraries you consume don't use NullAway either.


If you import Java libraries into Kotlin, you can still get NPEs from the libraries. And if you don't import Java libraries into Kotlin, well... you could just write Java without those Java dependencies, too.


I'd rather focus on the disadvantages of additional layers to debug, with their own set of libraries, and in Kotlin's case, a way to sell IntelliJ licences.

Meanwhile, using Java means using the JDK out of the box with no extra sugar. Pretty healthy.


> a way to sell IntelliJ licences.

This doesn't change much for me, because I can't imagine writing Java without IntelliJ. Eclipse is just a disaster by comparison.


Indeed, how can anyone live without a full compiler instead of an incremental one, buying additional licenses for JNI development (Clion), ten finger chords, an index system that never stops, having to manually call for specific code inspections, ...


I'm not fond of Eclipse, but Netbeans exists =)


Don't know anything about Scala. Kotlin has null safety and a bit cleaner syntax, but other than that, I don't see too much advantage over Java for backend. In Android, Java is still lagging behind a lot. Also, Jetpack Compose, a declarative UI framework is Kotlin only. Kotlin is also working on wasm (so is Java I think, but Kotlin has working examples with wasm GC) and Jetpack Compose is going multiplatform, including wasm. This video has some examples in description https://youtu.be/oIbX7nrSTPQ


There are TeaVM and CheerpJ, which can already execute Java (or any JVM class file) in JS or Wasm.


They're really great for making sure Java devs will absolutely never contribute to your project.

(I say this as a Java dev who occasionally gets tempted to try to help out in repos that are Kotlin/Scala, and gives up very quickly.)


Isn't that true for any other language as well?


I'm happy to take a crack at Go and C++, but coming from Java it is totally impossible to decipher wtf is going on in Scala. Kotlin is better, but still pretty awful.

Also, devs who use Go and C++ usually have a good reason (embedded systems and such), but Kotlin and Scala use seems to be motivated mostly by vaguely hipster-y annoyance with Java 8. And, like, sure, if you are annoyed by Java 8 and then use a language designed to have nothing in common with Java 8 except that it can import Java libraries and run in the JVM... well, it's gonna be a pain in the butt for your colleagues who do Java all day. And some of them aren't going to make the effort to work with it.


> Also, devs who use Go and C++ usually have a good reason (embedded systems and such)

Just... drop Go from that. Go is closer to JS than to C++/embedded.


Go is great for CLIs and utilities. It compiles to native code, has a low memory footprint, and the source code is pretty readable even if you've never written Go before. And when it crashes you get a human-readable stack trace not just a core dump :)

But I've never seen a large-ish Go application that I didn't hate. Because, yeah, as you said, it's closer to JS in a lot of ways.


I meant it in the way that Go is not cut out for most embedded use cases due to having a fat runtime with GC.

It is OK for CLIs and small utilities, but I can't really stand looking at it (they went with a type syntax that is neither C-like nor Haskell-like and is absolutely unreadable to me, even though I can usually get the gist of any language from having seen my fair share of syntaxes).


All great points. Especially:

> Kotlin and Scala use seems to be motivated mostly by vaguely hipster-y annoyance with Java 8.

Now that I think about it, they seem like the Instagram of languages/code/devs: mostly obsessed with surface-level, superficial syntax features. IMO it is great in the same sense that cooking with a meal kit is superior to cooking from grocery shopping.


> wtf is going on in Scala

I hate to admit it as someone who enjoys trying new languages and enjoyed my time using Haskell a long time ago, but Scala 2.x was definitely the most confusing and intuitive language I have ever used professionally. Really felt like a collage of disparate features without any coherence between them.


In my experience Scala is one of the most intuitive and coherent languages out there. Very hard to beat in those regards.

A tiny language with only a few, but powerful, features which can be used as building blocks to express even the most advanced patterns.

The problem is just the people "holding it wrong".

If you try to use it as Java++ Scala will be more cumbersome than Kotlin.

If you try to use it as Haskell this will become very painful quite quickly.

You need to use it as Scala… This means you need to embrace its features and philosophy. Just use the best parts of OOP and FP together!


> A tiny language with only a few, but powerful, features which can be used as building blocks to express even the most advanced patterns.

And yet from the scala 3 website itself :

> One underlying core concept of Scala was (and still is to some degree) to provide users with a small set of powerful features that can be combined to great (and sometimes even unforeseen) expressivity. For example, the feature of implicits has been used to model contextual abstraction, to express type-level computation, model type-classes, perform implicit coercions, encode extension methods, and many more. Learning from these use cases, Scala 3 takes a slightly different approach and focuses on intent rather than mechanism. Instead of offering one very powerful feature, Scala 3 offers multiple tailored language features, allowing programmers to directly express their intent:

> The problem are just the people "holding it wrong".

If enough people have the same problem with a programming language, at some point it becomes a problem of the language, not the people.

> Just use the best parts of OOP and FP together!

Assuming that one can cleanly extract the best parts of OOP and FP without bringing in the baggage of either. It's not clear to me that those parts don't have a certain level of "contradiction" which leads to the whole being less coherent than just OOP or FP.


> most confusing and intuitive

unintuitive, perhaps?


+1


Is this supposed to be a good thing or a bad thing?


Double edged sword. Spring devs: stay far, far away from me.


Mostly a bad thing, at companies with lots and lots of Java devs.


I can give perspective as someone who enjoys modern Java, writes Kotlin at their dayjob (and loves it), and also likes Scala 3.

Here are the things Java lacks; if it had them, I probably wouldn't see a reason for other languages:

1. Lack of first-class lambda syntax. In Kotlin/Scala you can write something like:

   fun doSomething(handler: (String, Int) -> Foo): Blah
In Java, all you have are the "Function<>" and related interfaces, which are clunky to use.

2. Opaque types (Scala 3). These have been one of the most impactful programming features I've ever used, and I sorely miss them in languages that lack them.

    object Foo:
        opaque type UserId = Int
        opaque type Email = String
        def mkEmail(s: String): Email = s
        def findUserIdByEmail(email: Email): UserId = 42

    import Foo.*
    val email: Email = mkEmail("foo")
    val valid: UserId = findUserIdByEmail(email)
    val invalid1: UserId = findUserIdByEmail("bar") // error: can't use String as Email
    val invalid2: UserId = 123 // error: can't use Int as UserId
3. Union types (Scala 3). You can emulate them in Java/Kotlin with Sealed Types but it's much more verbose.

    type JsonScalar = String | Int | Boolean
    type Json = JsonScalar | Map[String, JsonScalar] | List[JsonScalar]

    // Makes writing functions that take multiple argument types much easier:
    def handle(it: String | Int | Boolean): Unit = ...
4. Context-oriented programming with "given/using" in Scala 3 and "context-receivers" in Kotlin.

This one is harder to explain succinctly but essentially it allows you to decorate methods/classes with required "contextual" args.

Instead of passing them as regular function arguments, you must invoke the function inside of an "environment"/"context" where the requirements are satisfied.

This makes threading dependencies through your code much easier, and eliminates the need for dependency injection frameworks in many cases.

    interface Logger {
        fun log(message: String)
    }

    object ConsoleLogger : Logger {
        override fun log(message: String) = println(message)
    }

    context(Logger)
    fun doSomething(): Int {
        log("Hello")
        return 42
    }

    fun main() {
        val logger = ConsoleLogger
        with (logger) {
            doSomething()
        }
    }
5. First-class support for asynchronous programming. With "suspend" in Kotlin and a current prototype being done in Scala 3:

- REPO: https://github.com/lampepfl/async | SLIDES: https://github.com/lampepfl/async/blob/main/scalar-slides.pd... | YOUTUBE TALK: https://www.youtube.com/watch?v=0Fm0y4K4YO8

6. Passing arguments by name, rather than positionally.

    fun doSomething(a: Int, b: Int, c: Int): Int = ...
    doSomething(a = 1, b = 2, c = 3)
7. Tuples (Scala). They're like anonymous data classes/records.

    val t: (Int, String) = (1, "foo")
    val (a, b) = t


1. Java already has succinct lambda expressions.

     parameter               -> expression
    (parameter1, parameter2) -> expression
    (parameter1, parameter2) -> { code block }
and function signatures like the below might be "clunky" from your point of view, but IMHO are more clear since they document explicit types. (and you can navigate to their javadoc)

    Blah doSomething(BiFunction<String, Integer, Foo> fn)
2. Conceded. There are some JEP's around this but they all got rejected.

3. Already covered in existing discussion. Verbosity level is fine.

4. Context-oriented programming can easily be achieved using AOP in Java. But frankly it is a readability and maintainability nightmare in any large project. I have seen projects/apps/libs that used this paradigm (in several languages) be rewritten to explicitly designate all contexts.

5. Java with virtual threads now has far better support for async programming than Kotlin or Scala. It is now competitive with Golang in async ease-of-use (see the sketch after this list). https://blog.rockthejvm.com/ultimate-guide-to-java-virtual-t...

6. Shrug. Many popular languages have explicitly rejected named function parameters. "Use a struct/record if you want named arguments" is the usual answer.
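
For point 5, a minimal sketch of what the virtual-thread style looks like (names and the task count are illustrative): plain blocking code, one cheap thread per task, no async/await or suspend coloring.

    import java.time.Duration;
    import java.util.concurrent.Executors;

    public class VirtualThreadsDemo {
        public static void main(String[] args) {
            // One virtual thread per submitted task; blocking calls are cheap here.
            try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
                for (int i = 0; i < 10_000; i++) {
                    int id = i;
                    executor.submit(() -> {
                        Thread.sleep(Duration.ofMillis(100)); // ordinary blocking sleep
                        return id;
                    });
                }
            } // the try-with-resources close() waits for submitted tasks to finish
        }
    }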


> and function signatures like the below might be "clunky" from your point of view, but IMHO are more clear since they document explicit types. (and you can navigate to their javadoc)

    Blah doSomething(BiFunction<String, Integer, Foo> fn)
What happens when you want to pass a function of arity 3, 4, 5 or 6?

Also, it's nice to be able to give identifiers to the arguments:

    // In Kotlin (absurd example, just to prove a point)
    fun withHandler(
        handler: (name: String, age: Int, isAdult: Boolean, callback: (Int, Int, Int) -> Unit) -> Unit
    ) {
        handler("John", 25, true) { a, b, c -> println("$a $b $c") }
    }

    // In Java:
    interface Handler {
        void handle(String name, int age, boolean isAdult, Callback callback);
    }

    interface Callback {
        void call(int a, int b, int c);
    }

    class JavaClass {
        public static void withHandler(Handler handler) {
            handler.handle("John", 25, true, (a, b, c) -> {
                System.out.println("a: " + a + ", b: " + b + ", c: " + c);
            });
        }
    }


It is possible to create custom functional interfaces in Java. One is not restricted to what is provided in the stdlib.

    @FunctionalInterface
    public interface VargsFunction<T,R> {
      R apply(T...  t);
    }

    @FunctionalInterface
    public interface QuadFunction<T, U, V, W, R> {
      public R apply(T t, U u, V v, W w);
    }

Of course Java is nowhere near as flexible as C++ in this regard, which has variadic templates, template parameter packs and template-template parameters. Well, you can do this with currying if one is feeling lazy, though it is obviously not recommended:

   Function<One, Function<Two, Function<Three, Function<Four, Function<Five, Six>>>>> func = a -> b -> c -> d -> e -> 'z';
The second point (named parameters) is one that several language-designer greybeards (not just Java's) have made a deliberate design decision against, for varying reasons. "Use builders/records/structs" is the usual advice given anytime you ask them this question.


> 5. Java with virtual threads now has far better support for async programming than Kotlin or Scala. It is now competitive with Golang in async ease-of-use

Green threads/stackful coroutines, as in Go/Java, and stackless coroutines, as in Kotlin, are well-known ways of introducing async programming in a style that feels natural to devs. They both have strengths and limitations; I don't think one offers "far better support for async programming", and even more importantly they are not mutually exclusive and can be used together depending on one's needs (see https://www.youtube.com/watch?v=zluKcazgkV4)


> Java with virtual threads now has far better support for async programming than Kotlin or Scala. It is now competitive with Golang in async ease-of-use.

Only if your problem set matches one of using threads. There are other async problems that don't fit cleanly into a "force it to be a blocking thread instead" model. In particular those in front ends where being on a specific thread at specific times is important.

Green threads work great for server-style async, which is where go is seeing success. Then again servers are probably the last major usage of OpenJDK, so copying Go's tradeoffs here probably makes sense for it. But it's not unambiguously "the best way to do async"


AFAIK you can create a virtual thread executor backed by a single native thread.


> BiFunction<String, Integer, Foo> fn

That signature is awful. I never remember if the return type is first or the last in the list. Also the "generic" types don't document anything, by design. Yes it's an actual type but it's too generic to have any value over something like Kotlin or Scala declaration. It's impossible to tell from this signature what the input and output types are supposed to represent.

Almost always it's better to define a new functional interface than using BiFunction or similar.


Isn't point (5) kind of moot with virtual threads?

I think golang shows that a synchronous, imperative paradigm wins the masses.

I'm not too brushed up with Kotlin suspend, but does it suffer from the classic "function coloring" problem that plagues other solutions? I've dabbled in functional effect systems in Scala, for example. I really enjoy them, but my coworkers sure don't when they realize that to perform some IO in a new place they will have to update a huge stack of type signatures to be wrapped in an IO monad. Async/await in javascript has the same issue, though the syntax is a bit friendlier.

My great hope for virtual threads in Java is that we can bring great IO performance and scalability, on par with golang, without retraining devs.


Yeah, Kotlin does have the colored-function problem, in the same way that Node/JS has.

Loom and Virtual Threads are one area I'm not as familiar with.

What I do know is that you can configure virtual threads as the coroutine dispatcher for Kotlin coroutines, and in a recent video Roman Elizarov talked about how the default "Dispatchers.IO" could theoretically leverage VT's in some future JDK.

https://youtu.be/zluKcazgkV4?t=2518


> Isn't point (5) kind of moot with virtual threads?

It might be but we don't know yet. I have yet to see a large scale application written with virtual threads (for the good reason that it is barely out of development!) I'm reserving judgement until I see virtual threads in use outside of toy examples.


> I think golang shows that a synchronous, imperative paradigm wins the masses.

I am not sure Go shows that... Attributing the "popularity" of Go solely to its async model is kind of a leap.

> I'm not too brushed up with Kotlin suspend, but does it suffer from the classic "function coloring" problem

I never quite understand why function coloring is referred to as a problem. Including the async/non-async nature of a function in its signature is as natural as using any other monadic types as a return, such as Optional for example.

> My great hope for virtual threads in Java is that we can bring great IO performance and scalability, on par with golang, without retraining devs.

I think that's a conflation that sometimes happens: virtual threads usually ease scalability at the cost of performance.


> I never quite understand why function coloring is referred to as a problem.

For exactly the reason I said: colored functions are poison. Changing one type is easy. Changing a whole stack of types is tedious. And that's before you realize your tests have stopped compiling as well!


Colored functions are a warning.


How about typescript? It supports union types, lambda functions and type aliases. Would you consider it on par with Kotlin?


I adore TypeScript -- JS/TS has been my primary language for most of my career.

Kotlin has a few things that TS doesn't have but overall, I'd rate it very similarly. I probably enjoy writing TS a bit more than Kotlin.

EDIT: TS also has Opaque Types, sort of. They call them "brand types" or other names, it looks like this:

    declare const opaque: unique symbol
    declare type Opaque<T, K extends string> = T & { readonly [opaque]: K }

    type Email = Opaque<string, 'Email'>

    function mkEmail(email: string): Email { return email as Email }
    function takesEmail(email: Email) { console.log(email) }

    const email = mkEmail('foo@bar.com');
    takesEmail(email);
    takesEmail('wrong'); // error


Yeah, I think extension functions and operator overloading are the biggest things in Kotlin that are missing in TypeScript. It's very easy to add functions to existing types and chain the method calls in Kotlin.

Edit: didn't know about the brand types in TypeScript, TIL, thanks for the example.


Without Scala, Clojure or Kotlin, this probably would not have happened.


> what advantages do Kotlin and Scala bring

That it's not Java. It's a huge thing with some developers not wanting to touch Java for <insert reason> but would be fine with Kotlin or Scala.


Then better to go elsewhere, rather than work on the Java Virtual Machine.

It is like using UNIX and not wanting to understand, or touch, C.


I've been using Linux as my sole desktop OS for over 20 years. (As it's the only usable OS.)

At the same time I hate C and everything around it!

Both things go quite well hand in hand. You don't have to touch any C-madness most of the time. Even when you compile stuff yourself, like for example the Kernel, you almost never have to interact with C directly.

I would still prefer an OS written in Scala, but we don't have that at the moment. So I guess I will stick with Linux for the time being.


I am quite vocal about C, as proven by my lengthy C rants, yet I do understand the value of knowing and understanding the foundations we stand on.


The JVM’s main value props are a world-class single dispatch runtime and GC, and a huge ecosystem of compatible OO libraries. It also doesn’t care whether you are writing Java by hand. If your team can master any of the more concise and productive languages, they should.


Except the tiny detail that a big part of that value proposition is written in Java.

If people want to ignore the foundations of their toolbox, that says a lot about their skill set.


It’s hard for me to judge, when I’ve known Java since 1996. I might need to know Java, but I don’t need to write it. Maybe it’s enough that someone on the team knows it. Maybe it’s not really possible to master Scala/Clojure/Kotlin/Groovy/AspectJ without knowing at least some Java.

Likewise, I probably need to know C to use POSIX, but it’s so unsafe that I should not be writing it.


Yes, the poor guy that needs to jump in for:

- understanding stack traces

- writing idiomatic bindings for libraries

- debugging why Java frameworks fail consuming jar files whose bytecode was generated in a different way

- debugging why a jar misbehaves after a JVM bytecode rewriting tool messed up the bytecode sequence used to emulate other language semantics on top of the JVM

- adopting libraries whose compiler plugins and annotation processors only understand Java semantics

- mapping debugging information and telemetry data from tooling like JFR into the original code of the guest language.


There is no destructuring in Java, as mentioned in JEP 441 (for some "future version" of the language).

Or maybe I'm missing something in which case would you point me to a JEP or an article about destructuring in Java?

EDIT: nevermind, I should have checked JEP 440, which is about destructuring record patterns...

Aside from that, immutable / readonly collections and null safety are two big reasons for Kotlin or Scala.


The major things from Scala that I find useful are:

- higher kinded types

- null in types

- for comprehensions

- macros

- opaque types

- implicits/type classes

- persistent immutable collections

- EDIT named & default params


I don't see much advantage from Kotlin unless you are stuck with an old jdk (like the one from Android), still, the latest jdk is still far away from current Scala.


Feature soup is kind of meaningless. The open source community has whatever it needs now.

Many (a lot of) Java programmers in the US can't code to JDK 8 already. I love the JVM, but these Java releases are not being adopted at any real scale for a reason: they break things. And syntax sugar is boring and unnecessary for a crew of software engineers who have no real ethos surrounding records or any of these features.

They are just being rolled out to appease devs from other ecosystems. They will not form a new or better method for building systems or improve performance.


For me, the biggest pain point with Java is the lack of optional chaining <https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...> and nullish coalescing <https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...>. I've noticed so many libraries and JDK APIs that try to solve this problem in such a limited way. What is `Objects.toString(thing)` other than a botched `thing?.toString() ?? "null"`? And that utility method is limited to `toString`; what if you want null-safety for other methods? You have to write that yourself. It's silly. Adding optional chaining and nullish coalescing would be a wonderful QOL change. It'd reduce code complexity, and it would flatten code (i.e., hide less logic behind tiny methods).

I do a lot of serialisation. You have no idea how much I'd love to be able to just do `data.set("thing", this.valueEnum?.name())` instead of having to do `data.set("thing", this.valueEnum == null ? null : this.valueEnum.name())`.
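
For comparison, here is a hedged sketch of the workarounds available today (the enum and field names are invented to mirror the example above): the explicit ternary, and Optional pressed into service as a poor man's `?.`/`??`.

    import java.util.Optional;

    class SerialisationExample {
        enum Status { ACTIVE, INACTIVE }

        Status valueEnum;  // may be null

        String nameViaTernary() {
            return this.valueEnum == null ? null : this.valueEnum.name();
        }

        String nameViaOptional() {
            // roughly this.valueEnum?.name() ?? null, at the cost of an extra allocation
            return Optional.ofNullable(this.valueEnum).map(Enum::name).orElse(null);
        }
    }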


> I do a lot of serialisation. You have no idea how much I'd love to be able to just do `data.set("thing", this.valueEnum?.name())` instead of having to do `data.set("thing", this.valueEnum == null ? null : this.valueEnum.name())`.

You may be already familiar with this, but mapstruct is a godsend package that easily beats any kind of optional chaining.


Introducing annotation processors will never be a godsend.


Disagree. They should not be the first tool to reach for, but they are basically just safe compile-time macros (mapstruct at least is compile-time only)


>what advantages do Kotlin bring?

Elvis operator. Drops mike.


This question is fundamentally flawed. People don't use Kotlin just as "a better Java" (maybe they did in 2017), Kotlin is an independent language that runs on the JVM, as well as other targets. Most Kotlin devs don't see Java and Kotlin in competition with each other or wish to use Java if it only had feature X.


>With records, pattern matching, destructuring, and virtual threads all arrived or arriving,

Yet literally none of those things actually matter for any sort of real world application.

What matters is how quickly you can get code to do what you want, and Java/Kotlin lost that plot almost a decade ago.


> What matters is how quickly you can get code to do what you want...

This is the falsest statement I have read in a long time.

Maybe, if you write once and then throw it away (like a use-once shell script).

Otherwise you will have to maintain your code, and then readability and safety trump everything else. If not, it is about performance, but then a lot of time is spent on fine-tuning.


And yet, every single java enterprise project has a shitload of files, and compiling and running all of those takes significant time, in the 10s of seconds, all because of how Java is structured.

Trying to edit something means you have to dig through layers of inheritance into factories and injection frameworks, debugger is a slow piece of shit that almost needs ide wrapper to run, and as far as safety goes, people seem to have forgotten about log4shell, REALLY quick.

You can believe that strong OOP and all the theoretical bullshit in JDK21 matters, but in the future, Java developers will be the first to go as the world moves towards much smaller dev teams that are well versed in a super high level, AI powered language+compiler, where you can knock out production web apps in a day.


> What matters is how quickly you can get code to do what you want

it's pretty quick to start with java (and presumably kotlin too - not too familiar with it).

I argue that it's even faster than with javascript. You at least don't have to toil with the build tool (java == maven).


Hey, author here - the HackerNews effect killed our AWS-hosted WordPress ;(

You can find mirror on Substack: https://vived.substack.com/p/the-compact-overview-of-jdk-21s...

EDIT: We did the redirection to the Substack mirror.


Your pop-over to join the newsletter and continue reading won’t allow someone to continue reading without signing up for your newsletter. There’s no way to close the pop-over modal. This is user hostile behavior. When scrolling, it reappears over content. Very anti-user behavior.

This is equivalent to forced purchase required to use the restroom at a venue. Don’t be like that. Compassion is good for business.


It is the default behaviour of Substack - to be honest, I am not sure if it is even possible to change.

However, you don't need to subscribe - you can close the popup and carry on!


I know, not your fault, I’m just throwing a hail-mary in case they see it. It’s sooo frustrating. Once you leave and come back it doesn’t show. So obviously they are storing state on it and pushing it in your face when the MarTech gods deem it appropriate.

Good write up though and appreciate the KotH meme.


Substack popups have a Continue Reading button that dismisses it...


> There’s no way to close the pop-over modal

There is, you click outside the modal. Or hit Escape key.


On mobile viewing on safari, there was no such feature.


"web scale"


And we don't even use MongoDB!

https://www.youtube.com/watch?v=b2F-DItXtZs


I’m impressed. One of the big hurdles of extendable types is figuring out which subtype a parameter is. Overloaded methods are a blunt tool that involves lots of indirection. Generics force you to structure classes in a specific way. Pattern matching is the cleanish solution to the problem of “what type do I have here and what do I do with it?”


Seeing Java getting these features is a breath of fresh air. I wonder how this will impact Kotlin.

Being compatible with Java was one of the goals for Kotlin. We will soon have a lot of features in the JVM's mother language that are solved differently in Kotlin. For example:

  - Data Classes vs Java's Records
  - Nullability
  - String Templates
Not diverging from the core language is what made TypeScript successful in the long term. This won't work for Kotlin (and was not a goal). It will be interesting to see whether the languages will diverge even more - maybe to an extent where they become incompatible - or whether the interop will converge somehow. Diverging languages will certainly make the interop harder.


Kotlin still has a lot going for it, with the scope functions, delegation with "by", and syntax that makes it very easy and lightweight to create a DSL (infix functions are great for asserts).

I think it's a very expressive and elegant language if you use its syntax properly.

Maybe some of kotlin's "workarounds" will be converted into the JVM equivalents in the backend.


The relationship between Kotlin and Java is not the same as TypeScript and JavaScript. TypeScript is intentionally a syntactical superset of JavaScript. It only adds type annotations and nothing else. This allows TypeScript to be easily compatible with JavaScript, with very little risk of future breaks in compatibility.

Kotlin cannot choose the same route as TypeScript, since Java is already statically typed (even if that typing is not always very strong). There is no point in adding type annotations to it. Most of the improvements that Kotlin originally sought (and still seeks) to have over Java are in syntax and semantics. This means Kotlin has to modify the syntax and cannot just be a simple transpiler that naturally incorporates new Java language features.

The interop strategy that Kotlin has chosen instead is: 1. Keep track on what Java is doing. 2. Add compatibility to Java language features when they are released. 3. Avoid introducing incompatible features too quickly when Java is developing the same thing. 4. If the Java development direction is settled but the feature is not released yet, introduce features that have a clear upgrade path for compilation and interop on new Java releases.

Kotlin is already compatible with Java records (just add the @JvmRecord annotation to a data class). This annotation forces the Kotlin data class to be immutable and the compiler will generate a java record for you if you target the JVM.

Nullability marking is not something that is currently being worked on for Java. Some publications have misleadingly modified the title for "JEP 401: Flattened Heap Layouts for Value Objects (Preview)" [1] to "JEP 401: Null-restricted types", because this very early proposal mentions possible interactions with null-restricted types - but the null-restricted proposal doesn't even exist yet. It's quite hard to predict how Kotlin would be compatible with a feature that may or may not come around 2030.

String templates are still a preview feature in Java 21, so don't expect to see them in production before Java 25 in September 2025. String Templates, like string interpolation, are essentially syntactic sugar, but they provide a better story for type-safety and flexibility than classic string interpolation. Kotlin already solves the simple interpolation cases with its string interpolation, but I agree it leaves some things to be desired (multi-line handling is especially painful[2]). The type-safety/flexibility story is generally handled in Kotlin with type-safe builders[3], and it's often a better solution in my opinion, but there are many cases where a string template would be more readable.

The main reason Kotlin might need to support string templates is that all this syntactic sugar has an interface: the StringTemplate interface. Java libraries may rely on it in the far future[4], and then Kotlin will need to maintain compatibility somehow. JetBrains are not ignoring this issue, of course. You will find multiple tickets on their tracker talking about custom string interpolation that even predate the String Templates JEP[5], but it's currently a wait-and-see approach.

I think the most important set of features for Kotlin to look at is Project Valhalla[6]. It's an even bigger revolution than Project Loom. Kotlin is already preparing for this, with value classes[7]. I believe the main reason that value class usage is so highly restricted in Kotlin right now is that they are waiting for Project Valhalla, and do not want to give up interop later.

You can also think of other past examples where Kotlin introduced a feature earlier than Java, and then gradually made it interoperable. The best example is closures. Kotlin supported closures since its early beta versions (before Java 8 was released), and for a long time supported generating Java 6 bytecode. Since Java 8 closures relied on a new JVM bytecode instruction (invokedynamic), Kotlin used a different mechanism to implement closures. When Java 8 was released, they could easily maintain API compatibility with Java, since closures are always invoked through an interface. Kotlin eventually added support for generating lambdas in the same style as Java, but they only made this the default in Kotlin 1.9[8], to maintain maximum compatibility. This didn't affect interop (which was already resolved when Java 8 was released); it only had an impact on bytecode size and perhaps a small performance impact in some cases.

I think this list shows that Kotlin is currently handling future interop quite well.

[1] https://openjdk.org/jeps/401

[2] https://youtrack.jetbrains.com/issue/KT-46365/Multiline-stri...

[3] https://kotlinlang.org/docs/type-safe-builders.html

[4] Java is a conservative language and most Java deployments are many versions behind the latest LTS. Until last year (https://newrelic.com/resources/report/2022-state-of-java-eco...), most of the Java systems in production were running Java 8, and only recently has Java 11 started becoming the dominant version, with Java 8 strongly trailing behind. Java 21 is coming out this year, but Java 17 is barely deployed anywhere. For this reason, you'll find most libraries still target Java 8 or Java 11, and avoid Java 17 features like records. Since String Templates will be out only in 2025, I don't expect to see libraries requiring them in production before 2030.

[5] https://youtrack.jetbrains.com/issue/KT-16366

[6] https://openjdk.org/projects/valhalla/

[7] https://youtrack.jetbrains.com/issue/KT-42434/Release-inline...

[8] https://youtrack.jetbrains.com/issue/KT-45375


I think the site is currently experiencing the hug of death, but doesn't have an archive.is link :( .


It amazes me how many people still can't configure a webserver in 2023 to not fall down under a few queries/second load serving up a textfile.


Is the HN hug of death truly only single digit QPS?


I've had posts on the front page of HN (#1 or #2). That generated about 20k visits.

Assuming:

* 1 page/visit, which is the modal value based on my analytics

* the period where it is on the front page is 6 hours (I don't know this, but I could if I could be bothered to go back and look; I don't have granularity down to the minute, though)

That is 20000 requests/(66060), which rounds to 1 request/second.

Of course, there are all the other assets (JS, images, etc) that you need to account for as well. And it might spike to higher.


That assumes the hits are evenly distributed.

I think it’s more likely that there’s a big flood at first and then it drops off towards a low number.


I think that's completely useless? The whole point of qps is that it gives insight into spikes.


If the entire page is 50 KB, I think it can be served 20,000 times within five minutes over typical commercial broadband (assuming 30 Mbps net capacity): 50 KB × 20,000 ≈ 1 GB ≈ 8,000 Mbit, which is roughly 270 seconds at 30 Mbps. 20,000 requests can't really form a huge spike. Most personal-ish sites won't be hosted on commercial broadband, so they should have more bandwidth than that.


That's been my experience. But even the tiniest pizzabox server should have no trouble at 40+ qps.


Genuine question: what is it that causes websites to keel over in a mild breeze like that? Surely not raw CPU, memory, or network bandwidth on the machine? Is it just some arbitrary limit imposed on whatever VM they are running?


> Surely not raw CPU, memory, or network bandwidth on the machine?

Why not? It gets worse and worse as we aim for more abstractions and layers of tooling.

When the latest hype is server-rendering React in Node.js for what could instead be a static site...


I am sure there are very good reasons for it, but even after reading part of the latest Valhalla design document I still don't quite understand why nullability and/or identity are used as the key differentiators between value types and "regular" class types. It seems to make the design and integration of value objects very convoluted, and it basically leaves representational flatness as a mere consequence instead of the central defining factor of a value object.


> basically leaves representational flatness as a mere consequence

And that’s exactly what they are going for, AFAIK.

You as a programmer should first and foremost care about the semantics that you want to express in your programs. In many cases that means an object you use has, and needs, an identity. But you may find that it doesn't make sense in a given case, like a date or a coordinate, so you can express that it doesn't have identity, allowing more freedom on the compiler's part.

Finally, you may even say that an implicitly zeroed object makes sense as a default for your class, and when this particular case happens and your object can’t be null, the compiler can even completely inline/flatten your data. But the performance improvements are generally not the goal themselves, they are neat advantages you may get by restricting your semantics.


> You as a programmer should first and foremost care about the semantics that you want to express in your programs.

I think what counts as semantics vs. implementation detail depends on the task at hand. From my understanding, the need for value types mainly originates from the need for precise control of memory layout, in which case the representation is part of the important semantics. C#/.NET seems to have a simpler approach by just providing representation flatness as a tool and letting the developer choose what they want to do/focus on.

> In many cases it means that an object you use has and need an identity. But you may find so that it doesn’t make sense in a given case, like a date, or a coordinate — so you can express that it doesn’t have identity, allowing for more freedom on the compilers part.

This is confusing to me; the need for identity seems to be a property of a specific object instance, not really a property of a class/type.

> But the performance improvements are generally not the goal themselves

I am not sure how true this is; the design document itself references numerical code performance as one of the motivating factors.

> when this particular case happens and your object can’t be null, the compiler can even completely inline/flatten your data

As someone who has "worked" on quite a bit of runtimes/compilers, "the compiler can" always turns out to be "the compiler doesn't". This idea of a compiler auto-magically deciding and optimizing data layout sounds great in theory, but in practice it is very, very hard to achieve in any meaningful way.


C# and Java have very different philosophies here - the former prefers exposing the primitives to developers, keeping the runtime simpler at the price of a more complex language, while Java does the opposite.

Of course performance is a goal, but they asked the fundamental questions around the topic, and managed to boil it down to something that will also solve another pain point of the language at the same time, and help heal the rift between primitives and objects.

And your last paragraph is true in general, but the explicit goal of all this is to restrict the possible semantics of code to enable optimizations - so in the end a compiler-enforced non-nullable, primitive class will be reliably flattened.


> C# and Java have very different philosophies here - the former prefers exposing the primitives to developers, keeping the runtime simpler at the price of a more complex language, while Java does the opposite.

I don't think it's fair to say that the addition of structs in C# makes it a more complex language. If anything (well, I might be biased here, since I am mainly a C++/native dev), it makes the difference between reference semantics and value semantics explicit, in a way that is very easily identifiable.

> solve another pain point of the language at the same time, and help heal the rift between primitives and objects.

From my perspective, this is the root cause of the issue. A "primitive value" and an object (in the Java/Smalltalk tradition) are not the same category of thing and do not belong in the same hierarchy. It seems to me that somewhere in the early-to-mid 2000s, Java/the JVM looked at C++ and wanted a unified type hierarchy, or to somehow treat values and objects as the same thing. This is only possible in C++ and other native languages because those do not have "objects" as defined in Smalltalk.

The main issues with primitives and objects being different were that one can't define additional primitive types with more complex behavior, and that the generics system doesn't work well with primitive types. And now instead of addressing those two simple problems effectively, we have Project Valhalla, which took more than a decade to arrive at a solution that looks less elegant to me.

Sometimes two simple solutions are better than one big abstract design.


Structs may not be a big and complex addition to a language, but C# has done something like this many times, and I would definitely rank the language somewhere just below C++ in complexity. Swift is another similarly complex language with very non-trivial interactions between said features.

> And now instead of addressing those two simple problems effectively

Those are not all simple problems.


Is it too much to ask for null safety in the type system?


Nope not at all, here you go:

https://openjdk.org/jeps/401

It's part of Valhalla, as a prerequisite for heap flattening.

Ctrl+F for "Null-restricted type". The strawman proposal syntax is "Foo!".

  When inlined, a null-restricted class type should have a heap storage footprint and execution time (when fully optimized) comparable to the primitive types. For example, a Point!, given the class declaration above, can be expected to directly occupy 128 bits in fields and array components, and to avoid any allocation in stack computations. A field access simply references the first or second 64 bits. There are no additional pointers.

For a summary of current status and what it means for you as a developer, I recommend the linked email here and the author's comments in the thread:

https://www.reddit.com/r/java/comments/13xtog3/valhallas_lat...
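To make that concrete, here is a rough sketch of what the strawman looks like in code. This is proposal syntax only, it does not compile on any released JDK, and the Point/Line classes are just made-up examples:

    // Strawman syntax from the Valhalla drafts -- not valid on any shipping JDK.
    value class Point {            // a value class: no identity, == compares by state
        long x;
        long y;
        Point(long x, long y) { this.x = x; this.y = y; }
    }

    class Line {
        Point! start;              // null-restricted: a candidate for flattening into 128 inline bits
        Point  end;                // nullable: may still need a reference or an extra null encoding
    }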


Thanks, hopefully it will be live sooner rather than later.


As a TypeScript developer who had to build something in Java, it's bizarre and alien to me that anything anywhere might be null, that I have to manually check for null everywhere, and then play a guessing game when I get an NPE. It feels like the most unsafe, unsound thing ever.


You could add a few @Nullable annotations to your code wherever you might return null and be completely null-safe. Nullness is a so-called trivial property, so it can be statically analysed even in such a bolted-on way.
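A minimal sketch of what that looks like in practice, assuming a JSpecify-style @Nullable annotation plus an external checker such as NullAway or the Checker Framework (the User/UserRepository names are made up):

    import org.jspecify.annotations.Nullable;

    record User(String name) {}

    class UserRepository {
        private final java.util.Map<String, User> byEmail = new java.util.HashMap<>();

        // Declared as "may return null"; callers have to handle that.
        @Nullable User findByEmail(String email) {
            return byEmail.get(email);
        }

        void greet(String email) {
            User u = findByEmail(email);
            // Calling u.name() right here would be flagged by the checker: u might be null.
            if (u != null) {
                System.out.println("Hello, " + u.name());
            }
        }
    }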


On launch day, we absolutely should expect to see that. But after the industry spent 28 years producing a corpus that may or may not round-trip nulls, well, “I don’t have a solution but I have a lot of respect for the problem.”


In a recent publication, Brian Goetz explains why it is so complex to introduce one:

https://mail.openjdk.org/pipermail/valhalla-spec-experts/202...

Great read.

TL;DR: It is hard to decide what the defaults should look like, so they plan to add special markers (! and ?) that let programmers state that detail explicitly, while types without ! or ? keep "undefined nullability".


Less than 30 minutes after a post on HN and it’s not accessible. Lol


I had an article on the front page of HN; it melted my server, and I ended up using a CDN.

Useful lesson, but until you've had massive traffic happen at least once, it's overengineering. :)


Setting up a CDN is free, quick and easy. It should be the default if you're deploying a static site. Definitely not overengineering.


I respectfully disagree, depending on a closed source, for profit megacorp should not be the default, certainly not for a hobby-scale project.


I'm curious why you think that.

From my perspective, you should do what is best for yourself and your users. I think you should definitely be leveraging things like CDNs that are low effort and high impact to save yourself time and help your users.

Not using things that are closed source and from megacorps feels like a luxury that a lot of people cannot afford. Not monetarily, but in terms of time and effort. When I'm working on a side project I want to remove all the friction I possibly can so I stay motivated and keep working on the project.


First, to be clear, I mean this within the context of self-hosting. If you want to optimize your time by not self hosting, by all means, do it!

But if you have decided to self host, I would suggest reaching for one of the many FOSS caching reverse proxy solutions first.


If you consider sharing the browsing habits of all of your visitors with Cloudflare to be free, then yes.


Use Cloudflare? I would never!

My blog uses a custom CDN I wrote in Assembly for maximum performance.

I also wrote a custom browser (Guess what language) so my visitors browsing habits weren't shared with Google via Chrome.

I will go to the ends of the Earth to protect my users' browsing habits.


I didn't find it to be free, quick or easy, but then again, I used Cloudfront and had Wordpress on the backend. This was also a few years ago; wrote about it briefly here: https://www.mooreds.com/wordpress/archives/2565


Today I set up hosting for my website (which is empty; I'm more interested in configuring things than writing actual stuff). All I did: set up GitHub Pages (a few clicks in the repository settings) and set up Cloudflare DNS with the "proxied" setting (which is the default). So now my empty website vbezhenar.com is protected by the mighty Cloudflare, and it took very little effort and zero dollars. You might want to check it out.


That's great. I have over 1000 posts on my WP blog, so moving it is a bit more involved. But if you're starting out today, what you suggested seems great!


Well, all I know is that there must be a long tail of websites that will never need it.


Sure, but in the scenario discussed (Post hitting top of HN) a CDN is like insurance.


Melt in what way? Just didn't have enough network bandwidth, or something else?


The web server fell over and wasn't able to respond to any requests. So folks visiting it were getting either 500 errors or "We can’t connect to the server" errors.


As the author, I can assure you that we will be conducting a post-mortem analysis!

We experienced an astounding increase in traffic, nearly 100 times our usual volume.

You can find a mirror on Substack: https://vived.substack.com/p/the-compact-overview-of-jdk-21s...


I'm curious to see how structured concurrency feels in production. I'm still leaning towards C#'s async and TPL syntax; the manual context, fork, and unwrap syntax might feel cumbersome. I enjoy the await sugar and implicit context handling, but the explicit scopes solve the bugs you see in C# when the default thread pool is abused.


While virtual threads will be stable in Java 21, Structured Concurrency is still a preview feature. You probably won't see it in production anytime soon.

Preview features require a special flag when compiling and running them, and class files that use them won't run on newer JVM versions. I don't expect to see StructuredTaskScope in common production use before the next LTS version is out.

But that doesn't mean you cannot have structured concurrency before then. Even in languages that mostly enforce structured concurrency, like Kotlin, it's still a library feature. Even the original blog post which formulated this concept described a library that implemented structured concurrency for Python[1]. You can pretty easily implement structured concurrency yourself by writing your own equivalent of StructuredTaskScope, if you need it right now. You can even get structured concurrency in C#[2] or Go[3].

[1] https://vorpus.org/blog/notes-on-structured-concurrency-or-g...

[2] https://github.com/StephenCleary/StructuredConcurrency

[3] https://github.com/sourcegraph/conc
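For reference, a minimal sketch of what the JDK 21 preview API looks like (needs --enable-preview; findUser/fetchOrder are hypothetical stand-ins):

    import java.util.concurrent.StructuredTaskScope;

    class Handler {
        record Response(String user, String order) {}

        Response handle() throws Exception {
            try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
                var user  = scope.fork(this::findUser);    // each fork runs in its own virtual thread
                var order = scope.fork(this::fetchOrder);

                scope.join()             // wait for both subtasks
                     .throwIfFailed();   // propagate the first failure, cancelling the sibling

                return new Response(user.get(), order.get());
            }                            // leaving the scope guarantees no subtask outlives it
        }

        String findUser()   { return "alice"; }
        String fetchOrder() { return "order-42"; }
    }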


David Fowler has Twitter threads about researching how Java and Go do it, including an experimental runtime.

By the way, .NET already has structured concurrency via Dataflow.


Green Threads Experiment if anyone is interested in what they've done in .NET: https://github.com/dotnet/runtimelab/issues/2057

Personally, async/await is the only thing keeping me from the C# ecosystem.


A bit ironic that the last post is a UI thread question. I would welcome green threads or a virtual threading system, but considering the UI work that C# handles, it's no wonder async/await was implemented. If Java virtual threads are a success, I expect C# to follow suit.


It won't, this was discussed at BUILD 2023, and they have no plans to do so.

If I am not mistaken it was on this session,

"ASP.NET Core and Blazor futures, Q&A"

https://build.microsoft.com/en-US/sessions/4cfe374e-a9a0-4a8...


It seems appropriate that C# would include both models. The language design seems to allow you to choose what tool is best for your application.


It won't, this was discussed at BUILD 2023, and they have no plans to do so as they would need to duplicate the API surface of many low level capabilities.

If I am not mistaken it was on this session,

"ASP.NET Core and Blazor futures, Q&A"

https://build.microsoft.com/en-US/sessions/4cfe374e-a9a0-4a8...


Java sucks as a language and no amount of lipstick on a pig will make it look good


This comment was childish, but I'm struggling with Java at work: people fight over whether to implement visitors, adapters, methods, or services for the same thing.


I refuse to believe Java has gone through 20 major revisions since 1.0

I remember when 1.5 came out and Sun's marketing folks insisted it be called Java 5. I think that's when they jumped the shark, and they haven't corrected course since, versioning-wise.

And I wonder if someone keeps track of what the proper traditional Java major version should be now. I'd guess it's on 2.x or 3.x at best.


If you keep to the Standard Edition platform and never use internal APIs, you could argue Java never broke backwards compatibility, effectively having a "1." in front of every version number.

In practice, you could say that Java had only one major version bump: from 8 to 9, when it closed down internal APIs with Jigsaw and gave away Java EE to become Jakarta.


Sure, that's easy. The answer is 21. You don't get to write versioning rules for other projects and there is no golden standard.


Is there some kind of objective standard for version numbers I’m not aware of? Maybe you could call only LTS versions “major versions” if you wanted.


There's https://semver.org/, which Java unfortunately has not chosen to follow. As joaonmatos pointed out, 8->9 was the most painful major version change, and I think there have been a couple of others where deprecated features were (or will be) fully removed.


Projects that follow semver in theory often don't actually follow it so why would we expect everyone else to?


I think the attempt communicates a lot of useful info and succeeds pretty often. I agree they can’t guarantee it, as with any bugs.


It’s fine. If your biggest gripe with something made by a corporation is its name then you will always be unhappy.


After 8.0 they started releasing very quickly. I'd say we're now at about Java 10u30 by the old standards (or 1.10u30 by even older standards).


>And I wonder if someone keeps track of what the proper traditional Java major version should be now. I'd guess it's on 2.x or 3.x at best

What does that even mean?


Before Java 5, Java used a 1.x scheme for its major releases (Java 1.0, 1.1, 1.2, 1.3, 1.4). Java 5 was both Java 1.5 and Java 5, and was marketed as Java 5. Since then Java has switched to a new versioning scheme. Some people are apparently salty about that. I'm salty as well; I'd prefer semver, but Google Chrome 114 probably made this crazy scheme too seductive for marketing, so here we are, with releases like Java 20 which add nothing new.


JDK 20 introduced the Scoped Values JEP as a preview, and also had significant changes to the Panama Foreign Function & Memory (FFM) preview API:

- Scoped Values: https://openjdk.org/jeps/429

- Panama FFM 2nd: https://openjdk.org/jeps/434

The changes in the Panama API were so drastic between 19 and 20 that any code you had written would no longer work. Which is the point of interim releases like these.

I happen to have a lot of experimental Panama code that has been continually evolving with these non-LTS releases since the initial release.
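To illustrate the churn, here is roughly what a trivial downcall looks like under the JDK 21 preview (JEP 442). Names shifted between 19, 20, and 21, so this sketch only matches the 21 shape and needs --enable-preview:

    import java.lang.foreign.*;
    import java.lang.invoke.MethodHandle;

    class StrlenDemo {
        public static void main(String[] args) throws Throwable {
            Linker linker = Linker.nativeLinker();

            // Look up the C library's strlen and describe its signature: size_t strlen(const char*)
            MethodHandle strlen = linker.downcallHandle(
                    linker.defaultLookup().find("strlen").orElseThrow(),
                    FunctionDescriptor.of(ValueLayout.JAVA_LONG, ValueLayout.ADDRESS));

            // Arena.ofConfined() was Arena.openConfined() in JDK 20 and a MemorySession in 19.
            try (Arena arena = Arena.ofConfined()) {
                MemorySegment cString = arena.allocateUtf8String("Hello, Panama");
                long len = (long) strlen.invokeExact(cString);
                System.out.println(len);   // prints 13
            }
        }
    }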


I think that preview features are not something that any sizeable chunk of developers will use. So while technically you're correct, I still consider Java 20 to be a pretty minor release.


That's fair, and I'm inclined to agree.

Just wanted to point out there wasn't _really_ nothing new, even if it might have been somewhat pedantic.


Well, they were going with 1.5, 1.8 because the 5 and 8 are minor version numbers, as it's all backwards compatible.


There is no universal standard for “major revision” so this is fine.

The only seeming standard is that big integers are bad. ;) But don’t tell Chrome that.


>I refuse to believe Java has gone through 20 major revisions since 1.0

What's your take on Chrome and Firefox?



