
That would actually be an interesting direction. Swift would be an amazing server-side language.


IBM took a stab at it a bit ago - https://developer.ibm.com/languages/swift/

There is some neat stuff that you can do with it in docker https://hub.docker.com/_/swift


Obligate ARC is not a good fit for server software. Even obligate tracing GC à la Go and all the JVM/CLR languages gives you better throughput than that, though obviously doing neither is best.


I think Swift could work very well in a lambda/serverless context.

In that case, you'd be writing mostly functional Swift code with sparing use of reference types, which would mean you wouldn't hit ARC at all and you'd have static memory management similar to Rust.

That, along with Swift's expressiveness, ADTs, and awesome type system in general, would make it a great experience.

Kind of like the ergonomics of Python, with a genuinely good compiler catching obvious mistakes the way Rust's does.
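A minimal sketch of that value-type style (`Request`, `Response`, and `handle` are made-up names): structs and enums are copied or stack-allocated, so no user-visible retain/release traffic is emitted for them. One caveat: `String` and `Dictionary` still use refcounted copy-on-write buffers internally, so "wouldn't hit ARC at all" is an approximation.

```swift
// A request handler built entirely from value types.
struct Request {
    let path: String
    let query: [String: String]
}

enum Response: Equatable {
    case ok(body: String)
    case notFound
}

// Pure function: no class instances, so no user-level ARC traffic.
func handle(_ request: Request) -> Response {
    switch request.path {
    case "/hello":
        let name = request.query["name"] ?? "world"
        return .ok(body: "Hello, \(name)!")
    default:
        return .notFound
    }
}
```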


> in a lambda/serverless context.

Perhaps, but that's not server software, is it?


You wouldn't consider code executing in a lambda to be server-side? What is it then?


A software function, as in "functions as a service". Very different from the concerns of most server-side code.


Yeah I mean I think it's largely semantic. I think when most people talk about "swift on the server" they mean they want to write their server-side business logic in Swift, and that seems perfectly suitable in a serverless context.


Sorry, don't know enough of the theory behind this – why is that not a good fit?

Happy to read up on this if you don't have the time to type it up.


I’m not sure if it’s fair to dismiss ARC as a bad fit for server-side work in general, but the typical argument is that atomic retain/release operations become quite expensive when you share memory across threads. It’s easy to imagine, for example, how a global database connection pool accessed on every request across dozens of threads would have non-zero ARC overhead.

In practice, compiler optimizations reduce the number of retain/release pairs emitted, but I have no idea how the resulting performance compares to GC languages.
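The pool scenario above can be sketched like this (`ConnectionPool` and `withConnection` are illustrative names, not a real API). The point is the reference, not the lock: each time a worker copies the reference to the shared object, the compiler must emit the retain/release pair as atomic operations, because other threads may be touching the same refcount concurrently.

```swift
import Foundation
import Dispatch

// A toy stand-in for a global, shared, reference-counted resource.
final class ConnectionPool {
    private let lock = NSLock()
    private(set) var completed = 0

    func withConnection<T>(_ body: () -> T) -> T {
        let result = body()
        lock.lock()
        completed += 1
        lock.unlock()
        return result
    }
}

let pool = ConnectionPool()  // one instance shared by every thread

// Copying the `pool` reference (captures, stores, argument passing)
// triggers retains; with many threads the refcount becomes a
// contended cache line updated with atomic instructions.
DispatchQueue.concurrentPerform(iterations: 100) { _ in
    _ = pool.withConnection { "queried" }
}
```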


It's mostly not the atomic op overhead that's expensive, though -- it's the sharing.

You can write shared-nothing algorithms using ARC objects that are local to a single thread or core, and while that will be slightly more expensive than non-atomic RC objects, the N^2 effect of sharing between N cores won't occur.
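A shared-nothing sketch along those lines (`Accumulator` and `shardedSum` are made-up names): each worker owns its own reference-counted object, so although the refcount updates are still atomic instructions, each object's count stays in one core's cache and there is no cross-core contention on it. Only the small per-shard result crosses threads, under a lock.

```swift
import Foundation
import Dispatch

// A stand-in for any per-worker ARC object.
final class Accumulator {
    var total = 0
    func add(_ n: Int) { total += n }
}

func shardedSum(_ values: [Int], shards: Int) -> Int {
    let lock = NSLock()
    var total = 0
    let chunk = (values.count + shards - 1) / shards
    DispatchQueue.concurrentPerform(iterations: shards) { shard in
        let acc = Accumulator()  // thread-local: its refcount never
        let start = shard * chunk  // bounces between cores
        let end = min(start + chunk, values.count)
        guard start < end else { return }
        for i in start..<end { acc.add(values[i]) }
        lock.lock()  // only the merge is shared
        total += acc.total
        lock.unlock()
    }
    return total
}
```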



Interesting, thanks for sharing.

Related thread on Swift forums [1] seems to suggest that the latest Swift compiler would generate code that performs a lot better than the Swift 4.2 version. I'm interested in checking that for myself.

[1]: https://forums.swift.org/t/swift-performance/28776


Ah I see, very interesting, thanks!


Because it's too slow.


Great explanation, thanks...


I think he has enough on his plate.



