
Sure. Here are some resources:

* Span<T>: https://learn.microsoft.com/en-us/archive/msdn-magazine/2018...

* C# now has a limited borrow checker-like mechanism to safely handle local references: https://em-tg.github.io/csborrow/

* Here is a series of articles on the topic: https://www.stevejgordon.co.uk/writing-high-performance-csha...

* In general, avoid enterprise-style C# (i.e., lots of classes and design patterns) and features like LINQ that allocate a lot of temporaries.


LLMs were never the road to AGI.

This is interesting, because at 47 people they're in transitional scaling. Like they've had to solve "too many teams for the CTO to manage directly" but not "too many teams to align effectively".

At 15 strong self-directed teams, you can have a few teams focused on the high level directives, and a few entropy repair teams that mostly self-manage.

The way to think about it is maybe like homeostasis. Self-directed product teams will implement new features, fix bugs, and generally keep the thing on track, but the efficiency drops off as the feedback mechanisms of talking to customers reach equilibrium.

To mix metaphors, a leadership team creates a kind of current flow in that system. When you're small you can go to each of those teams and ensure that current flow is happening.

But at a larger size, that doesn't work. You have to engineer and carefully craft the feedback mechanisms the teams are working off of to induce that current. This is a hard problem, but it's where things like minimum attrition policies, OKRs, etc spring from: leadership trying to have a policy that induces current.


There are many wannabe security researchers who find issues that are definitely not exploitable, and then demand CVE numbers and other forms of recognition, or even a bounty. For example, there might be an app that crashes when accepting malformed trusted input, but the nature of the app is that it's never intended to be, and realistically never will be, exposed to an adversary. In most people's eyes, these are simply bugs, not security bugs, and while they are nice to fix, they aren't on the same level. It's not very difficult to find one of these!

So there is a need to differentiate between "real" security bugs [like this one] and non-security-impacting bugs, and demonstrating how an issue is exploitable is therefore very important.

I don't see the need to demonstrate this going away any time soon, because there will always be no end of non-security-impacting bugs.


Useful product, great marketing site. This is obviously a labor of love. Keep it up!

Sadly I've moved on from Python world.

Funny anecdote: using Pydantic everywhere to improve maintainability made me realize I was fighting an uphill battle with Python and should move to a statically typed language, so I switched to C#.

Thanks for your work.


No, you won't. People who gripe about Spring didn't bother to do things the Spring way. If you wrote your own DAL, you did it wrong; JpaRepositories and the like are the way. If you wrote your own JSON serializer/deserializer, you did it wrong; Spring includes Jackson and will do it for you (YMMV). If you rolled your own authentication, you did it wrong; Spring Security can magically lock down your app with as little as a single configuration method, providing JWT/OAuth/OpenID Connect enterprise security out of the box.

The issue with Spring Boot is not its feature set or its robustness. It's its heavy-handedness in forcing you down the "Spring way": annotation driven, extending provided interfaces, making it "sticky" and impossible to remove. Want to introduce gRPC? Yeah, you're going to have to find a way to weave that into Netty or use a different port. It's robust enough right up to the point where you hit a roadblock and have to dig deep, deep into Spring land to understand what's going on.

It's still the best framework for standard CRUD web APIs and such with Java, though Micronaut is quickly approaching. You can't go wrong choosing Spring Boot in 2023 if your intention is to ship in 2024. Whether or not that codebase is solid in 2027 is another matter.


Flutter/Dart is Fuchsia's UI framework.

Go was created by three folks who got fed up waiting on C++ compile times, and from their point of view Go is designed for people unable to take feature-rich languages, in Rob Pike's own words.

"The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt."

Or alternatively,

"It must be familiar, roughly C-like. Programmers working at Google are early in their careers and are most familiar with procedural languages, particularly from the C family. The need to get programmers productive quickly in a new language means that the language cannot be too radical."


Docker Compose is a pretty poor development environment experience. Constantly having to rebuild containers to recompile dependencies; dealing with permissions differences for volume mounts; having to prefix all the scripts with "docker compose run --rm"; having to deal with no shell history or dotfiles in the application containers... it leaves a lot to be desired.

I find these “shorter work weeks are just as effective” articles to be nonsense, at least for knowledge workers with some tactical discretion. I can imagine productivity at an assembly line job having a peak such that overworking grinds someone down to the point that they become a liability, but people that claim working nine hours in a day instead of eight gives no (or negative) additional benefit are either being disingenuous or just have terrible work habits. Even in menial jobs, it is sort of insulting – “Hey you, working three jobs to feed your family! Half of the time you are working is actually of negative value so you don’t deserve to be paid for it!”

If you only have seven good hours a day in you, does that mean the rest of the day that you spend with your family, reading, exercising at the gym, or whatever other virtuous activity you would be spending your time on, are all done poorly? No, it just means that focusing on a single thing for an extended period of time is challenging.

Whatever the grand strategy for success is, it gets broken down into lots of smaller tasks. When you hit a wall on one task, you could say “that’s it, I’m done for the day” and head home, or you could switch over to something else that has a different rhythm and get more accomplished. Even when you are clearly not at your peak, there is always plenty to do that doesn’t require your best, and it would actually be a waste to spend your best time on it. You can also “go to the gym” for your work by studying, exploring, and experimenting, spending more hours in service to the goal.

I think most people excited by these articles are confusing not being aligned with their job’s goals with questions of effectiveness. If you don’t want to work, and don’t really care about your work, fewer hours for the same pay sounds great! If you personally care about what you are doing, you don’t stop at 40 hours a week because you think it is optimal for the work, but rather because you are balancing it against something else that you find equally important. Which is fine.

Given two equally talented people, the one that pursues a goal obsessively, for well over 40 hours a week, is going to achieve more. They might be less happy and healthy, but I’m not even sure about that. Obsession can be rather fulfilling, although probably not across an entire lifetime.

This particular article does touch on a goal that isn’t usually explicitly stated: it would make the world “less unequal” if everyone was prevented from working longer hours. Yes, it would, but I am deeply appalled at the thought of trading away individual freedom of action and additional value in the world for that goal.


Because the documentation is bad. OAuth is really simple:

Let's say you want to use Google as an auth provider. You do this:

"Hey google who is this guy? I'm going to send them to google.com/oauth, send them back to example.com/oauth, and in the headers of the request include the word "Authorization: bearer" followed by a bunch of text"

Google says "Oh yeah I know that guy, here I'll send them back to where you said with a token"

Then later on you can take the token and say "Hey google, somebody gave me this token, who is it?"

That's pretty much it. You have to trust that Google isn't lying to you, but that's kind of the point of OAuth.

But that's never what the documentation says. It's always 10 pages long and the examples are like "here's a fully functioning python web server using flask and function decorators, oh the actual auth flow, which is really like 3 lines of code, is hidden inside of a library".

To people who write documentation: PLEASE for the love of god show me how to talk to your API both using your library, but also using something like urllib2 or requests or something.

Ideally the documentation is the absolute most minimal way of making the service work, and then adds more and more usefulness on top of that. I'm not going to judge you for writing bad code in an example. The example could practically be pseudocode for all I care. I just want to see generally how your API is supposed to work.

edit: yes, auth0, I am looking at you.
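For what it's worth, the flow described above really is only a handful of lines against plain HTTP. Here is a rough Python sketch using only the standard library; the endpoint URLs, parameter names, and credentials are placeholders for illustration, so check your provider's docs for the real ones:

```python
# Sketch of the three-step flow described above, stdlib only.
# All endpoints and credentials below are made-up placeholders.
import json
import secrets
import urllib.request
from urllib.parse import urlencode

AUTH_URL = "https://provider.example.com/oauth/authorize"
TOKEN_URL = "https://provider.example.com/oauth/token"
USERINFO_URL = "https://provider.example.com/oauth/userinfo"

def authorization_redirect(client_id: str, redirect_uri: str) -> str:
    """Step 1: build the URL you send the user to ("go to google.com/oauth")."""
    params = {
        "client_id": client_id,
        "redirect_uri": redirect_uri,        # "send them back to example.com/oauth"
        "response_type": "code",
        "scope": "openid email",
        "state": secrets.token_urlsafe(16),  # echoed back; reject mismatches (CSRF)
    }
    return f"{AUTH_URL}?{urlencode(params)}"

def exchange_code_for_token(code: str, client_id: str, client_secret: str,
                            redirect_uri: str) -> str:
    """Step 2: the user comes back with ?code=...; trade it for a bearer token."""
    body = urlencode({
        "grant_type": "authorization_code",
        "code": code,
        "client_id": client_id,
        "client_secret": client_secret,
        "redirect_uri": redirect_uri,
    }).encode()
    with urllib.request.urlopen(urllib.request.Request(TOKEN_URL, data=body)) as resp:
        return json.load(resp)["access_token"]

def who_is_this(access_token: str) -> dict:
    """Step 3: "hey, somebody gave me this token, who is it?" """
    req = urllib.request.Request(
        USERINFO_URL, headers={"Authorization": f"Bearer {access_token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The "Authorization: Bearer <token>" header in step 3 is the same bunch of text the dialogue above is describing.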


C# .NET had async and await in 2012, for comparison. I've always loved Java but Microsoft deserves immense credit for raising the bar, and so quickly, too.

I've been doing reliability stuff for nearly two decades. The one thing I am sure of is that there is no way to just engineer your way to reliability. That is to say, no person, no matter how smart, can just invent some whizbang engineering thing and suddenly you have reliability.

Reliability is a thing that grows, like a plant. You start out with a new system or piece of software. It's fragile, small, weak. It is threatened by competing things and literal bugs and weather and the soil it's grown in and more. It needs constant care. Over time it grows stronger, and can eventually fend for itself pretty well. Sometimes you get lucky and it just grows fine by itself. And sometimes 50 different things conspire to kill it. But you have to be there monitoring it, finding the problems, learning how to prevent them. Every garden is a little different.

It doesn't matter what a company like Fly does technology wise. It takes time and care and churning. Eventually they will be reliable. But the initial process takes a while. And every new piece of tech they throw in is another plant in the garden.

So the good news is, they can become really reliable. But the bad news is, it doesn't come fast, and the more new plants they put in the ground, the more concerns there are to address before the garden is self sustaining.


Python keeps growing in number of users because it’s easy to get started, has libraries to load basically any data, and to perform any task. It’s frequently the second-best language, but it’s the second-best language for everything.

By the time a Python programmer has «graduated» to learning a second language, exponential growth has created a bunch of new Python programmers, most of whom don’t consider themselves programmers.

There are more non-programmers in this world, and they don’t care - or know - about concurrency, memory efficiency, or L2 cache misses due to pointer chasing. These people all use Python. This seems to be a perspective missing from most Hacker News discussions, where people work on high-performance, big-corp, big-data, web-scale systems.


I wish Eudora was still around. The source code is available; my secret dream is to work on it sometime in the summer...

https://computerhistory.org/blog/the-eudora-email-client-sou...


Sadly, the pervasiveness of JavaScript means that UTF-16 interoperability will be needed at least as long as the Web is alive. JavaScript strings are fundamentally UTF-16. This is why we've tentatively decided to go with UTF-16 in Servo (the experimental browser engine) -- converting to UTF-8 every time text needed to go through the layout engine would kill us in benchmarks.
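The code-unit arithmetic behind that constraint is easy to demonstrate. Python is used here purely as a neutral illustration (its strings are sequences of code points, not UTF-16), by encoding explicitly; the UTF-16 count is what JavaScript's `String.prototype.length` reports:

```python
# JavaScript string lengths and indices count UTF-16 code units.
# Show the same arithmetic from Python by encoding explicitly.
s = "\N{SUSHI}"  # U+1F363, one code point outside the Basic Multilingual Plane

code_points = len(s)                           # 1 code point
utf16_units = len(s.encode("utf-16-le")) // 2  # 2 units (a surrogate pair) -- JS s.length
utf8_bytes = len(s.encode("utf-8"))            # 4 bytes in UTF-8

print(code_points, utf16_units, utf8_bytes)    # 1 2 4
```

Any engine whose public string type promises UTF-16 indices has to either store UTF-16 or maintain an index mapping, which is the cost Servo is avoiding.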

For new APIs in which legacy interoperability isn't needed, I completely approve of this document.


LMGTFY: sudo smartctl -t long -a /dev/sdX

I’m working on making body doubling a more mainstream approach to accomplishing everyday tasks.

Body doubling is known within the ADHD community and entails performing a task in the presence of another. More details here: https://www.healthline.com/health/adhd/body-double-adhd

It helps to engage motivation by using another person as the proxy. I wrote a bit about how I think it works here: https://doubleapp.xyz/blog/body-doubling-proxy

The technique goes way beyond just ADHD applications for executive function, and is something we tend to do anyway, e.g., running with a friend, studying in a group, etc.

It solves an issue for me, and I truly want to help others with the approach by building a way to stay accountable through the help of others.


Yeah, I've been using Linux for over 20 years, but I was pretty shocked by the number of sharp edges I encountered with Arch. A recent update basically borked GRUB. On investigating the issue I found that the Arch maintainers were shipping GRUB builds from the GRUB master branch, and when I pointed out this might not be the best idea, the maintainer got huffy and said maybe I wasn't 'ready for Arch'.

I installed Pop!_OS that day. I miss the AUR a bit, but Pop resolves most of the issues I had with Ubuntu, and starts and runs noticeably faster than Arch, so all in all I'm pretty happy with it. It's a shame that Arch can be so user-hostile. I really admire its wiki.


I've heard there is a secret chord, that prophets play to please their lords.

But if you care for music less than algebra, just know it goes like this; the fourth, the fifth: the minor falls, the major lifts.


> Spoken Mandarin would be a great basis for a logical world language.

Grammatically, yes (with a few caveats). Phonetically… perhaps not so much. Mandarin has a pretty poor phonetic inventory, having lost a large number of sounds (including finals) throughout the history of its development. Words in Mandarin tend to be longer (averaging around 3 syllables) compared to other Chinese languages that have retained more sounds.

> The only ”weird” parts are tonality and those darned counting words.

There is nothing specifically weird about them. English has them too, they are called «collective nouns», i.e. «a flock of birds», «two shivers of sharks», «an ambush of DevOps engineers», «three pandemoniums of webdevs», «murders of Deloitte consultants», «a dazzle of Rust developers» or «a pitying of enterprise architects». Flock, shiver, ambush, pandemonium, murders, dazzle, pitying are your «darned» counting words.


Software Engineer at Spacelift[0] here - a CI/CD platform specialized for Infra as Code (including Terraform).

A pattern we're seeing increasingly commonly is Platform Engineering teams doing the bulk of the work, including all the fundamentals, guidelines, safety railing, and conventions, while Software Engineers only consume those, or write their own simple service-specific Terraform stacks that nevertheless make extensive use of modules developed by the former.

This also seems like the sweet spot to me, where most of the Terraform code (and especially the advanced Terraform bits) is handled by a team that specializes in it. If you don't have a Platform Engineering team, or one playing that role (even if it's called DevOps or Ops or SRE), then even in a medium-sized company you'll probably start having as many approaches to your infrastructure as there are teams, complexity will explode, and implementation/verification of compliance requirements will be a chore. Just a few people responsible for handling this will yield huge benefits.

And yes, I can wholeheartedly recommend Spacelift if you're trying to scale Terraform usage across people and teams - and not just because I work there.

Disclaimer: Opinions are my own.

[0]: https://spacelift.io


At various points in my career, I worked on Very Big Machines and on Swarms Of Tiny Machines (relative to the technology of their respective times). Both kind of sucked. Different reasons, but sucked nonetheless. I've come to believe that the best approach is generally somewhere in the middle - enough servers to ensure a sufficient level of protection against failure, but no more to minimize coordination costs and data movement. Even then there are exceptions. The key is don't run blindly toward the extremes. Your utility function is probably bell shaped, so you need to build at least a rudimentary model to explore the problem space and find the right balance.

Note that Cloud Run is not built on Kubernetes, but on Borg. It implements the Knative Serving API spec, mainly for portability reasons with Knative and Kubernetes.

Source: I'm the Cloud Run PM and we have communicated about that publicly in the past.



Maybe I'm completely missing the point of the article, but my pet peeve is that operator== is even defined for floats in most programming languages. It really, really shouldn't be.

Instead it should return an error: "Floating-point values should be compared using this library function that takes an acceptable difference. If you want exact math, use BigDecimal or similar. If you know what you're doing, use the library function with acceptable difference = 0.0."
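In Python terms (picked only as an illustration, since the complaint is language-agnostic), the stdlib already offers both of the suggested alternatives: math.isclose is the "library function with acceptable difference", and decimal.Decimal plays the role of BigDecimal:

```python
import math
from decimal import Decimal

a = 0.1 + 0.2   # actually 0.30000000000000004 in binary floating point

print(a == 0.3)                                        # False: the operator== surprise
print(math.isclose(a, 0.3))                            # True: default rel_tol=1e-09
print(math.isclose(a, 0.3, rel_tol=0.0, abs_tol=0.0))  # False: tolerance 0.0 = exact

# "If you want exact math use BigDecimal or similar":
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```

Nothing stops a language from making `==` on floats a compile error and forcing one of these two spellings instead.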

And yes, Uncle Bob is giving some terrible advice. My least favorite is his advice to split "too long" pure functions into a stateful class with "short enough" methods that later can be called in the wrong order because now there's a right and wrong way to call them.


I don't know if marketing was involved in choosing the term 'isolate', but if these are isolates as described by companies such as Cloudflare and Google, it might help to say a bit more about the actual implementation at the infrastructure level.

Isolates are a really interesting approach to the lack of threads in scripting languages, most of which are inherently single-threaded/single-process. If you have a 2000-line Ruby class named 'Dog', you can easily overwrite it with the number 42. This is awesome on one hand, but it makes scaling the VM through threads too difficult: threads share the heap, so you have to put mutexes on everything, removing any performance gain you would normally have gotten. Instead the end user has to pre-fork a ton of app VMs, each with its own large memory consumption, its own network connections, etc., and stick them behind a load balancer, which is not ideal compared to their compiled, statically typed cousins; frankly, I just don't see the future allowing these languages as we continuously march towards larger and larger core-count systems. I'd personally like to see more of the scripting languages adopt this construct, as it addresses a really hard problem these kinds of languages have to deal with and makes scaling them a lot easier. To that end, if you are working on this in any of the scripting languages, please let me know, because it's something I'd like to help push forward.
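The 'Dog' point translates directly to Python as well (substituting Python for the comment's Ruby example): a class is just a name bound in a namespace at runtime, which is exactly why a heap shared across threads is so hard to lock correctly:

```python
# A class is only a binding in a namespace; any code can rebind it at runtime.
class Dog:
    def bark(self) -> str:
        return "woof"

rex = Dog()
print(rex.bark())   # woof

Dog = 42            # perfectly legal: the name now refers to an int
print(Dog)          # 42 -- rex still works, but Dog() is now a TypeError
```

With threads, any thread could perform that rebinding while another is mid-call, so every global lookup becomes a potential race; isolates sidestep this by giving each unit of work its own heap.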

Having said that, they should never be considered as a multi-tenant (N customers) isolation 'security' barrier. They are there for scaling not security.


Submodules aren't bad. But in a world where I have to explain that running revert on the 250 megs of jar files someone committed isn't a fix, and where people often just delete and re-clone entire repos because they don't know what's going on, submodules incur a heavy support burden on the few people who know how to use them.

You know, the people that already carry everyone else through their job.


My take as a TPM and certified in Scrum: the better and more skilled the team members, the less you need Scrum and other frameworks. Scrum is great for teams composed of new developers who don't yet know how to work together, or teams at companies with poor culture. But the best teams are self-organized and don't necessarily need the guidance of Scrum. As Gergely mentioned, team autonomy is a big factor in job satisfaction (and performance).

But it can still be worth doing in high-performance environments if specific things are needed. Part of being Agile is adapting processes to teams, and some Scrum practices (relative estimation, for example) can be useful even when not doing Scrum itself.

As an aside, Gergely has a newsletter[1] (of which this is one of the free articles) that is absolutely fantastic. It's $100/yr IIRC and well worth it. Gergely deep dives into topics around tech including hiring, structure, comp, etc.

Gergely also has a book[2] that helped me get hired in SF when I moved here a couple of years ago.

[1] - https://newsletter.pragmaticengineer.com/about [2] - https://thetechresume.com


The problem in the end comes down to the fine line Canada always straddles due to the proverbial "sleeping with the elephant" (see quote from Trudeau Sr.)

The fear being that if Canada deregulates fully and opens critical sectors like telecoms to full competition, we will completely lose these industries to US (or Chinese, etc.) interests.

And yet, at the same time, domestic regulation is continually captured by predatory internal interests.

Canada has always been this way, and it's frustrating as hell. Almost every industry has a caste of "good old boys" who seek to prevent competition by capturing regulatory agencies.

Many times the participants all went to private school together at Upper Canada College, etc., or sit/sat on boards together.

You could say "ok, disband the regulation and let them compete" but many times that just leads to the total collapse of certain domestic industries because they simply can't compete with US capital.


I think a lot of people's introduction to OKRs is John Doerr's book "Measure What Matters". That's where I learned about them.

The book explains how Andy Grove introduced the practice at Intel and it was very effective. The book seems to attribute the success to the practice itself and seems to say "if you adopt OKRs, you will succeed like Intel did".

I suspect that this success is misattributed. I suspect that Andy Grove was probably an excellent manager and I think he could have succeeded with something other than OKRs. I think he understood that what was really important was to get everybody across the organization to focus on essentially one big goal. He needed to make sure that everybody was pulling in the same direction and together, and OKRs provided a tool to do that.

When my organization decided to implement OKRs, my question to my peers was "who is our Andy Grove?"

If the people implementing OKRs focus too much on the practice and not enough on the motivation, I think you just end up with cargo-culting. The setting and tracking of KRs becomes the objective. So people treat it like busywork, because OKRs don't really seem to matter - they just get in the way of the "important" stuff.

As one of my coworkers says, the title of the book is "Measure What Matters", but it's too easy to slide into "What Is Measured Is What Matters".

