
Clojure and Flatlaf [1] tick all the boxes for me. If I want declarative-ish UI, I can always throw in Seesaw [2]. Everything else I find cumbersome; it pulls in tons of unnecessary dependencies and (usually) ends up abandoned in a year or two. With Swing, I know it is well-documented and will work for the next 30 years. But YMMV and I'm hoping that HumbleUI could be an exception.

[1] https://www.formdev.com/flatlaf/

[2] https://github.com/clj-commons/seesaw
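
For anyone who hasn't seen it, here's a minimal Seesaw sketch (my own toy example, not from the Seesaw docs; assumes Seesaw is on the classpath):

  (ns demo.core
    (:require [seesaw.core :as s]))

  (defn -main [& _]
    ;; Use the platform look-and-feel (FlatLaf would be set up separately).
    (s/native!)
    (-> (s/frame :title "Hello"
                 :content (s/button :text "Click me"
                                    :listen [:action (fn [e] (s/alert e "Clicked!"))])
                 :on-close :exit)
        s/pack!
        s/show!))

Plain data-ish options instead of Swing's builder ceremony, and it still bottoms out in ordinary JComponents when you need them.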


My observation over the years as a software dev has been that velocity is overrated.

Mostly because all kinds of systems are made for humans. Even when we as a dev team were able to pump out features, we got pushed back, precisely because users had to be trained, users would have to be migrated, and all kinds of things tangential to the main goals would have to be documented and accounted for.

So the bottleneck is a feature, not a bug. I can see how we might optimize away the documentation and the tangential work so it happens automatically, but not the main job, which needs more thought anyway.


For any given thing or category of thing, a tiny minority of the human population will be enthusiasts of that thing, but those enthusiasts will have an outsize effect in determining everyone else's taste for that thing. For example, very few people have any real interest in driving a car at 200 MPH, but Ferraris, Lamborghinis and Porsches are widely understood as desirable cars, because the people who are into cars like those marques.

If you're designing a consumer-oriented web service like Netflix or Spotify or Instagram, you will probably add in some user analytics service, and use the insights from that analysis to inform future development. However, that analysis will aggregate its results over all your users, and won't pick out the enthusiasts, who will shape discourse and public opinion about your service. Consequently, your results will be dominated by people who don't really have an opinion, and just take whatever they're given.

Think about web browsers. The first popular browser was Netscape Navigator; then, Internet Explorer came onto the scene. Mozilla Firefox clawed back a fair chunk of market share, and then Google Chrome came along and ate everyone's lunch. In all of these changes, most of the userbase didn't really care what browser they were using: the change was driven by enthusiasts recommending the latest and greatest to their less-technically-inclined friends and family.

So if you develop your product by following your analytics, you'll inevitably converge on something that just shoves content into the faces of an indiscriminating userbase, because that's what the median user of any given service wants. (This isn't to say that most people are tasteless blobs; I think everyone is a connoisseur of something, it's just that for any given individual, that something probably isn't your product.) But who knows - maybe that really is the most profitable way to run a tech business.


Over my years of hiring and working with other software engineers, I’d say they fall into two key categories:

- Engineers who know how to build apps with a specific set of tools or frameworks and focus on applying this knowledge

- Engineers who know how to model their work in terms of data structures and the algorithms or pipelines being applied to them

The first category can be effective and efficient at applying their knowledge, particularly because of experience and practice with the tools. These are the specialists - the front-end devs, the Rails devs, the embedded engineers and so on. They know more about the constraints of their environments.

Engineers in the second category think more about what they are doing than about how they are doing it. They are the generalists. They think about React as a functional-ish way to convert state into a DOM tree; they recognise the value and reasoning behind different approaches to development and don’t box themselves in.

I find the second category almost always more effective. That doesn’t mean specialists are without value - you need your embedded engineers to understand that space in depth, for example.

Especially when hiring I like to probe for this during a system design exercise: ask a question and walk through the design of a simple system or pipeline of some kind. If the engineer answers in terms of specific technologies (“I would use Kafka to send gRPC to MongoDB”), they’re usually inflexible. If they answer in terms of techniques and data flows (“I would use a work queue to distribute payloads over the network to backing store databases”) they usually get it.
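
To make the contrast concrete, here's roughly the shape the second answer describes, sketched with core.async (my sketch, not a real design; store! is a hypothetical stand-in for the backing store):

  (require '[clojure.core.async :as a])

  ;; A bounded channel is the work queue; producers block when
  ;; consumers fall behind, which gives you backpressure for free.
  (def work-queue (a/chan 100))

  (defn store! [payload]
    ;; Hypothetical stand-in for a write to whatever backing store you chose.
    (println "stored" payload))

  ;; A small pool of workers drains the queue.
  (dotimes [_ 4]
    (a/go-loop []
      (when-some [payload (a/<! work-queue)]
        (store! payload)
        (recur))))

  ;; Producers just put payloads on the queue.
  (a/>!! work-queue {:id 1 :body "hello"})

Swap the channel for Kafka and store! for MongoDB and you get the first answer; the point is that the shape comes first and the technologies are interchangeable.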

I reckon changing your mindset a bit can help with the fatigue described in the article. Though I admit I’m as frustrated as anyone else the first time I bring up a new project after a while on an app, and the tooling has broken and the industry has moved on (looking at you, frontend!)


No, job security is why the code base is poorly written and uncommented. Pile-of-technology growth (POTG) is caused by a legitimate desire to avoid future pitfalls. Above all, application devs do NOT want to be in a situation where they must use first principles to keep the app alive. What they ignore is that the benefit of an added component is 1, but the cost scales as N^2. Call it "the integrator's dilemma". POTG via the integrator's dilemma is exacerbated by easy, fast dependency managers, FOMO and the willful ignorance of managers to any concern other than "ship on that date".
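
To put rough numbers on the N^2 claim (my arithmetic, not the parent's): with N components there are N(N-1)/2 potential pairwise interactions to keep working.

  N = 5   ->  10 interactions
  N = 10  ->  45
  N = 20  -> 190

Each added component buys one unit of benefit but signs you up for N-1 new interactions.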

Yet doing everything from first principles is not viable, nor is focusing only on non-functional requirements. The solution is engineering leadership that values code not written and dependencies not added. They wince at a big commit. Every component must pull its weight. They use every part of every component and dwell in its community. If they don't have time for a new community, they hire a new person who does, and that person may represent a new specialization on your team.

A solid team needs 5 people. CSS and Figma ('designer'). SPA ('front-end eng'). The appservers, database and outward API calls ('backend eng'). Infrastructure and CI/CD ('devops'). Finally, you need a person who owns goals, measures the past and articulates the future, and takes point on user and business comms ('product'). Project management can be a part-time role for anyone on the team, but fits well with product. I do not think a single human mind can fill a combined designer/front-end/back-end/devops role and do a good job. They just don't have time to learn and stay up to date with all of that, and it requires an untenable amount of context switching.


Why Clojure? For Datomic, Rama, Electric and Missionary. No need for a long blog post - this stack screams if your app fits within its intended operational margins, e.g. enterprise cloud information systems and rich interactive web products.

Welcome to the new digital divide, people, and the start of a new level of "inequality" in this world. This thread is proof that we've diverged and there is a huge subset of people that will not have their minds changed easily.

The most important thing for good sound is the distance between the mic and your mouth, since SNR falls off with the square of distance (double the distance and the captured signal drops by about 6 dB while the room noise stays put). Some people use a standalone mic mounted to an arm on the front of their desk, but I find that to be too intrusive and ugly. The ideal setup is a headphone with a boom mic.

A lot of business conference headsets have a boom mic, as do gaming headsets, but they all cheap out on the actual microphone element. For ideal sound quality you want to get a V-Moda BoomPro (wired, $20) or an Antlion ModMic Wireless ($140). These tack onto your existing headphones with a small magnetic clip (so you can take the mic off when not in a call). You can look up sound quality comparisons on YouTube; these two devices are far ahead of even the most expensive business/gaming headsets. It makes a vast difference in sound quality.

The ModMic Wireless also works around the "Bluetooth sound quality is shit in headset mode" issue, since your headphones are in output-only (A2DP) mode and the ModMic has its own separate Bluetooth connection for mic-only.

Finally, camera quality and lighting are important too. Here's a good article on lighting: https://languageoflight.blog/2020/10/08/time-zooms-by/. For the camera you can use an actual camera (DSLR/mirrorless, ideally with a fast lens) with an HDMI capture card; there's plenty of info about this available on the internet. An old iPhone can also work quite well, afaik. Either way, it'll be much better than a webcam, particularly if your room's lighting conditions aren't ideal.


Software development costs are out of control; the whole industry is one big grift; there’s no accountability anywhere; 20% of the devs are doing 80% of the work; the business would instantly terminate the other 80% for cause if they could only discern the difference, but they can’t, because the business side is also a big grift with the exact same problem, all the way up to the founders, recursively.

Sure, some people who care about minutiae are code artisans, but what I have seen more often is co-workers weaponizing these discussions to hide their own incompetence.

I have seen so many people going on and on about best practices and coding styles and whatnot, using big words just in hopes of keeping discussions going so no one figures out that they don't know how to code.


[Former member of that world, roommates with one of Ziz's friends for a while, so I feel reasonably qualified to speak on this.]

The problem with rationalists/EA as a group has never been the rationality, but the people practicing it and the cultural norms they endorse as a community.

As relevant here:

1) While following logical threads to their conclusions is a useful exercise, each logical step often involves some degree of rounding or unknown-unknowns. A -> B and B -> C means A -> C in a formal sense, but A -almostcertainly-> B and B -almostcertainly-> C does not mean A -almostcertainly-> C (chain ten steps that each hold with 95% confidence and the end-to-end implication holds with only 0.95^10, about 60%). Rationalists, by tending to overly formalist approaches, tend to lose the thread of the messiness of the real world and follow these lossy implications as though they are lossless. That leads to...

2) Precision errors in utility calculations that are numerically unstable. Any small chance of harm times infinity equals infinity. This framing shows up a lot in the context of AI risk, but it works in other settings too: infinity times a speck of dust in your eye >>> 1 times murder, so murder is "justified" to prevent a speck of dust in the eye of eternity. When the thing you're trying to create is infinitely good or the thing you're trying to prevent is infinitely bad, anything is justified to bring it about or prevent it, respectively.

3) Its leadership - or some of it, anyway - is extremely egotistical and borderline cult-like to begin with. I think even people who like e.g. Eliezer would agree that he is not a humble man by any stretch of the imagination (the guy makes Neil deGrasse Tyson look like a monk). They have, in the past, responded to criticism with statements to the effect of "anyone who would criticize us for any reason is a bad person who is lying to cause us harm". That kind of framing can't help but get culty.

4) The nature of being a "freethinker" is that you're at the mercy of your own neural circuitry. If there is a feedback loop in your brain, you'll get stuck in it, because there's no external "drag" or forcing functions to pull you back to reality. That can lead you to be a genius who sees what others cannot. It can also lead you into schizophrenia really easily. So you've got a culty environment that is particularly susceptible to internally-consistent madness, and finally:

5) It's a bunch of very weird people who have nowhere else they feel at home. I totally get this. I'd never felt like I was in a room with people so like me, and ripping myself away from that world was not easy. (There's some folks down the thread wondering why trans people are overrepresented in this particular group: well, take your standard weird nerd, then make two-thirds of the world hate their guts more than anything else, and they might be pretty vulnerable to whoever will give them the time of day, too.)

TLDR: isolation, very strong in-group defenses, logical "doctrine" that is formally valid and leaks in hard-to-notice ways, apocalyptic utility-scale, and being a very appealing environment for the kind of person who goes super nuts -> pretty much perfect conditions for a cult. Or multiple cults, really. Ziz's group is only one of several.


I've been doing go-to-market for technology businesses to some degree for the better part of 20 years now and I still learn more every day. I don't really see it as something you can easily learn by reading about it, mostly because there is a lot of nuance.

That said, it basically boils down to 3 things:

Personas

Channels

Messages

You have an archetype in your head of someone who would like your tool; you want to test whether that archetype exists. That persona likely consumes content, so your job is to understand the channels where that persona typically consumes it, and to serve them a message they can understand in the time you have. There are thousands of personas in millions of channels consuming billions of messages - that is why it's hard.


This whole trail through to the top comment is spot on.

It's also helped me work out a corollary in my mind that has puzzled me for a bit.

My sense, in the Marxist tradition, is that modern organisations are dependent on the extraction of value from highly capable technical resources and, especially outside the tech bubble, they largely resent this dependency.

Let's say developers are an example of a highly capable technical resource, though I am by no means limiting the scope.

This results in a series of mechanics that lead to developers being alienated:

  - From the product of their development (ownership of IP and the resultant value of their code, distance from seeing the positive impacts of their work or talking to those it helps)
  - From the act of developing itself (by its reduction to commercial use and control over how it is done, approval gates, arbitrary coding standards, ticket systems, scrum processes, project managers and product owners, Jira, timesheets etc.)
  - From their fellow workers (stack ranking, power dynamics, labour competition, structural organisational tension)
  - From their human nature and natural talent (by the reduction of their humanity and passion and capability to a mere "developer" or "engineer", use of stereotypes, reduction of humanity to output/LOC/story points delivered, corporate gaslighting at questioning this state of affairs etc.)
And most non-developer people who have worked in an average organisation and spent much time with developers have seen all of this at play, and heard how much developers hate it. Yet many still refuse, even in the face of self-interest (e.g. faster delivery of outcomes for a non-technical manager), to empathise and accept the reality of the experience enough to support better workplaces for developers. A tangible recent example is the insistence, for all sorts of reasons, on getting developers back into the office where they can be watched, despite demonstrably lower productivity and engagement.

My sense of this is that what developers can bring to the modern world is the closest humanity has got to magic. And this dependency is resented. And this resentment leads to workplaces in which this resentment is externalised in the form of debasement (e.g. caricatures and other forms of ego compensation - "they're just the boffins, they don't have people skills!"), control ("the boffins can't really be trusted - better add some process and oversight, and given they don't have the people skills better make sure they aren't anywhere near management/clients!"), and dependency inversion ("sure the boffins can do their coding stuff, but they'd be nothing without us to help babysit and organise things, they don't get the way the world works, clearly it's they who actually need us!"). And this environment in turn leads developers to internalise this systemic resentment as a resentment of themselves, their capability, and their work, aka burnout.

But one question has been bubbling away for me for a while.

How do so many organisations arrive at a system in which it's almost a badge of honour not to be one of the doers? That those who can't do should, as a moral claim, oversee, and manage, and lead? And that we should keep adding more of those people until the doers can't possibly do? Even when that produces lower tangible results.

Maybe at one point I internalised the Office Space / IT Crowd idea - the non-doers are "people people" who didn't spend decades at their PCs honing their craft but instead went to wild parties and focused on normal people stuff (the implication of course being that developers are lesser than normal people). Maybe the developers really can't be trusted. Maybe they do need to be managed and watched and distanced. Maybe the code monkey caricature (before being reclaimed by those it was used to demean) is right.

But now I wonder.

What if those people are empowered by the systems into positions of power over developers because in the first instance they affirm the original resentment: what if putting non-doers in charge safely perpetuates the idea that the developers belong at the bottom of the pyramid and affirms the extraction of their labour in support of the salaries and profits of those above relying on it? Or to put it another way, what if the code monkey caricature is effectively a justification of Marxist exploitation?

And what if the second-order effect here is that some - let's call them the senior management class, leaders of the second order of Pournelle's Iron Law of Bureaucracy in the example above - are more conscious of these dynamics and are consciously perpetuating them? What if they are knowingly hiring more non-doers to help keep this balance and control? What if it's not accident, or people skills, that means non-doers are in charge? What if the very reason they're there in the first place is to be in charge, even in positions outside of formal leadership (or at least to indirectly support the power of someone else who brought them in for that reason)? After all, they're not there to do.

So to close out a long post, a corollary to Pournelle's Iron Law of Bureaucracy: in any bureaucratic organisation, the ability to contribute to the goals of the organisation will be an insurmountable barrier to gaining influence or control within it.


IMO the main problem is in the very way organisations are structured - it is about power, and people (really meaning management) want to achieve good-enough results, provided that their control is maintained. Agile as aesthetics is cool, but actual implementations move power away from management, and we can't have that. So we rebrand project managers as POs and whatever managers as scrum masters, and follow the rituals as long as they don't get in the way of the normal run of things.

Agile was grounded in solid principles, but is ideologically very naive, which allowed its easy co-option by consultants and management.


Ah, the suckless philosophy - making everything as terse, austere and featureless as possible in the name of 'simplicity'. It's a wonder so few people want to adopt it.

To get a further glimpse into that philosophy, check this out: http://harmful.cat-v.org/software/ and recoil in horror as literally everything you've ever used (and sometimes even liked) is deemed harmful. There also used to be some more, ahem, 'controversial' content, which I assume was removed to get with the current times.

