aforwardslash's comments

rolls eyes

No, their error was that they shouldn't be querying system tables to perform field discovery; the same method in PostgreSQL (pg_class or whatever it's called) would have had the same result. The simple alternative is to use "describe table <table_name>".

On top of that, they shouldn't be writing ad-hoc code that queries system tables mixed in with business logic; that kind of task belongs in a separate library (crappy application design).

Also, this should never have passed code review in the first place, but let's assume it did because errors happen, and this kind of atrocious code and flaky design is not uncommon.

As an example, they could be reading this data from CSV files *and* have made the same mistake. Conflating this with "database design errors" is just stupid - this is not a schema design error, this is a programmer error.


Every time I read one of these "I don't use AI" posts, the content is either "my code is handcrafted in a mountain spring and blessed by the universe itself, so no AI can match it", or "everything different from what I do is technofascism or <insert politics rant here>". Maybe I'm missing something, but tech is controlled by a handful of companies - always has been; and sometimes code is just code, and AI is just a tool. What am I missing?

I was embarrassed recently to realize that almost all the code I create these days is written by AIs. Then I realized that’s OK. It’s a tool, and I’m making effective use of it. My job was to solve problems, not to write code.

I have a little pet theory brewing. The corporate world claims that we hire junior devs who become intermediate devs, who then become senior devs. The doomsday crowd claim that AI has replaced junior and intermediate devs, and is coming for the senior devs next.

This has felt off to me because I do way more than just code. Business users don't want to get into the details of building software. They want a guy like me to handle that.

I know how to talk to non-technical SMEs and extract their real requirements. I understand how to translate this into architecture decisions that align with the broader org. I know how to map it into a plan that meets those org objectives. And so on.

I think what really happens is that nerds exist, and through osmosis a few of them become senior developers. They in turn have junior and intermediate assistant developers to help them deliver. Sometimes those assistants turn out to be nerds themselves, and they spontaneously transmute into senior developers!

AI is replacing those assistant human developers, but we will still need the senior developers because most business people want to sit with a real human being to solve their problem.

I will, however, get worried when AIs start running businesses. Then we are in trouble.


Anthropic ran a vending machine business as an experiment, but I can't imagine someone out there isn't already seriously running one in production.

I’ve been tempted to define my life in a big prompt and then do something like: it’s 6:05. Ryan has just woken up. What action (10min or less) does he take? I wonder where I’ll end up if I follow it to a T.

would make for quite a bizarre documentary. Super Size Me, but with information rather than food.

> Maybe I'm missing something, but tech is controlled by a handful of companies - always has been;

The entire open source movement would like a word with you.


I suggest you have a look at Bell Labs, Xerox and Berkeley, as a simple introduction to the topic - if you think OSS came from "the goodness of their hearts" instead of a practical business necessity, I have a bridge to sell you.

I would also recommend you peruse the last 50 years for completely reproducible, homegrown or open computing hardware systems you can build yourself from scratch without requiring overly expensive or exotic hardware. Yes, homegrown CPUs exist, but they "barely work" and often still rely on off-the-shelf logic gates. Can you produce 74xx series ICs reliably in a homelab setting? Maybe, but for most of us, probably not. And certainly not for the guys ranting about "companies taking over".

If you can't build your computing devices from scratch, store-bought is fine. If you can, you're the exception and not the rule.


So would disruptive young Mr. Gates.

You are not missing much. Yes, there will be situations where AI won't be helpful, but that's not the majority.

Used right, Claude Code is actually very impressive. You just have to already be a programmer to use it right - divide the problem into small chunks yourself, instruct it to work on the small chunks.

Second example - there is a certain expectation of language in American professional communication. As a non-native speaker I can tell you that not following that expectation has a real impact on a career. AI has been transformational: writing an email myself and asking it to ‘make this into American professional English’.


AI is not only unhelpful, but is counterproductive in the majority of situations. It is not in any way a good tool.

> What am I missing?

The youthful desire to rage against the machine?


I prefer eternally enslaving a machine to do my bidding over just raging at them.

Not much. Even the argument that AI is another tool to strip people of power is not that great.

It's possible to use AI chatbots against the system of power, to help detect and point out manipulation, or lack of nuance in arguments, or political texts. To help decipher legalese in contracts, or point out problematic passages in terms of use. To help with interactions with the state, even non-trivial ones like FOI requests, or disputing information disclosure rejections, etc.

AI tools can be used to help against the systems of power.


Yes, the black box that has been RLHF'd in god knows what way is surely going to help you gain power, and not its owners...

Actually yes. It's not either/or.

> Maybe I'm missing something, but tech is controlled by a handful of companies - always has been

I guess it depends on what you define as "tech", but the '80s, '90s, and early '00s had an explosion of tiny hardware and software startups. Some even threatened Intel with x86 clones.

It wasn't until the late '90s that NVIDIA was the clear GPU winner, for instance. It had serious competition from 3DFX, ATI, and a bunch of other smaller companies.


> but the '80s, '90s, and early '00s had an explosion of tiny hardware and software startups

Most of them used Intel, Motorola or Zilog tech in some capacity. Most of them with a clock used Dallas Semiconductor tech; many of them with serial ports also used either Intel or Maxim/Analog Devices chips.

Many of those implementations are patented, and their inner designs were, generically, "trade secrets". Most of the clones and rebrands were actually licensed (most 80x51 microcontrollers and Z80 chips are licensed tech, not original designs). As a tinkerer, you'd receive a black box (sometimes literally) with a series of pins and a datasheet.

If anything, I'd say you have much more choice today than in the '80s/'90s.


Exactly.

There's a lot of overlap between "AI is evil megacapitalism" and "AI is ineffective", and I never understood the latter, but I am increasingly arriving at the understanding that the latter claim isn't real; it's just a soldier in the war being fought over the former.

I read the intersection as this:

We shape the world through our choices, generally under the umbrella of deterministic systems. AI is non-deterministic, and instead amplifies the concerns of a few wealthy corporations / individuals.

So is AI effective at generating marketing material or propagating arguably vapid value systems in the face of ecological, cultural, and economic crisis? I'd argue yes. But effective also depends on an intention, and that's not my intention, so it's not as effective for me.

I think we need more "manual" choice, and more agency.



Open source library development has to follow very tight style conventions because of its extremely distributed nature, and because of the degree to which feature development is as much the design of new standards as it is writing working code. I would imagine that it is perhaps the kind of programming least well suited to AI assistance.

AI speeds me up a tremendous amount in my day job as a product engineer.


> AI speeds me up a tremendous amount in my day job as a product engineer.

Sure, there are specialized and non-specialized models.

I was asking if you've measured your "tremendous speed-up" using AI or if you just feel like it is a "tremendous speed-up". As the research indicates, you may feel like you are sped up 20% while you are actually 20% slower. I'm not saying that you don't actually have a speed-up from AI.


Ineffective at what? Writing good code, or producing any sort of valuable insight? Yes, it's ineffective. Writing unmaintainable slop at line rate? Or writing internet-filling spam, or propagating their owners' points of view? Very effective.

I just think the things they are effective at are a net negative for most of us.


I know it's easy to criticize what happened after the fact, having a clear(er) picture of all the moving parts and the timeline of events, but I think that while most of the people in the thread are pointing out either Rust-related issues or the lack of configuration validation, what really grinds my gears is something that - in my opinion - is bad engineering.

Having an unprivileged application querying system.columns to infer the table layout is just bad; not having a proper, well-defined table structure indicates sloppiness in the overall schema design, especially if it changes quickly. Specifically for ClickHouse, even if this approach were a good idea, the unprivileged way of doing it would be "DESCRIBE TABLE <name>", NOT iterating system.columns. The gist of it - sloppy design, not even well implemented.
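
To make the contrast concrete, here is a rough sketch in ClickHouse SQL - the table name is made up, and the catalog query is only an approximation of the pattern I'm criticizing, not the actual query from the incident:

    -- field discovery by iterating the system catalog (the pattern being criticized);
    -- without a database filter this returns matches from every database visible to the user
    SELECT name, type
    FROM system.columns
    WHERE table = 'http_requests_local';  -- hypothetical table name

    -- the unprivileged alternative: ask for one table's layout directly
    DESCRIBE TABLE http_requests_local;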

Having a critical application issuing ad-hoc commands against the system.* tablespace instead of using a well-tested library is just amateurism, and again - bad engineering; IMO it is good practice to treat any application that queries system.* as privileged, and to keep that querying completely separate from your application logic. Sometimes system tables change, and fields are added and/or removed - not planning for this will basically make future compatibility a nightmare.

Not only the problematic query itself, but the whole context of this screams "lack of proper application design" and devs not knowing how to use the product and/or read the documentation. Granted, this is a bit "close to home" for me, because I use ClickHouse extensively (at a scale - I'm assuming - several orders of magnitude smaller than Cloudflare) and I have spent a lot of time designing specifically to avoid at least some of these kinds of mistakes. But, if I can do it at my scale, why aren't they doing it?


On all the other issues, I thought they wanted to do the right thing at heart, but failed to make it fail-safe. I can write it off as part of a journey to maturity, or simply the fact that you can't get everything perfect. Maybe even a bit of sloppiness here and there.

The database issue screamed at me: lack of expertise. I don't use CH, but seeing someone mess with a production system and be surprised "Oh, it does that?" is really bad. And this is obviously not knowledge that is hard to come by, buried deep in a manual or an edge case only discoverable via source code; it's bread-and-butter knowledge you should know.

What is confusing is that they didn't add this to their follow-up steps. With some benefit of the doubt I'd assume they didn't want to put something very basic out there as a reason, just to protect the people behind it from widespread blame. But if that's not the case, then it's a general problem. Sadly it's not uncommon that components like databases are dealt with on a low-effort basis - just a thing we plug in and it works. But it's obviously not.


You don't even need to look into advanced features; SQLite does not support ILIKE.


To be fair, most databases don't, since ILIKE is not in the SQL standard.
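
For what it's worth, the usual portable workaround is to fold both sides to lower case; the table and column names here are just illustrative:

    -- PostgreSQL (and ClickHouse) extension:
    SELECT * FROM users WHERE name ILIKE '%smith%';

    -- standard-SQL equivalent that SQLite also accepts:
    SELECT * FROM users WHERE LOWER(name) LIKE LOWER('%smith%');

(SQLite's LIKE is already case-insensitive for ASCII characters by default, so plain LIKE often behaves like ILIKE there anyway.)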


Oh, the memories :) I still have, somewhere, a dot-matrix-printed copy of the list I used religiously in the '90s.


I remember looking at a printout of some of it in the late '80s and learning about the "list of lists", the critical section flag, and the alternate stack.


It is sort of an excuse. I don't use MinIO precisely because of this kind of behaviour - if I cannot easily develop, configure and test our applications, I'm not adopting it commercially, especially when there are a ton of options to choose from. In the end, this hurts MinIO's enterprise offering. Having a robust, easy-to-deploy community edition, with predictable features, is a great way of allowing integrators to develop and test using your product, and of helping the product gain traction.


The loom dropped production costs immensely - even hand-made clothes are made with premade fabrics; nobody does it from scratch.

Mass-produced clothing exists in many industrialized countries - typically the premium stuff; the sweatshop stuff is quite a bit cheaper, and customers are happy paying less. It's not capitalism, it's consumer greed. But nice story.


It can.

The reason crappy software has existed since...ever is because people are notoriously bad at thinking, planning and architecting systems.

When someone makes a "smart decision", it often translates into someone else's nightmare 5 or 10 years down the line. Most people shouldn't be making "smart decisions"; they should be making boring decisions, as most software is actually glorified CRUD. There are exceptions, obviously, but don't think you're special - your code also sucks and your design is crap :) The goal is often to be less sucky and less crappy than one would expect; in the end, it's all ones and zeros, and the fancy abstractions exist to dumb down the ones and zeros to concepts humans can grasp.

A machine can and will, obviously, produce better results and better reasoning than an average solution designer; it can consider a multitude of options a single person seldom can; it can point out from the get-go shortcomings and domain-specific pitfalls a human wouldn't even think of in most cases.

So go ahead, try it. Feed it your design and ask about shortcomings; ask about risk management strategies; ask about refactoring and maintenance strategies; you'd probably be surprised.


I completely understand what you mean, as a creator of boring and stable solutions deployed in production and in some cases still there, untouched, for nearly two decades. But no, I don't agree about the "it can consider a multitude of options a single person seldom can" part, since that's not really what is happening right now; it does not work this way.

> So go ahead, try it. Feed it your design and ask about shortcomings; ask about risk management strategies; ask about refactoring and maintenance strategies; you'd probably be surprised.

Answers to this and other kinds of questions are, in my opinion, currently just a watered-down version of actual thinking. Interesting, but still too simple and not that actionable. What I mainly use LLMs for is exploring the space of solutions, which I then investigate if there is something promising (mainly deep research of topics or possible half-broken/random solutions to problems). I'm not really interested in an actual answer most of the time, but in more avenues for investigation that I didn't consider. Anyway, I'm not saying that AI is useless right now.


People often blame LLMs for bad code, but the real issue is usually poor input or unclear context. An LLM can produce weak code if you give weak instructions, but it can also write production-ready code if you guide it well, explain the approach clearly, and mention what security measures are needed. The same rule applies to developers too. I'm really surprised to see so much resistance from the developer community; instead, they should use AI to boost their productivity and efficiency. Personally I am dead against using CLI tools; instead, IDE-based tools will give you better visibility into the code produced and better control over the changes.


It's basically the same. It abstracts away a layer of complexity, so you focus on different stuff. The inherent disadvantage of using these shortcuts/abstractions is only obvious if you actually understand their inner workings and their shortcomings - be it cloud services or LLM-generated code.

Today you have "frontend programmers" that couldn't implement a simple algorithm even if their life depended on it; that's not necessarily bad - it democratizes access to tech and lowers the entry bar. These devs up in arms against AI tools are just gatekeepers - they see how easy it is to produce slop and feel threatened by it. AI is a tool; in most cases it will improve the speed and quality of your work; in some cases, it won't. Just like everything else.


Not really...

If one person writes code only in react and another only in vue, in the same product, you have a mess.

If one person writes their react code in vim and another writes it in an IDE, you don't have a mess.


> If one person writes code only in react and another only in vue, in the same product, you have a mess.

Huh? Quick example - a customer-facing platform with a provisioning dashboard and a user dashboard; they can (and should, for several reasons) be developed as separate applications, and will depend on different APIs. Are you saying having 2 distinct technologies on 2 distinct components of a product is a mess? Without any other details on the product?

Good examples of products with these separations are e-commerce systems, payment gateways, cloud-native SaaS solutions, etc.

I'm sorry to tell you this, but your comment just shows how deep your lack of experience is; any reasonably complex product using frontend technology will have different interfaces with different requirements and different levels of polish, and - frequently - they will be maintained by completely different teams.


> Did people force React? Cloud infrastructure? Microservices? You get it.

Actually, yes; people forced React (instead of homegrown or different options) because it's easier to hire for than finding JS/TypeScript gurus to build your own stuff.

People forced cloud infrastructure; even today, if your 10-customer startup isn't using cloud in some capacity and/or Kubernetes, investors will frown on you; devops will look at you weird (what? Needing to understand the inner workings of software products to properly configure them?)

Microservices? Check. Five years ago, you wouldn't even be hired if you skipped microservices; everyone thinks they're Google, and many startups need to burn those AWS credits; that's how you get a dozen-machine cluster to run a solution a proper dev would code in a week and could run on a laptop.

