Madmallard's comments | Hacker News

What does this even matter?

You can just feed literally all of Google Street View into a traditional ML model ...


Can you?

Yeah, well, only in the last 10 years did internet companies start employing psychology PhDs to find the best possible ways to exploit people. That is basically what the problem is. Short-form content and algorithmic display of whatever evidently appeals to you the most is literally zombifying people.

I personally don't think technology, for the most part, is good for society. It makes nature boring and predictable and life less interesting as a whole, and I don't think we even understand the degree to which technology is ruining life for the future. We don't have adaptations to deal with any of this, and adaptations take tens of thousands of years, if not way more, to occur. The romantic thought is that technology can help us solve the problems it creates, but I'm less optimistic there just because of how things have been going. It seems like human nature, and our being bad as a species at understanding large complex systems, lets the malignant actors and developments take root and metastasize over time.

- global warming
- antibiotic resistance
- environmental contamination
- food quality diminishing
- explosive increase in chronic disease, especially in young people
- extinction of most other species
- fertility problems
- declining birth rates
- poly-pharmacy becoming normal
- now things related to energy consumption with AI and cryptocurrency
- huge decline in social behaviors across the population

It just seems like for every new advancement we're creating new chronic issues, with barely any incentive to manage or alleviate them.


At the beginning of the 1800s, half of people's children died. We literally beat fucking Thanos for children. That's not a 'romantic thought'.


The issue is not technology but how and where it is applied.

Tens and tens of billions are spent on generating cute pics instead of applying the same tech to radiology, curing diseases, etc.


The wheel is technology, metallurgy is technology, irrigation is technology.

Technology is vital to a functioning society.

There's certainly more debate to be had about whether various bits of modern technology are net positive or net negative, but even so, I personally believe modern technology in a vacuum is mostly neutral to very good for humanity, and it's other forces, like modern capitalism, that bend it toward being harmful.

E.g. social media is very clearly having a net negative impact on modern society, but I don't believe that would still be true if it weren't driven by algorithms built to maximize ad revenue above all other concerns.

And obviously there is some inherent coupling of modern technology and capitalism that isn't avoidable, but I don't think capitalism on its own is wholly bad; it's the slavish, cult-like worship of it as the only way to do things that makes it so destructive.


Life is a game of balance in the sweet band between uninhabitable extremes. Technology obeys the same law: both too little and too much are deadly.


Is it actually good at complex code, or is it just garbage and people are lying about it as usual?

In my experience EXTENSIVELY using Claude 3.5 Sonnet, you basically have to do everything complex yourself, or you're just introducing massive amounts of slop code into your codebase that, while functional, is nowhere near good. And for anything actually complex, anything that requires a lot of context to make a decision and has to be useful to multiple different parts of the codebase, it's just hopelessly bad.


I've played with it the whole day (so take this with a grain of salt). My gut feeling is that it can produce a bigger ... "thing". I'm calling it a "thing" because it looks very much like what you want, but the bigger it is, the higher the chance of it being subtly (or not so subtly) wrong.

I usually ask the models to extend a small parser/tree-walking interpreter with a compiler/VM.
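
Roughly the kind of starting point I hand them, as a trimmed-down sketch (not my actual code; the names here are made up), is a tiny AST plus a tree-walking evaluator, and the task is then "add a bytecode compiler and a VM for this":

    # Minimal expression AST and tree-walking evaluator (illustrative sketch).
    from dataclasses import dataclass
    from typing import Union

    @dataclass
    class Num:
        value: float

    @dataclass
    class BinOp:
        op: str        # one of '+', '-', '*', '/'
        left: "Expr"
        right: "Expr"

    Expr = Union[Num, BinOp]

    def evaluate(node: Expr) -> float:
        # Walk the tree and compute the value directly, with no compilation step.
        if isinstance(node, Num):
            return node.value
        if isinstance(node, BinOp):
            l, r = evaluate(node.left), evaluate(node.right)
            if node.op == '+':
                return l + r
            if node.op == '-':
                return l - r
            if node.op == '*':
                return l * r
            if node.op == '/':
                return l / r
        raise TypeError(f"unknown node or operator: {node!r}")

    # evaluate(BinOp('+', Num(1.0), BinOp('*', Num(2.0), Num(3.0))))  -> 7.0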

Up until Claude 3.7, the models would propose something lazy and obviously incomplete. 3.7 generated something that looks almost right and mostly works, but is so overcomplicated and broken in such a way that I'd rather delete it and write it from scratch. Trying to get the model to fix it resulted in running in circles, it spitting out pieces of code that didn't fit the existing ones, etc.

Not sure if I prefer the former or the latter tbh.


What about the research surrounding high-dose vitamin C, thiamine, dexamethasone, or whatever that concoction was?


How old were they when this happened?


About 60


Probably as much as companies can get away with, which likely will just be more and more as AI tools get better.


AI isn't taking American jobs. Foreign developer agencies utilizing AI and being paid a fraction of what American employees are being paid are taking American jobs.


Outsourcing results in a shitty product in literally 100% of cases. Every single time.

So when some company outsources its development, it's always good news: it can be pretty easily crushed by a competitor's in-house dev team.


Why do you assume that this is about outsourcing? So far, every international company I've worked for in the last decade has insourced dev, but moved it out of the USA, then laid off the American devs. That is not about outsourcing at all, it is about responding to cost discrepancies in the global market.


It's happening at every single big company.


What makes you think (1) will be true?

It is only generating based on training data. In mature codebases there is a massive amount of interconnected state that is not already present in any GitHub repository. The new logic you'd want to add is likely something never done before. As other programmers have stated, it seems to be improving at generating useful boilerplate and at making simple websites and the like, related to what's out there en masse on GitHub. But it can't make any meaningful changes in an extensively matured codebase. Even Claude Sonnet is absolutely hopeless at this. And the bar for a codebase to count as "matured" is not very high.


> The new logic you'd want to add is likely something never done before.

99% of software development jobs are not as groundbreaking as this. It's mostly companies doing exactly what their competitors are doing. Very few places are actually doing things that an LLM has truly never seen while crawling through GitHub. Even new innovative products generally boil down to the same database fetches, CRUD glue, JSON parsing, and front-end form-filling code.
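
To make that concrete, here's a toy sketch of the kind of CRUD/JSON glue I mean (Flask, with an in-memory dict standing in for the database; all names here are made up):

    from flask import Flask, jsonify, request

    app = Flask(__name__)
    ITEMS: dict[int, dict] = {}   # stand-in for the database table
    NEXT_ID = 1

    @app.post("/items")
    def create_item():
        global NEXT_ID
        payload = request.get_json()      # JSON parsing
        ITEMS[NEXT_ID] = payload          # "database" write
        created = {"id": NEXT_ID, **payload}
        NEXT_ID += 1
        return jsonify(created), 201

    @app.get("/items/<int:item_id>")
    def read_item(item_id: int):
        item = ITEMS.get(item_id)         # "database" fetch
        if item is None:
            return jsonify({"error": "not found"}), 404
        return jsonify({"id": item_id, **item})

Swap the dict for a real database and add a form on the front end, and you've described a huge fraction of day-to-day product work.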


Groundbreakingness is different from the kind of novelty that's relevant to an LLM. The script I was trying to write yesterday wasn't groundbreaking at all: it just needed to pull some code from a remote repository, edit a specific file to add a hash, then run a command. But it had to do that _within our custom build system_, and there are few examples of that, so our coding assistant couldn't figure out how to do it.
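
For what it's worth, the parts an LLM has seen plenty of examples of are trivial; stripped of our tooling, the script is roughly this shape (the repo URL, file path, and final command below are placeholders, not the real ones):

    import subprocess
    from pathlib import Path

    REPO_URL = "https://example.com/some/repo.git"       # placeholder
    CHECKOUT = Path("/tmp/checkout")                      # placeholder
    TARGET = CHECKOUT / "config" / "pinned_hashes.txt"    # placeholder

    def run(*cmd: str, cwd: Path | None = None) -> None:
        subprocess.run(cmd, cwd=cwd, check=True)

    def pin_hash(new_hash: str) -> None:
        # 1. pull the code from the remote repository
        if CHECKOUT.exists():
            run("git", "-C", str(CHECKOUT), "pull")
        else:
            run("git", "clone", REPO_URL, str(CHECKOUT))
        # 2. edit a specific file to add the hash
        TARGET.write_text(TARGET.read_text() + new_hash + "\n")
        # 3. run a command (in reality, whatever our build system expects)
        run("make", "build", cwd=CHECKOUT)

The hard part was everything specific to our build system, which is exactly the part there are no public examples of.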


> Even new innovative products generally boil down to the same database fetches and CRUD glue and JSON parsing and front end form filling code.

The simplest version of that is some CGI code or a PHP script, which, by your description, is what everyone should be writing. But then why have so many books been written about this seemingly simple task? So many frameworks, so many patterns, so many methodologies...


I don't know, man.

It can't do anything in these random Phaser games I'm making, and it can't even translate my 10,000-line XNA game to Phaser. It is totally hopeless.

Phaser has been out forever now, and XNA used to be too.


> It is only generating based on training data

This is not the case anymore; current SOTA CoT models are not just parroting stuff from the training data. And as of today they are not even trained exclusively on publicly (and not-so-publicly) available stuff: they make massive use of synthetic data that the model itself generated, or data distilled from other, smarter models.

I'm using AI in current "mature" codebases with great results, and I know plenty of people who are too. This doesn't mean it does the work while you sip a coffee (yet).

*NOTE: my evidence for this is that o3 could not have beaten ARC-AGI just by parroting, because it's a benchmark made exactly to prevent that. Not a coding benchmark per se, but still transposable imo.


Try Devin or OpenHands. OpenHands isn't quite ready for production, but it's informative about where things are going, and it's something to watch the LLM go off and "do stuff", kinda on its own, from my prompt (while I drink coffee).


Really sucks that antibiotics, especially bactericidal ones, appear to target mitochondria as if they were bacteria. This mistargeting sometimes causes severe and long-lasting side effects.


They are (ancient) bacteria

