Hacker News | georgewsinger's comments

Some explanations considered in this video:

1. Software engineering isn't "real" engineering.

2. Nobody cares about bugs when they write software in the first place.

3. Software is hard.

4. Software is early.


If you like reasoning about a program in terms of expression trees/graphs, I recently discovered that Wolfram Language has built-ins for this:

https://reference.wolfram.com/language/ref/ExpressionTree.ht...
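
For comparison, here's a minimal sketch of the same idea using Python's standard ast module (a different tool than Wolfram Language; it just shows what inspecting a program as an expression tree looks like):

  import ast

  # Parse a small expression into its abstract syntax tree
  tree = ast.parse("(1 + x) * f(y)", mode="eval")

  # Pretty-print the nested tree structure (indent= needs Python 3.9+)
  print(ast.dump(tree, indent=2))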


This was such a great story.

Steve was a mischievous person himself, so surely a part of him respected this.


At my first real job, my boss told me, "Everyone fucks up, it's ok. When you do your first big fuck up... just be honest and tell me."

3 years later I accidentally took down all the ATMs for one of the largest consumer banks in America for a while in the middle of the night.

My boss came in: "Hey, you finally did it. You took longer than most, but that was a good one!" And that was all that was ever said about it.


This is so cool. Real-time accent feedback is something language learners have never had throughout all of human history, until now.

Along similar lines, it would be useful to map a speaker's vowels in vowel-space (and likewise for consonants?) to compare native to non-native speakers.
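
One rough way to do that with off-the-shelf tools is to estimate the first two formants (F1/F2) of each vowel and plot them against each other. A minimal sketch using librosa's LPC analysis (the file name and segment offsets are placeholders; real use would need voiced-frame detection and per-vowel averaging):

  import numpy as np
  import librosa

  # Hypothetical recording of a single sustained vowel
  y, sr = librosa.load("vowel_a.wav", sr=16000)
  frame = y[2000:2000 + 480]              # ~30 ms of (assumed) voiced speech

  # Classic LPC-based formant estimation: formants appear as the angles
  # of the LPC polynomial roots lying above the real axis.
  a = librosa.lpc(frame, order=2 + sr // 1000)
  roots = [r for r in np.roots(a) if np.imag(r) > 0]
  freqs = sorted(np.angle(roots) * sr / (2 * np.pi))
  print("Estimated F1/F2 (Hz):", [f for f in freqs if f > 90][:2])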

I can't wait until something like this is available for Japanese.


The approach in the article is roughly equivalent to having someone listen to you speak and then repeating back in their own voice so you can attempt to copy their accent. Certainly nice to have available on demand without needing to coordinate schedules with another human.

A good accent coach would be able to do much better by identifying exactly how you're pronouncing things differently, telling you what you should be doing in your mouth to change that, and giving you targeted exercises to practice.

Presumably a model that predicts the position of various articulators at every timestamp in a recording could be useful for something similar.


> something language learners have never had throughout all of human history

...unless they had access to a native speaker and/or vocal coach? While an automated Henry Higgins is nifty, it's not something humans haven't been able to do themselves.


Native speakers are less helpful at this than you might think. Speech coaches are absolutely the way to go, but they're outside the price range for most people ($200+/hr for a good coach). BoldVoice gives coach-level feedback and instruction at a price point that everyone can access, on demand.


Do you have another blog post showing your product giving targeted feedback about individual speech sounds? That's what I would expect from a coach.


Not yet - this was our first technical blog post. You can check out the BoldVoice app and test out the sound-level feedback yourself. Or watch this app walkthrough video - https://www.youtube.com/watch?v=3Sv5K4Z9P4c


You can take a language class rather than have a personal instructor. Accents are a sensitive topic, though, so I don't remember mine going into it much.


As someone who took English classes for years growing up, I wish that were the case. In fact, most teachers don't really know how to teach pronunciation. Also, in a typical group class setting, it's challenging to give each student one-on-one feedback. On BoldVoice, we solve that with 1) unlimited instant feedback from sound-level AI - your most patient coach. 2) in-depth video lessons from the best coaches in the world (Hollywood accent coaches). I'm a cofounder of BoldVoice, by the way. :)


Language class and accent coaching are very different things.


Try learning a language where they won't understand you with a foreign accent. I assume tonal languages are like this but haven't tried learning any.

Japanese is sort of like this - you have to say foreign words the Japanese way very forcibly, to the point that Americans will think you're being racist if they hear you do it.


That's a fascinating idea! Definitely something to try out for our team. We actively and continuously do all sorts of experiments with our machine learning models to be able to extract the most useful insights. We will definitely share if we find something useful here.


> Real-time accent feedback is something language learners have never had throughout all of human history, until now.

Do you have a source for this? It doesn't seem plausible to me, but I'm not an expert.


What kind of speed can be achieved, in terms of words per minute?


I didn't formally measure, but if I had to guess, I'd estimate I'm hitting ~40 words per minute on the last example.


Is this really how SOTA LLMs parse our queries? To what extent is this a simplified representation of what they really "see"?


Parts of this are completely misleading and parts are simplified, when it comes to SOTA LLMs.

Subject–Verb–Object triples, POS tagging and dependency structures are not used by LLMs. One of the fundamental differences between modern LLMs and traditional NLP is that heuristics like those are not defined.

And assuming that those specific heuristics are the ones which LLMs would converge on after training is incorrect.


Yes, tokenization and embeddings are exactly how LLMs process input—they break text into tokens and map them to vectors. POS tags and SVOs aren't part of the model pipeline but help visualize structures the models learn implicitly.
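
As a rough illustration of the tokenization step (this uses the tiktoken library, which implements OpenAI-style BPE tokenizers; the sentence is arbitrary, and the learned embedding lookup inside the model isn't shown):

  import tiktoken

  enc = tiktoken.get_encoding("cl100k_base")
  ids = enc.encode("The model never sees words, only token ids.")
  print(ids)                             # a list of integer token ids
  print([enc.decode([i]) for i in ids])  # the text fragment each id covers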


Does anyone know if NOVAMIN based toothpastes (e.g. Sensodyne) have been tested?


The non-Novamin Sensodyne was tested at 116 ppb for lead, and the tester listed the concerning ingredients as hydrated silica and titanium dioxide, both of which are in the Sensodyne with Novamin tube I have from the UK.


It says Sensodyne right in the article. However, typical Guardian: it's an article telling you what to think rather than giving you all the information.

The question is: is there a safe level of lead, and are these toothpastes under it?


Pretty sure the scientific consensus is there’s no safe level of lead exposure.


That is incorrect.

The ideal amount is zero. But 1 part per 10 trillion is safe.

So… we have an estimated upper limit. Could we lower it without being able to detect a change in health effects? Likely.


They only tested US toothpastes


Novamin toothpaste is only sold and manufactured in the UK. There are some conspiracy theories going around that the ingredient is so good they won't sell it to us in the US! [1]

I actually buy it off Amazon and use it myself because I have tooth sensitivity and it contains no SLS, which causes some irritation for me. It is quite interesting stuff. I doubt it would have lead, since it's a synthetic compound. [2]

[1] https://medium.com/@ravenstine/the-curious-history-of-novami...

[2] https://en.wikipedia.org/wiki/Bioglass_45S5


No, it's a trademark issue.

And, as Canada is a place, it has Novamin-based Sensodyne.


Can you share some sources?


The author should make a meta-entry about how he makes the (insanely beautiful) diagrams in the book (ideally walking through the process).


In the FAQ:

    07 How do you make the illustrations?
    By hand, in Figma. There's no secret - it's as complicated as it looks.


By hand??? Not After Effects?

God damn, that's some patient animation-making right there.


Respect


He has more content with figures on another platform: https://typefully.com/DanHollick


The pair of animations on the page are beautifully done, not just technically but aesthetically as well. If the rest of the book is like that I'll be getting a copy.


Insane that people would downvote a totally reasonable comment offering a competing alternative. HN is supposed to be a community of tech builders.


I would wager a sizeable chunk of the people here have no idea about the nature of this site's ownership/origin. This crowd sees this sort of thing as astroturfing, not something communal.

edit: And I can't say I disagree.


It's a GitHub link for an MIT-licensed project...

If the community considers that astroturfing, we have completely lost the plot on what building is.


The MIT license is basically the license of choice for growth hacking these days. Many VC-backed companies follow this strategy: it grows your userbase, gives developers using your ecosystem a free tier, and, last but not least, gives volunteers a chance to do free work for you.

This is perhaps too cynical for this specific instance, but it's not overly cynical more broadly. Considering users of the site have to evaluate many of these offerings frequently, I don't blame them for having a negative gut reaction.


Very impressive! But under arguably the most important benchmark -- SWE-bench Verified for real-world coding tasks -- Claude 3.7 still remains the champion.[1]

Incredible how resilient Claude models have been for best-in-coding class.

[1] But by only about 1%, and inclusive of Claude's "custom scaffold" augmentation (which in practice I assume almost no one uses?). The new OpenAI models might still be effectively best in class now (or likely beating Claude with similar augmentation?).


Gemini 2.5 Pro is widely considered superior to 3.7 Sonnet now by heavy users, but they don't have an SWE-bench score. Shows that looking at one such benchmark isn't very telling. Main advantage over Sonnet being that it's better at using a large amount of context, which is enormously helpful during coding tasks.

Sonnet is still an incredibly impressive model as it held the crown for 6 months, which may as well be a decade with the current pace of LLM improvement.


Main advantage over Sonnet is Gemini 2.5 doesn't try to make a bunch of unrelated changes like it's rewriting my project from scratch.


I find Gemini 2.5 truly remarkable and overall better than Claude, which I was a big fan of


Still doesn't work well in Cursor unfortunately.


Works well in RA.Aid -- in fact I'd recommend it as the default model in terms of overall cost and capability.


Working fine here. What problems do you see?


Not the OP, but I believe they could be referring to the fact that it's not supported in edit mode yet, only agent mode.

So far for me that's not been too much of a roadblock, though I still find that overall Gemini struggles with more obscure issues, such as SQL errors in dbt.


Cline/Roo Code work fine with it


This was incredibly irritating at first, though over time I've learned to appreciate this "extra credit" work. It can be fun to see what Claude thinks I can do better, or should add in addition to whatever feature I just asked for. Especially when it comes to UI work, Claude actually has some pretty cool ideas.

If I'm using Claude through Copilot where it's "free" I'll let it do its thing and just roll back to the last commit if it gets too ambitious. If I really want it to stay on track I'll explicitly tell it in the prompt to focus only on what I've asked, and that seems to work.

And just today, I found myself leaving a comment like this: //Note to Claude: Do not refactor the below. It's ugly, but it's supposed to be that way.

Never thought I'd see the day I was leaving comments for my AI agent coworker.


> If I'm using Claude through Copilot where it's "free"

Too bad Microsoft is widely limiting this -- have you seen their pricing changes?

I also feel like they nerfed their models, or reduced context window again.


Claude is almost comically good outside of Copilot. When using it through Copilot, it's like working with a lobotomized idiot (that complains it generated public code about half the time).


It used to be good, or at least quite decent in GH Copilot, but it all turned into poop (the completions, the models, everything) ever since they announced the pricing changes.

Considering that M$ obviously trains over GitHub data, I'm a bit pissed, honestly, even if I get GH Copilot Pro for free.


What language / framework are you using? I ask because in a Node / Typescript / React project I experience the opposite- Claude 3.7 usually solves my query on the first try, and seems to understand the project's context, ie the file structure, packages, coding guidelines, tests, etc, while Gemini 2.5 seems to install packages willy-nilly, duplicate existing tests, create duplicate components, etc.


Node / Vue


Also that Gemini 2.5 still doesn’t support prompt caching, which is huge for tools like Cline.



Oh, that must’ve been in the last few days. Weird that it’s only in 2.5 Pro preview but at least they’re headed in the right direction.

Now they just need a decent usage dashboard that doesn’t take a day to populate or require additional GCP monitoring services to break out the model usage.


Its viable context (the context length where it doesn't fall apart) is also much longer.


I do find it likes to subtly reformat every single line, thereby nuking my diff and making its changes unusable since I can't verify them that way. Sonnet doesn't do that.


I don't understand this assertion, but maybe I'm missing something?

Google included a SWE-bench score of 63.8% in their announcement for Gemini 2.5 Pro: https://blog.google/technology/google-deepmind/gemini-model-...


I keep seeing this sentiment so often here and on X that I have to wonder if I'm somehow using a different Gemini 2.5 Pro. I've been trying to use it for a couple of weeks already and without exaggeration it has yet to solve a single programming task successfully. It is constantly wrong, constantly misunderstands my requests, ignores constraints, ignores existing coding conventions, breaks my code and then tells me to fix it myself.


I feel that Claude 3.7 is smarter, but does way too much and has poor prompt adherence


2.5 Pro is very buggy with Cursor. It often stops before generating any code. It's likely a Cursor problem, but I use 3.7 because of that.


Eh, I wouldn't say that's accurate, I think it's situational. I code all day using AI tools and Sonnet 3.7 is still the king. Maybe it's language dependent or something, but all the engineers I know are full on Claude-Code at this point.


The image generation improvement with o4-mini is incredible. Testing it out today, this is a step change in editing specificity even from the ChatGPT 4o LLM image integration just a few weeks ago (which was already a step change). I'm able to ask for surgical edits, and they are done correctly.

There isn't a numerical benchmark for this that people seem to be tracking but this opens up production-ready image use cases. This was worth a new release.


Thanks for sharing that; it was more interesting than their demo. I tried it and it was pretty good! I had felt that the inability to iterate on images blocked this from any real production use I had. This may be good enough now.

Example of edits (not quite surgical but good): https://chatgpt.com/share/68001b02-9b4c-8012-a339-73525b8246...


I don’t know if they let you share the actual images when sharing a chat. For me, they are blank.


wait, o4-mini outputs images? What I thought I saw was the ability to do a tool call to zoom in on an image.

Are you sure that's not 4o?


I’m generating logo designs for merch via o4-mini-high and they are pretty good. Good text and comprehending my instructions.


It's using the new gpt-4o, a version that's not in the API


In the API or on the website?


Also, another addition: I previously tried to upload an image for ChatGPT to edit and it was incapable under the previous model I tried. Now it's able to change uploaded images using o4-mini.


Claude got 63.2% according to the swebench.com leaderboard (listed as "Tools + Claude 3.7 Sonnet (2025-02-24)").[0] OpenAI said they got 69.1% in their blog post.

[0] swebench.com/#verified


Yes; however, Claude advertised 70.3% [1] on SWE-bench Verified when using the following scaffolding:

> For Claude 3.7 Sonnet and Claude 3.5 Sonnet (new), we use a much simpler approach with minimal scaffolding, where the model decides which commands to run and files to edit in a single session. Our main “no extended thinking” pass@1 result simply equips the model with the two tools described here—a bash tool, and a file editing tool that operates via string replacements—as well as the “planning tool” mentioned above in our TAU-bench results.

Arguably this shouldn't be counted though?

[1] https://www.anthropic.com/_next/image?url=https%3A%2F%2Fwww-...


I think you may have misread the footnote. That simpler setup results in the 62.3%/63.7% score. The 70.3% score results from a high-compute parallel setup with rejection sampling and ranking:

> For our “high compute” number we adopt additional complexity and parallel test-time compute as follows:

> We sample multiple parallel attempts with the scaffold above

> We discard patches that break the visible regression tests in the repository, similar to the rejection sampling approach adopted by Agentless; note no hidden test information is used.

> We then rank the remaining attempts with a scoring model similar to our results on GPQA and AIME described in our research post and choose the best one for the submission.

> This results in a score of 70.3% on the subset of n=489 verified tasks which work on our infrastructure. Without this scaffold, Claude 3.7 Sonnet achieves 63.7% on SWE-bench Verified using this same subset.
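
In other words, it's a best-of-n pipeline rather than a single pass. A hypothetical sketch of that scaffold (generate_patch, passes_visible_tests, and score_patch are placeholders for the sampling, regression-test filtering, and scoring-model steps described above, not real APIs):

  # Hypothetical best-of-n scaffold, not Anthropic's actual code
  def solve_task(task, n_attempts=16):
      candidates = [generate_patch(task) for _ in range(n_attempts)]    # parallel sampling
      survivors = [p for p in candidates
                   if passes_visible_tests(task, p)]                    # rejection sampling
      return max(survivors, key=score_patch) if survivors else None     # rank, keep the best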


Somehow completely missed that, thanks!

I think reading this makes it even clearer that the 70.3% score should just be discarded from the benchmarks. "I got a 7%-8% higher SWE benchmark score by doing a bunch of extra work and sampling a ton of answers" is not something a typical user is going to have already set up when logging onto Claude and asking it a SWE style question.

Personally, it seems like an illegitimate way to juice the numbers to me (though Claude was transparent with what they did so it's all good, and it's not uninteresting to know you can boost your score by 8% with the right tooling).


It isn't on the benchmark https://www.swebench.com/#verified

The one on the official leaderboard is the 63% score. Presumably because of all the extra work they had to do for the 70% score.


OpenAI have not shown themselves to be trustworthy, I'd take their claims with a few solar masses of salt


They also gave more detail on their SWE-bench scaffolding here: https://www.latent.space/p/claude-sonnet


I haven't been following them that closely, but are people finding these benchmarks relevant? It seems like these companies could just tune their models to do well on particular benchmarks


The benchmark is something you can optimize for; that doesn't mean it generalizes well. Yesterday I tried for 2 hours to get Claude to create a program that would extract data from a weird Adobe file. $10 later, the best I had was a program that was doing something like:

  switch(testFile) {
    case "test1.ase": // run this because it's a particular case 
    case "test2.ase": // run this because it's a particular case
    default:  // run something that's not working but that's ok because the previous case should
              // give the right output for all the test files ...
  }


That’s exactly what’s happening. I’m not convinced there’s any real progress occurring here.


Right now the SWE-bench leader, Augment Agent, still uses Claude 3.7 in combo with o1. https://www.augmentcode.com/blog/1-open-source-agent-on-swe-...

The findings are open-sourced in a repo too: https://github.com/augmentcode/augment-swebench-agent


Also, if you're using Cursor AI, it seems to have much better integration with Claude, where it can reflect on its own output and go off and run commands. I don't see it doing that with Gemini or the o1 models.


I often wonder if we could expect that to reach 80%-90% within the next 5 years.

