Hacker News | jofzar's comments

I actually don't mind this one: 9950 is the actual chip, X3D is the cache (where it's larger), and the 2 stands for it being on both chiplets.

I really want an X3D because a game I play is heavily single-threaded. I have the income and the financial stability, but I can't in good conscience upgrade to AM5 with these RAM prices. It's insane.

Yep exactly the same situation.

I would not be surprised if we see casualties in adjacent markets, such as motherboards, coolers and whatnot.


AMD had an upgrade path with the 5700x3d, assuming you’re on AM4.

Just reading now that it went out of production half a year ago, which is a shame. I was very impressed being able to upgrade on the same motherboard six years down the line.


I'm the mythical customer who went from a 1700X in a B350 motherboard near launch day to a 5800X3D in the same board (after a dozen BIOS updates). Felt amazing. Like the old 486DX2 days.

Same! Kept checking back for BIOS updates, and even years later they kept announcing more support! Truly crazy.

Other than the speed, it's a very good reason to go with AMD: the upgrade scope is massive. On AM5 you can start with a 6-core and soon go all the way to a 24-core with the new Zen 6.


What game, if you don't mind my asking?

World of Warcraft

I worked in the scheduling and timekeeping industry for a little bit. When pen and paper is mentioned, you think "oh, it's just some written notes and a few other things," but in reality it's literally whole departments storing everything in daily/weekly sheets and binders, and it's like 20 people's job to keep it all in order and keep the ship running for next week.

When someone asks what the plan is for next week, the answer is normally "it still needs to be written out" or "I'll have to dig that up for you," etc.


Yeah, my first startup job was at an oil and gas SaaS that ingested unstructured data into a standardized DB for smaller operators.*

"How much money did we make yesterday?" was a nontrivial question that required several people a couple of days to compile manually before our software.

--- * It would probably make a killing today; this was over a decade ago and the extraction was 98% regex and custom if statements.


I'm going the opposite way of everyone else here.

This is sick, OP. Based on what's in the document, it looks really useful when you need to quickly fix something and validate that nothing in the UI/workflow has changed except what you asked for.

Also looks useful for PRs: having a before-and-after comparison.


Exactly. We need more tools like this. With the right model, picking apart images and videos isn't that hard. Adding vision to your testing removes a lot of guesswork from AI coding when it comes to fixing layout bugs.

A few days ago I had an interaction with Codex that roughly went as follows: "this chat window is scrolling off screen, fix", "I've fixed it", "No you didn't", "You are totally right, I'm fixing it now", "still broken", "please use a headless browser to look at the thing and then fix it", "....", "I see the problem now, I'm implementing a fix and verifying the fix with the browser", etc. This took a few tries and it eventually nailed it. And added the e2e test, of course.

I usually prompt Codex with screenshots for layout issues as well. One of the nice things about their desktop app relative to the CLI is that pasting screenshots works.
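In case it's useful, "use a headless browser to look at the thing" boils down to something like the sketch below, using Playwright's Python API. The URL, selector-free flow and file names are made up for illustration; it assumes Playwright is installed (`pip install playwright` plus `playwright install chromium`).

```python
def capture_page(url: str, path: str) -> str:
    """Capture a full-page screenshot that an agent (or a human) can inspect."""
    # Import lazily so the sketch can be read/imported without Playwright installed.
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page(viewport={"width": 1280, "height": 720})
        page.goto(url)
        page.screenshot(path=path, full_page=True)
        browser.close()
    return path

# Usage (hypothetical local dev server):
# capture_page("http://localhost:3000/chat", "chat-after-fix.png")
```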

A lot of our QA practices are still rooted in us checking stuff manually. We need to get ourselves out of the loop as much as possible. Tools like this make that easier.

I think I recall Mozilla pioneering regression testing of their layout engine using screenshots about a quarter century ago. They had a lot of stuff landing in their browser that could trigger all sorts of weird regressions. If screenshots changed without good reason, that was a bug. Very simple mechanism and very effective. We can do better these days.
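The core of that screenshot-regression idea fits in a few lines: hash each rendered screenshot and flag any that changed since the last known-good run. A minimal stdlib sketch, with fake byte strings standing in for real browser renders (file names are hypothetical):

```python
import hashlib
import tempfile
from pathlib import Path

def file_digest(path: Path) -> str:
    """SHA-256 of a file's bytes; an identical render gives an identical digest."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def changed_screenshots(shots_dir: Path, baseline: dict) -> list:
    """Names of screenshots whose bytes differ from the known-good baseline."""
    return [s.name for s in sorted(shots_dir.glob("*.png"))
            if baseline.get(s.name) != file_digest(s)]

# Tiny demo with fake "screenshots" (real ones would come from a browser):
shots = Path(tempfile.mkdtemp())
(shots / "home.png").write_bytes(b"pixels-v1")
(shots / "about.png").write_bytes(b"pixels-a")
baseline = {name: file_digest(shots / name) for name in ("home.png", "about.png")}
(shots / "home.png").write_bytes(b"pixels-v2")  # simulate an unintended change
flagged = changed_screenshots(shots, baseline)  # only home.png is flagged
```

In practice you'd also want a way to bless intentional changes by updating the baseline, which is where "changed without good reason" becomes a human (or model) judgment call.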


Ah, I feel your pain. The Codex interaction is exactly the pain point: "I fixed it" / "no you didn't" five times in a row; you feel gaslit by your own agent in a way. That's the loop I wanted to kill. I didn't know about the Mozilla screenshot regression testing, actually.

Thanks! Yeah the before/after PR thing is exactly what proofshot pr is built for.

> It’s not a testing framework. The agent doesn’t decide pass/fail. It just gives me the evidence so I don’t have to open the browser myself every time.

From the OP, I don't think it's meant for what you're describing.


These aren't really comparable; OP's is something that records, captures and reproduces with steps.

Playwright can do all of that too. I'm confused about why this is necessary.

If coding agents are given Playwright access they can actually do it better, because through the Chrome DevTools Protocol they can interact with the browser and experiment with things without having to wait for all of this to complete before making moves. For instance, I've seen Claude Code capture console messages from a running Chrome instance and use that to debug things...


I've also had Claude run JavaScript code on a page using playwright-cli to figure out why a button wasn't working as it should.
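Both of those moves (reading the console, poking the page with JS) look roughly like this in Playwright's Python API. This is a sketch of the workflow, not what either commenter actually ran; the URL and selector are placeholders.

```python
def collect_console_logs(url: str) -> list:
    """Load a page, capture its console output, and poke it with JS."""
    # Lazy import so the sketch can be read/imported without Playwright installed.
    from playwright.sync_api import sync_playwright

    logs = []
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        # Everything the page logs (errors included) ends up in `logs`,
        # which an agent can then read to debug.
        page.on("console", lambda msg: logs.append(f"{msg.type}: {msg.text}"))
        page.goto(url)
        # Run arbitrary JS in the page, e.g. to check why a button is broken:
        page.evaluate("document.querySelector('button')?.click()")
        browser.close()
    return logs
```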

Because LLM users are NIH factories?

That's exactly what Playwright does, but also something you don't really need in order to debug a problem.

Not a single clip/recording of how this sounds?

Like, c'mon, this is the bare minimum here.


> Instead of spending hours getting two or three systems to integrate with mine with the proper OAuth scopes or SAML and so on

As someone whose job is handling OAuth and SAML scopes, I am not convinced anyone can get these right.

SAML at least acts nice; OAuth, on the other hand, is a fucking nightmare.


Every time I request the wrong OAuth scope that doesn't have the authorization to do what I need, then make a failing request, I hear Jim Gaffigan affecting a funny authoritative voice saying, "No." I can't be the only one who defensively requests extra OAuth scopes beyond what I need, hoping one of them will grant the correct access. I've had much better luck with LLMs telling me exactly which scopes to select.
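For what it's worth, the "request only what you need" discipline mostly lives in the authorization request. A minimal sketch of building an OAuth 2.0 authorization URL with an explicit scope list; the endpoint, client ID and scope name are all made up:

```python
from urllib.parse import urlencode

def auth_url(base: str, client_id: str, redirect_uri: str, scopes: list) -> str:
    """Build an OAuth 2.0 authorization-code request URL."""
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": " ".join(scopes),     # OAuth scopes are space-delimited
        "state": "opaque-csrf-token",  # placeholder; generate randomly in practice
    }
    return f"{base}?{urlencode(params)}"

url = auth_url(
    "https://auth.example.com/authorize",   # hypothetical provider
    "my-client-id",
    "https://app.example.com/callback",
    ["calendar.read"],  # just what the feature needs, nothing more
)
```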

I always hear Little Britain’s “computer says nooooo”

OAuth is the one area where I genuinely trust the LLM more than myself. Not because it gets it right, but because at least it reads all the docs instead of rage-quitting after the third wrong scope.

And the libraries provided by the various OAuth vendors are only adding fuel to the fire.

A while ago I spent some time debugging a superfluous redirect; the reason was that the library would always kick off with "not authenticated" when it didn't find stored tokens, even when it was redirecting back after a successful login (since the tokens weren't stored yet).


> Does that seem hard? I think it’s hard. The relevant physical phenomena include at least

Imo no; this seems like something that would appear in multiple scientific papers, so an LLM would be able to generate the answer based on predictive text.


A full model of a cup of water cooling is, in fact, incredibly difficult.

Impossible, since it is chaotic.

But a T(t) model should not be too hard for an LLM with a basic heat transfer book in its training set.


You don't need a full model of every atomic interaction because all of those chaotic interactions end up averaging out. Given enough coin flips you will end up at a 50/50 split even if the individual flips are unpredictable. Given enough atomic interactions, the heat will transfer in the same way every time.
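The T(t) model in question is, under the usual lumped-capacitance assumption, just Newton's law of cooling: dT/dt = -k(T - T_env), giving T(t) = T_env + (T0 - T_env)·e^(-kt). A sketch with made-up constants (a 90 °C cup in a 20 °C room; k is not from any measurement):

```python
import math

def cup_temperature(t_min: float, T0: float = 90.0, T_env: float = 20.0,
                    k: float = 0.05) -> float:
    """Temperature (deg C) of the cup after t_min minutes, per Newton's law
    of cooling: exponential decay from T0 toward the ambient T_env."""
    return T_env + (T0 - T_env) * math.exp(-k * t_min)
```

Exactly the kind of closed form you'd expect an LLM with a heat transfer textbook in its training set to reproduce; the hard part the parent comments allude to is everything this lumps away (evaporation, convection, the mug itself).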

Imo Photopea is a Photoshop replacement; it's just not a professional Photoshop replacement.

It's for everyone who had a pirated copy of CS3 on their computer for basic edits.

