thunky's comments | Hacker News

> the resumes the LLM likes the most will be the "fake" applicants

> the strongest matches are the fakest applicants

How do you know that you didn't filter out the perfect candidate?

And did you tell the LLM what makes a resume fake?


I don't think an LLM will be good at spotting fake resumes. I was trying to point out that if you use an LLM to screen for matches to the job, you can expect to find a lot of people who used ChatGPT to customize their resume for your role. As more & more people realize that using an LLM gets you past AI resume filters, you can expect every resume that passes the filter to be LLM output, so using an LLM to identify potential applicants will become less & less useful over time.

I was skeptical that you knew with confidence what made a resume fake, other than it being "too good to be true". Which I don't blame you for; it's an optimization.

But it also means that the perfect candidate, while probably unlikely, would be rejected.


I'd like to see the before-and-after questions, because it seems possible that the layman's version was less exact and was therefore interpreted differently, even if the intention was the same. Which can happen with humans too.

Given the topic I'm unfortunately not comfortable sharing the details in a public space like this, but the answer that it gave was not just a misinterpretation of the question, it was actually entirely wrong on the merits of its own interpretation.

And even if it were a misinterpretation, the result is still largely the same: if you don't know how to ask good questions you won't get good answers, which makes it dangerous to rely on these tools for things you're not already an expert in. This is in contrast to all the people who claim to be using them for learning about important concepts (including lots of people who claim to be using them as financial advisors!).


If you don't know how to ask a human doctor a good question you can't expect to get a good answer either.

The difference is that a human doctor probably has a lot of context about you and the situation you're in, so they can probably guess what the intention behind your question is and adjust their answer appropriately. When you talk to an LLM, it has none of that context. So the comparison isn't really fair.

Has your mom ever asked you a computer question? Half of the time the question makes no sense and explaining to her why would take hours, and then she still wouldn't get it. So the best you can do is guess what she wants based on the context you have.


Doesn’t matter. The LLM’s job should be to deliver results in the way that is intended, regardless of the skill of the prompter. If it can’t do that then it’s really no better than a Google search.

That is a pretty high bar. Humans aren't any better than a Google search by that criterion.

An expert human can give you the answer you need with the same layman prompt, without errors.

You would first have to find the expert, however, which might not be trivial. Anyway, I think there is value in the space between a basic Google search and a human expert. If you don't think so, that's fine.

Odd take. YouTube has just about anything you might be interested in, in abundance. Even if it is "mostly crap" there would probably still be more non-crap than you could consume in your lifetime.

And youth/health. Trump may think he's on top of the world, but I bet he'd trade it all away to be 25 rather than on his last legs.

Possible counterexample: Larry Ellison, although he's at least an order of magnitude richer than Trump.

> modern framework

Modern usually doesn't last long.


Dunno. React has been around for 12 years, Angular’s current incarnation for 8 (older than React if you count <1.5), and Vue for 10.

> So either there's going to be a cut in benefits and/or the retirement age will be bumped up

Or... raise the contribution cap, which fixes the whole thing easily without having to screw over the people who paid in and just want to get back what they were promised.
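
For a rough sense of the arithmetic, here's a minimal sketch (assuming the roughly $168,600 wage base and 6.2% employee-side rate that applied in 2024; the $1M salary is purely illustrative):

    # Employee-side Social Security (OASDI) tax, capped vs. uncapped
    WAGE_BASE = 168_600  # approximate 2024 taxable maximum
    RATE = 0.062         # employee share of the payroll tax

    def oasdi_tax(wages, cap=WAGE_BASE):
        # Tax applies only up to the cap; pass cap=None to lift it entirely.
        taxable = wages if cap is None else min(wages, cap)
        return RATE * taxable

    salary = 1_000_000
    print(oasdi_tax(salary))            # capped: 10453.2
    print(oasdi_tax(salary, cap=None))  # uncapped: 62000.0

Everything a high earner makes above the cap is currently untaxed for Social Security purposes, which is why lifting the cap recovers so much revenue.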

Raise the retirement age? Really? All this advancement to make our lives better and more efficient, and we're going to conclude that we all need to work more?

And meanwhile we can piss away cash by the trillion, but when it comes to Social Security suddenly there's no money to be found anywhere.

They've fooled everyone into believing "the fund will be depleted" in x years. Then put some more money in, assholes.


But Social Security solvency isn't the only thing that needs to be addressed; Medicare, for example, does too.

Considering that the population has decided it doesn't want any more significant immigration, which ensures the median age (along with the median working age) will increase faster than lifespans, it seems foolhardy to think we can have our cake and eat it too. Any kind of tax increase is a difficult sell.[1] If we can successfully raise the limit on payroll taxes, we had better make it count. Note that Social Security disability benefits exist and presumably would remain an option for those unable to work.

That said, I appreciate you pointing that out. I hadn't realized how much revenue there was still to be had in lifting the payroll cap, at least according to https://crr.bc.edu/to-fix-social-security-increasing-the-wag...

[1] How quickly the left forgets that the president who pushed through devastating tax cuts in 2018 was just recently re-elected.


> I've been using Ruby for over 10 years

Maybe after a decade or two I wouldn't care, but as someone who has only used Ruby casually I would steer clear of it for anything serious, largely due to the lack of namespaces.


That's why you hold the humans that run the computer accountable.


That's sort of the whole point, though.

The computer allows the humans a lot of leeway. For one thing, the computer represents a large diffusion of responsibility. Do you hold the programmers responsible? The managers who told them to build the software? How about the hardware manufacturers or the IT people who built or installed the system? Maybe the executives?

What if the program was built with all good intentions and just has a critical exploit that someone abused?

It's just not so straightforward to hold the humans accountable when there are so many humans that touch any piece of commercial software.


That's fair, but I was referring to the humans who delegate "business decisions" to computers, which is what I thought the context was...

For example, if American Airlines uses a computer to help decide who gets refunds, they can't then blame the computer when it discriminates against group X, or steals money, because it was their own "business decision" that is responsible for that action (with the assist from a stupid computer they chose to use).

This is different from when their landing gear doesn't go down because of a software flaw in some component. They didn't produce the component and they didn't choose to delegate their "business decisions" to it, so as long as they used an approved vendor etc. they should be OK. Choosing the vendor, the maintenance schedules, etc.: those are the "business decisions" they're responsible for.


> For example, if American Airlines uses a computer to help decide who gets refunds, they can't then blame the computer when it discriminates against group X

If American Airlines uses a computer to automatically decline refunds, which human(s) do we hold accountable for these decisions?

The engineers who built the system?

The product people who designed the system, providing the business rules that the engineers followed?

The executives who oversaw the whole thing?

Sometimes there is one person you can pin the blame on, who was responsible for "going rogue" and building some discrimination into the system.

Often it is a failure of a large part of a business. Responsibility is diffused enough that no one is accountable, and essentially we do in fact "blame the computer".


> which human(s) do we hold accountable for these decisions?

Personally I'd be satisfied holding the company as a whole liable rather than a single person.


What does it mean to hold "a company" liable?

All that does is create a situation where decision makers at companies can make the company behave unethically or even illegally and suffer no repercussions. They might not even still be at the company when the consequences are finally felt.


> What does it mean to hold "a company" liable?

It means that the company is sued and is responsible for damages.

> decision makers at companies can make the company behave unethically or even illegally and suffer no repercussions for this

But now you've just argued yourself back to the "which human(s) do we hold accountable for these decisions?" question you raised that I was trying to get you out of.


> Workday said in a statement it was "honored to partner with OPM" to modernize its HR systems via the 12-month $342,200 contract.

Only $342k?

Yes it is: https://www.usaspending.gov/award/CONT_AWD_24322625C0006_240...


> Will we see ads being integrated into AI response?

Already exists. Go to bing.com/chat and type something like "i'm looking for a nice suitcase".

