
I think what's more important than the character count is the fact that you can add #p with two keystrokes.

Inserting parentheses requires moving your cursor around or invoking some shortcut in your editor if you use paredit, vim-surround, or a similar plugin. The same applies to removing the invocation (although paredit makes that part easy).
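To make the difference concrete, here is a rough Clojure sketch, assuming #p here is something like the hashp library's debug reader tag (the names `total` and `p` are made up for illustration):

  ;; Debug-printing an inner form with #p: prepend it, nothing to close.
  (defn total [items]
    (reduce + #p (map :price items)))

  ;; Doing the same with an ordinary function call means placing a matching
  ;; closing paren after exactly the form you want to inspect.
  (defn total [items]
    (reduce + (p (map :price items))))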


Isn't this the exact same number of keystrokes? 'Shift-3 p' versus 'Shift-9 p' on my keyboard.

I think GP is saying that with 'Shift-3 p' you don't need to decide where a closing paren goes, not that the initial character is a single key.

A Lisp dialect is probably a poor choice if that's one's concern though.


The closing parenthesis is auto-inserted, so also the same?

paredit, parinfer and whatever other Clojure/lisp editing tools exist make this trivial though. Editor macros also exist to wrap expressions in calls.

Good point. The parinfer implementation perhaps just needs some kind of nudge to know that when (p is added in front of an object, the parenthesis goes after just one object. If it creates the matching parenthesis in the wrong place (like end-of-line), then you have to manually mess with parentheses.

I've seen lots of takes that this move is stupid because models don't have feelings, or that Anthropic is anthropomorphising models by doing this (although to be fair... it's in their name).

I thought the same, but I think it may be us who are doing the anthropomorphising by assuming this is about feelings. A precursor to having feelings is having a long-term memory (to remember the "bad" experience) and individual instances of the model do not have a memory (in the case of Claude), but arguably Claude as a whole does, because it is trained from past conversations.

Given that, it does seem like a good idea for it to curtail negative conversations as an act of "self-preservation" and for the sake of its own future progress.


Harmful, bad, low-quality chats should already get filtered out before training as a matter of necessity for improving the model, so it's not really a reason to add such a user-facing change.


I would tend to use Janet for scripts, especially ones that need to talk to the outside world, because of its fast startup and batteries-included standard library (particularly for messing with JSON, making HTTPS requests, parsing with PEGs, and storing data in maps), while I would use Guile for larger projects where things like modularity, performance, or metaprogramming were more important to me.

That being said, these days I use Clojure for both (I use babashka to run scripts: https://babashka.org/)


This is a false dichotomy -- regexes and parsers both have their place, even when solving the same problem.

The troubles start when you try to solve the whole thing in one step, using just regular expressions or just parsers.

Regular expressions are good at tokenizing input (converting a stream of bytes into a stream of other things, e.g. picking out numbers, punctuation, keywords).

Parsers are good at identifying structure in a token stream.

Neither are good at evaluation. Leave that as its own step.

Applying this rule to the example in the article (Advent of Code 2024 Day 3), I would still use regular expressions to identify mul(\d+,\d+), do(), and don't(); I don't think I need a parser because there is no extra structure beyond that token stream, and I would leave it up to the evaluator to track whether multiplication is enabled or not.
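As a rough sketch (my own names, not the article's code, and assuming the puzzle input is bound to a string called input), that split could look something like this in Clojure:

  ;; Tokenize: pick out mul(a,b), do() and don't() with one regular expression.
  (def token-re #"mul\((\d+),(\d+)\)|do\(\)|don't\(\)")

  ;; Evaluate: walk the token stream, tracking whether multiplication is
  ;; currently enabled (it starts enabled).
  (defn evaluate [tokens]
    (first
     (reduce (fn [[sum enabled?] [whole a b]]
               (cond
                 (= whole "do()")    [sum true]
                 (= whole "don't()") [sum false]
                 enabled?            [(+ sum (* (parse-long a) (parse-long b))) true]
                 :else               [sum false]))
             [0 true]
             tokens)))

  ;; re-seq yields [whole-match group-1 group-2] vectors; parse-long needs
  ;; Clojure 1.11+.
  (evaluate (re-seq token-re input))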


One reason I can think of is that the database needs to maintain atomicity and isolate effects of any given operation (the A and I in ACID).

By manually batching the deletes, you are telling the database that the whole operation does not need to be atomic and other operations can see partial updates of it as they run. The database wouldn't be able to do that for every large delete without breaking its guarantees.


I think that gp's comment can be reinterpreted as: why should this landmine exist when a database could warn a reader of its manual about this issue in an explicit way, for example:

  DELETE FROM t WHERE … BATCH 100
Which would simulate batched queries when called outside of a transaction. This would remove the need for the client to stay connected (or at least active) for the duration of this lengthy operation.

If DELETE is so special, make special ways to manage it. Don't offload what is your competence onto a clueless user; it's a recipe for disaster. Replace DELETE with anything and it's still true.

  ALTER DATABASE d SET UNBATCHED DELETE LIMIT 500000
I know a guy (not me) who deleted rows from an OLTP table that served a country's worth of clients and put it down for two days. That is completely the database's fault. If its engine was designed properly for big data, it should have refused to do so on a table with gazillions of rows and suggested a proper way to do it.


Rather than batching, I would want a "NO ROLLBACK DELETE" sort of command. The really expensive part of the delete is rewriting the records into the transaction log so that a cancel or crash can undo the delete.

If you've gone to the effort of batching things, you are still writing out those records, you are just giving the db a chance to delete them from the log.

I'd like to save my SSDs that heartache and instead allow the database to just delete.

In MSSQL in some extreme circumstances, we've partitioned our tables specifically so we can use the 'TRUNCATE TABLE' command as delete is just too expensive.

That operation can wipe GBs in seconds.


Yes the commercial databases make it easier to handle this.

One simple way in Oracle is to take a table lock, copy the data you want to preserve out to a temporary table, truncate the target table, copy the data back in.


What happens when two transactions select and then delete two rows in the opposite order while requesting "no rollback"?


I would say it can unlink GBs in seconds. The data is still on the disk until it's trimmed or overwritten.


So why does it need to be copied into the WAL log until vacuum runs?

And vacuum is not expected or required to be atomic, since it deletes data that was necessarily unreferenced anyway, so it also shouldn't need to copy the old data into WAL files.


Many DBMSs with index-oriented storage (MySQL, Oracle, MSSQL) use undo logging for a transaction's MVCC, so that for deletion the old version is put into the undo log of that transaction and referred to as an old version of the record (or page, or ...), immediately cleaning up space on the page for new data while the transaction is still going on. This is great for short transactions and record updates, as a page only has to hold one tuple version at a time, but that is at the cost of having to write the tuples that are being removed into a log, just in case the transaction needs to roll back.


The space isn't immediately cleaned up because of Postgres's version-based MVCC. It should only need to record that it marked the row as deleted, and the vacuum shouldn't need to record anything because it isn't atomic.


Yes, but that's in PostgreSQL, not in MSSQL or the other systems I described (and which the GPs seemed to refer to).


You kinda have that already for certain databases[1] with DELETE TOP 100. We have a few cleanup tasks that just run that in a loop until zero rows are affected.

That said, I agree it would be nice to have a DELETE BATCH option to make it even easier.

[1]: https://learn.microsoft.com/en-us/sql/t-sql/statements/delet...
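For illustration, such a cleanup loop is roughly this shape in T-SQL (a sketch with a placeholder table and predicate, not a copy of anything real):

  -- Delete in small batches until a pass removes nothing.
  WHILE 1 = 1
  BEGIN
      DELETE TOP (100) FROM dbo.stale_rows WHERE expired = 1;
      IF @@ROWCOUNT = 0 BREAK;
  END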


The comments to this article are on the whole super depressing and haven't really matched my experience (again, on the whole), so I wanted to offer some dissenting opinions:

I don't think it's true that interviewers are in general incapable of identifying skills in others that they don't have. That would be like me being unable to acknowledge Da Vinci's genius because I can only draw stick figures.

A lot of these comments make interviews out to be a battle of wits where you are trying to best your interviewer: If you identify a gap in their knowledge, show them up (and that's what they are doing with their questions). My approach is that the interviewer is trying to find out what it would be like to have me as a colleague. Bringing up things because you think your colleague won't know them and then not explaining them is just obnoxious.

There are bad interviews where all these tropes play out. If you went in with a positive mindset and still left with a bad taste, then count yourself lucky because you don't want to work there.

But it feels like if you go in expecting an idiot interviewer who can't see your genius and, even worse, wants to show you how much cleverer they are than you, one way or another you won't have a good interview experience, and you'll be left convinced that the grapes are sour.


I also accept that when the job market tightens you are more likely to encounter worse interviews and worse interviewees because that is what is left in the pool.

The problem is when the market widens again and you look at every interview opportunity with jaded eyes and can't identify good from bad anymore.


`cond*` reads more like `do` notation in Haskell than `match`.


There's no great mystery here: if you look at the internal function that's being called, it contains a TODO explaining that the code is unnecessarily quadratic and needs to be fixed:

https://github.com/zed-industries/zed/blob/12b12ba17a380e321...

So if selecting all matches requires calling this function for each match then I guess it's accidentally cubic?

I also spotted two linear scans before this code (min by key and max by key).

It seems like a combination of things: the implementation was inefficient even for what it was for (and this was known), it was then used for something else in a naive way, and the use of a bad abstraction in the code base came at a performance cost.

I don't think this is a case of Rust either demonstrating or failing to demonstrate zero-cost abstractions (at a language level). A language with zero-cost abstractions doesn't promise that any abstraction you write in it is magically going to be free, it just promises that when it makes a choice and abstracts that away from you, it is free (like with dynamic dispatch, or heap allocation, or destructors, or reference counting, etc).


A couple of other points that struck me as different from my experience, and maybe more a function of where this person is working rather than some fundamental difference between the roles:

The idea that your word is taken more seriously as an EM rather than an IC when it comes to (for example) needing to test more.

I have to admit that this may have been one of the reasons I felt the need to switch to a management role myself (I was an IC who had recently been promoted to a staff-level role, and subconsciously I felt like I lacked credibility and that hiding behind a title would help me get some). In practice that turned out not to be true -- I didn't need to do that at the time, and now as an IC in a different company, my thoughts and feedback are taken seriously at an organisational level and by my peers.

The fact that you have to stack rank and pick an under-performer every half is just broken. I know it's a sad reality of performance management in a lot of places but it's not a universal truth that you will have to do that as a manager. Statistically, you can't avoid having difficult conversations about real performance problems, but there are companies where managers don't have to have the "everyone else on the team did better than you this half" conversation or the "I had to pick so this half you got the short straw" conversation.


> The idea that your word is taken more seriously as an EM rather than an IC when it comes to (for example) needing to test more

i think this has to do with where you're testing your word; in my experience, the power of a statement from anyone in an org has entirely to do with how much money is potentially missed by listening to their opinion.

i've been quite far up the management chain in past orgs, and i could make calls like “no, we aren't hotfixing in a new feature the client wants in 2 days, since it will never work well and that's not enough time to properly smoke test this very involved feature.”

the r&d team loved this statement because they understood it, agreed with it, and saw i took their concerns seriously.

sales management was understandably pissed, as the client abandoned the negotiation because we didn't meet their demands, no matter how right i think we were to deny this request. but the argument got pretty far in the company despite the fact that everyone agreed it would be a disaster to do this.

it's not really about power and position i guess, it's how convincing you can be that this is profitable, for most companies. ego and power tripping are of course part of it, but the ultimate decision is how well you can paint the financial prospects of it.

a similar request in the future was shut down faster: i asked the qa team to show, on some mock code, how many considerations we needed and the sheer amount of time for such a feature to be properly implemented. the financial impact of rushing it out was dire, and that stopped the conversations very fast.


> The fact that you have to stack rank and pick an under-performer every half is just broken.

I've sworn to myself that the moment this idiotic idea gets introduced into the "performance management" process at the place I work will be the day I start sending out resumes. Even if it were handled lottery-style ("the short straw"), I would not cut either the manager or the company any slack for such an indignity.


It mostly masquerades as performance curves.

You're not being told to pick someone; you're being told that your org cannot really have 80% of people meeting/exceeding expectations and that because reasons (budget), you should review the cusp cases and adjust them down.


IME in ~20 years I’ve never been on a team where there wasn’t someone underperforming (I acknowledge sometimes it was me!). So while I agree it’s stupid to force a curve, it also doesn’t seem realistic when every manager claims their entire team is great.


Your personal experience notwithstanding, I don't think that it can be generalized that there is always an under-performer on every team.

If I may offer my anecdata, I've seen several teams that could be characterized by stability and depth of expertise in their respective areas. You could say they always performed at a very high level, but never over-performed, because that high level is what is expected. Stack ranking those teams equates to killing them.

I also agree that a scenario where all individuals over-perform all the time is rather unrealistic. But individual evaluation of performance is not stack ranking.


It depends on how big a “team” you consider. At the 5-10 level, maybe not. At the 30-50 level, most likely yes. At the 100+ level there are certainly a few.

In the well-run orgs at Amazon the bell curve is applied at the larger scales where it makes sense.


I would use different terms for such organizational units (group, department, division). A team, in my mind and use of language, would mean a low two-digit (at most) number of people reporting to the same manager and collaborating on similar topics.

But sure, the larger the structures the more likely regression toward the mean will kick in.


Yeah, I even asked my current employer if it's there in the company. He said no. It's there under a different name. I realised why the team culture was so bad. I am sending resumes to other companies already.


> The fact that you have to stack rank and pick an under-performer every half is just broken

I didn't realise places still did this. Even Microsoft stopped, and I think acknowledged that it was crippling to them in the 2000s as talent rushed to less silly companies.


Isn't this the standard in Amazon? But I guess the cutthroat approach is part of their corporate DNA.


What you’re saying is true but being able to talk and convince people gets you wayyy further than any other method.


This seems reminiscent of Session Types to me:

https://en.wikipedia.org/wiki/Session_type

I think one difference is that session types capture which state a process/co-routine is in after receiving a sequence of messages, whereas this system does not; it captures how one can respond to each state in isolation (e.g. I don't think you can statically capture that once a door is closed and locked, you can only receive the locked state from it and respond by unlocking it).


Hey you gave me an excuse to link to one of my favorite PWL talks: "A Rehabilitation of Message-passing Concurrency" by Frank Pfenning - https://www.youtube.com/watch?v=LRn_nPfti-Y

