tyrust's comments | Hacker News

I wonder if, in the libertarian worldview, privacy legislation is viewed as a force for good (preserving liberty) or for evil (stifling free enterprise).

Code review is rarely done live. It's usually asynchronous, giving the reviewer plenty of time to read, digest, and give considered feedback on the changes.

Perhaps a spicy patch would involve some kind of meeting. Or maybe in a mentor/mentee situation where you'd want high-bandwidth communication.


My first job did IRL code reviews with at least two senior devs in the loop. It was both devastating and extremely helpful.

Yeah, when we first started, "code review" was a weekly meeting of pretty much the entire dev team (maybe 10 people). Not all commits were reviewed; selection was random, and the developer would be notified a couple of days in advance that his code was chosen for review so that he could prepare to demo and defend it.

Wow, that's a very arbitrary practice: do you remember roughly when that was?

I was in a team in 2006 where we did the regular 2-approve-code-reviews-per-change-proposal (along with fully integrated CI/CD, some of it through signed email, though not full diffs like Linux patchsets, only "commands" for what branch to merge where).


Around that time frame. We had CI and if you broke the build or tests failed it was your job to drop anything else you were doing and fix it. Nothing reached the review stage unless it could build and pass unit tests.

Right, we already had both: pre-review build & test runs, and pre-merge CI (this actually ran on a temp, merged branch).

This was still practice at $BIG_FINANCE in the couple of years just before covid, although by that point such team reviews were reducing in importance and prominence.

Doing only IRL code reviews would certainly improve quality in some projects :)

It's probably also fairly expensive to do.


Am old enough that this was status quo for part of my career, and have also been in some groups that did this as a rejection of modern code review techniques.

There are pros & cons to both sides. As you point out it's quite expensive in terms of time to do the in person style. Getting several people together is a big hassle. I've found that the code reviews themselves, and what people get out of them, are wildly different though. In person code reviews have been much more holistic in my experience, sometimes bordering on bigger picture planning. And much better as a learning tool for other people involved. Whereas the diff style online code review tends to be more focused on the immediate concerns.

There's not a right or wrong answer between those tradeoffs, but people need to realize they're not the same thing.


I would guess that a 3-part code review would actually be most effective, and likely even save on costs. The first part is a walkthrough on a call, next independent review and comments, then, as needed, another call over fixes or discussion.

You'd probably spend more time on it, but it would build shared understanding and alignment.


Pair programming? That is realtime code review by another human

And yet... is it? Realtime means real discussion, and opportunity to align ever so slightly on a common standard (which we should write down!), and an opportunity to share tacit knowledge.

It also increases the coverage area of code that each developer is at least somewhat familiar with.

On a side note, I would love it if the default was for these code reviews to be recorded. That way, 2 years later, when I am asked to modify some module that no one has touched in that span, I could at least watch the code review and glean something about how/why this was architected the way it was.


IMO, a lot of what I think you are getting at should be worked out in design before work starts.

Fagan inspection has entered the room

My kid got a copy of your first book as a gift a couple years ago. It's really fun to have on the shelf. The buttons are so satisfyingly clicky. Thanks!


Thanks for buying it!!


> now

Is this still a problem? Your example video is from nearly twenty years ago, and RAM is over a decade old. I think the advent of streaming (and perhaps lessons learned) have made this less of a problem. I can't remember hearing any recent examples (but I also don't listen to a lot of music that might be victim to the practice); the Wikipedia article lacks any examples from the last decade https://en.wikipedia.org/wiki/Loudness_war

Thankfully there have been some remasters that have undone the damage. Three Cheers for Sweet Revenge and Absolution come to mind.


Certified Audio Engineer here. The Loudness Wars more or less ended over the last decade or so due to music streaming services using loudness normalization (they effectively measure what each recording's true average volume is and adjust them all up or down on an invisible volume knob to have the same average)

Because of this it generally makes more sense these days to just make your music have an appropriate dynamic range for the content/intended usage. Some stuff still gets slammed with compression/limiters, but it's mostly club music from what I can tell.
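The "invisible volume knob" can be sketched in a few lines of Python. This is a simplified sketch: real streaming services measure integrated loudness in LUFS (per ITU-R BS.1770), not plain RMS, and the target level here is made up for illustration.

```python
import math

TARGET_LEVEL = 0.1  # hypothetical reference level, not any service's real target

def rms(samples):
    """Average (root-mean-square) level of a track."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def normalize(samples, target=TARGET_LEVEL):
    """The 'invisible volume knob': scale the whole track so its
    average level matches the target, turning it up or down as needed."""
    gain = target / rms(samples)
    return [s * gain for s in samples]

quiet_track = [0.01, -0.02, 0.015, -0.01]
loud_track = [0.5, -0.6, 0.55, -0.45]

# Both tracks end up at the same average level, so slamming a master
# with limiters buys no extra perceived loudness on playback:
print(round(rms(normalize(quiet_track)), 3))  # 0.1
print(round(rms(normalize(loud_track)), 3))   # 0.1
```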


This goes along with what I saw growing up. You had the retail mastering (with RIAA curve for LP, etc.) and then the separate radio edit which had the compression that the stations wanted - so they sounded louder and wouldn't have too much bass/treble. And also wouldn't distort on the leased line to the transmitter site.

And of course it would have all the dirty words removed or changed. Like Steve Miller Band's "funky kicks going down in the city" in Jet Airliner

I still don't know if the compression in the Loudness War was because of esthetics, or because of the studios wanting to save money and only pay for the radio edit. Possibly both - reduced production costs and not having to pay big-name engineers. "My sister's cousin has this plug-in for his laptop and all you do is click a button"...


> I still don't know if the compression in the Loudness War was because of esthetics,

Upping the gain increases the relative "oomph" of the bass at the cost of some treble, right?

As a 90s kid with a bumping system in my Honda, I can confidently say we were all about that bass long before Meghan Trainor came around. Everyone had the CD they used to demo their system.

Because of that, I think the loudness wars were driven by consumer tastes more than people will admit (because then we'd have to admit we all had poor taste). Young people really loved music with way too much bass. I remember my mom (a talented musician) complaining that my taste in music was all bass.

Of course, hip hop and rap in the 90s were really bass heavy, but so was a lot of rock music. RHCP, Korn, Limp Bizkit, and Slipknot come to my mind as 90s rock bands that had tons of bass in their music.

Freak on a Leash in particular is a song that I feel like doesn't "translate" well to modern sound system setups. Listening to it on a setup with a massive subwoofer just hits different.


> Korn

It wasn't the bass, but rather the guitar.

The bass player tuned the strings down a full step to be quite loose, and turned the treble up which gave it this really clicky tone that sounded like a bunch of tictacs being thrown down an empty concrete stairwell.

He wanted it to be percussive to cut through the monster lows of the guitar.


Music, as tracked by Billboard, cross genre, is as loud as ever. Here’s a survey of Billboard music:

https://www.izotope.com/en/learn/mastering-trends?srsltid=Af...

I have an Audio Developer Conference talk about this topic if you care to follow the history of it. I have softened my stance a bit on the criticism of the '90s (yeah, people were using lookahead limiting overexuberantly because of its newness), but the meat of the talk may be of interest anyway.

https://www.youtube.com/watch?v=0Hj7PYid_tE


As an ex audio engineer, I would say that the war ended and loudness won.


That makes sense, thanks for the reply!


It's still a problem, although less consistently a problem than it used to be for the reason entropicdrifter explained.

There's a crowdsourced database of dynamic range metrics for music at:

https://dr.loudness-war.info/

You can see some 2025 releases are good but many are still loudness war victims. Even though streaming services normalize loudness, dynamic range compression will make music sound better on phone speakers, so there's still reason to do it.

IMO, music production peaked in the 80s, when essentially every mainstream release sounded good.


The implication might be that if they're not willing to give their email then they will definitely not give their time.


Poor societies are perfectly capable of criticizing hedonism. I don't see what's luxurious about criticizing the many examples of absurd resource waste through luxury good consumption.


> high-brow discussion of US politics

lol, implying you'll find that on HN


I write a decent amount of Python, but find the walrus operator unintuitive. It's a little funky that API_KEY is available outside of the `if`, perhaps because I had first seen the walrus operator in golang, which restricts the scope to the block.


This isn't really unique to the walrus operator; it's just a general Python quirk (albeit one I find incredibly annoying). `for i in range(5): ...` will leave `i` bound to 4 after the loop.
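A quick sketch of both leaks - the loop variable and a walrus-bound name - outliving their blocks:

```python
# The loop variable survives the loop, because Python scopes are
# per function (or module), not per block:
for i in range(5):
    pass
print(i)  # 4

# ...just as a walrus-bound name survives the `if` it appears in:
if (n := len("hello")) > 3:
    pass
print(n)  # 5
```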


Oddly enough, "except" variables don't remain bound!

    try:
        x = int('cat')
    except Exception as e:
        pass
    print(e)  # <- NameError: name 'e' is not defined
So, it appears Python actually has three variable scopes (global, local, exception block)?


Nope, it's more complicated than that:

    e = 'before'
    try:
        x = int('cat')
    except Exception as e:
        e2 = e
        print(e)
    print(e2) # <- This works!
    print(e)  # <- NameError: name 'e' is not defined
It's not a scoping thing, the bound exception variable is actually deleted after the exception block, even if it was already bound before!


lol - Raku may be weird, but at least it has sane variable scoping


Also true of JavaScript pre-ES5, another language that on first glance seems to only have function scope: it actually does have block scope, but only for variables introduced in `catch` blocks. AFAIU that was the standard way for a dumb transpiler to emulate `let`.


I wonder if that was ever popular, considering the deoptimization effects of try/catch, and given that block scope can also be managed by renaming variables.


exceptions being the exception is funny somehow


very recursive


you can say that again.


Exception blocks don't create a different scope. Instead, the name is explicitly (well, implicitly but deliberately) deleted from the scope after the try/except block runs. This happens because it would otherwise produce a reference cycle and delay garbage collection.

https://stackoverflow.com/questions/24271752

https://docs.python.org/3/reference/compound_stmts.html#exce...
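The cycle being avoided is visible from inside the block (a minimal sketch; the `demo` function and its message are made up): the exception's traceback references the current frame, and the frame's locals would reference the exception right back.

```python
import sys

def demo():
    try:
        raise ValueError("boom")
    except ValueError as e:
        # e -> e.__traceback__ -> frame -> locals (including e):
        # without the implicit deletion, this would be a reference
        # cycle keeping the whole frame alive until the GC runs.
        frame = e.__traceback__.tb_frame
        assert frame is sys._getframe()
        assert frame.f_locals["e"] is e
    # After the block, the compiler has emitted the equivalent of
    # `e = None; del e`, so the name is gone:
    try:
        e
    except NameError:  # UnboundLocalError is a subclass of NameError
        return "deleted"

print(demo())  # deleted
```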


This is the type of thing that makes me roll my eyes at all the wtf JavaScript posts[0]. Yes, there are a lot of random things that happen with type conversions and quite a few idiosyncrasies (my favourite is that document.all is a non-empty collection that is != false but converts to false in an if).

But the language makes sense at a lower level, scopes, values, bindings have their mostly reasonable rules that are not hard to follow.

In comparison python seems like an infinite tower of ad-hoc exceptions over ad-hoc rules, sure it looks simpler but anywhere you look you discover an infinite depth of complexity [1]

[0] and how half of the complaints are a conjugation of "I don't like that NaNs exist"

[1] my favourite example is how dunder methods are a "synchronized view" of the actual object behaviour; that is, in `a + b`, `a.__add__` is never inspected. Instead, at creation time, a's add behaviour is defined as its __add__ method, but the association is purely a convention. E.g., any C extension type needs to reimplement all these syncs to expose the correct behaviour, and could for funzies decide that a type will use __add__ for repr and __repr__ for add


> yes there are a lot of random things that happen with type conversions and quite a few idiosyncrasies... the language makes sense at a lower level, scopes, values, bindings have their mostly reasonable rules

The "random things" make it practically impossible to figure out what will happen without learning a whole bunch of seemingly arbitrary, corner-case-specific rules (consider the jsdate.wtf test currently making the rounds). And no, nobody is IMX actually simply complaining about NaNs existing (although the lack of a separate integer type does complicate things).

Notice that tests showcasing JavaScript WTFery can work just by passing user data to a builtin type constructor. Tests of Python WTFery generally rely on much more advanced functionality (see e.g. https://discuss.python.org/t/quiz-how-well-do-you-know-pytho...). The only builtin type constructor in Python that I'd consider even slightly surprising is the one for `bytes`/`bytearray`.

Python's scoping is simple and makes perfect sense; it just isn't what you're used to. (It also, unlike JavaScript, limits scope by default, so your code isn't littered with `var` for hygiene.) Variables are names for objects with reference semantics, which are passed by value - exactly like `class` types in C# (except you don't have to worry about `ref`/`in`/`out` keywords) or non-primitives in Java (notwithstanding the weird hybrid behaviour of arrays). Bindings are late in most places, except notably default arguments to functions.

I have no idea what point you're trying to make about __add__; in particular I can't guess what you think it should mean to "inspect" the method. Of course things work differently when you use the C API than when you actually write Python code; you're interacting with C data structures that aren't directly visible from Python.

When you work at the Python level, __add__/__iadd__/__radd__ implement addition, following a well-defined protocol. Nothing happens "at creation time"; methods are just attributes that are looked up at runtime. It is true that the implementation of addition will overlook any `__add__` attribute attached directly to the object, and directly check the class (unlike code that explicitly looks for an attribute). But there's no reason to do that anyway. And on the flip side, you can replace the `__add__` attribute of the class and have it used automatically; it was not set in stone when the class was created.
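The point about the implementation overlooking instance attributes is easy to demonstrate (a minimal sketch; the `Num` class is made up for illustration):

```python
class Num:
    def __init__(self, v):
        self.v = v
    def __add__(self, other):
        return Num(self.v + other.v)

a, b = Num(1), Num(2)

# An __add__ attached directly to the instance is ignored by `+`:
a.__add__ = lambda other: Num(999)
print((a + b).v)  # 3 -- `+` looks up __add__ on type(a), not on a

# But replacing it on the class takes effect immediately; nothing
# was set in stone when the class was created:
Num.__add__ = lambda self, other: Num(self.v * other.v)
print((a + b).v)  # 2 -- 1 * 2, via the new class-level __add__
```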

I'll grant you that the `match` construct is definitely not my favourite piece of language design.


> methods are just attributes that are looked up at runtime

At runtime, when evaluating `a + b`, no dunder method is looked up, and there is no guarantee that `a + b` === `a.__anydunder__(b)`: https://youtu.be/qCGofLIzX6g

What I mean by weird scoping is:

    def foo():
      e = 'defined'
      try:
        raise ValueError
      except Exception as e:
        print(e)
      print(e) # this will error out

    foo()
I also dislike how local/global scopes work in python but that is more of a personal preference.

I agree that JavaScript's standard library is horrible; jsdate.wtf is an extreme but apt example. IMO most of these are solved with some "defensive programming", but I respect other opinions here.

> And no, nobody is IMX actually simply complaining about NaNs existing

I watched many Javascript WTF! videos on youtube and NaNs and [2] == "2" were usually 90% of the content.


Anyway, my actual biggest gripe with Python is that I find the module import/export system counterintuitive.


> Oddly enough

It's not that odd, since it's the only situation where you cannot keep it bound, unless you enjoy having variables that may or may not be defined (Heisenberg variables?), depending on whether the exception has been raised or not?

Compare with the if statement, where the variable in the expression being tested will necessarily be defined.


> Compare with the if statement, where the variable in the expression being tested will necessarily be defined.

    if False:
        x = 7
    print(x)

    print(x)
          ^
    NameError: name 'x' is not defined
Ruby does this sort of stuff, where a variable is defined more or less lexically (nil by default). Python doesn't do this. You can have local variables that only maybe exist in Python.


While somewhat true, what would this be bound to?

    for i in range(0):
        pass


Well, after writing my comment, I realized that a Python interpreter could define the variable and set it to None between the guarded block and the except block, and implicitly assign it to the raised exception right before evaluating the except block, when the exception has been raised. So technically, it would be possible to define the variable e in GP's example and have it scoped to "whatever is after the guarded block", just like what is done with for blocks.

Is there any chance this would cause trouble though? Furthermore, what would be the need of having this variable accessible after the except block? In the case of a for block, it could be interesting to know at which point the for block was "passed".

So, maybe "None" answers your question?


The answer is: it is unbound. IntelliSense will most likely tell you it is `Unbound | <type>` when you try to use the value from a for loop. Would it be possible to default-initialize it to `None`? Sure, but `None` is a distinctly different value from a still-unbound variable and may result in different handling.


Are you saying that this should result in a name error?

   if <true comparison here>:
       x = 5
   print(x)  # <- should give name error?


I stand corrected, the exception case is definitely an oddity, both as being an outlier and as a strange behaviour wrt Python's semantics. Or is it a strange behaviour?

In the case of an if like in your example, no provision is made about the existence of x. It could have been defined earlier, and this line would simply update its value.

Your example:

   if True:
       x = 5
   print(x)  # 5

Same with x defined prior:

   x = 1
   if False:
       x = 5
   print(x)  # 1
What about this one?

   if False:
       x = 5
   print(x)  # ???


On the other hand, the notation "<exception value> as <name>" looks like it introduces a new name; what if that name already existed before? Should it just replace the content of the variable? Why the "as" keyword then? Why not something like "except <name> = <exception value>" or the walrus operator?

While investigating this question, I tried the following:

    x = 3
    try:
        raise Exception()
    except Exception as x:
        pass
    print(x)  # <- what should that print?


In any sane language it would, yes.


Heisenbug is the word you are looking but may not find.


I may be rusty, but wasn't there a "finally" scope for those situations?

edit: writing from phone on couch and the laptop... looks far, far away...


> `for i in range(5): ...` will leave `i` bound to 4 after the loop.

This "feature" was responsible for one of the worst security issues I've seen in my career. I love Python, but the scope leakage is a mess. (And yes, I know it's common in other languages, but that shouldn't excuse it.)


I would love to hear about the security issue if you're able to talk about it


I don't remember the exact details, but it basically involved something along the lines of:

1) Loop through a list of permissions in a for loop

2) After the loop block, check if the user had a certain permission. The line of code performing the check was improperly indented and should have failed, but instead succeeded because the last permission from the previous loop was still in scope.

Fortunately there was no real impact because it only affected users within the same company, but it was still pretty bad.


Oof that's a near miss. That's the sort of hard-to-find issue that keeps me up at night. Although maybe these days some ai tool would be able to pick them up


I find it incredibly intuitive and useful that it does that. Sometimes it drives me nuts that it doesn't do it for comprehensions, but I can see why.

But if something fails in a loop running in the repl or jupyter I already have access to the variables.

If I want to do something with a loop of data that is roughly the same shape, I already have access to one of the items at the end.

Short circuiting/breaking out of a loop early doesn't require an extra assignment.

I really can't see the downside.


Python 2 actually did let comprehension variables leak out into the surrounding scope. They changed it for Python 3, presumably because it was too surprising to overwrite an existing variable with a comprehension variable.


Oh wow, maybe that's why I expect it to work that way! I can't believe it's been long enough since I used 2 that I'm forgetting its quirks.


That almost sounds like having the "variables" eax, ebx, ecx, and edx.


Oh yeah, that's a good point.

Python really is a bit of a mess haha.


I cannot tell you how many times I've hit issues debugging and it was something like this. "You should know better" -- I know, I know, but I still snag on this occasionally.


It would be utterly nuts otherwise. For loops over all elements in a sequence. If the sequence is a list of str, as an example, what would the «item after the last item» be?


the issue isn't the value of i, the issue is that i is still available after the loop ends. in most other languages, if it was instantiated by the for-each loop, it'd die with the for-each loop


Maybe Python will get a let one day


There's no block scope in Python. The smallest scope is function. Comprehension variables don't leak out, though, which causes some weird situations:

    >>> s = "abc"
    >>> [x:=y for y in s]
    ['a', 'b', 'c']
    
    >>> x
    'c'
    
    >>> y
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    NameError: name 'y' is not defined
Comprehensions have their own local scope for their local variables, but the walrus operator reaches up to the innermost "assignable" scope.


This is just Pythons scoping, which is not restricted by block, but function. You have the same effect with every other element.


Wow. I had been writing Python for 15 years and I didn't even know that operator exists


It's only existed for 6 of those years so perhaps you can be forgiven :)

The last time I wrote Python in a job interview, one of the interviewers said "wait, I don't know Python very well but isn't this kinda an old style?" Yes, guilty. My Python dates me.


That's literally the first sentence of the article.


Even better is when Standard Ebooks publishes a version: https://standardebooks.org/ebooks/niccolo-machiavelli/discou...

