Hacker News | YouWhy's comments

Is the question at hand about balancing local building authorization with the government's intent to encourage a specific kind of national infrastructure business?

This seems to be supported by this quote:

> Putting arbitrary deadlines on state, local, and Tribal governments to start and finish complicated permit reviews...

I'm not an American, but I am alarmed at the recent tendency toward bad-faith rulemaking. However, the above sounds like it's in reasonably good faith - is that indeed the case, or am I missing some angle?


Worth reading the letter from local govs as they lay out why this is such a complicated and slow problem: https://cdn.arstechnica.net/wp-content/uploads/2025/11/20251...

No big business in the US acts in good faith, so the fact that they are cheering this on tells me to be suspicious. My read is that they want to juice returns/timelines by avoiding bureaucracy, and the city/local residents will deal with the inevitable mess.


From my perspective, depending on where they want to develop, it might be trivial to add something time-consuming to a proposal, such as eminent domain or easement reviews, that will run out the shot clock and drown out reasonable questions from local governments. Local governance is often complicated by local (town), regional (county), and state-level roles. Additionally, depending on the area, not all of these roles are even staffed by people working on them full-time.

Laws on public nuisance sound like they're in good faith; it's more about who's in positions of enforcement.

The GOP would call this the deep state. Regulators and judges have been targets of modern GOP fascism.


There's correlation all right, but is there causation?

A lot of folks I know regard mechanical watches as a type of jewelry, a high-value item that's not intended for the everyday.

I concur that the popularity of mechanical watches is on the rise, but having a cool mechanical piece on in the evening does not preclude having a digital watch at all other times.


For everyday use (e.g., work), I use a Huawei smartwatch, or something affordable like a Seiko 5 when I'm in the mood for a mechanical watch.

Yep, using one doesn't automatically preclude the other.


"at all other times"

Psh, get with the program. Everyone knows you gotta have three. One mechanical, one smartwatch, one fitness tracker. Don't you want to be stylish?


I find merit in the core point as well as in the delivery.

I wish it were more commonly accepted that choosing not to act is effectively a stand against one's own value system in favor of the value systems of those who do act.


It annoys me greatly when people say "I didn't choose [elected official] because I didn't vote at all".


I think Python's centrality is a consequence of its original purpose as a language intended for instruction.

Yeah, some of its design decisions required immense cost and time to overcome to make for viable production solutions. However, as it turns out, however suboptimal it is as a language, this is largely made up for by the presence of a huge workforce that's decently qualified to wield it.


Python's original purpose was as a scripting language for Amoeba. Yes, it was strongly influenced by ABC, an introductory programming language which van Rossum helped implement, but that was a different language.

https://docs.python.org/3/faq/general.html#why-was-python-cr...

""I was working in the Amoeba distributed operating system group at CWI. We needed a better way to do system administration than by writing either C programs or Bourne shell scripts, since Amoeba had its own system call interface which wasn't easily accessible from the Bourne shell. My experience with error handling in Amoeba made me acutely aware of the importance of exceptions as a programming language feature.

It occurred to me that a scripting language with a syntax like ABC but with access to the Amoeba system calls would fill the need."""


I came to regard YAML as a kind of syntactic high-fructose corn syrup: a bearable idea that was taken too far.

Alas, YAML is just about everywhere, so the chances for a replacement that'll be both better behaved and as ubiquitous are unfortunately slim.
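
For a concrete taste of the "taken too far" part, here's a minimal sketch assuming PyYAML (which implements YAML 1.1 semantics); unquoted scalars get coerced in surprising ways:

    import yaml  # PyYAML

    # The classic "Norway problem": unquoted 'no' is a boolean in YAML 1.1.
    print(yaml.safe_load("country: no"))    # {'country': False}
    # Trailing zeros vanish because 3.10 parses as a float.
    print(yaml.safe_load("version: 3.10"))  # {'version': 3.1}
    # Colon-separated digits parse as a sexagesimal (base-60) integer.
    print(yaml.safe_load("time: 12:30"))    # {'time': 750}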


To the extent a random person's evidence on the Internet amounts to proof:

From people at Facebook circa 2018, I know that end-user privacy was addressed at multiple checkpoints: onboarding, the UI of all systems that could theoretically access PII, war stories about senior people being fired for marginally misunderstanding the policy, etc.

Note that these friends did not belong to WhatsApp, which was at that time a rather separate suborg.


TL;DR: is the described business viable?

I can see how a tech-centric person would see the described business as viable, but putting on my founder hat, I realize that it faces enormous risks:

- Any competitor could build the same product with less janky UX; users tend to hate even unavoidable usability issues.

- There's no compliance strategy even remotely possible in the described scenario.

- If a capital investment becomes necessary for business scaling, I cannot imagine this organization passing even a perfunctory level of due diligence.

Would be happy to hear whether that makes sense.


I'd like to encourage you to consider the following two perspectives:

1. A senior Google leader telling the shareholders "we've asked 1% of our engineers (that's 270 people, costing $80M/year) to work on services that produce no revenue whatsoever." I don't think that would go over well.

2. A Google middle manager trying to figure out whether an engineer working exclusively on non-revenue projects is actually being useful; this is made more complex by about 30% of the workforce trying to go for the rest-and-vest option provided by these projects.


> A senior Google leader telling the shareholders "we've asked 1% of our engineers (that's 270 people, costing $80M/year) to work on services that produce no revenue whatsoever." I don't think that would go over well.

The business case for this is that Google loses a bunch of money in B2B (cloud mostly, potentially AI in the future) because professional users (developers etc.) don't believe that products will be supported. Every time Google shuts down a service like this, this perception is reinforced. We're investing this money into these services to change our brand perception and help us make more money in the future.

As a bonus, this kind of cultural change would also force them to rebuild their engineering systems (and promotional systems) to make this easier. This may not have mattered for Search/Ads but it will matter if they actually care about winning in cloud and AI.


A Google shareholder that shortsighted might as well ask why they have an HR department or custodians to maintain the offices; after all, those don't generate income either.

The manager in the trenches can tell if there's actual work happening: moving goo.gl from the internal legacy system to the new supported one doesn't magically happen; code needs to change for it to work after the old system gets shut off.


Full disclosure: I'm not a US resident, and I'm deeply alarmed by the de facto defunding going on right now.

Furthermore, I concur with the piece that proper investment in the fields in question would further the common good.

However, I take objection to equating investment in general with investment in the specific way it has been done up until now.

The article opens with longevity; consider the Alzheimer's amyloid hypothesis, which served as a bandwagon for low-impact, perhaps even bad-faith research, while siphoning off billions in public spending and blocking out alternate research pathways.

Many of the other domains mentioned exhibit similar dynamics. To my ears, it makes little sense to champion further spending without exploring the reform that needs to be carried out to align that spending with the various notions of public good.

To reiterate, I abhor the populist choice of dismantling everything and putting cronies in place to feed off the rest. I'd welcome a way forward to make change, because some change is due.


Hey, I've been developing professionally with Python for 20 years, so wanted to weigh in:

Decent threading is awesome news, but it only affects a small minority of use cases. Threads are only strictly necessary when it's prohibitive to pass messages. The Python ecosystem these days includes a playbook solution for practically any such case. Considering the multiple major pitfalls of threads (e.g., locking), they are likely to become useful only in specific libraries/domains and not as a general-purpose tool.

Additionally, with all my love for vanilla Python, anyone who needs to squeeze the juice out of their CPU (which is actually memory bandwidth) has plenty of other tools: off-the-shelf libraries written in native code. (Honorable mention to PyPy, Numba and such.)

Finally, the one dramatic performance innovation in Python has been async programming - I warmly encourage everyone not familiar with it to consider taking a look.
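
To make that concrete, here's a minimal sketch of the async style (the fetch function is hypothetical; asyncio.sleep stands in for real network I/O):

    import asyncio

    async def fetch(i):
        await asyncio.sleep(1)  # stand-in for a network call
        return i * i

    async def main():
        # All ten tasks wait concurrently on a single thread,
        # so this takes ~1 second rather than ~10.
        results = await asyncio.gather(*(fetch(i) for i in range(10)))
        print(results)

    asyncio.run(main())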


I haven’t been using it that much longer than you, and I agree with most of what you’re saying, but I’d characterize it differently.

Python has a lot of solid workarounds for avoiding threading, because until now Python threading has absolutely sucked. I had naively tried to use it to make a CPU-bound workload twice as fast and soon realized the implications of the GIL, so I threw all that code away and made it use multiprocessing instead. That sucked in its own way because I had to serialize lots of large data structures to pass around, so 2x the cores got me about 1.5x the speed and a warmer server room.
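
For readers who haven't hit this: a minimal sketch of roughly that pattern (made-up data sizes), where every chunk gets pickled across the process boundary:

    import multiprocessing as mp

    def work(chunk):
        # CPU-bound work; 'chunk' was pickled over to this worker process
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        data = [list(range(1_000_000)) for _ in range(8)]
        with mp.Pool() as pool:
            # Each chunk is serialized out and each result serialized
            # back, which is where the speedup quietly leaks away.
            print(sum(pool.map(work, data)))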

I would love to have good threading support in Python. It’s not always the right solution, but there are a lot of circumstances where it’d be absolutely peachy, and today we’re faking our way around its absence with whole playbooks of alternative approaches to avoid the elephant in the room.

But yes, use async when it makes sense. It’s a thing of beauty. (Yes, Glyph, we hear the “I told you so!” You were right.)


> That sucked in its own way because I had to serialize lots of large data structures to pass around, so 2x the cores got me about 1.5x the speed and a warmer server room.

In many cases you can't reasonably expect better than that (https://en.wikipedia.org/wiki/Amdahl's_law). If your algorithm involves sharing "large data structures" in the first place, that's a bad sign.
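
Quick back-of-the-envelope with Amdahl's formula, speedup = 1 / ((1 - p) + p/n): observing 1.5x on 2 cores implies the parallel fraction p is about 2/3, which caps the best possible speedup at 3x no matter how many cores you add.

    # Amdahl's law: speedup = 1 / ((1 - p) + p / n)
    # Solve for the parallel fraction p from the observed numbers.
    n, speedup = 2, 1.5
    p = (1 - 1 / speedup) / (1 - 1 / n)
    print(p)            # 0.666... -> about 2/3 of the work parallelizes
    print(1 / (1 - p))  # 3.0 -> the ceiling even with infinite cores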


That's true, but you can sometimes get a whole lot closer if you can share state between threads. Sometimes you can't help the size of the data. Maybe you have a thread reading frames from a video and passing them to workers for analysis. You might get crazy IO contention if you pass around "foo.vid;frame222" and "foo.vid;frame223" to the workers and make them retrieve that data themselves.

There may be another way to skin that specific cat. My point isn't to solve one specific problem, but to say that some problems are just inherently large. And with Python, today, if those workers are CPU-bound in Python-land, that means running separate processes and passing large hunks of state around (or shoving it through SHM; same idea, just a different way of passing state).
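
As a sketch of the shared-memory route mentioned above (assuming NumPy-shaped frames and Python 3.8+'s multiprocessing.shared_memory), the pixels live in one place and workers attach by name instead of receiving pickled copies:

    from multiprocessing import shared_memory
    import numpy as np

    # Producer: write one frame into a named shared-memory block.
    frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
    shm = shared_memory.SharedMemory(create=True, size=frame.nbytes)
    np.ndarray(frame.shape, dtype=frame.dtype, buffer=shm.buf)[:] = frame

    # Worker (in another process): attach by name, zero-copy view.
    existing = shared_memory.SharedMemory(name=shm.name)
    view = np.ndarray((1080, 1920, 3), dtype=np.uint8, buffer=existing.buf)
    # ... analyze 'view' here ...
    existing.close()

    # Producer, once all workers are done:
    shm.close()
    shm.unlink()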


I find Python's async to be lacking in fine-grained control. It may be fine for 95% of simple use cases, but it lacks advanced features such as sequential constraining, task queue memory management, task pre-emption, etc. The async keyword also tends to bubble up through codebases in awful ways, making it almost impossible to create reasonably decoupled code.
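
The bubbling-up complaint is the classic "function coloring" problem; a tiny sketch (hypothetical names) of how one async call forces every caller up the stack to change:

    import asyncio

    async def fetch_user(uid):
        await asyncio.sleep(0.1)  # pretend network call
        return {"id": uid}

    def handler(uid):
        # A plain sync caller can't just call it: fetch_user(uid) returns
        # an un-awaited coroutine object and never actually runs.
        # So either spin up a loop here...
        return asyncio.run(fetch_user(uid))
        # ...or make handler() async too, and its caller, and so on up.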

