Hey, sorry, that might have been me. I did that then left the company.
In my defense, I had to hack around a different Python library also manipulating sys.path, which nobody likes except this one dev team in a different timezone. They somehow got a director to declare that I would fix this issue they self-created before they woke up in 8h, and I wasn't allowed to rip out the library. So, ugly sys.path manipulation in the exact way that library wants. Not proud of it, but it sounds like you were given time to engineer an actually correct solution.
The project is an open source project, so it's likely not your fault :)
I can fully see how and why this might have grown historically in ye olden days of python2, but it's not sustainable to keep adding floors on top of a rotten foundation. Code needs maintenance like anything else, and far too often there's no budget or time available for it, even when that maintenance would reduce the overall workload.
I disagree that using feature flags to de-risk deployments is a symptom of bad deployment pipelines.
There are several aspects of deployment that are in contention with each other; I'd break them down as safety, deployment latency, and engineering overhead. Every deployment process is a tradeoff between these factors.
What I (maybe naively) think you're advocating is writing more end-to-end tests, which moves the needle towards safety at the expense of the other factors. In particular, writing end-to-end tests that are materially better than well-written k8s health checks (which you already have, right?) is pretty hard. They might be flaky, they might depend on a lot of specifics of the application that are subject to change, and they might just not be prioritized. In my experience, the highest-value end-to-end tests are based on learned experience of what someone already saw go wrong once. Writing comprehensive tests before the feature is even out results in many low-quality tests, which are an enormous drain on productivity: writing them, maintaining them, and dealing with the flaky ones. It is better, I think, to have non-comprehensive end-to-end tests that provide as much value as possible for the lowest overhead in human time. And the safety tradeoff we make there can be mitigated by having the feature behind a flag.
My whole thesis, really, is that by using feature flags you can make better tradeoffs between these than you otherwise could.
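To make that concrete, here's a minimal sketch of what I mean by putting a feature behind a flag. The flag lookup and the checkout functions are hypothetical (a real setup would use your flag service or config system rather than an env var), but the shape is the point: rolling back is a config flip, not a redeploy.

    import os

    def flag_enabled(name: str) -> bool:
        # Hypothetical flag lookup: an env var here for simplicity. In practice
        # this would hit a flag service or config system, so turning a feature
        # off is a config change rather than a new deployment.
        return os.environ.get(f"FLAG_{name.upper()}", "off") == "on"

    def legacy_checkout(cart: list[float]) -> float:
        # Known-good path that keeps serving traffic while the new one bakes.
        return sum(cart)

    def new_checkout(cart: list[float]) -> float:
        # The risky new path: ships dark, gets enabled gradually, and can be
        # switched off instantly if the (non-comprehensive) e2e tests missed something.
        return round(sum(cart), 2)

    def checkout(cart: list[float]) -> float:
        return new_checkout(cart) if flag_enabled("new_checkout") else legacy_checkout(cart)

    print(checkout([9.99, 5.00]))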
Seconded - I enjoyed the audiobooks for Murderbot by Graphic Audio. I originally found them on a torrent tracker when I was searching for something else, and after enjoying the free trial I bought the series through Graphic Audio's website.
I listened to the version narrated by Kevin R Free, which I enjoyed. It was my first experience with audiobooks so I don't have much to compare it to, but I did read some reviews of the Graphic Audio versions suggesting they weren't considered good by some listeners, even ones who liked other productions from the same source.
I would guess the Nvidia ConnectX is part of a secondary networking plane, not plugged into Jupiter. Current-gen Google NICs are custom hardware with a _lot_ of Google-specific functionality, such as running the borglet on the NIC to free up all CPU cores for guests.
The cost is a rate, like $2 per hour, not a purchase price.
So faster CPUs get work done more quickly and may justify a higher cost per hour.
Ah, the graph is wacky, but the text makes sense, looks like a disconnect:
1. C4A Axion: $2.16 reported cost per hour, test took average of 9 seconds per run: cost approximately 0.005 dollars.
2. T2A Ampere Altra: $1.85 reported cost per hour, test took average of 17 seconds per run: cost approximately 0.009 dollars.
3. C4 Xeon Platinum EMR: $2.37 reported cost per hour, test took average of 17 seconds per run: cost approximately 0.011 dollars.
So the C4A costs a bit more per hour ($2.16 vs $1.85), but perf/$ works out to roughly 2x in favor of the C4A.
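If you want to sanity-check that, the arithmetic is just hourly rate times (seconds / 3600). A quick script using only the numbers from this comment (real billing has more knobs, e.g. sustained-use discounts):

    # Cost per run = $/hour * (seconds per run / 3600), using the figures above.
    machines = {
        "C4A Axion": (2.16, 9),             # ($/hour, avg seconds per run)
        "T2A Ampere Altra": (1.85, 17),
        "C4 Xeon Platinum EMR": (2.37, 17),
    }

    for name, (rate_per_hour, seconds) in machines.items():
        cost = rate_per_hour * seconds / 3600
        print(f"{name}: ~${cost:.4f} per run")

which prints roughly $0.0054, $0.0087, and $0.0112 per run respectively.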
The number "created" in TFA is just the delta in employed over the time period, with every person no longer employed in-state (for any reason) counting against the people newly employed in CA. Click the link in the article to see the graph of CA jobs, there's seasonal fluctuations but overall it looks flat over the last two years.
1. Live coding, in Zoom or in person. Don't play gotcha on the language choice (unless there's a massive gulf in skill transference, like a webdev interviewing for an embedded C position). Pretend the 13 languages on the candidate's resume don't exist. Tell them it can be any of these x languages, where x is every language you, the interviewer, feel comfortable writing leetcode in.
2. Write some easy problem in that language. I always go with some inefficient layout for the input data, then ask for something that's only one or two for loops away from a stupid-simple brute force solution; a good, hygienic layout of the input data would have made it a single hashtable lookup (there's a rough sketch of what I mean at the end of this comment).
3. Run the 45 minute interview with a lot of patience and positive feedback. One of the best hires in our department had first-time interview nerves and couldn't do anything for the first 10 minutes. I just complimented their thinking-out-loud, laughed at their jokes, and kept them from overthinking it.
4. 80% of interviewees will fail to write a meaningful loop. For the other 20%, spend the rest of the time talking about possible tradeoffs, anecdotes they share about similar design decisions, etc. The candidate will think you're typing their scoring criteria into your laptop, but you've already passed them; you're generating a pop-sci personality-test result for them of questionable accuracy. You're fishing for specific things to support your assessment, like that they're good at both making and reviewing snap decisions, and that doing so saved a good chunk of interview time and contributed to their success. If it uses a weasel word, it's worth writing down.
5. Spend an hour (yes, longer than the interview) (and yes, block this time off in your calendar) writing your interview assessment. Start with a 90s-television-tier assessment. For example, the candidate is nimble, constantly creating compelling technical alternatives, but is not focused on one, and they often communicate in jargon. DO NOT WRITE THIS DOWN. This is the lesson you want the geriatric senior management to take away from reading your assessment. Compose relatively long (I do 4 paragraphs minimum) prose that describes a slightly less stereotyped version of the above with plenty of examples, which you spent most of the interview time specifically fishing for. If the narrative is contradicted by the evidence, it's okay to re-write the narrative so it fits the evidence.
6. When you're done, skim the job description you're hiring for. If there's a mismatch between that and the narrative you wrote, change your decision to no hire and explain why.
Doing this has gotten me eye rolls from coworkers but compliments at director+ level. I have had the CTO quote me once in a meeting. Putting that in my performance review packet made the whole thing worth it.
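For concreteness, here's a rough sketch of the kind of step-2 problem I mean. The scenario and names are made up; the point is that the deliberately awkward input layout invites one or two for loops, while a hygienic layout would have made it a single dict lookup:

    # Hypothetical warm-up: orders arrive as a flat list of (customer, amount)
    # tuples; return the total spent by a given customer.
    from collections import defaultdict

    orders = [("alice", 30.0), ("bob", 12.5), ("alice", 7.5), ("carol", 99.0)]

    # The brute-force answer I'm actually looking for: one loop over the awkward layout.
    def total_for(customer: str, orders: list[tuple[str, float]]) -> float:
        total = 0.0
        for name, amount in orders:
            if name == customer:
                total += amount
        return total

    # With a sane layout (dict keyed by customer) the same question is a single lookup.
    by_customer: dict[str, float] = defaultdict(float)
    for name, amount in orders:
        by_customer[name] += amount

    assert total_for("alice", orders) == by_customer["alice"] == 37.5

Anyone who writes the loop version and can then talk about why the dict layout is nicer has given you plenty to write about in step 5.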
Getting rid of grading sounds crazy, but it's actually happening. Los Angeles Unified, the second largest school district in America, is moving to "equitable grading", which amounts (imo) to pass/fail with extra pageantry. Teachers are being retrained _right now_ in equitable grading.
I know an equitable grading champion at an LAUSD school, I'll see if I can get material to share. EDIT: I just received [0][1][2][3].
Since this was at LA Unified, I suspect the bar for passing is extremely low. Not commenting on that district specifically, but not graduating from High School on time takes some doing. The system is very good at moving kids through, and it's why a high school diploma means so little.
One of my teachers implemented a system like this. What they ended up doing was making it so that you had to score an (effectively) 9/10 on major assignments to pass the class (minor assignments were graded on completion), but you had an unlimited number of revisions with which to get that grade, with feedback provided each time you tried. Pretty much everyone passed, with more work required from some than from others. The only issue it ran into was the final paper, where you realistically only had time to receive and make one or two revisions before the end of the semester and the deadline to submit grades.
According to the equitable grading materials I just received (and posted above), that determination is... entirely up to the individual teacher's discretion? I might be misunderstanding.
At most universities you can take most classes pass/fail by choice, which means A-D is pass and F is fail.
The nice thing about an all pass/fail system is that you can formalize the 'new' way grades are actually done, in which A means meets expectations and anything less means did not. Making pass mean A/B takes a lot of stress off students, and C/D is already failing for practical purposes, as you often can't continue with less than a B.
I don't feel <file is obscure. I've seen shell written in that style by coworkers and in open source. Your value judgement against it might just reflect your own experience rather than something universal.