
I can’t exactly tell if we’re saying the same thing, but my thought was a flag to switch between the 1.2 way for faster startup and the earlier approach for longer-running processes. The trade-off is added complexity in identifying your binary usage patterns and keeping both methods in the tooling.

These kinds of changes may not be breaking in a technical sense, but it’s very unexpected behavior if you’re one to notice patterns like file sizes changing in such a significant way over time. An answer of “stick with v1.1x indefinitely if you want the old behavior” only feels temporary.


Just so I clearly understand: if I have a lot of files and directories in, say, /repositories or /projects and go to upgrade, there’s a chance I won’t have them anymore because I mistakenly treated it like any other *nix filesystem? That’s... not disconcerting in the least.


I’d be interested in something like this as well. I have a discombobulated list of apps and attempted strategies for taking notes and can’t seem to stick to something simple.


The official docs for Elixir or the Phoenix framework go a long way. One thing that sticks out above the rest for me is Elixir koans[0]. They’re extremely rudimentary, but I think the project showcases Elixir’s hot reloading, if I’m not mistaken, and it’s very fluid. Other follow-along courses may have taken this approach, but I was extremely impressed by how everything fits together.

[0] https://github.com/elixirkoans/elixir-koans


I think the biggest hurdle is the fact that there is no stdlib, so nothing unused has to be shaken out. I think that’s a low barrier now, though, with more than adequate tooling. Another comment mentions a different stdlib for server vs. browser, but that’s also not a terribly hard problem.

I think a good first pass would come from studying analytics from npm. What are the most used packages? The most stable? I know lodash makes a lot of sense, but there’s also underscore. I think the biggest hurdles are really political rather than technical: everyone is so entrenched now that a one-size-fits-all stdlib would be hard. Not impossible, just hard. I do wish someone were working on it. I hate to say it, but Google probably has the most skin in the game with V8 and Chrome, yet I don’t really trust them not to abandon it. So who else is there? It wouldn’t be a very ‘sexy’ project either, but it still seems worth it to at least try.
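
For the curious, a rough sketch of that first pass against npm's public download-counts endpoint might look something like this. The candidate list is just an assumption, and "most stable" would need its own metric; this assumes Node 18+ for global fetch, run as an ES module:

    // Illustrative candidates only; a real survey would pull a much longer list.
    const candidates = ["lodash", "underscore", "ramda"];

    // api.npmjs.org exposes point-in-time download counts per package.
    async function weeklyDownloads(pkg: string): Promise<number> {
      const res = await fetch(`https://api.npmjs.org/downloads/point/last-week/${pkg}`);
      const body = (await res.json()) as { downloads: number };
      return body.downloads;
    }

    for (const pkg of candidates) {
      console.log(pkg, await weeklyDownloads(pkg)); // top-level await: ES module
    }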


> I think a good first pass would come from studying analytics from npm. What are the most used packages? The most stable?

I think it would also make a lot of sense to look at what's in the Python and Ruby standard libraries.


Rockstar Claw is fixed via the ‘Standard FPS’ setting with toggle to run. You click the left thumbstick once to jog and twice to sprint, just like in FPS games such as COD. Pulling fully in a direction sprints and barely touching it walks. I liked the change, but it felt weird, and I worried that I would just run off every cliff I came to.


That only works on foot though, not on horseback.


It's hard to gauge how it looks, as unfamiliar with TS as I am, but would the equivalent without the spread operator look any better?

I call code like this 'clever': it uses the language's constructs to get a very compact, usually optimal end result at the expense of being harder for n00bs like myself to parse.

I do wonder what a prettier version of this would look like. It certainly wouldn't be this concise, but would it be easier or harder to follow? I'm okay with something like this if it solves a very specific problem I hardly ever see. If my entire code base were littered with this? No thanks. Yeah, eventually it'd be second nature, but I'd rather it be more verbose and slightly easier to unpack than packed this tightly.
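
As a strawman, here's a hypothetical TypeScript illustration (not the actual snippet under discussion, and the names are made up) of a spread one-liner next to a more verbose equivalent:

    // Hypothetical inputs, purely for illustration.
    const defaults = { retries: 3, timeoutMs: 500 };
    const overrides = { timeoutMs: 1000 };

    // The 'clever' version: one dense expression.
    const merged = { ...defaults, ...overrides, updatedAt: Date.now() };

    // A more verbose equivalent: each step spelled out, easier to single-step.
    const mergedVerbose: Record<string, unknown> = {};
    Object.assign(mergedVerbose, defaults);
    Object.assign(mergedVerbose, overrides);
    mergedVerbose.updatedAt = Date.now();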


How do you know you're unable to be a programmer? Are you just basing that on having issues with leetcode or CTCI?

I spent 2 years in college studying CS before I couldn't continue. I had just started getting into harder courses using primarily assembly, and my interest faded quickly. I had the incorrect assumption that developers lived in terribly small cubicles, and it didn't interest me much. I had also quickly burnt out: holding a 40+ hour a week job at an ISP while going to school full time was unsustainable. Networking and systems seemed more interesting at the time, and I took a 10-year detour in IT.

In my IT tenure I slowly developed programs to solve business needs, starting with batch scripts and working my way up to very simple automation tools. Somewhere along the way I no longer cared about the working conditions of a developer; doing the work was just too fun, and I gained an immense sense of accomplishment when my creations were being utilized. Fast forward to getting my first full-time, developer-only position as a C#/.NET developer doing primarily Windows apps 8 years ago, to now being a full stack web developer. It took being paid solely as a developer before I really felt like one, and it took until probably 2-3 years ago for the imposter syndrome to completely wear off. What I lack in algorithm knowledge I make up for in understanding devops topics, which gives me a more complete understanding of the full technology stack running the web apps I have a hand in. I believe every developer could benefit from a little operational knowledge, as it usually makes debugging esoteric issues with a technology platform easier.

I'd describe my capabilities as more of an integrator's. I used to write every library I used, but I find other developers often have more complete solutions I can bastardize into something that fits a specific use case. While the puzzles I work on primarily involve fitting packages together into solutions, the end result still involves the same developer workflow of debugging and automated testing. I'm not incapable of algorithm knowledge; it just bores the ever-living fuck out of me. I'm extremely fulfilled in spite of having had some less-than-enjoyable positions, so hopefully you don't let this period define your future. There's an amazing breadth to this field, to the point of easily having analysis paralysis.

Other comments give good alternatives, but there are companies all over the planet paying great money for CRUD/LoB apps solving all sorts of interesting problems. There's also an amazing breadth of jobs for people with the knowledge of computing that comes from a CS degree, if being a developer really isn't for you. You don't have to move to a wildly different industry; something else may involve solving interesting problems that keep you more engaged.


I would assume their logs could tell them which tokens were associated with the users that downloaded v3.7.2. npm probably doesn't need credentials to download a package, so the number of downloads is likely higher. Determining which other packages were affected is another matter entirely, and no one can say this attack vector is bounded by this specific date window. This could've been way more widespread unless they're unpacking payloads and grepping for key pieces of this specific attack.

I think it would be helpful if they could expose some of those logs, but considering the meat of what matters would be the IP addresses needed to verify whether your machine (or your CI server) was compromised, GDPR effectively wiped that possibility off the table. It would almost behoove them to set up a kind of haveibeenpwned service where you can check against stuff like this in the future. It's not like this can't happen again, as the hole hasn't been closed completely; only this one set of compromised packages appears clean for now.
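
In the meantime, a rough TypeScript sketch of checking your own machine or CI box against a lockfile might look like this. The package name is a placeholder (I'm not naming the compromised release), and the shape assumed is an npm v1-style package-lock.json:

    import { readFileSync } from "node:fs";

    // Placeholder target; swap in the actual compromised name/version.
    const BAD = { name: "some-compromised-pkg", version: "3.7.2" };

    const lock = JSON.parse(readFileSync("package-lock.json", "utf8"));

    // Recursively walk the nested "dependencies" tree of a v1 lockfile.
    function walk(deps: Record<string, any> = {}, path: string[] = []): void {
      for (const [name, info] of Object.entries(deps)) {
        if (name === BAD.name && info.version === BAD.version) {
          console.log("hit:", [...path, name].join(" > "), "@", info.version);
        }
        walk(info.dependencies, [...path, name]);
      }
    }

    walk(lock.dependencies);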


I crawl a specific site, somewhere up to 50 unique URLs a day. I store both the unparsed full HTML as one file and the JSON I'm looking for as a separate file. The idea is that if something breaks, instead of taking the hit of making the call again, I already have the data and can just reprocess it. It's come in extremely handy when a site redesign changed the DOM and broke the parser.

I do the same at $dayJob, where I'm parsing results from an internal API. Instead of making a call later that may not return the same data, I store the JSON and just process that. Treating network requests as an expensive operation, even though they really aren't, helped me come up with some clever ideas I'd never had before. It's a premature optimization considering I've had something like a 0.000001% failure rate, but being able to replay that one breakage made debugging an esoteric problem waaaaaay simpler than it would've been otherwise.
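
A minimal sketch of the pattern, with an assumed cache layout and hash-based filenames (both my choices, nothing prescribed; Node 18+ for global fetch):

    import { createHash } from "node:crypto";
    import { existsSync, mkdirSync, readFileSync, writeFileSync } from "node:fs";

    const CACHE = "./cache";
    mkdirSync(CACHE, { recursive: true });

    // Fetch each URL at most once; keep the raw HTML on disk and parse
    // from the local copy, so a broken parser can be replayed for free.
    async function rawHtml(url: string): Promise<string> {
      const file = `${CACHE}/${createHash("sha1").update(url).digest("hex")}.html`;
      if (!existsSync(file)) {
        const res = await fetch(url);          // the only network hit
        writeFileSync(file, await res.text()); // raw copy, kept as-is
      }
      return readFileSync(file, "utf8");       // always parse from disk
    }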


Off-topic: I so wish I worked for a company where my work involved scraping, storing, and analyzing data. :(


Now is a good time to work in this field, since data science is hot and companies need web scrapers to provide the data for these models. At least that has been my experience in finance. Try applying!


I have zero experience in data science though. I am a pretty solid and experienced programmer and can learn it all, but... I don't know. Maybe I should just try Indeed.

Do you have any recommendations for places and/or interview practices?

