
Agreed, there are already plenty of more robust products in this space.

My biggest concern is its reliance on MySQL. There is no way this could be a valid option for high-volume messaging when it is essentially a database as a queue.
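
To be concrete, the usual database-as-a-queue implementation ends up looking something like this (just a sketch; the table and column names are mine, and I'm assuming MySQL 8+ for SKIP LOCKED):

    # Sketch of a typical database-as-a-queue consumer loop.
    # Table/column names are illustrative, not from this project;
    # assumes MySQL 8.0+ for FOR UPDATE SKIP LOCKED.
    import time
    import mysql.connector

    conn = mysql.connector.connect(host="localhost", user="app",
                                   password="secret", database="queue_db")
    # autocommit is off by default, so the locked SELECT and the
    # status UPDATE below commit (or roll back) together.

    def handle(payload):
        print("processing", payload)      # stand-in for real consumer work

    def poll_once():
        cur = conn.cursor(dictionary=True)
        # Every consumer re-runs this query in a loop; at high volume the
        # table itself becomes the contention point instead of a broker.
        cur.execute("SELECT id, payload FROM messages WHERE status = 'pending' "
                    "ORDER BY id LIMIT 1 FOR UPDATE SKIP LOCKED")
        row = cur.fetchone()
        if row:
            handle(row["payload"])
            cur.execute("UPDATE messages SET status = 'done' WHERE id = %s",
                        (row["id"],))
        conn.commit()
        cur.close()
        return row is not None

    while True:
        if not poll_once():
            time.sleep(0.5)               # idle consumers still hammer the table

Every message is an insert, a locked select, and an update against the same hot table, which is exactly the load a dedicated broker exists to avoid.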


Your own fault if you didn't patch EternalBlue. No sympathy for hacked orgs.


Also uses a client-side exploit in Word/WordPad, although that was patched earlier this month:

https://portal.msrc.microsoft.com/en-US/security-guidance/ad...


Moronic design of the Microsoft page. It requires you to acknowledge some bullshit T&C on first visit, then redirects you to the website home page. That means someone clicking on the CVE link to check whether there is anything important will be redirected to a home page with no information. Most sane people will go back to the original link, but if there were only sane people in this world, this second wave of malware would be toothless. That's not exactly helping awareness of the vulnerability.


Apprenticeship Patterns - A great book more focused on being a good software engineer and less on software-specific details. Online for free at http://chimera.labs.oreilly.com/books/1234000001813/index.ht...


There are plenty of companies that perform pen testing and security work. Their biggest deficiency right now is understanding new, emerging cloud technologies. We have been working with some pretty big-name security companies around the globe, and very few of them understand AWS or Azure adequately.


Hey, how's Potsdam? Sounds like it hasn't changed in the decade since I was there. Not that I expected it to.


Hahahaha, was it that obvious? I grew up around here and graduated from one of the four schools (gotta stay a little anonymous) in 2009. I'm not sure why I'm still here, really.


It was a little obvious, but probably only to someone who lived there.


Fair enough. To answer your question: no, not much has changed. There's a knockoff Chipotle restaurant in both towns now, which is pretty good. Pretty sure the Tick Tock is closed for good. And...that's probably it.


Honestly, the section about debugging skills needs to be much higher in the article. Debugging skills are essential for learning legacy applications you are thrown into and for understanding how your code works in general. It amazes me when I see an engineer with 5+ years of experience who cannot hook up a remote debugger to their application.
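
For what it's worth, it's usually only a few lines to set up. Here's a sketch for a Python service using debugpy (the host, port, and toy function are mine, purely for illustration):

    # Sketch: exposing a running Python process to a remote debugger
    # via debugpy. Host/port and the toy function are illustrative.
    import debugpy

    debugpy.listen(("0.0.0.0", 5678))   # open the debug adapter on port 5678
    print("Waiting for a debugger to attach...")
    debugpy.wait_for_client()           # block until an IDE attaches

    def buggy_sum(values):
        total = 0
        for v in values:
            total += v                  # set a breakpoint here from the IDE
        return total

    print(buggy_sum([1, 2, 3]))

The JVM equivalent is a single -agentlib:jdwp flag at startup; either way it takes minutes to learn, not years.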

My number one observation about productivity usually revolves around how an engineer attacks a problem and handles scope creep. Some programmers can take a set of requirements and, like a trained surgeon, get in, fix the big bleed, and get out. While they are in there they might fix a couple of nearby issues, but they are not re-architecting the whole application. Then there are others who see all the problems: they notice this problem there and that problem here, keep asking what it all means, and it eventually cripples them. They spend so much time seeing all the problems that they never get around to solving the one they were tasked with fixing.

Once you realize you won't understand it all from the beginning and you can't fix every issue you see, you become a much more effective engineer.


Many municipalities disallow laundry chutes in new home construction. They can create a chimney effect that lets a fire travel faster between floors. Also, kids like to play with them, and not all kids are smart enough not to try to slide down one.


Install a proper slide, safe and suitable for kids, that happens to end in the laundry room.

BTW, build it with the assumption that it will get soaked in disgusting things. Drywall isn't strong and isn't really washable. Non-stick coatings are best; metal is OK.


This sounds a little overcomplicated, all in an effort to decrease the amount of code changed at any given time. They took great pains to keep data in sync across the A and B datastores, and I'm not so sure that extra cost was worth the perceived stability of this approach.


> They took great pains to keep data in sync across the A and B datastores, and I'm not so sure that extra cost was worth the perceived stability of this approach.

Such great pains come with huge systems. What's the alternative?

Taking the platform offline for a few hours? Management will say no. Or maybe Management will say yes once every three years, severely limiting your ability to refactor.

Doing a quick copy and hoping nobody complains about inconsistencies? Their reputation would suffer severely.


They maintained a replication process across both tables as they updated the read processes before updating the write process. Say, for whatever reason, their offline replication process broke for two hours. For those two hours while replication is broken, the system is reading from a table that is not in sync with the table receiving writes. At that point you are displaying incorrect subscription data to your customers.


> They maintained a replication process across both tables as they updated the read processes before updating the write process. Say, for whatever reason, their offline replication process broke for two hours.

From the article I got the impression that both tables were being written to in the same database transaction, so this is not a possible failure scenario at all.
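
Something roughly like this (a sketch, not their actual code; sqlite3 and the table names are just stand-ins): if either write fails, both roll back, so the two tables can't drift apart the way a background replicator can.

    # Sketch of a dual write inside a single transaction. Not the article's
    # actual code: sqlite3 and the table/column names are stand-ins.
    import sqlite3

    conn = sqlite3.connect("subscriptions.db")
    conn.executescript("""
        CREATE TABLE IF NOT EXISTS subscriptions_old (id TEXT PRIMARY KEY, plan TEXT);
        CREATE TABLE IF NOT EXISTS subscriptions_new (id TEXT PRIMARY KEY, plan TEXT);
    """)

    def save_subscription(sub_id, plan):
        # `with conn:` opens a transaction, commits on success, and rolls
        # back if either insert fails, so old and new tables stay consistent.
        with conn:
            conn.execute("INSERT OR REPLACE INTO subscriptions_old (id, plan) VALUES (?, ?)",
                         (sub_id, plan))
            conn.execute("INSERT OR REPLACE INTO subscriptions_new (id, plan) VALUES (?, ?)",
                         (sub_id, plan))

    save_subscription("cust_1", "premium")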


Stability matters when you're dealing with people's bank balances and billions of dollars a year of credit card transactions.


There's only one datastore. I'm not sure what you mean by "perceived stability"; it's about data integrity and preventing data loss during a model reshuffle.

I'd like to hear about alternatives if you've had experience.


They provided data integrity with a background sync process. What would have happened had that sync process failed?

In the past I have either flagged records to say where the system should read the data from, or just built logic into the readers so that if there is no data in A, they read from B.

Then you update the writers to migrate the data from A to B on every new update and remove the data from A. It is an expensive one-time write to move the data, but then you don't have to worry about keeping data in sync across two storage locations.

What you end up with is that all records that are actively being worked on move first. At a later date you start migrating the stale records from A to B with a background process. Once A is empty, remove the logic to read from A, remove the migration writes, and remove the A datasource.
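
As a sketch of what I mean (plain dicts standing in for the A and B datastores; the names are mine):

    # Sketch of the lazy-migration pattern described above. Plain dicts
    # stand in for the old (A) and new (B) datastores.
    store_a = {"cust_1": {"plan": "basic"}}   # legacy records start in A
    store_b = {}                              # new datastore starts empty

    def read(key):
        # Readers check A first; once a record has migrated it only lives in B.
        if key in store_a:
            return store_a[key]
        return store_b.get(key)

    def write(key, value):
        # Writers always land in B and drop the stale copy from A, so each
        # record migrates the first time it is touched: one extra delete,
        # no ongoing sync between the two stores.
        store_b[key] = value
        store_a.pop(key, None)

    write("cust_1", {"plan": "premium"})      # first update moves the record
    assert read("cust_1") == {"plan": "premium"}

Records nobody touches stay in A until the background sweep gets to them, but there's never a window where the same record lives in two places.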


I agree with you. As a "tech lead" myself, I find my biggest jobs are fostering communication between team members and clearing roadblocks. Those roadblocks could be making a design decision, clearing up requirements, getting support from external teams, etc. When every team member is responsible for clearing their own obstacles, you find a lot less work gets done, and often the same problem gets solved multiple times by multiple team members.


All I could think about after that first image was giving the author a code review.

