
xattr -cr <file> should clear the quarantine extended attribute (com.apple.quarantine) and make it as if the software were compiled on the machine itself, bypassing the ever-so-annoying Gatekeeper.

For binary patching: codesign --force --deep -s - <file> (no developer ID required; "ad-hoc signing" just updates a few hashes here and there). Note that you should otherwise not need to run codesign yourself, as signing is normally the linker's job.


I learned a lot of this stuff ~15 years ago from reading a book called Reversing: Secrets of Reverse Engineering by Eldad Eilam. The book is old but amazing. It takes you through a whole bunch of techniques and practical exercises. State of the art tooling has changed a bit since then, but the x86 ISA & assembly more generally hasn't changed much at all.

One of my biggest takeaways was learning about "crackmes" - which are small challenge binaries designed to be reverse engineered in order to learn the craft. They're kinda like practice locks in the lockpicking community. The book comes with a bunch on a CD-ROM from memory - but there's plenty more online if you go looking. Actually doing exercises like this is the way to learn.

You don't start trying to reverse engineer COD. You build up to it.


https://archive.ph/tyf5W

Personally I've simply set up archive.today as a custom search engine in my browser exactly for situations like this. Take the original URL, chuck it in the archiver and read away.


I have no idea how YouTube's backend works, but I thought it would be useful to share here that with ffmpeg you can use the argument -vsync drop to generate fresh timestamps based on the frame rate.
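A minimal sketch of that invocation (filenames are placeholders; note that in ffmpeg 5.1+ the -vsync option was renamed -fps_mode):

```shell
# Copy the streams but discard the original timestamps, letting the
# muxer generate fresh ones from the declared frame rate.
# input.mp4 / output.mp4 are placeholder filenames.
ffmpeg -i input.mp4 -vsync drop -c copy output.mp4
```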

Missing from "after" is something that contributed to my career success: when you encounter a problem (and you will, many), once you (small bugs, etc.) or your team (larger issues) solves it, stop and replay it from the beginning in your head. Go through every step you remember that you or your colleagues took. Pay attention to what you should have done instead, and what you should have tried, now that you know where the problem was. What were the obvious things to check? What was a waste of time? What kind of binary search should you have done? This alone can boost your performance above average quickly. And the weird part is that almost nobody does it; just do it and you'll see.

My perception is that one drawback of earnestness is that it often comes off as just being boring. The most earnest people I know are not popular. The most popular people I know are more like twinkle-in-the-eye-kind-of-BS-but-not-pathological-liars.

I don't mean to say "grr, why aren't nerds more popular!". I mean more that this specific kind of earnestness seems to preclude the playing of certain conversational games that most people seem to enjoy. Earnestness, for example, is pretty far away from my conception of "playful" or "flirty" or even "fun".


I feel that many times my source of fatigue can be traced back to my tools being laggy pieces of shit for no good reason.

I don't understand what prevents actors like Microsoft from doing a clean, lightweight, native rewrite of tools like Visual Studio for people who are looking forward into the .NET 5 horizon, and don't care about being able to debug VB.NET apps written in 2009. There is no reason there has to be any delay at all in the UI. Graceful degradation of intellisense is acceptable depending on project complexity, but there should never be any sort of perceptible hitching or delays when moving code windows around, scrolling, typing, switching tabs, minimizing/maximizing, etc. If my PC can display the complex 3d scenes of Overwatch at 2560x1440p@180FPS with <5ms input latency, I cannot comprehend any rational argument for my IDE being unable to achieve even 10% of that performance objective.

I understand that use of frameworks like ElectronJS make it virtually impossible to achieve my stated objectives, so perhaps we need to dust off some APIs and re-learn old tricks. Think about the aggregate developer hours that could be saved with 1 heroic implementation effort. Imagine if you could load a VS solution in less than a second and immediately start clicking through the UI in any direction without any fear that it is about to sandbag your ass with frustratingly-arbitrary UI delay soup. That is the kind of UX that inspires confidence and encourages a developer to charge forth, instead of compelling them to fuck off on HN for the 20th time of the day.


I do like org-roam, but wonder if it's over-engineered. The main thing you need is backlinks, and you can get that without all the complexity of an additional package and sqlite dbs.

  ;; Requires the rg.el package, which provides the `rg' command.
  (defun buffer-backlinks ()
    "List org files that link to the current buffer, via ripgrep."
    (interactive)
    (rg (buffer-name) "*.org" org-directory))
  (add-hook 'org-mode-hook #'buffer-backlinks)
The net effect of that tiny elisp snippet is that when I open an org file, I get a buffer beside it (powered by ripgrep/rg) with the names and context snippets of all the other files in my org dir that link to it.

Before Windows 10 I would just disable the Windows Update service and enable it on my own schedule when I wanted to do updates. I've had my machines stay undisruptive for months on end, sometimes up to a year without rebooting, and the reboots would be initiated by me (hardware changes, power outages, or updates that I actually wanted to install.)

Since Windows 10, Microsoft has caught onto this and added a new service that checks whether you disabled the Windows Update service and re-enables it automatically, setting permissions that initially deny you from tampering with it. That makes this a little more challenging to get around, but it is still possible to do manually. I later found Windows Update Blocker (WUB), which does all of this for you in one click. I have been using it ever since, and have never again been nagged by any update or experienced an unexpected reboot: https://www.sordum.org/9470/windows-update-blocker-v1-5/


sounds like apples to oranges. you shut down your windows computer at night, but not your mac - i don't get it?

mac updates do take a while, but unless i'm misremembering, they're less frequent. and some windows updates take absolutely ages, too. the difference for me is that windows update keeps breaking stuff, like bluetooth drivers -.-


I've had a similar nightmare on a laptop. Every once in a while it would wake up randomly to do a Windows Update. But since the lid is closed, the laptop would overheat and shutdown in the middle of the update. When I came to use it it would waste an hour or two removing the broken update. After a couple such incidents I just installed Debian instead.

Interesting. I never experienced something like this on my desktop. Actually I always wondered why people are bothered by Windows 10 updates at all, because for me it was at most a bit disturbing.

My XPS notebook with Windows 10, OTOH, well, it's actually close to a nightmare. I bought it in late 2019, and on multiple occasions the machine decided to WAKE UP from sleep an hour after midnight and run the update procedure without shutting down. If it was in the sleeve, I was greeted by a very, very hot notebook in the morning with almost no battery left.

It took a while to turn this off.


This isn't exactly what was asked for, but because I live in emacs I don't get much value from being able to invoke it from the command line. If that's ok, try this..

  (defvar org-journal-file "~/Dropbox/org/engjournal.org"
    "Path to OrgMode journal file.")
  (defvar org-journal-date-format "<%Y-%m-%d %a %I:%M %p>"
    "Date format string for journal headings.")

  (defun org-journal-entry ()
    "Create a new diary entry for today or append to an existing one."
    (interactive)
    (switch-to-buffer (find-file org-journal-file))
    (widen)
    (let ((today (format-time-string org-journal-date-format)))
      (goto-char (point-min))
      ;; If there is no heading for today yet, insert one at the top.
      (unless (org-goto-local-search-headings today nil t)
        (org-insert-heading)
        (insert today " "))
      (goto-char (point-min))
      (org-show-entry)
      (org-narrow-to-subtree)
      (goto-char (point-max))))

  (global-set-key "\C-cj" 'org-journal-entry)
So, C-c j will open the file, create a timestamp entry at the top, and narrow to the subtree. Great for creating timestamped notes. I keep a journal this way (multiple entries per day are fine), but it could be used for any note taking.

I wholeheartedly agree with you. The tests do help a lot.

Funnily enough, I own PBR too (it's in my reading queue), and, having skimmed through it, it seems math heavy. I think I'll leave it for last. My current queue is:

- RTC (for fun),

- 3d Math primer for Graphics and Game Dev by Dunn (to solidify the math part)

- Foundations of Game Engine Dev - Mathematics, by Lengyel (because when it comes to math, overkill is underrated)

- CGPP (to get the basics down)

... not sure about the order for my other books, but then...

- OpenGL SuperBible (second time around, sadly)

- Real-Time Rendering

- PBR

For the math books, I was thinking to do the same thing as you: do the math by hand, and then translate them into tests.


I installed it, and without consenting to sending any notifications or whatever, I started getting Signal messages from long-forgotten people who apparently had ME in THEIR contacts. It apparently notified THEM that I had installed Signal, which I absolutely did not want.

This included a few folks who I wish to distance myself from in every way possible, and have deleted from my contacts. Apparently they still had me in theirs. This also included a few folks who I keep in my contacts only so I'll recognize the number if it rings and know not to answer. Obviously the last thing I want is to remind these people that I still exist and that this is my current number.

The worst part was, for the ones who weren't in my contacts, I had to ask the stranger "Umm, forgive me for asking, but who is this?" in order to ascertain that, nope, I really didn't want to interact with them.

Shit shit shit shit.

Uninstalled it very quickly.


Yeah, context and empathy are two things that I'm only appreciating more and more as my career goes on.

I once had this small but terribly written module written by an inexperienced developer who wasn't given the kind of feedback and code review that he should have been given. It was still running in production years after that person had left because it was in a corner of the code base that was basically never touched. What made it interesting to me was that it was badly written at almost every level from the high level separation of concerns to low level coding practices, while still basically getting the job done.

I started giving this module as an exercise during interviews for a certain position, with the framing of "This was written by a beginner developer on your team. What kind of feedback would you give them to help them improve?" This sort of thing was actually a major part of the job, as it was a position that would be a kind of consulting resource for other teams and would involve many code reviews and encouragement of best practices -- basically providing subject matter expertise to full stack, cross-functional teams.

The results were fascinating to me because it acted like a Rorschach test of sorts and told me a lot more about the interviewee's focus than about the code they were criticizing. More junior candidates immediately jumped on the low-level issues like the boolean assignment example, naming conventions, or small snippets of code duplication, and spent all their time there. More experienced folks often mentioned the low-level issues but spent more time on the higher-level problems -- the class should be broken out into two, extending it in the most obvious way would be hard because XYZ, etc. Some of the best candidates would ask for more context on the situation and overall codebase.

It also helped weed out the jerks who, despite the prompt, decided that the exercise was an opportunity to show off and insult the original author (who was of course anonymous), venting about how stupid something or other was or using it as a springboard to attack their current co-workers. Everyone starts somewhere. It's fine to wince a little at something that's poorly written, but the point is to actually help them improve. The better candidates were there trying to understand what gaps in understanding would cause the author to make certain mistakes. The very best candidate was trying to map out a staged plan of more easily digestible things to work on so that the author wouldn't be overwhelmed all at once -- extrapolating a whole technical mentorship model out of what they could glean from the code review.


I would just add that there is a third "aspect" or "color" to this stuff besides "this is the most absurd irony" and "we definitely believe this hateful bs". That aspect is "we don't necessarily believe this stuff but look, we've discovered dynamite in a bottle." It's something like "trollocracy": "by believing anything and saying anything, we're amazingly influential and we can use that influence in a calculated way."

But that third attitude isn't something distinct from original fascism. One might say the ideas of fascists from the start involved something like "use your illusions".


Or, the best method is to block all ads, beacons, 3rd party cookies, and trackers. Keep it simple: just block anything that is not actual content. Everyone knows online ads are now a major vector for malware, along with bad apps.

Pi-hole, uBlock Origin, Privacy Badger, Decentraleyes, Tracking Token Stripper, Neat URL, and No Coin. Set the browser to block all HTTP/S referers, disable geolocation, CSS link history, and fingerprinting, and you are well on your way to never having or seeing an issue. uBlock Origin also kills dead the adblock blockers.

Editing to say that it was my children's absolute frustration with waiting for ads in videos that led me to adopt the Pi-hole. Nothing so far has escaped the event horizon of the Pi-hole. It's great for TVs, too. Everything on the network benefits from the Pi-hole. Setting up a VPN will allow you to pass your mobile device through it while away from home.

One of my upcoming projects is to get a DO Droplet, set up the same, and have less "infrastructure" in my house. The benefits remain. Google Cloud also allows this in the free tier. Why not? There is nothing to lose except annoyance.


> But at the world level, it's just some of us choosing to consume more than we produce right now, and others of us choosing to consume less than we produce right now.

I don't think it is that simple. 1) If the world is overleveraged you wind up in a situation where the insolvency of one debtor dominoes over to the lender, who is then unable to pay the next debtor, etc. In one sense some leverage means we are interconnected, which is good because we are likely to pay attention to each other, but if it gets to be too much, the effects of bad events can spread.

2) By the nature of compound interest, having a system that depends on leverage (as ours does now) puts a societal requirement on growth that is enforced through fiscally punitive reinforcement. That is a fundamental reason why there is so much consumption, environmental destruction, and energy demand, even when we try to be active about conservation: our current economic structure (and I'm not talking about 'capitalism') simply is not compatible with those goals writ large.

3) When an individual cannot pay their debt, it is because they consensually took on that debt. When a sovereign nation cannot pay its debt, oftentimes the effects are generational, and very much nonconsensual. The current generation cannot go back in time and vote against the profligate spending of the previous generation. One wonders if this is ethically tenable, yet every government does it. Moreover, when a government overspends, the ill effects of that overspending typically hurt the poorest the hardest.
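Point 2 can be sketched numerically (the rates here are hypothetical, chosen only to show the compounding mechanism):

```python
# If debt compounds at interest rate r while output grows at rate g,
# the debt-to-output ratio explodes whenever g < r: the system must
# grow at least as fast as its interest burden to stay solvent.
def debt_to_output(debt, output, r, g, years):
    for _ in range(years):
        debt *= 1 + r       # interest compounds
        output *= 1 + g     # the economy grows
    return debt / output

print(debt_to_output(100, 100, 0.05, 0.02, 50))  # ~4.3: debt outgrew output
print(debt_to_output(100, 100, 0.05, 0.06, 50))  # <1: growth outpaced interest
```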


> I agree that it's messed up that Facebook and Apple make all their profits on tax havens, but I'm concerned that we are a bit too narcissistic by believing we can create a complex tax system that will solve this.

The fundamental problem is assigning a jurisdiction to the "profit" from a supply chain that snakes through twelve different countries.

When you pay $50 for a widget, 100% of that is somebody's profit. The retailer sells for $50 and buys for $40. The wholesaler sells for $40 and buys for $30. The manufacturer sells for $30 and pays $20 in salaries and other expenses. The factory employees are paid $20 but pay $15 for rent and food. The landlord and the farmer are paid $10 and $5 and either keep it as profit or have their own expenses that somebody else profits from.

There was originally $50 and, at the end, there is still $50. Somebody has it. It's somebody's profit.
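The chain above can be checked in a few lines (a toy model using the comment's numbers; the "landlord+farmer" row lumps the final $15 together):

```python
# Each party's margin = what they receive minus what they pay onward.
chain = [
    ("retailer",        50, 40),
    ("wholesaler",      40, 30),
    ("manufacturer",    30, 20),  # pays $20 in salaries and expenses
    ("employees",       20, 15),  # pay $15 for rent and food
    ("landlord+farmer", 15,  0),  # keep the remainder as profit
]
margins = {name: received - paid for name, received, paid in chain}
total = sum(margins.values())
print(total)  # 50: the entire retail price is somebody's profit
```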

But when the same entity has operations in multiple countries, the "profit" doesn't have a specific country. A US-based company pays $20 to manufacture something in China and sells it for $50 to a customer in Austria. Their total profit was $30, but did they make $10 in each country, or $20 in China and $5 in each of the others, or $25 in the US, $5 in China and nothing in Austria? The assignment is almost totally arbitrary. And when it's completely normal for many types of operations to have single-digit profit margins, that much difference in the arbitrarily assigned price can easily wipe out all the "profit" in any given country.

The answer is to forget about "profit" -- it's all profit. You need to tax the thing that actually happens in your country. If the product is sold there, that's sales tax (or VAT). If the company has a facility there, property tax. If the company employs workers there, payroll taxes. Companies can't avoid this and still do that thing in your country. You can't assign "sales to customers in Austria" to Luxembourg the same way you can assign profits.

Naturally everybody hates this, because countries want companies to sell products and build facilities and hire workers in their country, and don't really want to discourage it (or raise costs / lower wages for their citizens). But you can't tax a company that has no interaction with your country at all. All you can do is tax the interactions it actually has.

Trying to abstract those interactions into "corporate profit" only serves to give the companies an excuse to shuffle the "profit" into a jurisdiction with lower taxes.



To be entirely accurate, you're wrong when you say your definition of OOP is the original one. SIMULA-67 predates Smalltalk and its object system is much closer to Java than it is to Smalltalk, although it helped inspire both.

To be more fair, there are two very different schools which interpreted and continued to develop Simula in two radically divergent ways.

One school focuses on a clear division between rigid, statically defined classes that never change (but may extend other classes) and runtime instances, which can contain only state, not behavior. This school does everything to exhaustively codify the interfaces exposed by the classes: ensure that a predefined set of messages with a predefined set of parameters for each message (a.k.a. methods) is permitted, control access to these methods, and in the most extreme languages (e.g. Eiffel) even verify that object state and parameters conform to a predefined contract. This school is obviously not opposed to modifying behavior patterns based on runtime state, but it believes that the object system should not directly support that, and encourages programmers to implement their own mechanisms on top of rigid-class object systems: this is what design patterns are usually meant to achieve.

The other school believes that an object system's first priority is giving objects absolute autonomy in parsing the messages they receive and as a result usually ends up with object systems that focus on powerful runtime extension and arbitrary message parsing functionality instead of compile-time strictness.

We could trace them back to their origins and call them the C++ school and the Smalltalk school. We could pin them to their modern champions and call them the Java school and the Ruby school. We could follow their poster boys and call them the class school and the message school. We could go by the type of languages they tend to thrive in and call them the static school and the dynamic school.

I wouldn't call either school "more object-oriented" than the other - they just have entirely orthogonal definitions of what objects are. I can't even call either of them "better". I much prefer the Smalltalk school to the rigid version of the Java/C++ school (as was practiced in these two languages through most of the 90s and 2000s), but modern languages in the class school have started incorporating some of the ML/Haskell tradition to give you much more powerful (and safer!) abstractions to define the way your objects may change their behavior at runtime. Design patterns are no longer encouraged, and if I'm using a language like Scala, Kotlin, Rust, Swift or even C#, I rarely find it necessary to employ a design pattern anymore.
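The contrast can be made concrete with a toy sketch in Python (illustrative names, not drawn from either tradition's canon): a rigid class with a fixed interface versus an object that parses arbitrary messages at runtime.

```python
# "Class school": a fixed, statically knowable interface; instances
# hold state only, and the set of methods never changes.
class Account:
    def __init__(self, balance):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

# "Message school": the object itself decides how to interpret the
# messages it receives, here by parsing method names at runtime.
class MessageAccount:
    def __init__(self, balance):
        self._balance = balance

    def __getattr__(self, message):
        if message.startswith("deposit_"):
            amount = int(message.split("_", 1)[1])
            return lambda: setattr(self, "_balance", self._balance + amount)
        raise AttributeError(message)

a = Account(100)
a.deposit(5)
m = MessageAccount(100)
m.deposit_5()                 # a "message" the class never declared
print(a.balance, m._balance)  # 105 105
```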

