Not necessarily. I used jj for a couple of weeks and found it to be a complete waste of time.
For an advanced user, it did not offer anything I cannot quickly solve in git. Which is probably the wrong thing to optimize in the first place: even though I frequently rewrite history and split commits during my normal workflow, it takes so little time that improving something else would yield greater returns.
We (not royal we) don't usually go out of our way to repeat negative experiences with these tools, so you end up with a very skewed view of their adoption.
Huh, I've noticed that! Opus hallucinates answers and gives me contradictory reasoning much more regularly over the past couple of weeks than it did in February.
For example, after a question about a script that used taskset and CUDA_VISIBLE_DEVICES to handle NUMA domains on a parallel GLUE evaluation run:
>Round 2 tasks reuse GPU slots that Round 1 may still be occupying. Cola, SST-2, and QQP are small datasets and will likely finish before MNLI, but QNLI is sizable. If QNLI (GPU 3, slot 3) is still running when Round 2 launches, that's fine — they're on different slots. But more importantly, Round 2 uses slots 5/6/7 (GPUs 7/0/1), while Round 1 uses slots 0/1/2/3/4 (GPUs 4/5/2/3/6). So there's no GPU collision — you're good here actually. Disregard, this is clean.
What's the current state of the art for VPN'ing through deep packet inspection firewalls? I have imagined building something around TLS and WebSockets that connects to a popular cloud provider that is "too big to block". Of course, if they'll block Cloudflare, or all connections outside of the country, maybe _nothing_ is too big to block. I remember some solutions to this in the 2010s, like obfsproxy and shadowsocks, but are there any newer or better options?
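The "TLS + WebSockets" idea amounts to wrapping tunnel bytes in ordinary-looking web traffic. The inner layer is just standard WebSocket framing: per RFC 6455, client-to-server frames are XOR-masked with a random 4-byte key. A minimal sketch of that inner layer (function names are mine; a real tunnel would add the HTTP upgrade handshake, lengths ≥ 126, fragmentation, and run it all inside TLS to a CDN-fronted endpoint):

```python
import os

def mask_payload(payload: bytes, key: bytes) -> bytes:
    """RFC 6455 masking: XOR each payload byte with the 4-byte key, cycling."""
    return bytes(b ^ key[i % 4] for i, b in enumerate(payload))

def frame_binary(payload: bytes) -> bytes:
    """Build a masked client-to-server binary frame (payloads < 126 bytes,
    no fragmentation -- just enough to show the shape of the wire format)."""
    assert len(payload) < 126
    key = os.urandom(4)
    header = bytes([0x82, 0x80 | len(payload)])  # FIN + binary opcode; MASK bit + length
    return header + key + mask_payload(payload, key)

def unframe_binary(frame: bytes) -> bytes:
    """Server side: read the length, extract the key, unmask."""
    length = frame[1] & 0x7F
    key, masked = frame[2:6], frame[6:6 + length]
    return mask_payload(masked, key)  # XOR is its own inverse
```

To a middlebox that doesn't terminate TLS, this is indistinguishable from any other HTTPS/WebSocket session — which is also why the only remaining countermeasure is blocking the endpoint itself.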
Ah the irony, I'm blocked from viewing that page by Cloudflare
>Performing security verification
>This website uses a security service to protect against malicious bots. This page is displayed while the website verifies you are not a bot.
>Incompatible browser extension or network configuration
Wait... why does Signal need to send notification content to Firebase to trigger a push notification on device? I would instead expect that Signal would send a push to my Android saying nothing more than "wake up, you've got a message in convo XYZ", then the app would take over and handle the rest of it locally.
I also didn't realize that Android stores message history even after I've replied or swiped them away. That's nuts - why!?
If your app needs to send a notification while it's not currently a running process, it must go through Firebase on Google's side and APNs on Apple's side. There is no way for a non-running app to send a notification entirely locally; this is by design of both companies.
Signal developer here. Not entirely sure what you're saying. I'm only an Android guy, but FCM messages are certainly one trigger that can allow an app process to run, but it's not the only trigger. You can schedule system alarms, jobs, etc. And the notification does not need to be provided by the FCM message. In our case, the server just sends empty FCM messages to wake up the app, we fetch the messages ourselves from the server, decrypt them, and build the notification ourselves. No data, encrypted or otherwise, is ever put into the FCM payloads.
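The flow described here — empty push wakes the app, the app fetches ciphertexts itself, decrypts locally, and builds its own notification — can be modeled outside Android. Everything below is a toy stand-in (the XOR "cipher", the queue, the names), not Signal's actual code; only the shape of the data flow matches:

```python
SERVER_QUEUE = []      # encrypted messages waiting on the server
DEVICE_KEY = b"\x42"   # placeholder for the device's real key material

def server_enqueue(plaintext: str) -> dict:
    """Sender side: encrypt, store on the server, emit an EMPTY wake-up push."""
    ciphertext = bytes(b ^ DEVICE_KEY[0] for b in plaintext.encode())
    SERVER_QUEUE.append(ciphertext)
    return {}  # the FCM payload: no data at all, just "wake up"

def on_push_received(fcm_payload: dict) -> list[str]:
    """App side: the push carries nothing; fetch the ciphertexts, decrypt
    locally, and return the texts the app itself would show as notifications."""
    assert not fcm_payload  # nothing sensitive ever transits FCM
    fetched, SERVER_QUEUE[:] = list(SERVER_QUEUE), []
    return [bytes(b ^ DEVICE_KEY[0] for b in ct).decode() for ct in fetched]
```

The point of the structure is the empty `return {}`: FCM only ever sees that *a* message exists, never its content or sender.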
Sure, but it needs to go through Firebase regardless of the content of the notification message. I do not believe there is a way to use a third-party notification service that does not depend on Firebase.
It doesn't. The API for displaying a notification is purely local.
Receiving a ping from Firebase Cloud Messaging triggers the app to do whatever it does in order to display its notification. In the case of Signal, that probably means something like fetching the user's latest messages from the server, then deciding what to show in the notification based on the user's settings, metadata, and message content.
Sorry, I should clarify: by "it" I meant that any sort of ping must go through Firebase Cloud Messaging, not that the message content itself goes through Firebase.
Looks like there is a way to bypass Firebase by using something like UnifiedPush, which runs a perpetual background process that acts similarly to Google Play Services: it picks up notifications from the server and calls the local notification API.
It's theoretically possible to just keep an app running in the background all the time and periodically poll a server.
That's unreliable though, since some OEM Android builds will kill it for that even if the user disables battery optimizations. Those OEMs sort of have a point: if lots of apps did that, it would drain the battery fast.
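The keep-a-process-alive approach boils down to a loop like the sketch below (the function and the intervals are made up for illustration). Every wake-up costs radio and CPU time, which is both why OEMs kill loops like this and why one shared channel (FCM, or a UnifiedPush distributor) is cheaper than one loop per app:

```python
import time

def poll_loop(server_poll, notify, interval=60.0, max_interval=900.0,
              stop=lambda: False, sleep=time.sleep):
    """Naive self-hosted push: poll the server forever, backing off while idle.
    server_poll() returns a (possibly empty) list of new messages; notify()
    would call the purely local notification API."""
    delay = interval
    while not stop():
        messages = server_poll()
        if messages:
            for m in messages:
                notify(m)
            delay = interval                      # activity: reset backoff
        else:
            delay = min(delay * 2, max_interval)  # idle: poll less often
        sleep(delay)
```

Even with exponential backoff, N apps each holding their own timer/connection multiply the radio wake-ups by N, which is the battery argument for a single OS-level push channel.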
Not clear what your point is. The Signal server wakes up the app via an empty message. At most the info this conveys is that a Signal app got a message to pull.
My point is there is no reasonable way to remove oneself from Google and Apple even with a fully custom application, they control the notification servers for their devices.
How? I'm not talking about an application backend server but the notification server that Google and Apple run for all apps. I'm not sure how you would deliver notifications to an app while that app is not running, besides polling or holding a persistent connection.
This is more of a fundamental technical limitation of operating systems and networks; I don't think it is possible to design distributed communication between arbitrary service provider infrastructure and end-user devices without an always-online intermediary reachable from anywhere (a bouncer, in IRC terms) that accepts messages for non-present consumers.
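At its core, the always-online intermediary is just a mailbox keyed by recipient. A minimal sketch (class and method names are mine) of the role FCM/APNs — or an IRC bouncer, or an SMTP server — play:

```python
from collections import defaultdict, deque

class Bouncer:
    """Always-online intermediary: accepts messages for recipients that may be
    offline and hands them over when the recipient next connects. Someone
    reachable from anywhere has to hold the message until the device shows up."""

    def __init__(self):
        self.mailboxes = defaultdict(deque)

    def accept(self, recipient: str, message: bytes):
        """Called by the sender's infrastructure at any time."""
        self.mailboxes[recipient].append(message)

    def drain(self, recipient: str) -> list[bytes]:
        """Called by the device whenever it comes online; empties the mailbox."""
        box = self.mailboxes[recipient]
        out = list(box)
        box.clear()
        return out
```

Whoever operates this box is the unavoidable dependency — the argument upthread is only about whether it must be Google/Apple or could be anyone.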
Yes, but the fact that it is not customizable is what's annoying: you are forced to rely only on the OS makers' implementations, which I guess should be expected in this day and age.
It sounds like you’re hinting at being unhappy with the lock-in forced by the ecosystem.
The flip side of the coin: any possible avenue to exfiltrate data and do (advertising) tracking will be used by app developers. The restrictions also protect my privacy.
I'll note that, whatever the other reasons, it's also the only way to make this battery efficient. Having a bunch of different TCP connections signaling events at random times is not what you want.
Ideally the app is also responsible for rendering, rather than having to disclose the message, but that can be challenging to accomplish for all sorts of reasons.
But there is a way to do this encrypted, so that when the notification is received on your iPhone, the app process itself decrypts it.
Except you need an entitlement for that, because it requires that your app has the ability to receive a notification without actually showing it (Apple checks this).
Your app gets woken up, decrypts the message, and then shows a local notification.
Markets do not model that especially well. When it comes down to these situations, it's not the rising price of food motivating producers to enter the market; it's people starving. During a war, no amount of money can cause more munitions to appear fast enough: blast-resistant concrete can take weeks or months to cure, and workforces take time to train. These "momentary" disruptions can swamp the whole system.
We have a hard division between "Core" repos (those which are deployed to production / customer sites) and everything else. The expectation in Core repos is that everything goes through a PR process where the pull request message is intended to explain the what and why of the change (perhaps with reference to a ticket, but with the key information restated in the PR), and goes through a review just like the code. Changes are then either squashed with that as the commit message or (if they're larger and benefit from a clear separation of commits), may be rebased with `git rebase -i` assuming the final PR body ends up in one of the commit messages.
Non-Core repos are absolutely free-form, anything goes. You can commit 8 times a day without a PR as long as you're not deploying to production. This is excellent for things like test harnesses, CI/CD nonsense, one-off experiments, demos, prototypes, etc.
My last Core commit had something like a 20:1 ratio of commit-message lines to lines of code (a small change touching something deep that required a lot of explanation). My last non-Core commit message was "hope this works" (it did not).