I feel this in my soul. Work has me on a Mac and I'm not used to it. The Fn and Cmd keys sit right where I reach for Alt and Ctrl, which trips up my vim sessions and makes it awkward to cycle through browser tabs with the keyboard.
Fun fact: I have an external keyboard, but it's a Microsoft one, so there are no drivers for it anymore, and some keys just won't work (or at least I haven't found any way to remap things).
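If it shows up as a standard HID keyboard, macOS's built-in hidutil can often remap keys without any vendor driver. A minimal sketch, assuming you want to swap left Option and left Control (the usage IDs come from the HID keyboard usage page per Apple's TN2450; the Fn key usually isn't a standard HID usage, so this won't touch it, and the mapping resets on reboot unless you wrap it in a LaunchAgent):

```bash
# Swap left Option (0xE2) and left Control (0xE0) via macOS's hidutil.
hidutil property --set '{"UserKeyMapping":[
  {"HIDKeyboardModifierMappingSrc":0x7000000E2,"HIDKeyboardModifierMappingDst":0x7000000E0},
  {"HIDKeyboardModifierMappingSrc":0x7000000E0,"HIDKeyboardModifierMappingDst":0x7000000E2}
]}'
```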
Honestly, if a page has those elements, the content is usually not worthwhile enough for me to suffer through the design. I just leave.
"So this is pretty cool, but it turns out that the contextual substitution lookup type is really powerful. This is because the table that it references can be itself, which means it can be recursive."
[…]
"So thats a pretty powerful virtual machine. I think the above is sufficient to prove Turing complete-ness."
PostScript fonts? Fully programmable font glyphs in the '80s. Little used, but very fun. I recall a typewriter font that would vary each glyph's position, weight, and edges a bit to simulate output from a manual typewriter.
One could, if so inclined, build a complete COLRv1 font parser in PostScript.
I used an M1 Mac to try to build tensorflow and tensorflow-text, and it is very much not true that everything is guaranteed to build on other machines with no additional setup.
The parenthesized comment is funnier to me because I had to download a specific bazel version to build.
That's true; Bazel itself is still evolving, and there have been breaking changes between versions. Sometimes the required version number is pinned in a .bazelversion file, which effectively makes Bazelisk your top-level dependency.
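For anyone who hasn't used it: Bazelisk is a small wrapper that reads .bazelversion, fetches the matching Bazel release, and forwards your command to it. A minimal sketch (the version number and build target here are illustrative, not TensorFlow's actual pins):

```bash
# Pin the Bazel version the project expects (illustrative version number).
echo "5.3.0" > .bazelversion

# Bazelisk downloads that exact Bazel release and forwards the command to it.
brew install bazelisk          # or grab a release binary
bazelisk build //some:target   # hypothetical target
```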
I'd expect TensorFlow to have some non-hermetic build actions, but if choosing a specific Bazel version was the only thing required to build it, that's awesome!
So, more tools on top of your tools. And this couldn't be part of Bazel proper for the same reason MS built a separate tool to discover paths for their tools: overengineering.
I also stole the jump script, but instead of aliasing it, I just renamed the function from jump to j. I kept mark as-is because I don't use it anywhere near as often as j.
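After the rename it looks roughly like this (a sketch of the usual symlink-based mark/jump approach; the original script's details may differ):

```bash
# Symlink-based directory bookmarks, kept in ~/.marks.
export MARKPATH="$HOME/.marks"

j() {   # renamed from `jump`, since it gets typed constantly
    cd -P "$MARKPATH/$1" 2>/dev/null || echo "No such mark: $1"
}

mark() {   # kept under its full name; used rarely
    mkdir -p "$MARKPATH" && ln -s "$(pwd)" "$MARKPATH/$1"
}
```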
It's not that they aren't trying; it's that when you reinvent the wheel, you have to do more work. Microsoft is introducing `tensorflow-directml` to sidestep this by implementing a CUDA equivalent on top of DirectX. AMD has ROCm, but it's not well supported because it isn't integrated upstream in `tensorflow`.
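If you want to kick the tires, it's published as a drop-in replacement package. A sketch, assuming a Python environment with pip (the fork tracked the TF 1.x API, hence the 1.x-style device check):

```bash
# tensorflow-directml replaces the stock package and routes ops through DirectML.
pip install tensorflow-directml

# TF 1.x-style check: reports whether a (DirectML) device was found.
python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"
```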
- I found out I could only use it for compute headless. WTF?!?!?! (https://www.phoronix.com/news/Radeon-ROCm-Non-GUI). If the card was driving a monitor, my machine would crash hard, without even an error message.
- A lot of other stuff didn't work, and just resulted in odd crashes or worse-than-CPU performance. I don't know why.
- Within 9 months, AMD discontinued support for my card. I raised this as a warranty issue (suitability for advertised purpose), but that obviously would go nowhere without a lawsuit. I had a very expensive brick.
- AMD support channels were non-existent. There literally was no way to reach anyone.
I bought an NVidia card, and it's been working well ever since.
ROCm is not well-supported because it's absolute garbage. Reinventing the wheel is actually *less* work, since the wheel has already been invented once and you can copy the design. The extra work is building community support and network effects, since you're starting out behind. Fundamentally, though, none of that can even start if your system doesn't work at all.
I agree with you that they're trying, but they're trying incompetently.
If ROCm were half the speed of CUDA and didn't integrate with the latest and greatest frameworks, but was stable and working, I'd make it work. It wasn't anywhere close to stable and working.