You can on Apple’s OSs, and some googling suggests you can do it on Windows too if you save the image and open it in the image viewer. So I’d be surprised if you couldn’t get it working on various Linux distros too...
You have to understand how ridiculous this answer sounds.
This is a technical forum. There is an expectation of how things are shared, and images that rely on built-in OCR are nowhere in line with that expectation.
In my experience, no, they won’t help with GDPR takedowns. The only way to make things unavailable is actually to file a DMCA notice against any URL that you want hidden. This was actually the recommended approach from GitHub when I asked them about it. Absurd.
> The only way to make things unavailable is actually to file a DMCA notice
Is it costly to do?
> In my experience, no, they won’t help with GDPR takedowns
I would have expected that I could say "This is my code, hence this is my data, and I want you to remove my data from your website". I wonder how hard it is to file a complaint with the EU and see what happens.
I'm curious how much review happens in Nix packages. It seems like individual packages have maintainers (who are typically not the software authors). I wonder how much latitude they have to add their own patches, change the source repo's URL, or other sneaky things.
Not a lot, in most cases. You’re still just grabbing a package and blindly building whatever source code you get from the web. Unless the maintainer is doing their due diligence, nothing happens.
The same goes for almost all packages in all distros, though.
I’d say most of us have some connection to what we’re packaging but there are plenty of hastily approved and merged “bump to version x” commits happening.
Nixpkgs package maintainers don't usually have commit rights. I assume that if one tried to include some weird patch, the reviewer would at least glance at it before committing.
I’ve never looked at the process of making a nixpkg, but wouldn’t the review process only catch something malicious if it was added to the packaging process? Anything malicious added to the build process wouldn’t show up, correct? At least not unless the package maintainer was familiar with it and looked themselves?
I am not sure I understand the distinction between the packaging and build process, at least in the context of nixpkgs. Packages in nixpkgs are essentially build instructions, which you can build/compile locally (like Gentoo), but normally you download the prebuilt results from the cache.
Official packages for the nixpkgs cache are built/compiled on Nix's own infrastructure, not by the maintainers, so you can't just sneak malicious code in that way without cracking into the server.
What package maintainers do is contribute these build instructions, called derivations. Here's an example for a moderately complex one:
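For illustration, a minimal hypothetical derivation (the name, URL, hash, and patch file are all made up) showing the knobs a maintainer controls:

```nix
# Hypothetical derivation -- every concrete value here is invented for illustration.
{ lib, stdenv, fetchurl }:

stdenv.mkDerivation rec {
  pname = "example-tool";
  version = "1.2.3";

  # The maintainer decides where the source is downloaded from.
  src = fetchurl {
    url = "https://example.org/releases/example-tool-${version}.tar.gz";
    sha256 = lib.fakeSha256; # placeholder hash
  };

  # Maintainer-supplied patches are applied to the source before building.
  patches = [ ./fix-build.patch ];

  # Arbitrary bash run after the standard install phase.
  postInstall = ''
    mkdir -p $out/share/doc/${pname}
    cp README.md $out/share/doc/${pname}/
  '';
}
```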
As you can see, you can include a patch to the source files, add custom bash commands to be executed, and point the source-code download link anywhere you want. You could do something malicious in any of these steps, but I expect the reviewer to at least look at it and build it locally for testing before committing, in addition to any other interested party.
OCaml's opam does have a review process, although I'm not sure how exhaustive. It's got a proper maintenance team checking for package compatibility, updating manifests and removing problematic versions.
I don't think this would be viable if the OCaml community grew larger though.
Some alternative sources for other languages do it. Conda-forge has a process that involves some amount of human vetting. It's true that it doesn't provide much protection against some kinds of attacks, but it makes it harder to just drop something in and suddenly have a bunch of people using it without anyone ever looking at it.
IMO C/C++ is not much better: sure, there’s no central package management system, but then people rewrite everything because it’s too hard to use a dependency. And if you do want to use one of the 1000 rewrites of a library, you’ll have a lot more checking to do, and integration is still painful.
Painless package management is a good thing. Central package repositories without any checking aren’t. You don’t have to throw away the good because of the bad.
I have that in C++: we wrote our own in-house package manager. It’s painless for any package that has passed our review, but since it is our manager, we enforce rules that you need to pass before you can get a new package in, thus ensuring it is hard to use something that hasn’t been through review.
I’m looking at Rust, and the fact that it doesn’t work well with our package manager (and our rules for review) is one of the big negatives!
Note: if you want to do the above, just use Conan. We wrote our package manager before Conan existed, and it isn’t worth replacing now, but it wouldn’t be worth maintaining our own if we were starting today. What is important is that you can enforce your review rules in the package manager, not which package manager it is.
> Painless package management is a good thing. Central package repositories without any checking isn't.
There's a reason why these things come hand in hand, though. If the package management is so painless that everyone is creating packages, then who is going to pay for the thoroughly checked central repository? And if you can't fund a central repository, how do you get package management to be painless?
The balance that most language ecosystems seem to land on is painless package management by way of free-for-all.
> And if you can't fund a central repository, how do you get package management to be painless?
You could host your own package server with your own packages, and have the painless package manager retrieve these painlessly.
Of course, we’re in this situation because people want that painlessness to extend to what other people built. But "other people" includes malicious actors every once in a while.
Correct me if I’m wrong, but the usual advice in the C/C++ world is to just grab the source code of any libraries you want and build them yourself (or use built-in OS libs). This is not great if you have a lot of dependencies.
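As a sketch of that workflow (zlib chosen arbitrarily because it ships CMake support; the paths are examples):

```shell
# Vendor a library's source and build it yourself.
git clone --depth 1 https://github.com/madler/zlib third_party/zlib
cmake -S third_party/zlib -B build/zlib -DCMAKE_INSTALL_PREFIX="$PWD/deps"
cmake --build build/zlib --target install
# Point your own build at deps/include and deps/lib from here on.
```

Multiply this by every dependency (and every dependency's dependencies) and the pain becomes obvious.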
It’s even worse than that. Beyond power delivery and data speed, which are at least mentioned in the product listing most of the time (even if they’re lying), USB-C also covers things like USB audio, which is never mentioned in the product listing and usually doesn’t work even on cables that cost as much as $100. The only cables I’ve found that actually work for everything are Apple cables.
Let’s not even talk about alt modes other than audio, like DisplayPort. Really, if you want decent cables, name brands that support Thunderbolt are the only option for somewhat decent quality, though most still break or get kinked up near the ends. Thunderbolt 3/4 (especially the latter) specs 100W power delivery and DisplayPort support, and should be the minimum cables are built to, IMO.
That said, you can mildly ignore cable lengths at your own risk and get 10 or 15 ft cables that can and do charge without any noted impact on speed or charger heat, but these too are a crapshoot, since no name brand carries them due to their being out of spec (I think 3 and/or 6 ft are in spec, I forget).
You also need to block write access, so they can’t encrypt all your files with an embedded public key. And read access so they can’t use a timing side channel to read a sensitive file and pass that info to another process with internet privileges to report the secret info back to the bad guy. You get the picture, I’m sure.
I get the picture, yes, namely that probably 99% of project dependencies don't need I/O capabilities at all.
And even if they do, they should be controlled in a granular manner, i.e. "package org.ourapp.net.aws can only do network, and it can only ping *.aws.com".
Having a finer-grained security model that is enforced at the kernel level (and is non-circumventable barring rootkits) is like 20 years overdue at this point.
> You also need to block write access, so they can’t encrypt all your files with an embedded public key. And read access so they can’t use a timing side channel to read a sensitive file and pass that info to another process with internet privileges to report the secret info back to the bad guy. You get the picture, I’m sure.
Indeed.
One can think of a few broad capabilities that will drastically reduce the attack surface.
1. Read-only access vs read-write
2. Access to only current directory and its sub-directories
3. Configurable Internet access
Docker mostly gets it right.
I wish there were an easy way to run commands under Docker.
E.g.
If I am running `fd`
1. Mount the current directory read-only into Docker, without Internet access (and without access to the local network or other processes)
2. Run `fd`
3. Print the results
4. Destroy the container
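A sketch of what that could look like with standard Docker flags (the image name is hypothetical; I don’t know of an official `fd` image):

```shell
# Read-only bind mount of the current directory, no network,
# and the container is removed when the command exits.
docker run --rm \
  --network none \
  -v "$PWD":/work:ro \
  -w /work \
  some-image-with-fd fd mypattern
```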
This is exactly what the tool bubblewrap[1] is built for. It is pretty easy to wrap binaries with it and it gives you control over exactly what permissions you want in the namespace.
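For the `fd` example above, a bubblewrap invocation might look roughly like this (mount points vary by distro, so treat the paths as assumptions):

```shell
# Minimal sandbox: read-only system dirs, read-only current directory, no network.
bwrap \
  --ro-bind /usr /usr \
  --symlink usr/lib /lib \
  --symlink usr/lib64 /lib64 \
  --symlink usr/bin /bin \
  --ro-bind "$PWD" /work \
  --proc /proc \
  --dev /dev \
  --unshare-net \
  --chdir /work \
  fd mypattern
```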
> 1. Mount current read-only directory to Docker without Internet access (and without access to local network or other processes)
> 2. Run `fd`
> 3. Print the results
> 4. Destroy the container
Systemd has a lot of neat sandboxing features [1] which aren't well known but can be very useful for this. You can get pretty far using systemd-run [2] in a script like this:
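A sketch of such a script, using real systemd sandboxing properties (the bind paths are examples; adjust for your distro):

```shell
#!/bin/sh
# Run the given command in a blank filesystem with no network or device access,
# bind-mounting only a few read-only paths.
sudo systemd-run --pty --wait --collect \
  -p TemporaryFileSystem=/ \
  -p PrivateNetwork=yes \
  -p PrivateDevices=yes \
  -p BindReadOnlyPaths="/usr /lib /lib64 $PWD" \
  "$@"
```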
This creates a blank filesystem with no network or device access and only bind mounts the specified files.
Unfortunately TemporaryFileSystem requires running as a system instance of the service manager rather than a per-user instance, so that will generally mean running as root (hence sudo). One approach is to create a setuid binary that does the same without needing sudo.
That's a silly comparison: by that metric, Apple "won" against Saudi Aramco and Berkshire Hathaway, and Microsoft also "won" against them.
Except that they aren't in the same business.
On the desktop, Microsoft is still kicking Apple’s ass, even more so for servers. The only place Apple "won" is on mobile, where Microsoft lost to _everybody_.
I can’t find the exact stat right now with some light googling, but I recall that while Apple doesn’t have the majority of the user base, they have an outsized share of the profit due to their average sales prices and associated profit margins.
In the Windows space, MSFT gets their license money, and then it’s a commodity race to the bottom by the hardware makers, who need to pay AMD/Intel for chips, MSFT for a license, and compete with 100 no-name OEMs for every penny.
Arguably, in the long run Amazon is winning enterprise in ways Google never did. MSFT owns only the enterprise desktop / desktop collaboration use cases (and any virtualization / server-side stuff to support them).
Sure I guess if you're still living in 2010. Nobody uses an "mp3 player" anymore. Get with the times grampa. Everyone has a cellphone that plays MP3s today.
> They won on music stores.
Spotify is at 36% market share compared to Apple's 30% of the music streaming market.
> They won on mobile.
And Apple did not "win on mobile": they are popular only in the US, while globally Android has 72% market share. Apple lost the mobile market to Android a long time ago.
> They won on laptops.
No, Apple did not "win on laptops", they are still at about 9% market share.
"As of the third quarter of 2020, HP was cited as the leading vendor for notebook computers closely followed by Lenovo, both with a share of 23.6% each. They were followed by Dell (13.7%), Apple (9.7%) and Acer (7.9%)."
Nothing has really changed since 2020. Apple will always be a tiny portion of the personal computer and laptop market.
> They won on headphones.
Huh? There are far better headphones than anything Apple makes. Are you talking about earbuds? There’s a difference.
No, Apple has not "won" on anything but having overpriced hardware. $3600 for a VR headset? Yeah, I guess they "won" most ridiculously overpriced hardware ever.
Not likely. It needs to build a profile of what sounds you can’t hear in order to know what to specifically make louder, if there’s nothing to specifically make louder, then the feature cannot work. Just turn up the volume, or increase the noise cancelling, or turn on voice isolation, or whatever you actually want it to do.
That’s an interesting idea, but I wonder, how would it know who you want to hear? Like if I’m sitting at a long table for a big family meal, I’d want to hear everyone, but if I’m sitting on a bus I’d like to only hear the person I know, sitting next to me. I can’t imagine there’s a good way to differentiate those situations.