And it's still wrong, since strncpy() doesn't null-terminate: it null-pads. That means that if the source string is at least as long as the buffer, the result will not be null-terminated.
> Nice project and makes you think why all programs are given all network access by default.
One trick I learned to negate that is to insert an iptables rule that blocks all out-of-LAN traffic except for specific secondary user-groups. Not primary groups, but ones which you have to manually grant to users.
Then, the applications that you do want to have Internet access can be run using sg, e.g.
sg bobs_internet_access_group firefox
Anything that tries to run as a user's primary group is stopped at the firewall. For example a malicious shell script will run by default with the primary group and will fail.
This is also very useful for stopping anything run by root from talking to the Internet, since that is a thing that should NEVER occur.
It does take a little configuration, and it's probably best to create a new secondary group for each user (and don't forget IPv6!), but once it's set up it just keeps working.
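A rough sketch of the iptables side of this, using the example group name from above (untested; 192.168.1.0/24 stands in for your LAN, and since sg changes the effective GID a plain --gid-owner match is enough):

# Local LAN traffic is fine for everyone (use your own subnet here).
iptables -A OUTPUT -d 192.168.1.0/24 -j ACCEPT
# Out-of-LAN traffic is allowed only for processes running with the manually
# granted secondary group (which is what "sg bobs_internet_access_group cmd" gives you).
iptables -A OUTPUT -m owner --gid-owner bobs_internet_access_group -j ACCEPT
# Everything else is rejected; mirror all of this with ip6tables for IPv6.
iptables -A OUTPUT -j REJECT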
I had not heard of sg(1) before. The sg(1) manpage on Linux says:
>The sg command works similar to newgrp but accepts a command. The command will be executed with the /bin/sh shell. With most shells you may run sg from, you need to enclose multi-word commands in quotes. Another difference between newgrp and sg is that some shells treat newgrp specially, replacing themselves with a new instance of a shell that newgrp creates. This doesn't happen with sg, so upon exit from a sg command you are returned to your previous group ID.
I could not find sg(1) for FreeBSD, neither in base nor in ports, but FreeBSD does have newgrp(1) mentioned above. The FreeBSD manpage for newgrp(1) notes:
>For security reasons, the newgrp utility is normally installed without the setuid bit. To enable it, run the following command:
> chmod u+s /usr/bin/newgrp
The main source file of newgrp(1), /usr/src/usr.bin/newgrp/newgrp.c is 310 lines long so I think creating an sg(1) based on that one and maybe also by looking at doas(1) -- which is in ports, not in base -- should not be too difficult.
However, I think using sg(1) to protect against random malicious binaries and shell scripts having internet access equates roughly to security by obscurity in that it only protects you as long as the malicious code is unaware of sg(1).
Consider the following (which I wrote without testing it against a group-limiting firewall, but it should work like this):
nw_access_group=
# Iterate over every group whose entry mentions the current user.
while IFS= read -r curr_group ; do
  nw_access_group="$curr_group"
  # Probe under this group; if curl succeeds, the firewall let us out.
  if sg "$nw_access_group" 'curl -s http://www.example.com/' >/dev/null ; then
    break
  fi
done <<EOF
$( getent group | grep "$USER" | cut -d':' -f1 )
EOF
echo "Would use group $nw_access_group for evil stuff."
I remember using this sort of application on Windows (a very long time ago; those were the days of Windows 98, whose famous stability drove me to Linux and BSD). Can some of its users help me shed some light on the use case of such a program on an open source system? I mean:
- Signed packages from trusted repos should not need firewalling, at least not if you're using a serious distro rather than a hobby project. This isn't true in the general case, of course (hence things like OpenBSD's auditing of base packages), but this is a personal firewall, it's not exactly intended for server-grade equipment...
- If you install packages from dubious PPAs all over the Interwebs, a puny kernel module is unlikely to stop the two rootkits that you've probably already installed. Same for a system that has already been compromised.
- Untrusted applications (which you're running straight on your system, rather than nicely tucked in a VM with no network access because...?) -- as practical experience on Android and Windows shows -- will generally break as soon as they can't do their snooping, because they'll segfault or block waiting for the answer that never came to the packet that was never sent anyway.
I see a lot of talk in the Linux desktop field about building lines of defense against untrusted programs. I see why this is relevant to users who are routinely running closed-source programs (no, I don't personally audit every line of code running on my system, but a public source code repository is sort of a stupid place to hide malicious code when there's so much fully closed code being purchased from "app" stores and downloaded from all over the web and whatnot). I find it hard to understand why it would be relevant on an open source desktop.
Things like Wayland's sandboxing, I get to some degree -- it's only a matter of time before JavaScript code in a browser gets access to more stuff from your computer, which will eventually include things like keystrokes and mouse events and whatnot, so it'll have to be properly sandboxed. But why a personal firewall? What sort of applications do you find yourself wanting to block, and why for heaven's sake are you running them on your Linux computer, when it's 2017 and there's plenty of choice in terms of applications?
> Signed packages from trusted repos should not need firewalling, at least not if you're using a serious distro rather than a hobby project.
Software has security vulnerabilities. So, even if the software is trusted, there could be a zero-day vulnerability that is exploited. I'd rather have software stopped in its tracks. (For this reason I think something like Little Snitch or Douane is not enough, you also need sandboxing.)
> will generally break as soon as they can't do their snooping because they'll segfault or block waiting for the answer that never came to the packet that was never sent anyway.
Maybe macOS apps are different, but I never had this experience while using Little Snitch for almost 10 years. I recently started using Little Flocker (which is like Little Snitch/Douane, but for filesystem access) and so far no program has crashed as a result of denying access[1].
[1] Including the JDK installer, where I denied writing launch agents and Java itself trying to write to ~/.oracle_jre_usage.
> Software has security vulnerabilities. So, even if the software is trusted, there could be a zero-day vulnerability that is exploited. I'd rather have software stopped in its tracks.
How? When the zero day hits, the program has long been marked as trusted and the firewall will just happily let it go along. Besides, even if you're an experienced user and the firewall is smart enough to figure out that the application is talking to a server that it's never talked to before (which isn't even sustainable for a lot of applications), it's very likely that you'll see the alert way before you read the news about the zero-day, and you'll just shrug and allow it to continue because you trust that program.
(Edit: maybe personal firewalls got smarter since I last used one and there's something else I'm missing here?)
> Maybe macOS apps are different, but I never had this experience while using Little Snitch for almost 10 years.
The kind of applications that actively snoop on users as a business model -- the ones that you want to block in the first place -- sometimes even do this deliberately (which is something that I know from experience, not something that I suspect). Inexperienced users quickly figure out it's the firewall that gives them trouble, and they'll pick disabling the firewall over not playing with their toy any time. This works for pretty much any sort of permissions.
For example, last time I ran it on my tablet, Instagram's application was crippled to uselessness because I had disabled camera access (my girlfriend only needed to post a photo on an account that she managed): as soon as it opened, it spit out a big fat error message saying it can't access the camera and that you should allow camera access if you want to be able to take photos. As soon as you tapped ok, the same error popped up, and the application never loaded.
macOS apps aren't any different, you're just running the right ones :-).
There is a murky gray zone between actively malicious and fully privacy respecting applications. Applications in this zone are more prevalent in closed source software, and Linux is increasingly being used to run such software.
I think this is what I'm not getting :-). To someone like me, who's so sick of dealing with GTK3 and xdg and everythingd breakage that I'm contemplating getting a Mac more seriously than when I saw the PowerMac G5 specs, the idea that someone who needs to run this sort of application would not rather run Windows or OS X is unthinkable. I mean, after every point release in GTK 3, I would rather run Windows...
It's a very popular one. However, for the last couple of years, minor releases of a supposedly stable branch included backwards-incompatible changes that broke applications and themes. Basically, upgrading from 3.8 to 3.10 resulted in applications looking funny and some of them crashing. Quite a few application and theme developers ended up calling it quits -- they stopped maintaining their applications, kept on using GTK 2, or switched to Qt.
This caused a lot of negativity in the open source community. It's a shame, because on a technical level, it's actually very good. Its developers have, more recently, attempted to address this problem and their plan looks like it should work. However, the proof of the pudding is in the eating, and we haven't had much time to eat it yet :-).
It's unfair to blame my frustration with Linux lately solely on GTK, too; I'm sorry if I gave that impression. A lot more factors are at work here. GTK has just been very representative of this frame of mind lately.
> minor releases of a supposedly stable branch included backwards-incompatible changes that broke applications and themes.
No, the releases only broke themes, and exactly that was communicated: that the CSS engine was a work in progress and that themes were going to break.
Those who didn't want to listen complained afterwards. Color me surprised.
Just from memory, changes in the way GTK handles geometry hints broke stuff in a bunch of applications, such as ROX Terminal. I think that was in 3.20. I haven't really followed development after 3.16 or so, I try to avoid GTK 3 applications when I can.
The decision to include "work in progress" code in stable releases is also a little questionable.
A personal firewall is one of many ways to make sure you installed what you meant to. There are signed trusted packages that phone home, or do stuff that you don't necessarily want.
I like the Little Snitch style "allow/deny per binary" thing. It's really unfortunate that it needs a new kernel module, because the current default firewalls (pf, iptables, etc.) only operate on IP addresses and don't know anything about processes.
The traditional way to filter a program's network traffic with netfilter is to give each piece of software its own uid, which can then be filtered. You will need that anyway to set ulimits and file access rights.
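A rough sketch of that (untested; untrusted_app is a made-up account name, and for a GUI program you would still need to sort out display access for the second uid):

# Create a dedicated system account and run the program under it.
useradd --system --no-create-home untrusted_app
sudo -u untrusted_app /usr/bin/some-program &

# Reject everything that uid tries to send out (the owner match applies to locally generated packets).
iptables -A OUTPUT -m owner --uid-owner untrusted_app -j REJECT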
Also, avoid deciding policy by process name. Even using the full path is problematic (things like hard links can give nasty surprises). Better to do what SELinux does and tag executables with metadata instead. Any role-based system will be much more expressive, but also more complex, than a uid-based one.
On mainline Linux, SELinux can be used for this sort of thing. You can either block applications from opening certain network connections outright, or you can use SELinux in conjunction with netfilter/iptables to filter traffic coming from certain applications. This is a very powerful tool, but as always with SELinux, it's not exactly simple to configure.
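The netfilter side of that combination usually goes through the SECMARK/CONNSECMARK targets, which stamp packets and connections with an SELinux context that the policy can then allow or deny per application domain. A rough, untested sketch (my_restricted_packet_t is a made-up packet type; the policy rules that grant or deny each domain the right to send/receive packets of that type have to be written separately):

# Label outgoing port-80 packets with a custom SELinux packet context...
iptables -t mangle -A OUTPUT -p tcp --dport 80 -j SECMARK --selctx system_u:object_r:my_restricted_packet_t:s0
# ...and carry the label over to the connection so reply packets are labelled too.
iptables -t mangle -A OUTPUT -j CONNSECMARK --save
iptables -t mangle -A INPUT -j CONNSECMARK --restore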
Netfilter can delegate the fate of a packet to userland. It can be done for all packets or only for the first packet of each connection (thanks to conntrack). From userland it is then easy to match the packet with a local connection or a local application's listening socket.
There is nothing bundled inside Netfilter for this anymore, because it is racy: several unrelated processes can use the same socket, and processes may come and go whenever they want.
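For reference, that delegation is done with the NFQUEUE target; a rule along these lines (queue number 0 is arbitrary, and a userspace daemon built on something like libnetfilter_queue has to be listening on that queue to return verdicts):

# Queue only the first packet of each new outgoing connection to userspace;
# the daemon bound to queue 0 returns the accept/drop verdict for that packet.
iptables -A OUTPUT -m conntrack --ctstate NEW -j NFQUEUE --queue-num 0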
There is also some prior art (but it's a dead project): NuFirewall.
Edit: Looks like that only matches the process "task command name", so it probably won't work for full paths. I guess that's why they use their own kernel module?
AppArmor certainly has it. It's also pretty easy in AppArmor if the rules you want to set are permanent, but I don't know whether a dynamic API exists for AppArmor.
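For the static case, a sketch of such a profile (the program path is made up, and I have not tested this exact profile):

# Deny the program all network access via an AppArmor profile, then load it.
cat > /etc/apparmor.d/usr.bin.someapp <<'EOF'
/usr/bin/someapp {
  #include <abstractions/base>
  deny network,
}
EOF
apparmor_parser -r /etc/apparmor.d/usr.bin.someapp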
A centrally managed app permissions system would go a long way toward improving Linux's desktop experience. For example, in Wayland there's a huge tug-of-war going on between security-minded people who don't want keyloggers and screen capture, and average desktop users who want their old global shortcuts and screen capture/remote access apps to keep working.
I think a permission system like Douane’s would solve this divide.
Have you tried GUFW or Firewall Builder? Do you not consider them to have a good GUI? I remember about 10 years ago I used to use Firestarter (now defunct it seems), but that seemed acceptable.
The reason I like control like this is the same reason I want a plastic shutter/window on all phone and laptop cameras: I should trust your software, butttttttt I still want the extra peace of mind. Also, I don't trust software since... ya know... zero days.
Yes, but I don't think the use case for this is to identify malware on your system. My understanding is that it is focused more on disallowing trusted applications from sharing more data than you'd like, or phoning home [more often than you'd like].
*As always, a multi-faceted approach should be taken with security, and this isn't all you should be running if you're trying to defend yourself.