I don’t want to assume bad faith here, but I think Proudmuslim either doesn’t know what they’re talking about, is intentionally making this sound worse than it is, or just hasn’t thought this through fully. Rather than just saying “this really isn’t the problem it looks like”, allow me to use this as a small course on how to determine what is and isn’t a security problem.
First off, before we even look at the problem, we have to define our threat model. What are we trying to protect? What does a compromise look like? What is necessary to cause a compromise? What needs to be done to prevent that? These are questions only each individual user can answer for themselves, since everyone is protecting something different and for different reasons. But for the most part, you can break up data that needs to be protected into three general zones, with some overlap:
- Data that you need to remain private. Things like tax records, personally identifying information, passwords, secret encryption keys, chats with your friends or workplace, documents containing business trade secrets, things that could land you in jail in the wrong country (like religious texts), etc.
- Data that you need to remain unmodified. Reports for your business, bulk data that needs to be analyzed, application source code, stuff like this. Most of this may be semi-public (or even entirely public), but if someone other than you is able to modify it, things could go very wrong (for instance if someone injects malicious code into a software project you’re working on).
- Data that you need to remain accessible. Grocery lists, instructions, notes on how to perform tasks, computer programs, etc. It’s probably not the end of the world if someone managed to get ahold of any of this data, but you’d quickly find yourself in a bind if you couldn’t get access to it.
Each of these is a security problem, and these three aspects are generally referred to with the acronym CIA - Confidentiality, Integrity, and Availability. These are the things you need to look at to decide how you need to protect your data.
For me personally, I use a single-user computer. No one else has a legitimate reason to access my device without my direct supervision, and everything done on that device is done by me or with me watching. The root account is locked since there’s no reason for anyone to log into it except for me, and I can just use `sudo` if I need root access. The data I need to protect is roughly:
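For context, locking the root account is typically just a matter of locking its password so direct root logins are refused; here’s a minimal sketch of what that looks like on a Debian-based system (Kicksecure may already ship something equivalent, so treat this as illustrative):

```
# Lock root's password so nothing can log in as root directly.
sudo passwd -l root

# Verify: root's password field in /etc/shadow should now start with "!".
sudo grep '^root:' /etc/shadow

# Day-to-day administrative work then goes through sudo from your own account.
sudo apt update
```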
- Authentication credentials of various kinds (passwords, TOTP codes for 2FA, a few security keys for Matrix, my browser’s cookies, and my GPG and SSH keys). These are mostly a confidentiality concern - integrity and availability are both important but definitely secondary. It would be a mess if I lost my email password. It would be a much, much bigger mess if someone got into my email.
- Source code for a slew of projects, most of which are open-source but some of which are closed-source (internal work stuff). Confidentiality is a concern for the closed-source repos, but for the most part integrity is of the utmost concern here. I absolutely cannot afford for someone to implant malicious code into any of the repos I work on and have it look like I put it there. Availability is a low-priority issue - if I lose the repos, I can just clone them again.
- Financial data. Confidentiality and availability are the highest priorities here; integrity is directly tied to availability, so it’s effectively a high priority as well. Financial data is interesting because the compromise of any of the C, I, or A attributes is a disaster. This is part of why there are such strong protections around it.
- Some personal info that I care deeply about, like pictures and documents. Here integrity and availability trump everything else. Confidentiality is nice, but if it was broken I probably wouldn’t panic too much.
- Emails. Confidentiality is the highest priority here, integrity and availability are useful but not as important. I’ll live if some emails are corrupted or if I can’t access my inbox for a few days. I’ll have some serious issues if someone snoops on all my private communications.
- My operating system and applications. I don’t care about confidentiality here, but integrity is a critical priority because if it’s compromised, everything else can be compromised, and availability is a critical priority because if it’s compromised, I can’t work.
Now if you look at all of the above, one thing becomes very clear - if someone gets access to my user account, I’m toast long before they get root access. If someone is even able to get to the point where they can try to get a root shell, I have much bigger problems than a privilege escalation attempt. Another thing is less obvious but still important - if you want to get root access on my system, you have to get user access first. That’s because it’s a single-user system. Putting the two together, the ability to compromise my system’s root account is a total non-issue. No matter what you do to get root on my system, you’ve fully compromised me already.
Now this might just be my use case, but how many people do you know who use a desktop computer? What do they do with it? Would they be compromised by an attacker just getting access to their user account, like I would? 99% of the time the answer is going to be “absolutely yes”. So for a usual desktop use case, there’s really little to no reason to protect the root account. You want to focus your efforts on keeping unauthorized people out of your user account.
So where is root access a concern? There are probably a lot of good answers here, but three common scenarios are:
- You’re protecting a multi-user workstation. If someone compromises one user account, they only get the data from that one user. To get the data from all the users, they either have to compromise every single user individually, or compromise root, at which point everyone else is automatically compromised.
- You’re protecting a server. If you compromise a service like a web server, you probably won’t get access to much useful data. Getting root gives you access to much, much more data, which is useful to an attacker.
- You’re using a hypervisor, and are trying to defend against VM escapes. This may come as a surprise, but a lot of VM escape vulnerabilities are only really a problem if malware can get root within the VM first. Protecting the root account of individual VMs is therefore a form of defense-in-depth against VM escape compromises.
Are all of these scenarios important? Absolutely. But the typical desktop user really doesn’t have to worry about any of this (with the exception of Qubes OS users, where the last point may be relevant, but even then it’s not that important).
So far I’ve defined what we’re protecting. The next step would usually be to determine how one would compromise it. I don’t want to go into extreme detail here, but short of a physical attack (someone sitting at your laptop while it’s unlocked), the most severe risk for a typical desktop use case is if someone can execute arbitrary code as my user account. If they can do that, they can read almost anything they want if my account can access it (near-total loss of confidentiality), modify almost whatever my account can access (near-total loss of integrity), and delete anything my account has write access to (total loss of availability). (The qualifier “almost” when it comes to confidentiality and integrity is because I have password protection on my GPG key, and I don’t store that password anywhere, so they wouldn’t be able to sign things with my key or read things encrypted to me with GPG.)
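As a quick illustration of that last point, with a passphrase-protected key `gpg` has to ask for the passphrase (via pinentry) before it will sign or decrypt anything, so code running as my user still can’t use the key on its own (assuming a standard gpg/pinentry setup and no recently cached passphrase in gpg-agent):

```
# gpg prompts for the key's passphrase before producing a signature;
# without it, this fails even when run as the key's owner.
echo "test message" | gpg --clearsign > /dev/null
```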
How do people get arbitrary code execution on your machine? There are two very common techniques:
- Trick the user into running code the attacker provides. This is by far the most common strategy, and is sadly extremely effective. Social engineering can break even the best software-level security strategy. Users can and will click “install” on a malicious application or paste malware into their terminal. (Many OSes try to prevent this, but they either fail miserably, or the protections end up causing serious problems for the user.)
- Provide data to the user that, when read by a vulnerable application, tricks the application into running code embedded in the data. This sounds hard (and it is), but it’s a very common way to attack businesses. The exact way in which this works is an entire field of research of its own, so suffice it to say “it can be done in a lot of situations”.
Kicksecure and Whonix attempt to make both of the above attacks very difficult:
- To prevent social engineering, Kicksecure and Whonix provide extensive wikis that teach users how to keep themselves secure.
- To prevent application compromise, Kicksecure ships extensive kernel hardening parameters and other security-related settings, so that an exploited application runs into a number of obstacles the attacker probably wasn’t expecting or can’t get around. In practice, this means that a hacked application will (hopefully) be more likely to crash than to compromise your system. Some applications run under AppArmor sandboxing for additional safety, and we also do security reviews on quite a bit of code that we think might be security-sensitive. (There’s a small sketch below showing how to peek at some of these settings yourself.)
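If you’re curious what this looks like on a running system, you can inspect a few of the relevant settings directly; a rough sketch follows (the exact parameters and values Kicksecure sets vary by release, so treat these as illustrative rather than a definitive list):

```
# A few common kernel hardening knobs (standard sysctl names; the values
# your particular system uses may differ):
sysctl kernel.kptr_restrict kernel.dmesg_restrict kernel.yama.ptrace_scope

# Show which AppArmor profiles are loaded and which are in enforce mode:
sudo aa-status
```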
Do we worry about protecting the root account? Yes, but in many ways it’s a secondary pursuit to the above. It’s still something we’re pursuing (we actually have a whole project dedicated to this called `user-sysmaint-split` that we’ve been working on like crazy lately), but it’s not the end-of-the-world problem that Proudmuslim is making it out to be.
Lastly, I’d like to illustrate what it would take for the attack Proudmuslim references to actually be exploited. The attack is, in short, “run `upgrade-nonroot`, then instruct apt to give you a root shell if it runs into a conffile conflict.” This is not anywhere near as easy as they make it sound. For one, conffile conflicts in Debian and Kicksecure aren’t at all common unless you’re a sysadmin who messes with package conffiles. Even if you do mess with package conffiles, you’re not all that likely to run into conffile prompts, since the default configuration for packages isn’t changed very often. For two, anything you could do to intentionally trigger a conffile prompt requires root access already. Either you’d have to be able to install attacker-controlled packages onto the victim’s computer (at which point, just put your malicious code in a package postinst script, boom, compromise complete), or you’d have to be able to see that a particular package upgrade is coming through that would cause a conffile conflict prompt if a particular file was modified, then modify that file, and then upgrade the system. Modifying conffiles virtually always requires root access (if it doesn’t, your system is probably broken or a package on the system is very poorly designed), so if you can modify conffiles, you don’t need to bother with getting an apt conffile prompt. You have root access already.
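To make that last point concrete, conffiles are tracked by dpkg and live in root-owned locations under /etc, so a non-root attacker can’t quietly modify one to set up a prompt; a quick illustrative check (openssh-server is just an arbitrary example package here):

```
# List the conffiles dpkg tracks for a package, with their recorded checksums:
dpkg-query --showformat='${Conffiles}\n' --show openssh-server

# Conffiles under /etc are owned by root and not writable by normal users:
ls -l /etc/ssh/sshd_config
```

A conffile prompt only shows up during an upgrade when the file on disk has been changed from the packaged version (and the new package ships a different default), and making that change requires write access under /etc, i.e. root.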
So ultimately, this really isn’t much of a problem. Is it a problem at all? I’d say yeah, sorta, and most likely we’ll fix it once we push out `user-sysmaint-split`. But this isn’t even remotely close to the end of the world.
If you’re worried about this issue, and want to fix it now, you can run `sudo mv /etc/sudoers.d/upgrade-passwordless /etc/sudoers-d-upgrade-passwordless.old` or something similar. This will prevent `upgrade-nonroot` (and the `apt-get-update-plus` command it uses in the background) from being run as root without a password.
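If you want to double-check that the change took effect, `sudo` can list the rules that currently apply to your user; after moving the drop-in out of the way, the passwordless entry for those commands should no longer appear:

```
# List the sudo rules for the current user; the NOPASSWD rule for the
# upgrade commands should be gone once the drop-in file has been moved.
sudo -l
```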