Kicksecure’s development strategy has for the most part been to harden Debian as much as is practically possible. This seems to have been working well, but there are some worries about the safety of some of the rather outdated software that Debian provides us. In particular:
Critical, extremely complicated software like Firefox ESR and Thunderbird can sometimes sit at a known-vulnerable version in the latest Debian Stable for as long as a month. (A rough lag check is sketched after this list.)
Software that isn’t well-maintained may be left in a vulnerable state long after the vulnerability is made public. (This happened with the vulnerability discovered in Debian’s zuluCrypt packaging a couple of weeks ago or so.)
Tangential to the above, bugs that aren’t considered a big deal may never be fixed in a stable release of Debian. This isn’t necessarily a security issue, but it is a problem from a usability standpoint.
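To make the first point concrete, here is a rough sketch of how one could check whether the installed firefox-esr lags behind upstream. The Mozilla product-details endpoint and the version-string cleanup are my assumptions, so treat this as illustrative rather than something we’d ship as-is:

```python
#!/usr/bin/env python3
# Sketch: compare the installed firefox-esr version against the latest
# upstream ESR release. Assumes a Debian system with firefox-esr installed,
# and assumes Mozilla's product-details JSON keeps its current shape.
import json
import re
import subprocess
import urllib.request

UPSTREAM_URL = "https://product-details.mozilla.org/1.0/firefox_versions.json"

def installed_version(package: str) -> str:
    # Ask dpkg for the installed Debian version string.
    out = subprocess.run(
        ["dpkg-query", "-W", "-f=${Version}", package],
        capture_output=True, text=True, check=True,
    ).stdout
    # Strip epoch and Debian revision, e.g. "115.14.0esr-1~deb12u1" -> "115.14.0".
    return re.sub(r"esr.*$", "", out.split(":")[-1])

def upstream_esr_version() -> str:
    with urllib.request.urlopen(UPSTREAM_URL) as resp:
        return json.load(resp)["FIREFOX_ESR"].replace("esr", "")

if __name__ == "__main__":
    local, remote = installed_version("firefox-esr"), upstream_esr_version()
    print(f"installed: {local}  upstream ESR: {remote}")
    if local != remote:
        print("firefox-esr may be lagging behind upstream.")
```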
These things would be more-or-less resolved if Kicksecure was a rolling-release distro, for instance by rebasing our code onto a rolling-release variant of Debian (if one were to exist - Debian Testing comes close, but it doesn’t quite serve the desired purpose). This however is not necessarily more secure:
Rolling-release distros rolled out backdoored XZ code to end-users during the xz-utils attack. Thankfully (most of) those distros don’t seem to have been targeted by the backdoor at all, but in principle a backdoored upstream or similar supply chain attack can have a much worse impact when using a rolling release.
Rolling releases steadily introduce the latest bugs and vulnerabilities from upstream into the distro on a regular basis. (There isn’t really any way around this; it isn’t a criticism so much as a statement of fact.) For software that isn’t so complex as to be riddled with unknown vulnerabilities the way web browsers are, this is a potential increase in danger.
Rolling releases can break a lot. An experienced user can go years or even over a decade without breaking a rolling release install beyond repair, but an inexperienced user can end up with a broken system within a few weeks if they accidentally do something like upgrade their system without checking what packages will be removed in the process (a minimal check for this is sketched below). (I’d like to think I’m an experienced user, and I’ve nuked Debian Sid virtual machines multiple times doing this. Not fun.)
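As an aside, a simple guard against that particular failure mode is to dry-run the upgrade and refuse to proceed if anything would be removed. A minimal sketch, assuming apt-get’s simulation output format (“Remv foo [version]” lines) stays stable:

```python
#!/usr/bin/env python3
# Sketch: simulate a dist-upgrade and list packages that would be REMOVED,
# so an inexperienced user doesn't nuke their install by accident.
import subprocess
import sys

def packages_removed_by_upgrade() -> list[str]:
    out = subprocess.run(
        ["apt-get", "-s", "dist-upgrade"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Simulation lines look like: "Remv libfoo1 [1.2-3]"
    return [line.split()[1] for line in out.splitlines() if line.startswith("Remv ")]

if __name__ == "__main__":
    removed = packages_removed_by_upgrade()
    if removed:
        print("Upgrade would REMOVE:", ", ".join(removed))
        sys.exit(1)
    print("No removals; upgrade looks safe to apply.")
```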
It might be possible to combine the advantages of both stable and rolling releases by keeping Kicksecure’s stable base, but offering “rolling” versions of specific packages like web browsers, email clients, (maybe?) media viewers, etc. Anything that has to handle highly complex untrusted data would be a good candidate for this sort of thing, especially software with a rapid release cycle. We’re already moving in this direction to some degree with Browser Choice, which is in active development. (A rough sketch of the APT plumbing follows below.)
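For illustration, here is roughly what the APT plumbing could look like, written as a small script that installs a pin file. The “rolling-apps” suite name is hypothetical; backports or a dedicated Kicksecure repository could play the same role:

```python
#!/usr/bin/env python3
# Sketch: install an APT preferences pin so that only selected "rolling"
# apps track a newer suite while the base system stays on stable.
# The suite name "rolling-apps" is hypothetical.
from pathlib import Path

ROLLING_PACKAGES = ["firefox-esr", "thunderbird"]  # candidates from this thread

PIN_TEMPLATE = """\
# Pull only the packages below from the (hypothetical) newer suite.
Package: {packages}
Pin: release n=rolling-apps
Pin-Priority: 990

# Everything else stays on stable.
Package: *
Pin: release n=rolling-apps
Pin-Priority: -1
"""

def write_pin(path: str = "/etc/apt/preferences.d/rolling-apps") -> None:
    Path(path).write_text(PIN_TEMPLATE.format(packages=" ".join(ROLLING_PACKAGES)))

if __name__ == "__main__":
    write_pin()  # needs root; run apt update afterwards
```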
What are everyone’s thoughts on the above? Where could our docs use improvement? Are there ideas for how to get the best of both worlds (rolling and stable) other than just a stable base with specific rolling apps? What other apps should we think about making rolling, or recommending that people install rolling versions of if they use them? Are there potential pitfalls we need to avoid with rolling apps? (On this last point, we probably should not do what KDE neon did to try to make the Qt and KDE stacks rolling on top of base Ubuntu - that approach broke things for people alarmingly frequently.)
You're assuming xz had only this one vulnerability, which didn't get through because Debian wasn't on the latest version. OK, but what about the xz that is already inside Debian? Just because one widely exposed vulnerability didn't get through doesn't mean the software you are running is secure to begin with. Let alone how many bugs or vulnerabilities get fixed upstream without Debian (or any stable distro) ever being notified, until it's too late.
Yep, this is the reality, but how is this point not an argument against stable as well? Because what is "stable"? Stable software (especially in the Debian context) is a bunch of code, likely still full of vulnerabilities and bugs, frozen at a certain state. So does having stable prevent security issues? No. Prevent bugs? No. It actually introduces a new dilemma: frozen software with bugs (graphical or otherwise) will not be fixed even when the fix already exists upstream.
Bottom line: the software you are using is either good or a mess, and only the upstream developer is to blame for that. Getting bleeding-edge fixes for features and vulnerabilities is better than sitting on old, dirty code waiting for fixes (some of which have already been released upstream for years).
Note about Debian: Debian even modifies software in ways that make it behave strangely or introduce new bugs, to the point where the package's upstream has no idea why it is doing what it is doing, e.g. KVM + Debian with nftables (Debian modified it for the worse, solely for its own purposes).
A stable base system with rolling user-facing software is the industry-standard approach (Android, iOS, more or less Windows…); even Fedora and openSUSE have moved in this direction.
The ideal would probably be a microkernel plus native apps, like HarmonyOS, but that isn't achievable by a small development circle, a community, or hobbyists.
(Although even with this design, separating the base from the user apps can still give better security, especially if they run in VMs.)
I think you’re confusing bugs and backdoors here. It’s true that vulns in software shipped by Debian or other stable releases are liable to remain unpatched in various ways, and I explicitly list this as a downside of stable releases in the linked document. But the xz-utils “vuln” wasn’t a vuln, it was deliberate malware introduced to cause harm. Debian and other stable-release distros didn’t update to it immediately, so there was time for the security community to notice and raise alerts about the backdoor before it trickled down the pipeline to stable release users. Rolling release users got the backdoored version.
True, and again, I point that out in the linked document.
This is a very valid argument (more-or-less the same one I’m making) for some software like Firefox. But for, say, SSH, do I really want to always pull in the latest shiny new thing, complete with new vulns like regreSSHion? Probably not. The codebase doesn’t move nearly as fast, and so it’s practical to just backport fixes as necessary.
Indeed. Virtually all distros do that, including rolling and semi-rolling releases.
And when an RCE is discovered in an app and the developers say "oh sorry, this is a bad bug", should I assume it wasn't a backdoor just because they phrased it nicely? This looks more like a matter of wording/phrasing to me. Yes, I would believe their claim if there is no bad history or any other reason to believe otherwise, but technically speaking I can't verify it.
What we should do instead is:
Assume all software is bad, except the few pieces of software that we either develop ourselves or know are unlikely to be backdoored.
Sandbox every piece of software (especially in a VM), since it is assumed bad.
Find an automated mechanism to examine the code, whether in the repository or at install time (e.g. a scanner searching for malicious patterns).
Add policies which block apps with certain suspicious behaviors, like using Unicode in suspicious ways, or at least flag them (similar to Flatpak package flags); a rough sketch of such a scanner follows below.
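To make the Unicode point concrete: such a scanner wouldn't need to flag every non-ASCII character, only invisible and bidirectional control characters of the "Trojan Source" kind. A rough sketch; the character list is my own non-exhaustive guess:

```python
#!/usr/bin/env python3
# Sketch: flag source files containing invisible or bidirectional Unicode
# control characters (the "Trojan Source" trick), while leaving ordinary
# non-ASCII text such as names alone. The set below is non-exhaustive.
import sys
import unicodedata
from pathlib import Path

SUSPICIOUS = {
    "\u202a", "\u202b", "\u202c", "\u202d", "\u202e",  # bidi embedding/override
    "\u2066", "\u2067", "\u2068", "\u2069",            # bidi isolates
    "\u200b", "\u200c", "\u200d", "\ufeff",            # zero-width characters
}

def scan(path: Path) -> list[tuple[int, str]]:
    hits = []
    for lineno, line in enumerate(path.read_text(errors="replace").splitlines(), 1):
        for ch in line:
            if ch in SUSPICIOUS:
                hits.append((lineno, unicodedata.name(ch, hex(ord(ch)))))
    return hits

if __name__ == "__main__":
    for arg in sys.argv[1:]:
        for lineno, name in scan(Path(arg)):
            print(f"{arg}:{lineno}: suspicious character {name}")
```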
That's correct, and the stable release gets the backdoors which haven't been discovered yet, e.g.:
And if you look into this example, they removed a crucial directive that had mitigated an earlier vulnerability. If you tell me this isn't a backdoor because of xyz, then from a technical point of view it doesn't differ from the xz issue.
Future answer to this question: yes, there is no better way around it.
Sandboxing protects the system from the application.
Sandboxing can prevent exploitation of some types of vulnerabilities, such as when a seccomp sandbox is in use and any exploitation attempt requires a disallowed syscall. (A minimal sketch follows below.)
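A minimal sketch of the seccomp case, using libseccomp's Python bindings (python3-seccomp on Debian); blocking execve is just an example of a syscall an exploit payload might need:

```python
#!/usr/bin/env python3
# Sketch: a seccomp filter that kills the process if it ever calls execve.
# A payload that relies on spawning a shell dies here even though the
# vulnerability itself was triggered. Requires python3-seccomp.
import os
import seccomp

def install_filter() -> None:
    # Allow everything by default, but kill the process on execve.
    f = seccomp.SyscallFilter(defaction=seccomp.ALLOW)
    f.add_rule(seccomp.KILL, "execve")
    f.load()

if __name__ == "__main__":
    install_filter()
    print("filter loaded; any execve from now on kills this process")
    os.execvp("sh", ["sh"])  # killed by the seccomp filter
```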
Sandboxing (at least upstream/distribution default sandboxing) cannot prevent backdoor exploitation. For example, I don’t see how any sandboxing could have stopped the Bitpay Wallet Malicious Backdoor.
Also nice, but it wouldn't have found the above two examples. In fact, it didn't find them fast enough.
Static analysis and whatnot. But it produces false positives; static analysis output often requires manual review to confirm true positives.
So that cannot be fully automated either. Nice for perhaps finding vulnerabilities, but not backdoors.
Invisible Malicious Unicode Risks - then we could flag a large percentage (guesstimated; could easily be 50-90%) of all source code files and packages.
A letter such as ó in a developer's name in a source code file?
That’s Unicode. Applications with multi-language support? A ton of Unicode.
Unfortunately, these are hard engineering problems.
If it were easy to fix, projects would have taken action. Bug reports:
Yeah, it "looks" like one thing, but it's "surely" something else. Take regreSSHion: how can you verify it's not a backdoor?
Whether it's xz or Pegasus or whatever, which are much more complex/sophisticated than this, we just can't know that it wasn't done intentionally except by guessing, by what people say, or by some website announcement. These are unverifiable claims, and from a technical point of view they are the same thing (whether you break it intentionally or not doesn't matter; only the phrasing differs, the end result is the same).
True, true, true, but that's the maximum you can have. Keep improving these areas, because no better way exists. These solutions are not 100%, but nothing better exists, especially when you have scattered apps/code developed by different people, in different languages, by developers who don't always follow best practices, etc. So to get some control over this mess, there are, to my knowledge, no better ways to handle it than the suggestions I posted.
Thank you for the video; it actually aligns with my suggestions:
Update forward
Roll backward
Scanners
(I even added more measures above.)
All covered. He likewise distinguishes between intentional vs. non-intentional only terminologically, without showing a practical way to tell them apart, which is what I said before.
He talked about other suggestions like pinning certain dependencies or manually removing the infected dependency… but we have seen how impractical this is at the OS level.
Though I'm not a fan of sudo/su/root (the whole tool and concept should be removed altogether), this is just one tiny but critical example of something that can ruin many users' lives because they depend on this obsolete "stable" concept.
wha… ok, excuse me for a moment, but this CVE is just patently ridiculous. Since when is it a software security vulnerability if a randomly flipped bit in RAM can cause a bypass? If the hardware fails or behaves in a completely unexpected way, that is firmly the hardware’s fault. I am not about to, and I expect no one else to, start programming effectively radiation-hardened software because of RowHammer, unless that is explicitly marked as my job. Either an OS-level mitigation or a hardware replacement needs to be responsible for this. The solution to RowHammer is not to make software resistant to bit flips, it’s to keep the bits from flipping in the first place.
(sorry for the rant, you do have a decent point in general, just this particular CVE is crazy.)
A ton of software would require massive refactoring to follow that more rowhammer-resistant style. A ton of CVEs could be filed in a ton of projects. Makes no sense.
Or a compiler hardening feature to automate this style.
Because of the judgement of the Debian security team.
is more an hardening against hardware bug (rowhammer) than a security fix per se
part of the code in the fix commit are not built because debian use PAM: plugins/sudoers/auth/sudo_auth.[ch]
i.e. not applicable.
Which seems correct and unopposed.
They got talked into it and changed something at the sudo level that should be fixed at a lower level instead. So none of this makes any sense.
Debian can say whatever they want; whether it's minor or not is unrelated to the concept. (Though they didn't say Debian is immune to this or anything; it's vulnerable (no DSA). It's like saying "yeah, not a big deal"… ok… LulzSec.)
Focus on the concept: the concept is flawed, regardless of the impact.
It is desirable to include references which uniquely identify the issue, such as a permanent link to an entry in the upstream bug tracker, or a bug in the Debian BTS. If the issue is likely present in unstable, a bug should be filed to help the maintainer to track it.
Lack of CVE entries should not block advisory publication which are otherwise ready, but we should strive to release fully cross-referenced advisories nevertheless.
Read the research paper: the study was done by attacking Ubuntu 20.04.01 LTS (yeah, with PAM…), and they gained root access, period. (Whatever intermediary/package manager sits in between can BS whatever they want.)
Should we also go report hundreds of CVEs demanding radiation hardening, for better security on airplanes and space vehicles and against radiation-based attacks?
You mean, say a research institute found a vulnerability affecting thousands of projects because they all use the same library or share the same code issue, etc.; if the research institute can inform all of them, is that a bad thing?
And if not by them, then someone who saw the study informed the others… this is a good thing; I'm not sure how that can be bad.