A package is only going to be as secure and up-to-date as its upstream developers made it. There must be no interference or external involvement in adding code to or removing code from any package (no package maintainers; only enough work to make the package run as-is from upstream).
Why
We have the Debian design, and we’ve seen how flawed it is when it comes to security and feature fixes. It’s horrible in every aspect. What we get from Debian is, at best, working software, but that doesn’t mean it’s secure, up-to-date, or even always stable. Issues like packages crashing on shutdown when clicking X, or freezing on some specific action Y, are common (because they entered the distro before the freeze deadline without upstream fixes), and when such bugs occur they are unlikely to be addressed in a timely manner.
A major security flaw in Debian and Debian-like philosophies is that if the upstream of a package fixes a security issue and ships it to its users, Debian remains unaware of the fix and will not include it in its own package unless it is documented or assigned a CVE. As a result, users can remain vulnerable for up to perhaps an entire five years.
Another issue is the trust complexity in the delivery chain. The more people who can add, remove, or manipulate code, the more trust becomes a concern. If the code goes from developer X through intermediary Y (or even more intermediaries) before reaching the user, then the user must trust not only X but also Y and everyone else in the chain. Moreover, X doesn’t know Y or Y’s practices, so X must also trust Y not to tamper with the code before it reaches the user. This design introduces unnecessary trust assumptions. Code must be delivered directly from X to the user.
Note: Operating system developers should only touch core/system-level packages, such as the kernel, the desktop environment, VM configs, and the like.
Maintainers can add bad configurations and degrade good features even when upstream is doing it right and the maintainers don’t know the solution, e.g. KVM + nftables in bookworm.
Or they surrender and do nothing: they can’t handle adding a package that doesn’t fit the keep-it-old-and-patch-small model but is instead continuously updated and ships many features at once, e.g. VirtualBox.
…So if someone has the ability to modify the upstream code before it reaches the user, he can do it for good or for bad, and it’s wrong to leave this hanging on blind trust in people. Good or bad code is the responsibility of the main/upstream developer, not the distro/OS. This is how Windows, Google, Apple, etc. operate: if they find an app having issues, they notify upstream or simply remove it, and that’s it.
An upstream issue: if not fixed, then the package is flagged as non-reproducible due to upstream issues 123. Tickets for upstream; if they don’t fix it, then it is what it is.
If you want to have your own pure distro, you should take the original kernel code and add your own configs, following the project policy:
One cannot educate oneself based on news alone. News is all about sensational, exceptional, negative, and current events. In that case, one will lack the required foundational knowledge.
Medical analogy:
To be able to research and contribute to improved heart surgery procedures, one needs to be a heart surgeon (or other qualified medical researcher). For example, I am not active in this field, hence cannot talk with certainty or demand heart surgeons’ time by making suggestions on how to change their procedures.
Back to Linux distribution maintenance / operating system development:
To be able to contribute to Linux distribution maintenance, one needs to be a Linux distribution maintainer.
The problem is that Linux distribution maintainers are not particularly good at marketing, at advertising their hard work. They’re not creating engaging texts, flashy graphics, arguments, memes, and videos. Hence, their hard work easily goes unnoticed.
What do all Linux distributions, no matter how different they are, have in common? They do packaging. (And if there is any that does not, it will have scripts or other files which serve this purpose.)
Alright. So you’ll start with a source code folder, such as the source code of grub2. A selection of files from the grub2 source code folder:
asm-tests
build-aux
conf
config.h.in
configure
configure.ac
INSTALL
and so forth
But a different structure is required for the binary installable package:
* etc/grub.d/10_linux
* usr/bin/grub-install
* and so forth
How do you get from source package to binary installable package? Answer: packaging. Who provides the package? The distribution maintainer. Who distributes the package? The distribution.
Alternative? Linux From Scratch. Read upstream’s readme, manually install the dependencies, and follow upstream’s instructions for compilation and installation, if available.
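Either way, the underlying mechanics look roughly like this. A minimal sketch, assuming a generic autotools-style upstream such as grub2; the options, paths, and the pkgroot staging directory are illustrative, not any distribution’s actual packaging:

```
# Build inside the upstream source folder (generic autotools flow).
./configure --prefix=/usr --sysconfdir=/etc
make

# Install into a staging directory instead of the live system.
# The staging tree mirrors the binary package layout shown above,
# e.g. pkgroot/etc/grub.d/10_linux, pkgroot/usr/bin/grub-install.
make DESTDIR="$PWD/pkgroot" install

# A packager then archives pkgroot together with metadata
# (dependencies, maintainer scripts, ...) into a .deb, .rpm, etc.
```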
We want a console login. How to do that? The standard Open Source tool for that is getty. But getty, even if compiled and lingering on the hard drive as a perfectly compiled binary, does nothing. It needs to be started. How to start it? The init system starts it. How does the init system start it?
The init system requires a configuration file.
systemd: a getty systemd unit file
SysVinit: a SysVinit init script / service configuration file
Who provides the init system configuration file for getty? Upstream getty? No.
Then who does the Linux distribution integration work? The distribution maintainer, the Linux distribution.
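For illustration, here is a minimal sketch of what such a getty systemd unit file can look like. It is simplified; the unit files distributions actually ship carry more settings, and the agetty options shown are illustrative:

```
# getty@.service — simplified sketch of a distribution-provided unit.
# Upstream getty/agetty does not ship this integration file.
[Unit]
Description=Getty on %I
After=systemd-user-sessions.service

[Service]
Environment=TERM=linux
# Spawn a login prompt on this instance's terminal (e.g. tty1).
ExecStart=-/sbin/agetty --noclear %I $TERM
Type=idle
Restart=always

[Install]
WantedBy=getty.target
```

It is then enabled per terminal, e.g. `systemctl enable getty@tty1.service`.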
This process is an absolute necessity. There is no (universally accepted) standard (file structure, compile command, init system, compile options, …) that all upstream projects follow that would completely avoid any necessity for packaging work. Even if there were such a standard or convention, there would not be enough participating upstreams to build a functional Linux distribution.
Unfortunately, all of this necessary packaging work allows for vulnerabilities and/or malicious backdoors to be injected.
There are certainly different opinions on how to package, what should and should not be done, and other specifics. That’s why many different Linux base distributions exist. However, unfortunately, it’s not possible to completely wipe packaging away. Multi-billion surveillance capitalism corporations have the power to do that, by dictating a single delivery standard to app developers. Open Source Linux distributions do not have this power due to the organisational differences, which are documented here: Linux User Experience versus Commercial Operating Systems
That is an unfortunate Arch Linux incident. For context:
As established above, packaging is unavoidable. Packaging files (and packages) must either be trusted or, preferably, audited.
This issue could not have been prevented by a “no packaging policy”. That option does not exist.
Similar to above.
Packaging will be imperfect and there can be issues, yes, but it is unavoidable.
If there were a suitable base distribution that does that and is of higher quality, then that would be an option.
I haven’t seen any Linux distribution yet that doesn’t add patches. They may patch more minimally, and the policy on what is acceptable versus unacceptable probably differs, but they do patch.
Manjaro (probably similar to Arch Linux’s packaging): Packages / core / grub · GitLab (refer to the patch files starting with 000)
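As an illustration of what such a packaging repository looks like, the layout is roughly as follows (hypothetical patch file names, modeled on the 000-prefixed patches mentioned above; the build script applies them on top of the upstream source before compiling):

```
grub/
├── 0001-00_header-add-custom-color-variables.patch
├── 0002-10_linux-detect-other-installs.patch
├── ...
└── PKGBUILD   # build recipe: fetch upstream source, apply patches, build
```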
In that case, we realistically would not reach reproducible builds. I decided on preferring patches + reproducible builds over patch-less but unreproducible builds.
In any case, bootstrapping a base distribution at this point is entirely unrealistic. We don’t have even remotely enough developers / funding to make that happen.
Kicksecure for the foreseeable future will remain a derivative of a base distribution.
More or less, that was an explanation of the issues, not solutions for them:
That’s a system base/core thing; it’s the responsibility of the OS maintainers to figure that out.
Making a package work when installed from the repository is, for sure, the OS maintainers’ job, but they should not modify the upstream code to remove certain things, or add things, or keep it old, and so on.
It’s really achievable, just a simple thing: is the upstream doing 123 or not? If not, then tickets, and until it’s fixed, bye. It’s not hard to remove browser X because it’s dead or vulnerable or the like.
Unrelated: what happened through the AUR can happen through official repositories; same power, same basis of blind trust, etc.
Kurt Roeckx was a Debian maintainer. Like I said before, it doesn’t matter; I’m talking about the flawed concept, not who did it.
Unrelated: it’s not about packaging. Debian played with the upstream package to keep it using iptables because they are stuck with it and not yet on nftables, due to the keep-it-old policy. If Debian were using the latest updates, it wouldn’t have this issue.
Important Note: App-per-VM will provide what the app needs inside the VM, meaning if upstream assumed glibc would be there, then it will be there; if musl, then musl; and so on. So there won’t be any need for a patch to make apps work on the distro, because the apps will not be tied to what the distro itself uses, only to the VM containing them.
It is what it is. If the upstream is lazy or dumb, then let it be; we won’t interfere to make their code better or worse.
Note: since the base system/core is the responsibility of the OS maintainers, patches are allowed there anyway, so it must meet the best expectations for reproducibility/bootstrappability.
Indeed. I hope one day a distro will change its behavior toward a better future.
Note: Fedora is making some extreme changes for its future, worth keeping an eye on (in the Linux kernel world).