Rescue mode is like giving a backdoor to your own OS or software. If you break your system or forget your password, it’s your responsibility, no one else’s and no help will be provided, because there’s nothing anyone can do.
This also encourages users to maintain proper backups of their data, whether on their own hardware or in the cloud.
Additional considerations, such as protecting the kernel command line, unikernel, etc., will happen during the implementation of Verified Boot. A proper threat model and holistic concept are planned.
Depends what you mean by rescue mode.
If the operating system is no longer booting, then if the user still remembers their BIOS password and/or full disk encryption password, booting the computer from external storage for the purpose of data recovery will still remain possible and supported in so far as we’re providing documentation or pointers on how to do that.
I guess the feature you’re looking for isn’t the absence of a recovery mode as such. Instead, the feature you’re looking for… I am not sure it has a clear name yet.
Do you want…?
A) Hardware-backed Full Disk Encryption (FDE): Encryption keys are stored in hardware (like a TPM or otherwise), meaning even if the disk is removed, it cannot be decrypted without the original device. Plus, (and/or)
B) Boot Device Lockdown: It shall not be possible to boot any other operating system besides the one installed on the internal, non-removable hard drive? Plus, (and/or)
C) Bootloader Lock / Locked Bootloader: Common in iPhone / Android devices. Prevents flashing or booting of alternative OSes.
These kinds of features - if hardcoded - come with issues. They add obfuscation and make malware analysis / malicious backdoor hunting almost impossible.
Similar features A), B), C) can potentially be implemented with hardware and firmware support but without obfuscating malware / backdoor analysis. The firmware needs a secure BIOS password implementation that safeguards the boot device choice. Binding the internal hard drive to the device might be an optional opt-in feature but very far in the future, if at all.
macOS, for example: if you can’t remember your iCloud/Mac password or prove you bought the device from them, it’s game over and Apple won’t help whatsoever.
My suggestion is the same, just one step further: we can’t help either, even if you prove that you are the one who bought the device (because we must also be incapable of helping, otherwise it’s a backdoor). It’s like someone who encrypted their hard drive with VeraCrypt or zuluCrypt and forgot the password = sorry, can’t help you.
Once we have our own hardware, yes, all of these are the best options as they come with their own security benefits.
But since we are on the software level at the moment, we can just remove any ability to enter a system rescue mode, such as entering rescue mode from the boot menu, lowering the kernel load of services, or anything similar. The immutable system base will be identical for everybody, and rollback will be available to go back to the last version (in case a rare accident occurs, though it is very unlikely in this design because builds will be unified across all users).
This is already the case. If choosing full disk encryption in the installer, LUKS will be used. There’s no backdoor. We cannot help with recovery. Nobody can at the time of writing, except if weak passwords are used.
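For illustration, here is a minimal Python sketch of why recovery without the passphrase is not possible (this is not how LUKS works internally; LUKS2 actually defaults to Argon2id, and the salt and iteration count below are made-up example values): the disk key is only reachable through a slow key derivation function fed with the passphrase, so a forgotten passphrase leaves nothing but brute force.

```python
import hashlib
import os

# Illustrative only: derive a key from a passphrase, in the spirit of LUKS.
salt = os.urandom(32)      # stored in the header; not secret
iterations = 1_000_000     # deliberately slow to resist brute force

def derive_key(passphrase: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations)

disk_key = derive_key("example passphrase only")

# Without the passphrase there is no way to recompute disk_key;
# "recovery" would mean guessing passphrases one by one.
assert derive_key("example passphrase only") == disk_key
assert derive_key("wrong guess") != disk_key
```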
I don’t like this feature. It could also be considered an anti-feature, as it prevents raw disk backups and data recovery, and makes malware analysis almost impossible.
Can be a separate feature request. If popular, might become opt-in. Faster if contributed.
Unlikely.
Firmware feature. Can be user controlled by a secure, non-resettable BIOS password. This is something I am interested in and it might be realistic.
Can be a separate feature request.
(Why realistic? See verified boot forum discussion.)
Vendor specific lock and no unlock available for device owner: No, not planned. No support for the War on General Purpose Computing as per project values.
The ability for the device owner to physically remove the root disk, the ability to decrypt their own data if they know their encryption password, and the ability to perform malware analysis are crucial security features.
An obfuscated blackbox that cannot be scrutinized is the opposite of values and goals such as Open Source, Freedom Software, reproducible builds, bootstrappable builds.
If the hardware is freedom (or semi-freedom) and the OS is freedom, then locking both is not as bad as it looks. It’s like locking root or using a read-only immutable distro: locking the user out of insecure/threatening behavior on freedom software doesn’t make the software non-free. Harmful freedom is not realistic, nor should it be wanted. Also, if I’m a freedom hardware provider, I’m not obligated to make my hardware work with every OS; that’s like saying I’m obligated to make my OS run every app, also no.
It is a subversion of Free Software principles. Specifically, one of the core freedoms of free software is the ability to run modified versions of the software on your own hardware. If an operating system (or any system software) cannot be effectively forked - meaning you can modify and rebuild it, but not actually run your modification on your hardware - then the promise of Free Software is rendered hollow in practice.
This situation is a reductio ad absurdum of Free Software. The code is “free” in a legal sense, yet the user’s practical freedom is missing, turning the fundamental philosophy of Free Software into a kind of mockery or contradiction. As a response, version 3 of the GNU General Public License specifically forbids tivoization, requiring that users be able to actually install and use their modified versions on the hardware.
Locked to vendor: harmful, insecure, blocks backdoor hunting, stifles innovation and tinkering. I couldn’t have become a distribution maintainer if I wasn’t allowed to run my own code on my own hardware. Learning starts with small code modifications for fun, curiosity and personal projects. Not with a grandiose plan to sell vendor-locked hardware.
User controlled: can be secured, permits backdoor hunting, allows innovation and tinkering.
Vendor locked hardware means no freedom and bad security, because sophisticated malware infections cannot be investigated: malicious backdoored vendor images cannot be analyzed.
Many Linux distributions are working on reproducible builds because developers or build machines can be compromised. The vendor is taking steps to distrust itself.
When using vendor locked hardware, a compromised software vendor selectively targeting some users (or even all users) with a malicious backdoor rootkit could never be detected, which breaks the concept of the vendor distrusting itself.
Malware that can remain undetectable is bad.
There’s no obligation for many things. There’s no obligation to produce Open Source.
This isn’t reasoning based on obligations. This is reasoning based on chosen project values.
FDE also prevents malware analysis (if the user forgot the password, or the password was changed or corrupted); shall we backdoor it as well?
It’s like saying that since ransomware attacks are done by encrypting the hard disk, we should remove the ability to encrypt.
If someone wants to study something, they can take the OS together with the hardware, or install the OS separately, build it, and run it their own way. If someone can’t do something without lowering the security, that’s not an option.
The points posted above about hardware security restrictions have nothing to do with the software; it’s a hardware thing. “On your own hardware” is unrelated to free software, because the software is free; an obligation to make it run on xyz is not among the four freedoms, not even under the GPL (especially v2).
The OS is going to be similar to AOSP: users can download it, modify it, and even run it on x device(s), but devices sold by the OS provider will be similar to Pixel or Samsung, having their own security restrictions and being linked to specific device(s).
Note: There is no usable kernel I’m aware of based on GPLv3 (Linux and seL4 are based on GPLv2, Redox on MIT, etc.).
This point is irrelevant to the software, because even if both software and hardware are fully freedom, this point still says you must not make them into one piece?! It is an insane interference with the development process, and it’s a harmful freedom for software/user security, unless there are alternatives I’m not aware of that reach the same level of security.
AOSP: Google Android, GrapheneOS, CalyxOS, LineageOS, etc.
And dozens of hardware not made by Google support running AOSP.
So real-world evidence proves that you can run your free software on secure, contained devices without actually causing any harm to freedom.
ransomware: There will be obvious signs that there is malware. Ransomware is easily detected because it will be asking for a ransom. Easily detected: Yes.
malicious backdoor in vendor image or vendor over the air update on vendor locked hardware: Detection possible: No.
(Detection would require exploitation of a bug to force unlock.)
forced vendor lock without user unlock/relock doesn’t lower security: No, vendor lock reduces security.
no vendor lock or user unlock/relock: Good, backdoor hunting possible.
Not possible on locked hardware that supports no user unlock/relock.
It’s a mockery of Free Software. Hence, Free Software invented GPLv3 to contain the spread of tivoization.
Not going to happen by default. This tivoization feature request is declined.
Pixel supports bootloader unlocking at least so alternative operating systems can be installed.
Irrelevant.
These must indeed not be vendor locked without being user-unlockable. This is the viewpoint of the Free Software community. If hardware is vendor locked and the user cannot unlock it, that’s called tivoization and frowned upon. References:
The second reason why free software licences should prohibit tivoisation is that tivoisation burns the environment in which free software flourishes.
Why can GrapheneOS exist? Because Google [1] was nice enough to allow bootloader unlock and relock with user custom key on Pixel devices.
[1] And theoretically other devices, if bootloader unlockable and OS ported to that device.
Why can these OSs exist? Because:
either vendors support bootloader unlocking, or
bootloader has been force unlocked using security issues.
In a world where only Google, Samsung, Apple and other vendors produce hardware that is locked to vendor with no user unlock/relock, and no hacks, alternative operating systems cannot exist.
As explained, user unlockable is more secure because then the security community can perform offline malware analysis (Basics of Malware Analysis and Backdoor Hunting) and keep the vendor accountable.
And there are alternatives reaching the same and higher security.
Bootloader lock enforcement without user unlock/relock support is a business strategy decision with the goal of selling more hardware in the future and iron grip control of the ecosystem. It’s not a security feature that cannot be accomplished in any other way.
It’s part of Free Software ideology (refer to links above).
Roll the dice one more time: blame the developers who can’t open an encrypted hard disk with a forgotten password.
This article contradicts realistic evidence:
Researchers and analysts have always been able to investigate malware or vulnerabilities in highly restricted environments like iOS, macOS, Android, or even at the CPU level. They discovered Spectre & Meltdown, BIOS/UEFI rootkits, firmware attacks, etc. There are dozens of techniques to extract firmware, perform reverse engineering, conduct memory dumps, and more (known as forensic tools).
If the article claims this will prevent OS developers (like those at Google, Debian, Apple, Fedora…etc) from pushing malware, that’s not correct. With or without this restriction, such behavior won’t be stopped. In fact, these actions are often dismissed as “innocent mistakes” even when they are deliberate backdoors (but nobody can confirm it).
Freedom-respecting firmware or hardware may ease analysis, but either way it does not guarantee prevention of such threats, and expert techniques are still needed to do the investigations.
So, what’s being described as “perfect” is far from it.
In the case of ransomware, the malware makes itself visible to get the user to pay the ransom. If malware makers want persistence, it will be stealthy, hidden in code with millions of lines like the kernel (especially the Linux kernel); vendor lock or no vendor lock is unrelated.
Though my example was just that: since ransomware uses encryption, we should not remove encryption because ransomware uses it wrongly.
Prove that when the OS is locked to a hardware for the benefit of preventing device manipulation, this will reduce security.
I didn’t say it necessarily has to run on the hardware that I will sell; one can build/port it to their own hardware or install it in a VM or so (similar to the AOSP variants example).
It’s a security feature; it’s unrelated to software freedom because the software is free (as will be the hardware in the future).
Yeah, this discussion is mostly about the security behind tying the OS to the hardware while using freedom software. Tivoization is only bad under a certain license type for freedom software, but it’s not against the freedom of the software.
True, but it can become like Samsung anytime. We can use the same approach; it’s still better than the chaotic hardware and OS situation we have now.
Relevant in the sense that one of the most core components of the system already accepts hardware security restrictions; Linus has pretty good points talking about GPLv3 and TiVo.
GrapheneOS can exist on different hardware locked to GrapheneOS, can exist in a VM or on USB, etc. It has nothing to do with the software; there are no restrictions imposed on the software level.
Bottom line is that merging hardware and OS into one piece while using freedom license(s) is not against software freedom; rather it is the software freedom developer’s right, whether they want it this way or not. It is also proven to be more secure to build devices like this to avoid manipulation.
Forgotten password: blame user. → Reasonable, can be documented, expected.
Implementation of an operating system that allows for The Perfect Malicious Backdoor: blame the developers. → Not reasonable. An OS that wants to be Free Software and secure shouldn’t attempt to obscure malware analysis / hinder rootkit hunting.
What you’re describing is…
A) hard drive cannot be copied for offline analysis → security researchers suspect rootkit backdoor → security researchers find vulnerability to break vendor lock → hard drive can be copied for offline analysis → offline malware analysis possible.
Do you assume it can be locked down or that it cannot be locked down?
B) If hard drive can be copied for offline analysis → offline malware analysis possible.
C) If hard drive cannot be copied for offline analysis → offline malware analysis blocked.
You’re feature requesting towards state C).
Once reported, you or people with similar viewpoint will feature request to close these holes.
And what happens once perfected? State C) is true.
Verbatim citation required because I don’t know what you mean by that.
Stopped? No. It doesn’t stop:
intentional bugdoors,
intentional malicious backdoors.
But rootkit hunting is either:
“easier”, which increases the chance of detection (good), or
difficult and obfuscated (requires a vulnerability to break the vendor lock) (bad).
intentional bugdoors: probably yes. Non-obvious.
intentional malicious backdoors: no. Obvious.
This is already acknowledged in the wiki chapter.
Let’s start with simple, absolute language for laymen. We’ll be more nuanced later.
This is what makes the concept of a “perfect malicious backdoor” so dangerous.
“perfect” is quoted for a reason.
Now, let’s clarify the technical nuances.
More accurately, Android makes it very difficult, not literally impossible, to detect well-hidden malicious backdoors. Here’s why:
And if such a malicious backdoor were ever discovered and recovered, it wouldn’t be truly “perfect” because real perfection means it could never be found under any circumstances. In the real world, nothing is perfect. But Android’s design comes alarmingly close when it comes to enabling undetectable malicious implants on non-rooted, encrypted devices.
More than enough context for the “perfect” word.
It’s required to speak in simple language and absolutes to express the importance of the subject and to make it accessible to laymen.
I could add thousands of footnotes if I wanted to explain all technical nuances.
When conversing about complex technical subjects, all statements are either incomplete or wrong.
Malware can come from a variety of sources.
unauthorized third-parties finding a zero-day vulnerability, keeping it secret, compromising the system, installing a rootkit
developers turning malicious, bribed, threatened, hacked without knowing and adding a malicious backdoor knowingly or unknowingly.
Whatever the case, it should always be as simple as possible for security researchers to perform offline malware analysis. They shouldn’t be required to break the vendor lock first.
As per above.
Supporting the war on general purpose computing will literally result in that no longer being possible.
vendor lock-in violates Free Software principles, hence related to software freedom.
tivoization is universally considered bad within the Free Software ideology no matter which license.
These people like tinkering.
If they buy a device, they want to be able to modify the software. Either now or maybe in the future but at least in principle.
Of course, they also want to be able to run the modified software on their device. On the same device they already paid for. Not on yet another specifically unlocked device. If running modified software is blocked without any override available, they don’t like that.
Others that are only supporters of Open Source might have more lenient opinions on vendor lock-in.
Incompatible with project values.
Unfortunately, Linus supports only Open Source, not Free Software.
Kicksecure is a Free Software project. Not only Open Source.
As long as hardware is available that allows it.
If all mobile devices had unhackable locked bootloaders, no GrapheneOS would be possible on mobile devices.
Therefore, devices that support user unlock/relock should be one positive criterion when buying devices.
Nice in addition but mostly besides the point.
vendor locked software with no user unlock/relock option is a software restriction.
But it doesn’t matter if it’s a hardware restriction or software restriction. It’s incompatible with project values.
It’s correct that developers are allowed to decide this. I decided against it.
It might in theory have some small advantages in some threat models, which however are unimportant because it makes offline malware analysis much harder, lets a malicious backdoor remain undetected for longer, and makes rootkit hunting more difficult.
Google Pixel doesn’t have a hard vendor lock that cannot be unlocked by the user.
In other words, Google Pixel supports user unlock/relock.
What’s the unfixable security issue(s) that can only be fixed by hard vendor lock to Google only with no user unlock/relock allowed?
Now, more or less reasonable? Whether something is reasonable depends on who is judging the situation; what one person finds acceptable, another may not. This makes it difficult to discuss the point further.
An infected OS on hardware can be studied together without separation. If something affects either the OS itself or the firmware or so at an upstream level, these components are free software which can be easily inspected.
For example, take the detection of Pegasus: no jailbreak, no vendor compromise, fully proprietary, inseparably merged OS and hardware stack.
What you’re proposing would make attacks like decapping and other hardware/firmware manipulations trivial. Without signed, verified components working homogeneously with the OS/kernel, security becomes impossible. For anyone serious about security, this approach is unacceptable.
Your description is like:
Malware hunting becomes easier, but so does malware injection. It’s like leaving your door wide open: owners, guests, thieves, and stray animals all walk in freely with no authentication, no barriers.
And then: We’ll just monitor and eject intruders later.
But why not lock the door in the first place??
Prevention Is Better Than Cure - Ibn Sina/Avicenna.
Which licenses prohibit tivoization other than GPLv3? (Not even GPLv1/2 prevents this.) I’m not aware of any other license.
Insecure/testing-purposes hardware can be sold separately. OS variants/forkers can take the OS and base it on their own hardware, like Samsung or Huawei have done with AOSP… not the end of the world.
You mean more than a dozen threat models? E.g., IRATEMONK or JETPLOW, etc., would not work at the airport or in any situation before the device reaches the user, because it can’t be modified; everything is linked together.
I think I have made my points clear with the conclusion:
Mandatory Hardware-OS Binding: Prevents manipulation/tampering at any stage (manufacturing, shipping, or deployment).
Free Software Compliance: Binding software to hardware doesn’t violate freedom, both layers can (and must) use libre licenses like GPLv2 or any Copyleft license.
Inevitable for a Secure Future: No alternative exists to achieve true security without this integration.
But it cannot be inspected on the very device that is suspected of being infected.
non-vendor lock: For example, if someone suspects their computer being infected, all they need to do is find (or become) a hacker (some attendees commonly found at IT conferences). They can start rootkit hunting using custom kernel modules, blue pill, etc. (There is no need to first break the vendor lock or other acrobatics.)
vendor lock: Analysis of suspected infected device is hindered. (Not completely perfectly prevented. But it can be compared and one state - ease of analysis of rootkits - is better than the other - obfuscation.)
Right. This is actually a great example making my point.
Saying,
“vendor-lock doesn’t prevent rootkit analysis”
is similar to saying
“reproducible builds are unnecessary because compromises can be found through other mechanisms”.
But for any Linux desktop/server rootkit we usually have:
The malware binary.
The disassembly - commented assembler code - explaining exactly what is happening.
A detailed technical analysis.
The Pegasus analysis is much more superficial [2] than the analysis of sysinitd. To explain this statement, the following wiki comparison table has been created just now: Technical Malware Analysis: Traditional Linux vs. Android
Public indicators of compromise are insufficient to determine that a device is “clean”, and not targeted with a particular spyware tool. Reliance on public indicators alone can miss recent forensic traces and give a false sense of security.
Reliable and comprehensive digital forensic support and triage requires access to non-public indicators, research and threat intelligence.
Of the currently available mvt-android check-adb modules a handful require root privileges to function correctly. This is because certain files, such as browser history and SMS messages databases are not accessible with user privileges through adb. These modules are to be considered OPTIONALLY available in case the device was already jailbroken. Do NOT jailbreak your own device unless you are sure of what you are doing! Jailbreaking your phone exposes it to considerable security risks!
[2] The word “superficial” shall not be construed as a criticism. Given the technical limitations to malware analysis on Android, the Pegasus malware analysis is great.
It’s “only” malware. There are even Open Source, modular malware frameworks. These are not an issue by themselves - they still require a vulnerability in order to be used.
Depends on the exact airport situation.
A) In a situation where one is asked to provide the password (key disclosure law) under the threat of indefinite detention… Bad luck. It’s hard for technology to provide solutions.
B) Not even limited to airport situations: If an adversary gets physical access to a computer (or similar device, such as a phone), disappears with it and then comes back… That’s also a hard issue to fix due to relay attacks.
Malware injection through which mechanism specifically?
You didn’t answer my question:
I am sure Google, GrapheneOS, and others would be most interested to read your vulnerability report, if this was indeed an issue.
I didn’t say anything against signed, verified. Just only against vendor lock-in. Signed, verified is the whole point of verified boot, which is planned.
This is not the design that is going to happen once verified boot is implemented.
GPLv3, LGPLv3, and AGPLv3 only. The number of licenses is irrelevant for the statement I made.
License proliferation is to be avoided. If this set of 3 licenses preventing tivoization is sufficient, there is no need to invent more and more licenses just for the sake of it.
Creating a new license that gets approved by OSI and/or FSF and others is a lengthy and costly process. It then has to be presented on their respective mailing lists, and commentary by other legal professionals needs to be answered before the board makes a decision.
the license is the final result of an unprecedented drafting process that has seen four published drafts in eighteen months
It will lead to only huge companies having access to what you call insecure/testing-purposes hardware.
With vendor lock, you’re at the complete mercy of the vendor. The vendor can push an update either by mistake, under coercion, due to being compromised by malware or maliciously:
They can break the updater [1].
Declare the device obsolete, force-push an automatic update, prevent it from booting.
Accidentally push an automatic update that bricks the device forever.
Do you also want forced - no option to turn off - automatic updates? In that case, the perfect disaster scenario can be designed.
I’ve presented references with statements by FSF / FSFE on their position on tivoization, which is abundantly clear. To add yet another reference, see: https://ryf.fsf.org/
Anyone who doesn’t mind tivoization, please just call it Open Source. Not Free Software.
If vendor lock-in is sufficient freedom for you if you cannot replace the software on devices that you purchased, then you’re not sharing the values of this project.
Yes, people should be able to change the software on the same device that they purchased. They shouldn’t need to purchase another device specifically sold as unlocked.
It’s like something from reality, like having a door for your home: harder to see inside and harder to enter, but essential for keeping unwanted visitors out. However, very rarely (some people never experience it), weak spots like windows, broken walls, or a poor door implementation could still let a thief sneak in. Outsiders also can’t easily tell if something’s wrong inside the house. Yet despite these risks, this protection remains absolutely necessary.
Legal limitation according to my investigation for this case.
Full verified boot doesn’t protect you if the device is taken, a chip is decapped, malicious firmware is installed, or if the same is applied to an SSD, or USB ports are swapped with BadUSB ports, or xyz (check COTTONMOUTH or HOWLERMONKEY… here). This doesn’t contradict OS-BIOS verification.
So the hardware must be a full circle; leaving xyz areas unchecked means it can easily be taken, played with, and put back again. So only signed software components must be interacting with each other.
Note: This probably will also require some software to be burned into the hardware that can’t be changed or removed. Future upgrade to the hardware would be needed to make such core changes, but this is normal anyway - not for the sake of more profits, but for improving security.
Authority of xyz country can access the OS of specific user/s, but they can’t implement anything widely on every device entering their country, for example.
What I’m saying is how to kill this attack.
Attack scenario: Unlock the bootloader, load a malware custom ROM that looks like Google, lock the bootloader, then send the device to the customer. How many users will notice the difference or reinstall the original OS as a security precaution?
Compare this to Apple - I think Apple is the only one taking pioneering, advanced steps to solve these issues (but sadly all in nonfree way).
This is the unavoidable big picture - the product user is under the product maker. It’s a reality everywhere, every day. The freedom just eases the forking if you want to do it your way.
I’m not sure how that’s possible without upstream making something go horribly wrong with the update. It’s a very unlikely scenario.
But having it on for system updates - I don’t see it as a bad thing, as it will ensure the system gets updated on time. It can be optional but on by default, not a big deal.
Nobody should support this, because it’s done for profit’s sake. But on the other hand, you can’t have hardware/software remain secure and working forever, so sane obsolescence is an unavoidable/mandatory step.
I’m a freedom maximalist, meaning I love to have everything as free software because it’s the user’s right to know what they’re using and securely verify the code is safe.
But the freedom must come with a secure environment:
So Step 1: Free the software - good, we have free software
Step 2: But it’s not functioning well (ugly). Okay, let’s have:
Text editors helping, compiler hardenings…
Entire secure programming languages that avoid human mistakes
Great, we’ve pretty much covered that in our time.
Step 3: Secure the code path until it reaches the user: We are here (and we sadly lag far behind when it comes to hardware).
If there is a secure way to do that, I think we are going to have a happy world.
This is what we’ll hopefully figure out here one day.
Not much point for me replying to an analogy.
Why am I using analogies myself? I am using these in addition to technical explanations which are hard/impossible to understand for non-programmers but I am not using purely analogies.
The best and only thing hardware buyers can do is purchase devices that support user unlock/relock. Every purchase of a device that does not support unlock is a vote towards the end of general purpose computing access for mere mortals.
I couldn’t find any references for this.
But I found a much more detailed report:
Hence, wiki table replaced.
Indeed, Intel Boot Guard or an equivalent such as AMD Platform Secure Boot (PSB) or similar is required. A public key must be fused into Intel Boot Guard or the like.
The processor will then verify the (EFI) BIOS.
So while user unlock/relock can be provided for the level of which operating system to boot, it cannot be provided securely against physical attacks for the firmware itself.
BYOB (bring your own bios) - a system architecture where the device’s essential firmware is not stored in a fixed, soldered chip on the motherboard (as is typical in most modern computers), but instead resides on a removable storage medium like an SD card - might require more work to be secure from the physical attacks that you mentioned.
It may not be impossible to secure that either. But BYOB will probably not happen anytime soon (as no hardware vendor indicated interest) anyhow so there’s no need to develop a concept on how to secure that at this point.
So some things like the firmware can be firmware vendor controlled. However, freedom can be securely provided at the level of the operating system choice.
They can require that devices must be unlocked on demand from select people and if they don’t comply, they’re threatened with detention. I don’t see how technology can solve that.
Supply Chain Attacks (new wiki chapter created just now) are an issue indeed but in any case, vendor lock-in or no vendor lock-in.
How does one know they got an original iPhone from Apple rather than one that has been tampered with?
The problem with hardware is that it cannot be hashed. With software it’s possible to calculate a hash and compare, but with hardware this is impossible. See https://www.youtube.com/watch?v=Fw5FEuGRrLE on the topic of hashing (checksumming) hardware.
(S) NightSkies (NS) version 1.2 is a beacon/loader/implant tool for the Apple iPhone 3G v2.1. The tool operates in the background providing upload, download and execution capability on the device. NS is installed via physical access to the device and will wait for user activity before beaconing.
(S) Features:
• Retrieves files from iPhone including Address Book, SMS, Call Logs (when available), etc.
• Sends files and binaries to the iPhone such as future tools
• Executes arbitrary commands on the iPhone
• Grants full remote command and control
• Masquerades as standard HTTP protocol for communications
• Uses XXTEA block encryption to provide secure communications
• Provides self-upgrade capability
The only real way to compare if a firmware image has been modified is to be able to take the firmware from the device, take a hash of it, and compare it to the hash of a known good copy.
The image taken from the device must be obtained in such a way that any malware on the device can’t tamper with the image being exported (i.e. obtain the image before the system boots). I’m not too familiar with iPhone forensic tools, so I couldn’t tell you the cost to do something like this.
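As a rough sketch of the comparison step described in the quote above (assuming the firmware image has already been dumped through a trusted path before the system boots; the file name and the known-good hash below are placeholders, not real values):

```python
import hashlib
import sys

DUMP_PATH = "firmware-dump.bin"                     # hypothetical dump obtained before boot
KNOWN_GOOD_SHA256 = "<hash of a known good copy>"   # placeholder

def sha256_of_file(path: str) -> str:
    # Hash the image in chunks so large dumps do not need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

digest = sha256_of_file(DUMP_PATH)
if digest == KNOWN_GOOD_SHA256:
    print("firmware image matches the known good copy")
else:
    print("MISMATCH: firmware image differs from the known good copy")
    sys.exit(1)
```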
So once a user receives a vendor locked device that has been compromised through a supply chain attack, they cannot even find out that the device has been compromised.
BYOB (bring your own BIOS) - where one can easily extract, checksum and/or replace the firmware by replacing the SD card - sounds not so bad after all.
In any case, when someone receives a device, how can they be really sure they received a real device and not a fake device? Ok, it’s not so easy to copycat the latest, fastest device on the market. Maybe it’s just a thin client combined with a relay attack.
Hardware wallets also have this issue. Is it a real hardware wallet, have malicious parts been added?
Supply chain attacks, malicious parts are a hard one. Hardware implants, hardware replacements, and whatnot.
We can refer to this as the following:
Forged Verified Boot Attack: Re-locks bootloader using attacker’s key to fake a secure boot state. Device appears secure but isn’t. A variant of phishing where the OS mimics a trusted UI (e.g., “Google Pixel” lookalike with backdoors).
Relocking Attack: synonym, same as above
Malicious Bootloader Re-lock Attack: synonym, same as above
Solution?
Tamper-Evident Verified Boot: Makes any modification to the boot chain detectable to the user. See:
User Notified of Changes: During boot, if the device detects enrollment of a non-stock key, it shows a distinct warning screen and reveals the public key fingerprint (see the sketch after this list). This makes faked stock ROMs with re-locked bootloader keys highly detectable.
Remote Attestation: Potentially, in addition, but not as a primary defense.
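For illustration, a minimal Python sketch of the key fingerprint display (real implementations run inside the bootloader, not in Python; the key file name is a placeholder): showing the fingerprint of the enrolled signing key lets a user compare it against the vendor’s published value and spot a re-locked attacker key.

```python
import hashlib

def key_fingerprint(public_key_bytes: bytes) -> str:
    # SHA-256 fingerprint of the enrolled verified boot key,
    # grouped for easy visual comparison against the published value.
    digest = hashlib.sha256(public_key_bytes).hexdigest().upper()
    return ":".join(digest[i:i + 4] for i in range(0, len(digest), 4))

# Hypothetical: raw bytes of the currently enrolled (non-stock) key.
with open("enrolled-key.der", "rb") as f:
    enrolled = f.read()

print("WARNING: non-stock signing key enrolled!")
print("Key fingerprint:", key_fingerprint(enrolled))
# A faked "stock" ROM re-locked with an attacker key would show a different
# fingerprint than the one the OS vendor publishes.
```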
I think verified boot for the foreseeable future will be primarily useful against remote, software-based attacks. Not against hardware implants or fake hardware.
Verified boot can:
Verified boot makes sure that the boot process has not been tampered with. (So if malware previously modified the system, it can be spotted and the boot aborted before executing the malicious code.) And;
Depending on the availability of funding, one day we might formalize the threat model, discuss this with a firmware / hardware vendor and see what can be done.
Complex supply chain attacks: At this point, we have much lower hanging fruit to pick. That is, Verified Boot on traditional Linux.
That’s what the suggestion is saying: leaving the doors open/no doors is better than having closed doors to secure your home. But vendor means upstream, and upstream is the only entity to be trusted/responsible for the OS/hardware they provide anyway.
General-purpose computers are fun but insecure, a sad reality, but true. The wind does not blow as the ships desire, as they say.
“Though the lawsuit was filed in California, NSO Group made its source code available to view only in Israel by an Israeli citizen, which the judge called “simply impracticable”, according to the ruling.”
“NSO’s Pegasus code, and code for other surveillance products it sells, is seen as a closely and highly sought state secret. NSO is closely regulated by the Israeli ministry of defence, which must review and approve the sale of all licences to foreign governments.”
…Legal barriers.
Yeah, sadly I don’t see how that can be secure.
Nothing can be done to solve this. What we can do is ensure only the user can unlock it, nobody else (including the vendor/developer) should have this capability.
Similar to serial number verification or so:
Note: Multiple layers of protection/tampering detection will be used.
The question is: shall we have purely burned/fixed firmware that is irreplaceable and non-upgradable, impossible to change by design (mask ROM, OTP…), or firmware that is upgradable with a signed mechanism (more complexity, more attack surface to protect, but better usability)?
Bottom line to solve this, we need one complete ring in the chain of verified/signed software. Nothing will work unless process 123 is met. But yeah, a sealed circle means there is not much room to play with.
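As a rough sketch of the “upgradable with signed mechanism” option mentioned above (assuming a vendor Ed25519 public key burned into the hardware; the file names and the use of the Python `cryptography` library are illustrative, not a description of any existing implementation):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

# Hypothetical: the vendor public key is burned into mask ROM / OTP,
# so only updates signed by the matching private key are ever applied.
with open("burned-in-vendor-key.raw", "rb") as f:   # 32 raw Ed25519 key bytes
    vendor_key = Ed25519PublicKey.from_public_bytes(f.read())

with open("firmware-update.bin", "rb") as f:
    update_image = f.read()
with open("firmware-update.sig", "rb") as f:
    signature = f.read()

try:
    vendor_key.verify(signature, update_image)
    print("signature valid - update may be flashed")
except InvalidSignature:
    print("signature invalid - update rejected")
```

The trade-off described above shows up directly in such a design: a fixed key in ROM closes the update path to everyone but the key holder.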
Yep hopefully one day. The good thing is we now have this maturity of information to understand how future security will be shaped.
Once general-purpose computers are no longer available, that’s the end of small Linux distributions such as this one.
None of the following quotes seem to be a legal barrier.
…these quotes do not imply that security researchers are prohibited from or scared to release malware artifacts such as disassemblies due to fear of copyright laws. Specifically not in the case of Pegasus. Malware samples are shared and analyzed in public all the time.
This isn’t easy for sure. You’d probably need to be able to at least:
understand/write for example the following documents:
Might be some sort of remote attestation. Not easy and maybe even impossible to come up with a perfect implementation. Refer to the same wiki page Verified Boot.
As for end-user, desktop computers and notebooks, there does not seem to be any hardware vendors on Intel/AMD64 interested to provide such functionality. This might be due to lack of customer interest. The x86 market is shrinking.
On Intel/AMD64 we can be glad if we get at least OpenSIL. That might be the next realistic step after Sovereign Boot.
And if we’re really fortunate, maybe one day we’ll have Open Source Hardware. I’d prefer that any day over any proprietary co-processor.
The security co-processor design is to be considered in 10 to 20 years. That’s far away from anything realistic we’ll be working on anytime soon.
Due to lack of hardware vendor interest… And due to no decent performance fully Open Source Hardware RISC-V being available yet… And due to Kicksecure not being a hardware development project… This is too far fetched for now.