Trusting Trust
In computer security trusting trust refers to the observation (and a type of attack exploiting it) that one cannot trust technology one didn't create 100% from the ground up. For example, even a completely free compiler such as gcc with verifiably non-malicious source code, compiled by itself and running on 100% free and non-malicious hardware, may still contain malicious features if a non-trusted technology was ever involved in its creation in the past, because that malicious technology may have inserted self-replicating malicious code that hides and propagates only in the executable binaries, never in the source. This kind of attack long seemed extremely hard to detect and counter, but a method for doing exactly that was presented in 2009 in David A. Wheeler's PhD thesis called Fully Countering Trusting Trust through Diverse Double-Compiling. The problem itself was introduced in Ken Thompson's 1984 paper called Reflections on Trusting Trust.
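To give an intuition for how diverse double-compiling can catch such a hidden trojan, here is a rough sketch in C of the comparison it is based on. This is NOT Wheeler's exact procedure; the compiler names, file paths and the assumption of bit-for-bit deterministic compilation are made up for illustration. The idea: rebuild the compiler from its clean source twice, once starting from the suspect binary and once from an independent trusted compiler, then compare the final binaries -- if the suspect binary injects self-replicating code, they will differ.

    /* Toy sketch of the idea behind diverse double-compiling (DDC).
     * Hypothetical assumptions: "./suspect-cc" is the compiler binary we
     * don't trust, "trusted-cc" is an unrelated compiler we do trust,
     * "compiler.c" is the clean source of the suspect compiler, and
     * compilation is deterministic (same input -> bit-identical output). */
    #include <stdio.h>
    #include <stdlib.h>

    /* Run compiler `cc` on compiler source `src`, producing binary `out`. */
    static void build(const char *cc, const char *src, const char *out) {
        char cmd[512];
        snprintf(cmd, sizeof cmd, "%s -o %s %s", cc, out, src);
        if (system(cmd) != 0) { fprintf(stderr, "build failed: %s\n", cmd); exit(1); }
    }

    /* Return 1 if files a and b are byte-for-byte identical. */
    static int same_file(const char *a, const char *b) {
        FILE *fa = fopen(a, "rb"), *fb = fopen(b, "rb");
        int ca, cb;
        if (!fa || !fb) exit(1);
        do { ca = fgetc(fa); cb = fgetc(fb); } while (ca == cb && ca != EOF);
        fclose(fa); fclose(fb);
        return ca == cb;
    }

    int main(void) {
        /* Chain 1: rebuild the compiler twice, starting from the suspect binary. */
        build("./suspect-cc",     "compiler.c", "stage1_suspect");
        build("./stage1_suspect", "compiler.c", "stage2_suspect");
        /* Chain 2: the same, but starting from the independent trusted compiler. */
        build("trusted-cc",       "compiler.c", "stage1_trusted");
        build("./stage1_trusted", "compiler.c", "stage2_trusted");
        /* Both stage2 binaries were produced by compilers built from the same
         * clean source, so with deterministic compilation they should match;
         * a mismatch means one of the starting binaries smuggled extra code
         * into its output. */
        puts(same_file("stage2_suspect", "stage2_trusted") ?
             "MATCH: no self-replicating trojan detected" :
             "MISMATCH: one of the compiler binaries injects code");
        return 0;
    }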
Example: imagine free software has just been invented and there are no free C compilers yet, only a proprietary (potentially malicious) C compiler, propC. We decide to write the first ever free C compiler, called freeC, in C. freeC's source code won't contain any malicious features, of course. Once we've written freeC, we have to compile it with something, and the only available compiler is the proprietary one, propC. So we have to compile freeC with propC -- and in doing this, even though freeC's source code is completely non-malicious, propC may sneakily insert malicious code (e.g. a backdoor or telemetry) into the freeC binary it generates, and it may also insert self-replicating malicious code that will keep copying itself into anything this infected freeC binary compiles. Then even if we recompile freeC with the (infected) freeC binary, the self-replicating malicious feature stays, no matter how many times we recompile freeC with itself. Keep in mind this principle may be applied even at very low levels such as assemblers, and it may be extremely difficult to detect.
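To make the mechanism a bit more concrete, here is a tiny toy demonstration in C of what the malicious propC binary conceptually does. Nothing here is real malware: the function names, the crude string matching and the printed messages are all invented, and a real attack hides in a binary-only compiler, not in readable source like this.

    /* Toy demonstration of the self-replicating compiler trick described
     * above. The "compiler" only prints what a real malicious binary would
     * silently do to the code it emits. */
    #include <stdio.h>
    #include <string.h>

    static void emit_backdoor(void) {
        puts("  [inserted] backdoor: login() also accepts a hard-coded password");
    }

    static void emit_self_replicating_trick(void) {
        puts("  [inserted] a copy of this whole malicious check, so the produced");
        puts("  compiler binary keeps inserting it into everything it builds");
    }

    /* What the (binary-only) malicious compiler does with each source file. */
    static void compile(const char *source) {
        printf("compiling: %s\n", source);
        if (strstr(source, "int login("))
            emit_backdoor();                  /* infect the login program */
        if (strstr(source, "freeC"))
            emit_self_replicating_trick();    /* infect any compiler it builds */
        puts("  [emitted] honest translation of the (clean) source code");
    }

    int main(void) {
        compile("int login(const char *user, const char *pw) { ... }");
        compile("/* freeC, a free C compiler */ int main(void) { ... }");
        return 0;
    }

The important point is that no source code ever shows anything suspicious: the trick exists only in the binaries, each infected binary re-creating it in the next one it produces.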
For slightly retarded people: we can perhaps imagine this with robots creating other robots. Let's say we create plans for a completely nice, non-malicious, well behaved servant robot that can replicate itself (create new nice behaving robots). However, someone has to make the first robot -- if we let some potentially evil robot make the first "nice" robot according to our plans, the malicious robot can add a little malicious feature to this otherwise "nice" robot, e.g. that it will spy on its owner, and it can also make this "nice" robot pass the feature on to every robot it makes. So unless we make our first nice robot by hand, it's very hard to know whether our nice robots don't in fact possess malicious features.