guile-devel

Re: Dijkstra's Methodology for Secure Systems Development


From: Panicz Maciej Godek
Subject: Re: Dijkstra's Methodology for Secure Systems Development
Date: Sun, 21 Sep 2014 11:04:07 +0200

2014-09-20 14:46 GMT+02:00 Taylan Ulrich Bayirli/Kammer <address@hidden>:
> Panicz Maciej Godek <address@hidden> writes:
>
> > [...]

> First of all let me say I agree with you; guile-devel is the wrong place
> to discuss these things.


Having this settled, let's proceed with our discussion :)
 
> I also feel uncomfortable about having been painted as the only person
> agreeing with Ian.  According to him I was able to understand his idea
> at least, but I'm not clear on how it ties in with the rest of reality,
> like the possibility of hardware exploits...
>
> Still:
>
> > [...] the back doors can be implemented in the hardware, not in the
> > software, and you will never be able to guarantee that no one is able
> > to access your system.

> Hopefully hardware will be addressed as well sooner or later.

How can we know that the enemy isn't using some laws of physics that we weren't taught at school (and that he deliberately keeps that knowledge out of schools)? Then our enemy will always be in control! This reasoning, although paranoid, seems perfectly valid, but it abuses the notion of an "enemy" by putting him in an extremely asymmetrical situation.

> On the meanwhile, we can plug a couple holes on the software layer.

> Also, if the hardware doesn't know enough about the software's workings,
> it will have a hard time exploiting it.  Just like in the Thompson hack
> case: if you use an infected C compiler to compile a *new* C compiler
> codebase instead of the infected family, you will get a clean compiler,
> because the infection doesn't know how to infect your new source code.

So if I get it right, the assumption is that the infected compiler detects some pattern in the source code, and that once we write the same logic differently, we can be more certain that, after compilation, our new compiler is no longer infected?
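To make that assumption concrete, here is a toy sketch of the pattern-matching step (all names invented, and compilers modeled as plain string functions; Thompson's actual hack of course worked on real C source):

```python
# Hypothetical sketch of a Thompson-style trojaned compiler: the backdoor
# fires only when it recognizes source text it was written to target, so
# an independently written compiler codebase passes through clean.

KNOWN_TARGET = "compile_token_stream"   # signature hard-coded by the attacker

def trojaned_compile(source):
    """Stand-in for a compiler: maps source text to a 'binary' string."""
    binary = "BIN[" + source + "]"      # the honest translation step
    if KNOWN_TARGET in source:          # infection pattern-matches the source
        binary += "+BACKDOOR"           # re-inserts the trojan into the output
    return binary

# The familiar compiler codebase contains the targeted pattern:
print(trojaned_compile("def compile_token_stream(ts): ..."))  # infected
# A new codebase expressing the same logic under different names does not:
print(trojaned_compile("def translate_tokens(ts): ..."))      # clean
```

The point being: the infection can only recognize what its author anticipated, so source it has never seen is compiled honestly.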

And couldn't we, for instance, take the Tiny C Compiler, compile it with GCC, inspect the resulting binary to make sure that there are no suspicious instructions, and then compile GCC with TCC?
Or do we assume that the author of the Thompson virus was clever enough that all the programs used for viewing binaries (which were themselves compiled with the infected GCC) are also malicious, and display doctored binary code that hides anything suspicious? [But if so, then we could detect that by generating all possible binary sequences and checking whether the generated ones are the same as the viewed ones. Or could this process be sabotaged as well?]
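This cross-compiler check is, as far as I know, essentially what David A. Wheeler later formalized as "diverse double-compiling": rebuild the suspect compiler from its source with an independent compiler, and check whether the result regenerates the suspect binary. A toy sketch of the comparison (all names invented, compilers again modeled as string functions):

```python
# Toy model of diverse double-compiling: if the suspect binary were honest,
# recompiling its own source should regenerate it bit-for-bit; a mismatch
# against the independently bootstrapped build exposes the trojan.

def make_compiler(binary):
    """Turn a 'binary' string into a deterministic source -> binary map."""
    def compile_(source):
        out = "BIN[" + source + "]"
        if "BACKDOOR" in binary:       # an infected binary propagates itself
            out += "BACKDOOR"
        return out
    return compile_

gcc_source = "gcc source code"
suspect_gcc = make_compiler("BIN[gcc source code]BACKDOOR")  # possibly infected
trusted_tcc = make_compiler("BIN[tcc source code]")          # independent compiler

# Rebuild the suspect from its own source using the independent compiler...
stage1 = make_compiler(trusted_tcc(gcc_source))
# ...and let that result compile the same source once more:
stage2 = stage1(gcc_source)

# Compare against what the suspect binary produces from the same source:
print(stage2 == suspect_gcc(gcc_source))   # a mismatch reveals the infection
```

Of course this only pushes the trust problem onto the independent compiler (and onto the tools used to compare the binaries), which is exactly the worry in the bracketed question above.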
 
> I think it's quite difficult to find a good balance between being too
> naive, and entering tinfoil-hat territory.  I've been pretty naive for
> most of my life, living under a feeling of "everything bad and dark is
> in the past" and that only some anomalies are left.  That seems to be
> wrong though, so I'm trying to correct my attitude; I hope I haven't
> swayed too much into the tinfoil-hat direction while doing so. :-)

Actually, the direction the discussion eventually took surprised me a bit.
So, maybe to lighten the atmosphere, let me include a reference to an XKCD strip (plainly it was made up to lull our vigilance): http://xkcd.com/792/

