guile-devel

Re: Dijkstra's Methodology for Secure Systems Development


From: William ML Leslie
Subject: Re: Dijkstra's Methodology for Secure Systems Development
Date: Mon, 22 Sep 2014 11:03:17 +1000

I wasn't really sure if I should reply to this thread again, but I guess I should clear up some of my thoughts and experiences here.

On 20 September 2014 22:46, Taylan Ulrich Bayirli/Kammer <address@hidden> wrote:
Panicz Maciej Godek <address@hidden> writes:
> [...] the back doors can be implemented in the hardware, not in the
> software, and you will never be able to guarantee that no one is able
> to access your system.

Hopefully hardware will be addressed as well sooner or later.  In the
meantime, we can plug a couple of holes at the software layer.

Yes.  While it is feasible to pay the right people at a cash-strapped AMD, Kensington or Seagate to include something that sets up a backdoor, with software, existing systems can be compromised.  Vectors that are very difficult to detect are especially dangerous.


Also, if the hardware doesn't know enough about the software's workings,
it will have a hard time exploiting it.  Just like in the Thompson hack
case: if you use an infected C compiler to compile a *new* C compiler
codebase instead of the infected family, you will get a clean compiler,
because the infection doesn't know how to infect your new source code.
Similarly, if some hardware infection hasn't been fine-tuned for a piece
of software, it will just see a bunch of CPU instructions which it has
no idea how to modify to make an exploit.
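The quoted scenario can be sketched as a toy model (all names here are illustrative, not real toolchain code): an "infected" compiler only rewrites source it has been taught to recognise, so an unfamiliar compiler codebase passes through clean.

```python
# Toy model of the Thompson hack: the "compiler" is a source-to-source
# transform that recognises one specific compiler codebase and injects a
# backdoor into it, leaving unfamiliar source untouched.

KNOWN_COMPILER_SRC = "def compile(src): return translate(src)"

def infected_compile(src):
    # The infection only fires on source it has been taught to recognise.
    if src == KNOWN_COMPILER_SRC:
        return src + "\n# BACKDOOR"
    return src  # anything else is compiled faithfully

# A *new* compiler codebase the infection has never seen:
NEW_COMPILER_SRC = "def compile(src): return translate_v2(src)"

print("BACKDOOR" in infected_compile(KNOWN_COMPILER_SRC))  # → True
print("BACKDOOR" in infected_compile(NEW_COMPILER_SRC))    # → False
```

The point of the sketch is only that the recognition step is syntactic: change the source text and the trigger never fires.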

Sometimes you have to have specific knowledge of the instruction sequence used; other times not.  Programmatically searching for common memory-safety bugs is not infeasible.  Further, depending on what interface the hardware has to the system, it could already have access to arbitrary memory or to some part of the trusted path (such as a hard disk serving up an infected kernel).  Arbitrary code execution at root or better is sometimes just a matter of clobbering a pointer if you're connected via PCI.  Or, you could convince the user to install a proprietary driver.
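As a hedged illustration of how such a search could be generic rather than tied to one exact instruction sequence, here is a toy scanner (the disassembly format and symbol names are invented for the example) that greps a dump for calls to unbounded string functions:

```python
import re

# Functions with no bounds checking -- classic memory-safety red flags.
UNSAFE_CALLS = {"strcpy", "gets", "sprintf"}

def find_unsafe_calls(disassembly):
    """Return the unsafe callees found in a textual disassembly dump."""
    hits = []
    for line in disassembly.splitlines():
        m = re.search(r"\bcall\s+<?(\w+)>?", line)
        if m and m.group(1) in UNSAFE_CALLS:
            hits.append(m.group(1))
    return hits

dump = """\
401000: call <strncpy>
401010: call <gets>
401020: call <printf>
"""
print(find_unsafe_calls(dump))  # → ['gets']
```

A real attack would of course work on machine code rather than text, but the same pattern-driven search applies.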


Which I think brings us back to the "semantic fixpoint" thingy.  If we
define some semantics that can be automatically turned into very
different series of CPU instructions which nevertheless do the same
thing in the end, it will get increasingly difficult for the infection
algorithm to understand the semantics behind the CPU instructions and
inject itself into this series of CPU instructions.
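As a rough sketch of that idea (the instruction choices below are illustrative, not the output of any real compiler), a lowering pass could pick among semantically equivalent instruction sequences, so that no single syntactic pattern matches every build:

```python
import random

# Toy sketch of semantics-preserving diversity: the same source operation
# ("double x") can be lowered to different instruction sequences that an
# instruction-level pattern matcher would not recognise as the same program.
LOWERINGS_OF_DOUBLE = [
    ["shl eax, 1"],          # shift left by one
    ["add eax, eax"],        # add the register to itself
    ["lea eax, [eax+eax]"],  # address arithmetic
    ["imul eax, 2"],         # explicit multiply
]

def lower_double(build_seed):
    """Deterministically pick a lowering per build; all are equivalent."""
    rng = random.Random(build_seed)
    return rng.choice(LOWERINGS_OF_DOUBLE)

# Different builds yield syntactically different, semantically identical code:
print(lower_double(0))
print(lower_double(1))
```

Each build is reproducible from its seed, but an attacker matching on one fixed instruction sequence only catches a fraction of the builds.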
Maybe.  There are really several levels to this kind of attack.  It may have been easier to achieve a semantic form of this crack: rather than recognising a huge range of syntactic variations of "this is the C compiler", they may have used some semantic mechanism.

Equivalence of (possibly stateful) functions is a very interesting problem; while it is impossible to decide in general, it is also one of the common exercises (over a specific set of functions) when learning a proof assistant.  Nevertheless, semantic versions of the Thompson crack are not simply a matter of "this looks like it's writing an ELF header" or "this seems to be register allocation"; and while that does seem a different class of exploit from one that requires figuring out whether gcc will insert an imul or a lea, there are difficulties with that kind of attack that might make it easier to detect:

Firstly, there is performance.  Doing that kind of semantic match on every compilation may have some overhead (in terms of compilation time) that we can detect.  Then again, we don't know exactly what that analysis might be looking for, so we can't say how cheap it might be.  Maybe it's doing abstract interpretation on glibc's write?
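As a hedged sketch of that detection idea (the workloads below are stand-ins, not a real compiler), one could time a fixed compilation repeatedly and compare it against a known-clean baseline:

```python
import statistics
import time

def time_runs(fn, runs=5):
    """Mean wall-clock time of `fn` over several runs."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples)

def clean_compile():
    # Stand-in for a clean compiler pass over a fixed workload.
    sum(i * i for i in range(10_000))

def infected_compile():
    # Same pass, plus a stand-in for the infection's extra semantic analysis.
    clean_compile()
    sum(i * i for i in range(50_000))

baseline = time_runs(clean_compile)
observed = time_runs(infected_compile)
print(observed > baseline)
```

This only catches an infection whose analysis is costly relative to the compiler itself; a cheap trigger would hide in the noise.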

Secondly, the analysis would need to be sophisticated enough to recognise where it can insert itself without otherwise changing the semantics of the compiler.
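The finite-domain equivalence checking mentioned above can be sketched in a few lines: deciding extensional equivalence is impossible in general, but over a small fixed domain it is just a loop (the two functions here are illustrative):

```python
# Exhaustive check that two implementations agree on a finite domain --
# the kind of exercise set when learning a proof assistant, done the
# brute-force way instead of by proof.

def double_a(n):
    return n + n

def double_b(n):
    return n << 1

def equivalent_on(f, g, domain):
    """True iff f and g agree pointwise on every element of `domain`."""
    return all(f(x) == g(x) for x in domain)

print(equivalent_on(double_a, double_b, range(1000)))  # → True
```

An infection that wants to insert itself without changing observable behaviour needs, in effect, to solve instances of this problem on the fly.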

Nevertheless, I don't have an immediate solution to that potential problem.


--
William Leslie

Notice:
Likely much of this email is, by the nature of copyright, covered under copyright law.  You absolutely MAY reproduce any part of it in accordance with the copyright law of the nation you are reading this in.  Any attempt to DENY YOU THOSE RIGHTS would be illegal without prior contractual agreement.
