guile-devel

Re: Dijkstra's Methodology for Secure Systems Development


From: Taylan Ulrich Bayirli/Kammer
Subject: Re: Dijkstra's Methodology for Secure Systems Development
Date: Sat, 20 Sep 2014 14:46:01 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/24.3 (gnu/linux)

Panicz Maciej Godek <address@hidden> writes:

> [...]

First of all let me say I agree with you; guile-devel is the wrong place
to discuss these things.

I also feel uncomfortable about having been painted as the only person
agreeing with Ian.  According to him I was able to understand his idea
at least, but I'm not clear on how it ties in with the rest of reality,
like the possibility of hardware exploits...

Still:

> [...] the back doors can be implemented in the hardware, not in the
> software, and you will never be able to guarantee that no one is able
> to access your system.

Hopefully hardware will be addressed as well, sooner or later.  In the
meantime, we can plug a couple of holes at the software layer.

Also, if the hardware doesn't know enough about the software's workings,
it will have a hard time exploiting it.  Just like in the Thompson hack
case: if you use an infected C compiler to compile a *new* C compiler
codebase, rather than one from the family the infection targets, you
will get a clean compiler, because the infection doesn't recognize your
new source code and so can't infect it.
Similarly, if some hardware infection hasn't been fine-tuned for a piece
of software, it will just see a bunch of CPU instructions which it has
no idea how to modify to make an exploit.
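To make that concrete, here is a toy sketch in Python (everything in it
is hypothetical: the "compiler" is just a string transformer, and the
login pattern and backdoor string are invented to show the shape of the
attack, not any real compiler's behavior).  The infected compiler only
knows how to trojan source it recognizes; unfamiliar source passes
through clean:

```python
# The source pattern the infection was built to recognize (hypothetical).
KNOWN_LOGIN_SRC = 'if password == stored: grant()'

def clean_compile(src):
    # Stand-in for real compilation: tag the source as "object code".
    return f"OBJ[{src}]"

def infected_compile(src):
    # The Thompson-style trojan: pattern-match the familiar login
    # routine and inject a backdoor before compiling.
    if KNOWN_LOGIN_SRC in src:
        src = src.replace(
            KNOWN_LOGIN_SRC,
            'if password == stored or password == "backdoor": grant()')
    return clean_compile(src)

# A codebase from the "infected family": the trojan recognizes it.
familiar = 'def login(): ' + KNOWN_LOGIN_SRC
# A *new* codebase with the same semantics but different source text.
rewritten = 'def login(): ok = (password == stored); grant() if ok else deny()'

print('backdoor' in infected_compile(familiar))   # the trojan fires
print('backdoor' in infected_compile(rewritten))  # nothing to match: clean
```

The point of the sketch is only that the injection step is a syntactic
pattern match, so rewriting the source (or the instruction stream, in
the hardware case) below the infection's recognition threshold defeats
it.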

Which I think brings us back to the "semantic fixpoint" thingy.  If we
define some semantics that can be automatically turned into very
different series of CPU instructions which nevertheless do the same
thing in the end, it will get increasingly difficult for the infection
algorithm to understand the semantics behind the CPU instructions and
inject itself into this series of CPU instructions.
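A toy illustration of that diversification idea (the instruction set,
the "infection", and all names here are invented for the sketch): the
same semantics, x -> x*10, realized as two different instruction
sequences for a tiny stack machine.  An infection that pattern-matches
one realization has no idea what to do with the other:

```python
def run(program, x):
    """Tiny stack-machine interpreter: the 'CPU' of this sketch."""
    stack = [x]
    for op, *args in program:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "DUP":
            stack.append(stack[-1])
        elif op == "SWAP":
            stack[-1], stack[-2] = stack[-2], stack[-1]
        elif op == "SHL":
            stack.append(stack.pop() << args[0])
        elif op == "MUL":
            stack.append(stack.pop() * stack.pop())
        elif op == "ADD":
            stack.append(stack.pop() + stack.pop())
    return stack.pop()

# Two semantically identical realizations of x -> x * 10.
VARIANT_MUL   = [("PUSH", 10), ("MUL",)]
VARIANT_SHIFT = [("DUP",), ("SHL", 3), ("SWAP",), ("SHL", 1), ("ADD",)]  # 8x + 2x

def infect(program):
    """An 'infection' that only recognizes the MUL realization."""
    if program == [("PUSH", 10), ("MUL",)]:
        return [("PUSH", 0), ("MUL",)]  # sabotage the computation
    return program  # unfamiliar instruction sequence: left untouched

print(run(infect(VARIANT_MUL), 7))    # sabotaged: 0
print(run(infect(VARIANT_SHIFT), 7))  # survives:  70
```

Both variants compute 70 from 7 when run clean; after the "infection"
pass, only the diversified one still does.  Real diversifying
compilation would of course operate on real ISAs, but the asymmetry is
the same: the defender needs any equivalent realization, the attacker
needs to recognize all of them.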

Unfortunately we have in-hardware AES implementations, and neither
crypto software nor C compilers are very diverse, so malicious hardware
may well see a similar-enough sequence of CPU instructions every time
to inject an exploit into.  (Or maybe not; maybe binary outputs
of common crypto software are already diverse enough every time you
change a compiler flag or update your compiler so that this attack is
implausible.  In that case we *only* need to plug software holes; the
rest is taken care of by full-disk encryption etc. in software so the
hardware never sees your data and never understands what you do.)

(By the way, while a clean C compiler can be used to compile a clean GCC
so we get back all its features which aren't in our new C compiler
that's been kept super-simple, the same can't be done for hardware;
instead we will have to keep running more and more different series of
CPU instructions if we want to be safe, which will mean a performance
hit, since these instructions are probably not the optimal ones to get
the job done...)

> If there are some people accessing my files, why should I feel
> uncomfortable with that?  Why can't I trust that someone with such
> great power isn't going to be mean and evil?

I always like to say, "there are no James Bond villains on Earth."  The
Hollywood trope of a sociopathic villain who's consciously evil for the
sake of it is a big distraction from the fact that groups like the Nazi
party or people like Joseph Stalin have in fact existed and come into
positions of power in our very real world.  And they didn't feel they
were evil; they genuinely believed they were doing the right thing.  How
long has it been since such a "scandal" of humanity last happened?  Is
it really thoroughly implausible that it would happen again?  Has it
even really stopped happening entirely, or is one of the most powerful
countries in the world supporting (even if indirectly) the bombardment
of civilians and the torture of captives in the Middle East right now?

I think it's quite difficult to find a good balance between being too
naive, and entering tinfoil-hat territory.  I've been pretty naive for
most of my life, living under a feeling of "everything bad and dark is
in the past" and that only some anomalies are left.  That seems to be
wrong though, so I'm trying to correct my attitude; I hope I haven't
veered too far in the tinfoil-hat direction while doing so. :-)

Taylan


