
Re: [Chicken-users] Askemos again; … better news ; still: Needs some support/donation.


From: Jörg F . Wittenberger
Subject: Re: [Chicken-users] Askemos again; … better news ; still: Needs some support/donation.
Date: 25 Jan 2013 13:12:50 +0100

On Jan 25 2013, ianG wrote:

On 25/01/13 11:07 AM, Jörg F. Wittenberger wrote:
On Jan 24 2013, Daniel Leslie wrote:
I can possibly get you shell accounts in Austria. I can't guarantee the security or robustness, I don't use them myself because they tend to migrate faster than I can keep up.

One is already secured; so I need one more from someone I can trust
not to embarrass me by leaking info taken as a "remembrance to myself".
Preferably outside Germany.

I'm not sure if any of those are met ;)

I know that you know the answer already!

Thanks for your offer!  I'd love to accept it and include it
as yet another replica for the public stuff.  I'd really like
to figure out where the practical limit for byzantine replication is.

(Before we need to layer hierarchical replication on top for
the sake of perceived performance.)
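The textbook bound for Byzantine agreement, which frames that "practical limit" question, is that n replicas tolerate f arbitrary (possibly malicious) faults only when n ≥ 3f + 1. A minimal sketch of that standard arithmetic (the general bound, not necessarily Askemos's exact protocol):

```python
def max_byzantine_faults(n: int) -> int:
    """Standard BFT bound: n replicas tolerate f Byzantine faults
    only if n >= 3f + 1, hence f = (n - 1) // 3."""
    if n < 1:
        raise ValueError("need at least one replica")
    return (n - 1) // 3

def min_replicas(f: int) -> int:
    """Smallest network that survives f Byzantine faults."""
    return 3 * f + 1
```

So growing from 4 to 7 replicas raises the tolerated traitors from 1 to 2; the practical limit then becomes a question of message overhead, which grows with n.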

So shell access is fine?  No VM needed?  Do you need root?

Not in the minimal case.  And a "public notary" would be
the minimum case.

Though for security reasons I normally run some code
via a suid wrapper using the most primitive Unix assumptions.
It should just switch to an alternate user like "nobody"
and exec the next binary with nothing but C in between.

This "some code" would be the SSL implementation
(either openssl or gnutls) and any other external binary
that might be called for MIME conversion and the like.
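The wrapper itself is C in the setup described above; purely to illustrate the same step sequence (names are hypothetical, and a "nobody"-style unprivileged account is assumed), it is: shed supplementary groups, set the gid before the uid, verify the drop, then exec:

```python
import os
import pwd
import sys

def drop_to(user: str) -> None:
    """Switch to an unprivileged user; refuse uid 0 and verify the drop."""
    pw = pwd.getpwnam(user)
    if pw.pw_uid == 0:
        raise ValueError("refusing to 'drop' to a uid-0 account")
    os.setgroups([])        # shed supplementary groups first
    os.setgid(pw.pw_gid)    # gid before uid, or we lose the right to setgid
    os.setuid(pw.pw_uid)
    if os.getuid() == 0 or os.geteuid() == 0:
        raise RuntimeError("privilege drop failed")

def main(argv) -> int:
    if len(argv) < 3:
        sys.stderr.write("usage: wrapper USER PROGRAM [ARG...]\n")
        return 2
    drop_to(argv[1])
    os.execv(argv[2], argv[2:])   # replaces the process; never returns

if __name__ == "__main__":
    sys.exit(main(sys.argv))
```

The uid-0 refusal and the post-drop check are the "most primitive assertions": a wrapper that silently fails to drop privileges is worse than none.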

The caveat of running everything as just one user is that the
binary, source, and external helpers could be tampered with
from the running process.  Possible, but worst practice.

So any three shell accounts really do:  one for the binary and
source, one for the running state, and one for the external helpers.

Also, the immediate need is fixed.  The problem was a "not well
considered business decision".


Speaking of well considered business decisions - I have been mucking around with servers on the net for decades now, and have suffered and suffered from their unreliability, from many continual causes. I recently made a break and did something a bit weird: I purchased a Mac Mini and I now host that as my personal server from my desk. This eliminates almost all of my sysadmin, reliability, and availability issues. The only issue left is whether I can host it on an IP# on the house connection. Although this has been up&down, the availability of the data/service *to me* has trumped all.


Speaking of personal hosting:  I'm running an OpenRD on my personal
desk at home.  (Unfortunately I'm unable to fix the graphics driver,
thus it does not drive the screen as intended.
Any help appreciated ;-)  And a Seagate DockStar to run my personal
Askemos peer when I'm working.  Both are behind the same router.
One of them is part of the development network documented
at ball.askemos.org.  The other one is not routed outside.

When I need to change some source code, I mount the WebDAV
share via an ssh tunnel from the DockStar, use emacs, make the
change, and save.  Then I test the changes with those apps
configured to use the draft code.  Once it works I visit the
snapshot page and push the "follow" button.  Now the change is
effective at the websites.  Since this can still reveal bugs,
both the first-level and the second-level directory have a
"go back one step" button.  (Similarly, the app behind
askemos.org has a snapshot button.  That one allows
two kinds of snapshots: immutable and mutable.)
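That draft/follow/back workflow can be sketched as an append-only revision log with a movable "published" pointer. A hypothetical illustration (class and method names invented here, not the Askemos API):

```python
class RevisionLog:
    """Append-only revision log with a movable 'published' pointer."""

    def __init__(self, initial):
        self._revisions = [initial]  # all saved revisions, oldest first
        self._published = 0          # index of the revision served live

    def save_draft(self, content):
        """Save an edit; the live site is not affected yet."""
        self._revisions.append(content)

    def follow(self):
        """Publish the newest saved revision (the 'follow' button)."""
        self._published = len(self._revisions) - 1

    def back(self):
        """'Go back one step': serve the previous revision again."""
        if self._published > 0:
            self._published -= 1

    @property
    def live(self):
        return self._revisions[self._published]
```

The key property is that drafts are invisible until "follow", and "back" never destroys anything: every revision stays in the log, only the pointer moves.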

Once a year I put a backup of those copies on disconnected
media within a steel case.  Otherwise I rely on the self-healing
via replication and the fail-stop notice at the lower bound.

The normal thing is that, for whatever reason, almost always one
of the replicas is not available.  Be it the cable pulled off
the wall by the cat (another hoster has a dog…), be it a mishap
when upgrading the OS.  Once there was a fire in the power supply
in Mittweida (a town where askemos2.tc-mw.de used to live before,
one of those now in a single room).  The whole town went dark.
You see… I notice from "more timeouts", visit the "connectivity
listing", and try to fix the cause within the next days.  You know,
sometimes people even get confused about billing status.  There is
little I can do to speed up fixing the resulting disconnect.

The worst thing that happened over the past decade was this
case: more than a third of the machines unavailable at once.
If those had been gone forever, I would have had to dig out
the last backup from the steel case, insert a fresh certificate,
start somewhere, and wait for the synchronisation to complete.

But that doesn't change the principled version of the problem:
there's an SPOF which cannot force the cloud off, but it *can*
prevent further updates until fixed.
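That split between "cannot force the cloud off" and "can block updates" falls out of the quorum arithmetic. A sketch assuming the usual n − f commit quorum (the textbook rule, not necessarily Askemos's exact threshold):

```python
def can_commit_updates(n_total: int, n_reachable: int) -> bool:
    """With f = (n - 1) // 3 tolerated Byzantine faults, commits need
    n - f reachable replicas.  Below that the network fail-stops:
    already-agreed state is still served, but further updates wait."""
    f = (n_total - 1) // 3
    return n_reachable >= n_total - f
```

So with 7 replicas, losing 3 at once (more than a third, as in the incident above) stalls writes even though reads of the agreed state keep working.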


There are two solutions to the SPOF: replication, or buying a highly reliable machine and doing backups. I'm using the second atm.

See: I'm using the former.

anyway, enough rambling, to work.

;-)  just my 0.02€

A friend has done the same as me with a Raspberry Pi, which is an order of magnitude cheaper, but is still a Linux box, so one trades capital investment for sysadm time.

Precisely.  However, which capital investment are you talking about?
The more reliable box takes the same sysadm time, doesn't it?

Or it takes trusting the sysadmin who delivered the fully
configured system or operates the VM.  Great, but a different
thing to have.  No trade of money can buy you trust.  Ever.
If anything, trust is found where no money is involved at all.


Best regards

/Jerry