mediagoblin-userops

From: Christopher Allan Webber
Subject: Re: [Userops] Why is it hard to move from one machine to another? An analysis.
Date: Thu, 09 Apr 2015 15:59:04 -0500

Hey Jessica, some strong points in here...

Jessica Tallon writes:

> Hey,
>
> Firstly, I agree moving from one server to another is difficult.
> Moving from workstation to workstation is also difficult, though. You
> can just rsync your home directory if you've opened your firewall and
> know how to use rsync, which for both of us (and likely the rest of
> this ML) is fine, but not for most people. There is also the problem
> of moving software, which can be a huge one; you seem not to find this
> an issue, but months later I still find I'm missing software I need.
> Then you have the problem that systems get very complex: you have
> postgres running on your workstation, you have several users, wifi
> network credentials, probably a bunch of different repositories
> enabled, python virtualenvs breaking.

[snip]

> Finally, and this isn't entirely the problem you're describing here,
> but if you want to have multiple workstations, which we do (we have
> our laptop and our desktop and maybe a test machine or whatever),
> trying to work between them is a nightmare: you install and set up
> something on one computer, and then you have to re-invest that time
> doing it on the other. You have issues keeping the files synced across
> both machines. You can try rsync, but I'm not sure it's a great tool
> for the job, especially if you have different sized HDDs and don't
> want identical drives. Then if you happen to use multiple OSes for the
> same things (it happens; I do it all the time, for example), you
> really struggle to get it so I can be working on my desktop, then pick
> up my laptop to go somewhere and just pick up where I left off.
>
> But all of those are problems on the workstation that, in my opinion, 
> aren't fixed.

You're right of course; I've simplified the "deploying between
desktops" stuff and the "backing things up" story.  Things are easier
*for me* because:

 - I know how to use rsync (not everyone does, and that's technical
   privilege as a prerequisite... not exactly userops!)
 - I've simplified my setup to mostly use only one workstation.  I used
   to use multiple and sync tons of directories all the time between git
   and git-annex, but it was a chore.

This is another reason, often overlooked, for the rise of "cloud" stuff:
people have multiple machines, and keeping them in sync is hard.  It's
easier to have one server host those files in the first place and hope
the company never shuts down, or to use a virtual directory a la dropbox.
(The free and much better alternative is probably git-annex, though that
could be a bit easier to set up yourself... it's probably the easiest
solution we currently have, though.  Also, it can't scale up to millions
of files, as I found out when looking into using git-annex for my
maildir syncing to replace offlineimap, due to a limitation in git's
indexing behavior... not git-annex's fault! :))

But anyway, yes, it's not actually easy for workstations.  I do think
it's *comparatively* easier, if you tie down constraints to "knowing how
rsync works" and "using just one machine".  That's a lot to ask though.
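To make the "knowing how rsync works" constraint concrete, the kind of home-directory sync we're both describing boils down to one command. Here's a minimal sketch of driving it from Python; the hostname "newbox" and the exclude patterns are made-up examples, not recommendations:

```python
# Sketch: build the rsync invocation for mirroring a home directory to
# another machine. The destination host and excludes are illustrative.
import subprocess

def build_rsync_cmd(src, dest, excludes=()):
    """Build an rsync command mirroring src to dest.

    -a        archive mode (permissions, times, symlinks)
    -z        compress during transfer
    --delete  remove files on dest that no longer exist on src
    """
    cmd = ["rsync", "-az", "--delete"]
    for pattern in excludes:
        cmd.append("--exclude=" + pattern)
    cmd += [src, dest]
    return cmd

cmd = build_rsync_cmd("/home/alice/", "newbox:/home/alice/",
                      excludes=[".cache/", "*.pyc"])
print(" ".join(cmd))
# To actually run it: subprocess.run(cmd, check=True)
```

Note the trailing slashes matter to rsync (copy the directory's *contents*, not the directory itself), which is exactly the sort of detail that makes "just use rsync" a big ask for most people.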

> Servers are as difficult as workstations, but I'm not sure the problem 
> is actually harder. I think I just find it as difficult because it's a 
> lot worse if I don't do something correctly on my server, and also my 
> servers tend to be left a long time, so I'll be moving to different 
> versions of the OS with entirely different versions of apache, bind, 
> SELinux, etc., so everything needs changing to work. But that would 
> occur on my desktop too if I wasn't so happy to be on the bleeding edge.

It's interesting you raise this... "long term support" distros are often
raised as a solution so you don't have to update your stuff very often,
but that runs into its own sack of problems, e.g. it can be harder to
deploy newer applications on them, and it's a pain to backport security
patches...

> In general, though, I start by moving my configurations across. I do 
> pick and choose out of /etc/: I grab my httpd configs, and they usually 
> work just fine. I grab the dovecot and postgres ones, and they require 
> a little changing, but not a huge amount for the most part. I copy over 
> the home directories, which have the mail and other user files. I then 
> go through my /var/www, copy my websites, and yes, dump my databases. A 
> lot of steps, but there is often a lot less stuff on my servers than my 
> workstations - so there is that on our side :)
> 
> The thing which makes this so hard is that if you forget about 
> something on your workstation or do something wrong, it's not a huge 
> problem. If you forget anything or make a mistake on a server, your 
> services are down for you and the other users you probably support. 
> You often don't know until there is some urgency or lost data or 
> whatever. My proposal to fix this is unit tests for our servers. In 
> code we have them so that when you've changed things, we know whether 
> everything still works. On your server you can use nagios, but that 
> doesn't deal with all your jabber, postfix, dovecot, IRCd, or whatever 
> services you have running. It's usually when the frantic person unable 
> to access their stuff is sending you a bunch of messages that you 
> realise you forgot to set up nagios for the new server you moved to.

Yes, that's probably why it's harder, I agree.
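One small step toward making your migration routine less memory-dependent would be writing the checklist down as data. A minimal sketch, where the service-to-path mapping is an illustrative assumption (your distro's paths will differ), and the database dump is only shown as a comment:

```python
# Sketch: encode the migration steps described above (configs from
# /etc, home directories, /var/www, database dumps) as an explicit
# checklist so nothing relies on memory. Paths are example assumptions.

MIGRATION_SOURCES = {
    "httpd":    ["/etc/httpd/"],
    "dovecot":  ["/etc/dovecot/"],
    "postgres": ["/etc/postgresql/"],
    "users":    ["/home/"],
    "websites": ["/var/www/"],
}

def migration_plan(services):
    """Return the ordered list of paths to copy for the given services."""
    plan = []
    for name in services:
        plan.extend(MIGRATION_SOURCES.get(name, []))
    return plan

print(migration_plan(["httpd", "users", "websites"]))
# Databases still need a logical dump on top of this, e.g.:
#   pg_dump --format=custom mydb > mydb.dump
```

The win isn't the code itself; it's that the list of things to copy lives somewhere you can diff and review, instead of in your head.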

> There is also the fact that you have things like your glue records to 
> update if your DNS moves, unless you bring your IPs with you, which 
> then means you can't have a smooth handover and need extra 
> configuration. I've had people who don't need to run their own mail 
> server, as I do it for them, but all their mail is bouncing because 
> their MX records ended up pointing at something else. So even when 
> technical users don't need to manage their own server, things can go 
> wrong, because you have either external DNS or glue records or 
> whatever, which are just one extra thing.
>
> This seems pretty rambly, so I think I'll stop here.

:)

> TL;DR: yep, it's hard, but so are workstations. We should make unit 
> tests for our servers!

I actually think this is true: we should probably have "service
monitoring" along the lines of nagios built into the same tools we're
doing our deployment with.  The tooling for checking is similar to the
tooling for updates, and you might be able to derive some of the checks
from the deployment information given.
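As a toy illustration of what the simplest "unit test for a server" could look like, a per-service TCP reachability probe covers the jabber/postfix/dovecot gap that nagios-less setups fall into. The service list here is a made-up example, and real checks would go further than connecting (e.g. speaking a bit of the protocol):

```python
# Sketch: a tiny nagios-ish check pass. Each "test" asserts that a
# service's port accepts TCP connections. Hosts/ports are examples.
import socket

def check_tcp(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port can be opened."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example service list; derive this from your deployment config
# instead of writing it by hand, per the point above.
SERVICES = [
    ("smtp",  "mail.example.org", 25),
    ("imap",  "mail.example.org", 143),
    ("https", "www.example.org", 443),
]

def run_checks(services, probe=check_tcp):
    """Run all checks; return a dict of service name -> pass/fail."""
    return {name: probe(host, port) for name, host, port in services}

# e.g.: for name, ok in run_checks(SERVICES).items():
#           print(name, "OK" if ok else "DOWN")
```

The interesting design point is the last one in the paragraph above: if the deployment tool already knows it installed dovecot listening on 143, the check list could be generated rather than hand-maintained, which is exactly when you stop forgetting to monitor the new box.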

But anyway, your point is made: deploying to workstations is actually
not necessarily easy for most users, and that's worth keeping in mind.
It's especially true that it's not easy for most users to *back up*
their data.  This is a good vision of what backups *should be* like for
users:

  http://blog.liw.fi/posts/debian-backups-by-defaut/

(Written, as it turns out, by the author of an encrypted backup
solution, obnam, though the post is not about that software.)

It's probably a good idea that userops solutions also be able to apply
to desktops, not just servers.  I simplified my argument to "why are
servers harder?" because I've been able to move between workstations
much more easily (while making certain compromises) but haven't done the
same for my servers.  And it's partly the constraints in my workstation
setup that have permitted that.  Worth keeping in mind!

Thanks for the insightful post, Jess!
 - Chris

