S3FS on CentOS

So, we’re using CentOS 5 for some of our servers: the ones we need cPanel for. These are our shared setups with people running blogs, Joomla, Drupal, and such.

I’ve never liked FTP for anything: it’s insecure, slow, and unable to recover from even the simplest of errors.

So I finally got our server provider to build a kernel with FUSE support built in so that I could use s3fs to mount an Amazon S3 bucket as a normal mount.

It was a little annoying to set up the first time but, when I had to do it a second time and had to go all the way back to the beginning, I figured I’d better write it down this time.

Install Subversion

The first step is to get s3fs from its project site: http://code.google.com/p/s3fs/wiki/FuseOverAmazon.

I prefer to check out from Subversion but Subversion was not installed on my server.

A simple:

	# yum install subversion

gave me an error about a missing dependency:

Error: Missing Dependency: perl(URI) >= 1.17 is needed by package subversion-1.4.2-4.el5_3.1.x86_64 (base)

To make a long story short, I ended up downloading and installing the RPM directly with:

	# wget http://mirror.centos.org/centos/5/os/i386/CentOS/perl-URI-1.35-3.noarch.rpm
	# rpm -ivh perl-URI*

Download and Install s3fs

Once I had Subversion installed, I checked out and built s3fs:

	# svn checkout http://s3fs.googlecode.com/svn/trunk/ s3fs-read-only
	# cd s3fs-read-only/s3fs
	# make install

There are a handful of warnings from the compiler, but I ignored them since I wasn’t particularly interested in working on the code.
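
Building from source assumes the compiler and development headers are already on the box, of course. If make dies on missing headers rather than just grumbling, something along these lines should pull in the pieces s3fs builds against (the package names here are from memory for CentOS 5, and some may live in a third-party repo, so adjust as needed):

	# yum install gcc-c++ curl-devel libxml2-devel openssl-devel fuse-devel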

Setting up Keys

You can invoke s3fs with your Amazon credentials on the command line, in the environment, or in a configuration file. Since command lines and environments are too easy for bad guys to find, I opted for the configuration file approach.

Create a file /etc/passwd-s3fs with a line containing an accessKeyId:secretAccessKey pair.

You can have more than one set of credentials (i.e., credentials for more than one Amazon S3 account) in /etc/passwd-s3fs, in which case you’ll have to specify -o accessKeyId=aaa on the command line to pick the one you want.
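
For example, with an obviously made-up key pair (and, since your secret key lives in this file, it’s worth locking the permissions down to root only):

	# echo 'AKIAEXAMPLEKEYID:EXAMPLEsecretACCESSkey' > /etc/passwd-s3fs
	# chmod 600 /etc/passwd-s3fs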

Once that’s all set up, you can mount the S3 bucket mybucket at the mountpoint /mnt/mybucket (which needs to exist already) with:

	# /usr/bin/s3fs mybucket /mnt/mybucket

Now you can treat /mnt/mybucket as a regular copy destination, including using it as an rsync target!
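
For instance, a nightly push of a local backups directory into the bucket (the paths here are just placeholders) is nothing more exotic than:

	# rsync -av /home/backups/ /mnt/mybucket/backups/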

If you ever want to get rid of the mount, the normal Unix umount command does the trick:

	# umount /mnt/mybucket

Enjoy!

Bitbucket offline for hours reminded me of backups…

BitBucket Was Down. For a Long Time. Relatively Speaking.

So, as many of you will already know, BitBucket, our favorite Mercurial hosting service, went down.

For a while.

Seemed like a long time.

This reminded me of two things, in particular:

  1. DVCS — why we only use DVCS now
  2. Backups — making sure it’s ALL backed up AUTOMATICALLY.

DVCS, ONLY

What if BitBucket had never come back up?

With Mercurial (or Git, or Bazaar, or, I think, darcs), you have a complete copy of the repository. Not just the latest, everything. So, if the external repository blows up, everyone working on the project has a copy of everything as of the last time they sync’d with another copy.
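
As a rough sketch of what that buys you (the server name and paths here are made up): any developer’s clone can simply be seeded onto a fresh server and become the copy everyone pulls from again, something like:

	$ hg clone myproject ssh://hg@new-server//repos/myproject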

Interestingly, this is exactly how DropBox works; you (and everyone else) have a complete copy of all of the files in the DropBox, giving you fresh copies of everything, on every machine, as of the last time it was connected to the network.

Backups!

Ok, I have backups, in lots of places, of almost everything.

But, I noticed, in what would have been an “Oh, crap, too late” type of way, that I didn’t (and don’t) have backups of everything on BitBucket.

IOW, while my repository would have been as safe as my last pull or update, I would have lost the issue tracker and wiki.

I don’t have a solution right this second, but I’d like to collect community comments about this so we can develop and post (on BitBucket) a solution to “How do I make my DVCS hosting on BitBucket cover all of the things I have up there, not just the source code repository?”

Thanks for any comments; I’d love a solution that’s completely automatic so that, if BitBucket ever were to fail completely, we could all be sure we’ve got one or more copies of everything.

S

DropBox Rocks!

I gave up on using iDisk for anything meaningful a long time ago. It never synched right, would throw random errors on files that synch’d just fine last time, took forever and tons of processor to fail, and just basically sucked, and never worked, and sucked. Always has. Sucks.

It still sucks, what, 5, 8 years later? Whatever, I’ve been paying for my (puke) Mobile Me subscription to keep my mac.com address but I’m not going to renew. “Mobile Me”!?!?!? Gag me with a bicycle.

So…I was more than a little skeptical when I heard about DropBox.

The good news is that it actually “just works.”

Really.

Doesn’t suck.

Works.

You put stuff in, it synchs in the background, and the next time you look for it on another computer, it’s there.

It runs on Mac, Windows, and Linux (though I’m not sure how integrated it is for command line use which is the only way I use Linux).

You get 2GB for free and, when you get completely addicted, you can upgrade to 50GB (!) for $9.99/month.

It’s all stored on Amazon’s S3 storage system and only costs 4 cents per GB more than raw S3.

IOW, it’s about two bucks more a month, for 50 GB of secure storage, than raw S3 storage.

Fifty GB (50GB) of raw S3 storage can be had for $7.50/month; fifty GB (50GB) of DropBox can be had for $9.99/month.

It’s what iDisk should have been, except without the OS lock-in, and with five (5) times more storage, Windows and Linux clients, and “the working.”

Ok, you don’t get the pretty e-mail address or the crappy sync of other things that also never work right, but who cares?

Actual working, synching storage — YAY!

DropBox.

Create tar.gz of current directory and all subdirectories

Just another quick note to myself since I always forget to remember this.

To compress the entire current directory into a compressed tar/gzip file:

	# tar cvz [--remove-files] -f mytarfile.tgz .

c == create
v == verbose
f == output file
z == compress
--remove-files == optionally remove files after adding them to the archive
. == current directory contents

To see what you’ve done:

	# tar --list -f mytarfile.tgz

I always forget to use the -f switch so it just sits there waiting for some magic input to arrive. What, am I supposed to type the contents of an archive for it to show me?

I always use tar vs. zip since tar remembers file permissions whereas zips do not (that may have changed, don’t care).
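
And, for completeness, the matching extract, into whatever existing directory you point -C at (/restore/here is just a placeholder):

	# tar xvz -f mytarfile.tgz -C /restore/here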