
[ccp4bb]: Re: [o-info] Centralized binaries



***  For details on how to be removed from this list visit the  ***
***          CCP4 home page http://www.ccp4.ac.uk         ***

In response to Christopher A. Waddling, who wrote about centralizing OS X binaries:

* Gordon Webster <gordon_webster@hms.harvard.edu> [2002-04-24 14:13] wrote:
 
> This is a good idea and it certainly works as long as your network is healthy 
> ...
> 
> The downside of this arrangement is that if your network crashes or suffers 
> gridlock, nobody can do any work at all (been there, done that). In addition, 
> there can be a considerable time overhead in loading large binaries over a 
> network and you need to be very careful how you set up things like shareable 
> libraries, scratch files and working directories, so that this network 
> overhead does not also cause your applications to continue dragging their 
> heels once they are launched.

We do this -- all of the crystallographic (etc.) software is on a single
disk served up by an SGI server (probably to be replaced by a Linux
server or network-attached storage in the future). This disk is
mounted via NFS (autofs) on all of our umpteen workstations (IRIX and
Linux).
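For anyone setting up something similar, the autofs side of this is just a short map entry. The host name and export path below are hypothetical, not our actual configuration:

```
# /etc/auto.master (Linux) or /etc/auto_master (IRIX):
#   mount point    map
/-      /etc/auto.direct

# /etc/auto.direct -- a direct map; "fileserver" is a made-up host:
/prog   -ro     fileserver:/export/prog
```

With that in place, every workstation sees the same /prog tree, mounted read-only on demand.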

I don't recall hearing complaints about the slowness of loading large
binaries over the network (complaints are more likely to be about loading
diffraction images over the network! -- of course one of the local users
who follows this list is liable to complain later today or tomorrow :) ).
Maybe we're just lucky, but we haven't had serious network troubles in
a long time. Yes, one has to be careful about shareable libraries, but
scratch files also all go on one disk on the server, which gets cleaned
periodically. On some workstations the user's working directory is on
the large server and on others it is local to the workstation (depending
mostly on the local disk capacity). None of this has really been any
more of a headache than it would be if all the software were installed on
all of the machines that might access it.

> Since disk space is cheap and time is money, why not set up a maintained, 
> central repository of binaries as you suggest, but then use something like the 
> very nice (and free) utility "rsync", to mirror this disk partition on each 
> user's machine. Using some very (very) simple scripts, your central machine 

Well... you can do this if you like. :) Our total supply of software
(no doubt bloated and not all used) amounts to more than 15 Gbytes.
Not all of our computers have such large disks (some older ones have
only 1 or 2 Gb), nor is there much point in equipping them with big
disks (the computers are too old to make it worthwhile). In fact, even
the newer SGIs only came with 9 Gb drives. We have over 50 SGIs, 24
Linux machines and 24 others (Suns and X terminals). Not all of these
need to access this software, but I know for a fact that at least 30
machines use it regularly. 30 x 15 Gb = 450 Gb of *extra* disk space,
which starts to become not so inexpensive!
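For concreteness, the mirroring Gordon suggests really does come down to one cron entry per workstation. The host name, paths and schedule here are illustrative, not anything from our setup:

```
# crontab entry on each workstation; "central" is a hypothetical host.
# -a preserves permissions, times and symlinks; --delete removes files
# that have vanished from the central copy, keeping the mirror exact.
0 0 * * *  rsync -a --delete central:/prog/ /prog/
```

Simple as it is, it still costs a full copy of the software tree on every mirror, which is exactly the disk-space problem above.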

> can update each mirror periodically (probably once a day at midnight is enough 
> unless you are a software company in the throes of a beta release) and every 
> user's machine will have a partition that automatically mirrors the layout of 
> your central applications disk.
> 
> This system allows everybody to carry on working even if the network throws a 
> wobbly, as well as saving time and reducing the traffic on your network. The 
> redundancy of information also makes this system a lot more robust and 
> flexible, especially when s**t happens, as we all know it does ;)

We have thought about using cached NFS to accomplish something like this
(mirroring) without copying the whole stash of binaries. Even if all the
software were mirrored on each machine, I would still need to set up
environment variables and aliases to allow the software to run. The
users here just have to do "source /prog/setup <program_name>" and all
of that gets set for them, whether they are running on an old MIPS-2 SGI
Indigo, a newer MIPS-4 Octane2 or a Linux PC. This setup file is now
1788 lines long!  It was a bit of a pain to get started, but it is now
pretty straightforward to maintain.  If anyone cares to see the script,
it is accessible from the "scripts" link on my home page (see below).
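A much-abridged, hypothetical sketch of the kind of dispatch such a setup script performs (the program names and paths are illustrative; the real 1788-line script is behind the "scripts" link below). It is written here as a shell function for easy testing; the real thing is a plain file that users source:

```shell
# Hypothetical sketch of a per-architecture "source /prog/setup <prog>"
# dispatcher. Paths and program names are made up for illustration.
prog_setup() {
    # Pick a per-platform binary directory from uname output,
    # e.g. IRIX, IRIX64 or Linux.
    os=$(uname -s)
    case "$1" in
        ccp4)
            CCP4=/prog/$os/ccp4          # hypothetical install path
            PATH=$CCP4/bin:$PATH
            export CCP4 PATH
            ;;
        o)
            ODIR=/prog/$os/o             # hypothetical install path
            PATH=$ODIR/bin:$PATH
            export ODIR PATH
            ;;
        *)
            echo "usage: source /prog/setup <program_name>" >&2
            return 1
            ;;
    esac
}
```

One file like this, sourced by everyone, means a new program or a moved disk is fixed in exactly one place, whatever the architecture of the workstation.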

We have backups, and we have a backup disk that could be pressed into
service to restore the software.  Now if the SGI server itself is dead
we're pretty much SOL, but that has been rare as well.

Cheers,
Robert
-- 
Robert L. Campbell, Ph.D.               http://biophysics.med.jhmi.edu/rlc
rlc@k2.med.jhmi.edu                                    phone: 410-614-6313
Research Specialist/X-ray Facility Manager
HHMI/Dept. of Biophysics & Biophysical Chem., The Johns Hopkins University
    PGP Fingerprint: 9B49 3D3F A489 05DC B35C  8E33 F238 A8F5 F635 C0E2