Tuesday, 14 October 2014

Tired of full /var ?

This is how I prevent /var from getting full on any of our servers. I wrote these two scripts, spacemonc.py and spacemond.py. spacemonc.py is a client, and it is installed on each grid system and worker node as a cronjob:
# crontab -l | grep spacemonc.py
50 18 * * * /root/bin/spacemonc.py
Because the server is (almost) single threaded, I use puppet to make the client run at a random time on each system. (I say "almost" because the server actually uses method-level locking to hold each thread in a sleep state, so it's really a queueing server, I think; it won't drop simultaneous incoming connections, but it's unwise to allow too many of them at once.)
        cron { "spacemonc":
          #ensure => absent,
          command => "/root/bin/spacemonc.py",
          user    => root,
          hour    => fqdn_rand(24),
          minute  => fqdn_rand(60),
        }
And it's pretty small:
#!/usr/bin/python

import xmlrpclib
import os
import subprocess
from socket import gethostname

proc = subprocess.Popen(["df | perl -p00e 's/\n\s//g' | grep -v ^cvmfs  | grep -v hepraid[0-9][0-9]*_[0-9]"], stdout=subprocess.PIPE, shell=True)
(dfReport, err) = proc.communicate()

s = xmlrpclib.ServerProxy('http://SOMESERVEROROTHER.COM.ph.liv.ac.uk:8000')

status = s.post_report(gethostname(),dfReport)
if (status != 1):
  print("Client failed");
The strange piece of perl in the middle is to stop a bad habit df has of breaking lines that contain long fields (I hate that; ldapsearch and qstat do it too.) The greps drop the cvmfs partitions and the raid storage mounts, which I don't want to know about.
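To see what the perl does, here is an illustrative (made-up) df line before and after the pipeline:

# Plain df wraps the long device name onto two lines:
/dev/mapper/VolGroup00-LogVol00
                      51606140  12345678  36639212  26% /

# After "df | perl -p00e 's/\n\s//g'" the line is rejoined:
/dev/mapper/VolGroup00-LogVol00 51606140  12345678  36639212  26% /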

spacemond.py is installed as a service; you'll have to pinch a /etc/init.d script to start and stop it properly (or do it from the command line to start with; a minimal sketch is given after the listing.) And the code for spacemond.py is pretty small, too:
#!/usr/local/bin/python2.4

import sys
from SimpleXMLRPCServer import SimpleXMLRPCServer
from SimpleXMLRPCServer import SimpleXMLRPCRequestHandler
import time
import smtplib
import logging

if (len(sys.argv) == 2):
  limit = int(sys.argv[1])
else:
  limit = 90

# Maybe put logging in some time
logging.basicConfig(level=logging.DEBUG,
  format='%(asctime)s %(levelname)s %(message)s',
  filename="/var/log/spacemon/log",
  filemode='a')

# Email details
smtpserver = 'hep.ph.liv.ac.uk'
recipients = ['sjones@hep.ph.liv.ac.uk','sjones@hep.ph.liv.ac.uk']
sender = 'root@SOMESERVEROROTHER.COM.ph.liv.ac.uk'
msgheader = "From: root@SOMESERVEROROTHER.COM.ph.liv.ac.uk\r\nTo: YOURNAME@hep.ph.liv.ac.uk\r\nSubject: spacemon report\r\n\r\n"

# Test the server started
session = smtplib.SMTP(smtpserver)
smtpresult = session.sendmail(sender, recipients, msgheader + "spacemond server started\n")
session.quit()

# Restrict to a particular path.
class RequestHandler(SimpleXMLRPCRequestHandler):
  rpc_paths = ('/RPC2',)

# Create server
server = SimpleXMLRPCServer(("SOMESERVEROROTHER.COM", 8000), requestHandler=RequestHandler)
server.logRequests = 0
server.register_introspection_functions()

# Class with a method to process incoming reports
class SpaceMon:
  def post_report(self, hostname, report):
    full_messages = []
    full_messages[:] = []            # Always empty it

    lines = report.split('\n')
    for l in lines[1:]:
      fields = l.split()
      if (len(fields) >= 5):
        fs = fields[0]
        pc = fields[4][:-1]
        ipc = int(pc)
        if (ipc  >= limit ):
          full_messages.append("File system " + fs + " on " + hostname + " is getting full at " + pc + " percent.\n")
    if (len(full_messages) > 0):
      session = smtplib.SMTP(smtpserver)
      smtpresult = session.sendmail(sender, recipients, msgheader + ("").join(full_messages))
      session.quit()
      logging.info(("").join(full_messages))
    else:
      logging.info("Happy state for " + hostname )
    return 1

# Register and serve
server.register_instance(SpaceMon())
server.serve_forever()
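If you have nothing handy to pinch, a bare-bones init script along these lines will do to start with; the paths and the pid-file handling here are my assumptions rather than exactly what we run:

#!/bin/bash
# /etc/init.d/spacemond - minimal start/stop wrapper (sketch, not LSB compliant)
case "$1" in
  start)
    mkdir -p /var/log/spacemon
    nohup /root/bin/spacemond.py >> /var/log/spacemon/daemon.out 2>&1 &
    echo $! > /var/run/spacemond.pid
    ;;
  stop)
    [ -f /var/run/spacemond.pid ] && kill "$(cat /var/run/spacemond.pid)" && rm -f /var/run/spacemond.pid
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac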
And now I get an email if any of my OS partitions is getting too full. It's surprising how small server software can be when you use a framework like XMLRPC. In the old days, I would have needed 200 lines of parsing code and case statements. Goodbye to all that.

Thursday, 3 July 2014

APEL EMI-3 upgrade

Here are some notes from the Manchester upgrade to EMI-3 APEL. The new APEL is much simpler: it is a bunch of python scripts with a couple of key=value configuration files, rather than Java programs with XML files. There is no YAIM to configure it, but since it is much easier to install and configure that doesn't really matter anymore. As an added bonus I found that it's also much faster when it publishes, and it doesn't require any tedious tuning of how many records to publish at a time.

So Manchester's starting point for the upgrade was:
  • EMI-2 APEL node
  • EMI-2 APEL parsers on EMI-3 cream CEs
    • We have 1 batch system per CE so I haven't tried a configuration in which there is only 1 batch system and multiple CEs
  • In a few months we may move to ARC-CE, so the configuration was done mostly manually
I didn't preserve the old local APEL database since all the records are in the central APEL one anyway. So the steps to carry out were the following:
  1. Install a new EMI-3 APEL node
  2. Configure it 
  3. Upgrade the CE parsers to EMI-3 and point them at the new node
  4. Disable the old EMI-2 APEL node and backup its DB
  5. Run the parsers and fill the new APEL node DB
  6. Publish all records for the previous month from the new APEL machine

Install a new EMI-3 APEL node

Installed a vanilla VM with
  • EMI-3 repositories
  • Mysql DB
  • Host certificates
  • ca-policy-egi-core
I did this with puppet: since all the bits and pieces were already there for other types of services, I just put together the profile for this machine. Then I installed the APEL rpms manually:
  • yum install --nogpg emi-release
  • yum install apel-ssm apel-client apel-lib

Configure EMI-3 APEL node

I followed the instructions on the official EMI-3 APEL server guide.

There are no tips here; I've only changed the obvious fields like site_name and password, plus a few others such as the top BDII (because we have a local one) and the location of the host certificate (because ours has a different name).

I didn't install the publisher cron job at this stage because the machine was not yet ready to publish.

Upgrade the CE parsers to EMI-3 and point them at the new node

The CEs, as I said, are already on EMI-3; only the APEL parsers were still EMI-2. So I disabled the EMI-2 cron job
  • rm /etc/cron.d/glite-apel-pbs-parser  
Installed the EMI-3 APEL  parsers rpm
  • yum install apel-parser
Configured the parsers following the instructions in the official EMI-3 APEL parser guide, setting the obvious parameters and also installing the cron job after a trial parsing run.

NOTE: the parser configuration file is, to me, a bit confusing regarding the batch system name. It states:

# Batch system hostname.  This does not need to be a definitive hostname,
# but it should uniquely identify the batch system.
# Example: pbs.gridpp.rl.ac.uk
lrms_server =


It seems you can use any name, though you are of course better off using your batch system server name. We have one batch system for each CE, so the configuration file on each CE contains its own server name; in the database this identifies the records from each CE. I'm not sure what happens with one batch system and several CEs: taken literally, one should put only the batch system name, but then there is no distinction between CEs.

Disable the old EMI-2 APEL node and backup its DB

I just removed the old cron job. The machine is still running but it isn't doing anything while it waits to be decommissioned.

Run the parsers and fill the new APEL node DB

You will need to publish an entire month prior to when you are installing; for us that meant publishing all the June records. Since I didn't want to republish everything we had in the log files, I moved the batch system and blah log files from before mid-May to a backup subdirectory and parsed only the log files for the end of May and for June. The May days were needed because some jobs that finished in the first days of June had started in May, and one wants the complete record. The first jobs to finish in June in Manchester had started on the 25th of May, so you may want to go back a bit with the parsing.

Publish all records for the previous month from the new APEL machine

Finally, on the new machine, now filled with the June records plus some from May, I did a bit of DB clean-up as suggested by the APEL team. If you don't do this step the APEL team will do it centrally before stitching together the old EMI-2 records and the new ones:
  • Delete from JobRecords where EndTime<"2014-06-01";
  • Delete from SuperSummaries where Month="5";
After all this I modified the configuration file (/etc/apel/client.cfg) to publish a gap from the 25th of May until the day before I published, i.e. the 1st of July. I then modified it again to put back "latest". Finally I installed the cron job on the new APEL node too, so that it publishes regularly every day.
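For reference, the gap publishing mentioned above is driven by the unloader section of /etc/apel/client.cfg. If I remember the key names correctly it looks roughly like the snippet below, but treat them as an assumption and check the comments in your own client.cfg:

[unloader]
# publish a fixed window instead of the latest records
interval = gap
gap_start = 2014-05-25
gap_end = 2014-07-01
# once the gap has been published, set this back to:
# interval = latest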

Wednesday, 7 May 2014

Planning for SHA-2

Timeline


The voms servers at CERN will be transferred to new hosts that use the newer SHA-2 certificate standard. The changes are described in this post:

CERN VOMS service will move to new hosts

The picture below lays out the timeline for the change.


Timeline for Cern Voms Server Changes
The picture shows no change to the BNL server, vo.racf.bnl.gov, as none has been announced AFAIK. The changes will be to those servers with the cern.ch domain name.


New VOMS Server Hosts


The VOs associated with these changes are alice, atlas, cms, lhcb and ops. Sites supporting any of those will have to make a plan to update.

The new hosts have been set up already and entered against the related VOs in the ops portal.  The  table below summarises the current set up (ignoring  vo.racf.bnl.gov) as advertised in the operations portal (as of 7th May 2014).


VO      Vomses Port   Old Server          Is admin?   New Server           Is admin?
atlas   15001         lcg-voms.cern.ch    No          lcg-voms2.cern.ch    Yes
atlas   15001         voms.cern.ch        Yes         voms2.cern.ch        Yes
alice   15000         lcg-voms.cern.ch    No          lcg-voms2.cern.ch    Yes
alice   15000         voms.cern.ch        Yes         voms2.cern.ch        Yes
cms     15002         lcg-voms.cern.ch    No          lcg-voms2.cern.ch    Yes
cms     15002         voms.cern.ch        Yes         voms2.cern.ch        Yes
lhcb    15003         lcg-voms.cern.ch    No          lcg-voms2.cern.ch    Yes
lhcb    15003         voms.cern.ch        Yes         voms2.cern.ch        Yes
ops     15009         lcg-voms.cern.ch    No          lcg-voms2.cern.ch    Yes
ops     15009         voms.cern.ch        Yes         voms2.cern.ch        Yes

Notes: The "Is admin?" flag tells whether the server can be used to download the DNs used to create the grid-map file. The port numbers are unaffected by the change.

VOMS Server RPMS

As described in the announcement (see link at the top), a set of rpms have been created, one per WLCG-related VO:

  • wlcg-voms-alice
  • wlcg-voms-atlas
  • wlcg-voms-cms
  • wlcg-voms-lhcb
  • wlcg-voms-ops

The rpms are hosted in the WLCG yum repository. To install, e.g.:

$ cd /etc/yum.repos.d/
$ wget http://linuxsoft.cern.ch/wlcg/wlcg-sl6.repo
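and then install the rpm(s) for the VOs the site supports, for example:

$ yum install wlcg-voms-atlas wlcg-voms-lhcb wlcg-voms-ops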

Local Measures at Liverpool

At Liverpool, the configuration of the following servers will need to be changed:
  • Argus
  • Cream CE
  • DPM SE
  • WN and
    ...
  • UI (eventually)

There will be a gap of some weeks (see the picture) between the deadline for sites to update their services which consume certificates (e.g. Argus, Cream CE, DPM SE, WN etc.) and the deadline for sites to update their UIs. This is to prevent new-style certificates being used before the services that consume them can interpret them.

So, to effect this change, Liverpool will apply the RPMS on our consuming service nodes in early May. As soon as the all-sites deadline has passed (2nd June) Liverpool will update its UIs in a similar manner.

If all goes well, Liverpool will remove reference to the old servers after the final deadline, 1st July. The plan in this case is to effect the change using the traditional yaim/site-info.def/vo.d method as these changes will need to be permanently maintained.
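For illustration, the atlas vo.d entry will eventually end up referencing just the new hosts, along the lines below; the exact DN strings should be taken from the operations portal rather than from here (placeholders shown in angle brackets):

VOMS_SERVERS="'vomss://voms2.cern.ch:8443/voms/atlas?/atlas' 'vomss://lcg-voms2.cern.ch:8443/voms/atlas?/atlas'"
VOMSES="'atlas voms2.cern.ch 15001 <DN of voms2.cern.ch> atlas' 'atlas lcg-voms2.cern.ch 15001 <DN of lcg-voms2.cern.ch> atlas'"
VOMS_CA_DN="'<CA DN for voms2.cern.ch>' '<CA DN for lcg-voms2.cern.ch>'"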

Effects on Approved VOs, VomsSnooper etc.

For tracking purposes, the GridPP Approved VOs document will attempt to remain synchronised with the operations portal, but the VomsSnooper process is asynchronous so there may be discrepancies around the deadlines. Sites are advised to watch out for these race conditions.

Note: while the servers are being changed (i.e. from now until 2nd June for certificate-consuming services, and from 2nd June to 1st July for certificate-producing services, e.g. UIs) there can be no canonical form of the VOMS records, because different sites have their own implementation schedules and may use different settings temporarily, as described above.


Monday, 28 April 2014

Snakey - a mindless way to reboot the cluster

Introduction

I'm fed up with all the book-keeping when I need to reboot or rebuild our cluster.

First I need to set a subset of nodes offline. Then I have to monitor them until some are drained. Then, as soon as any is drained, I have to reboot it by hand, then wait for it to build, then test it and finally put it back online. Then I choose another set (maybe a rack) and go through the same thing over and over until the cluster is done.

So, to cut all that, I've written a pair of perl scripts, called snakey.pl and post_snakey.pl. I run each (at the same time) in a terminal and they do all that work for me, so I can do other things, like Blog Posts. Start snakey.pl first.

Note: all this assumes the use of the test nodes suite written by Rob Fay, at Liverpool.

Part 1 – Snakey

This perl script, called snakey.pl, reads a large list, and puts a selection offline with testnodes. It drains them, and reboots them once drained. For each one that gets booted, another from the list is offlined. In this way, it "snakes" through the selected part of the cluster. Our standard buildtools+puppet+yaim system takes care of the provisioning.

Part 2 – Post Snakey

Another script, post_snakey.pl, checks whether the nodes have been rebooted by snakey and whether they pass the testnodes test script. Any that do are put back on, so they come online. The scripts have some safety locks to stop havoc breaking out; they usually just stop if anything weird is seen.
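For the record, a typical run looks something like this, in two terminals; the node list is just a flat file of hostnames (name here is made up) and -s is the slice size, which defaults to 5:

# terminal 1: drain and reboot, five nodes at a time
./snakey.pl -n rack12_nodes.txt -s 5

# terminal 2: put rebooted, tested nodes back online
./post_snakey.pl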

Part 3 – Source Code

You've seen all the nice blurb, so here's the source code. I've had to fix it up because HTML knackers the "<", ">" and "&" chars - I hope I haven't broken it.

Note: not the cleanest code I've ever written, but it gets the job done.

Good luck!


----- snakey.pl ----------------------
#!/usr/bin/perl

use strict;
use Fcntl ':flock';
use Getopt::Long;

sub initParams();

my %parameter;

initParams();

my @nodesToDo;

open(NODES,"$parameter{'NODES'}") or die("Cannot open file of nodes to reboot, $!\n");
while(<NODES>) {
  chomp($_);
  push(@nodesToDo,$_);
}
close(NODES);

checkOk(@nodesToDo);

my @selection = selectSome($parameter{'SLICE'});
foreach my $n(@selection) {
  print "Putting $n offline\n";
  putOffline($n);
}

while( $#selection > -1) {

  my $drainedNode = '';
  while($drainedNode eq '') {
    sleep( 600 );
    $drainedNode = checkIfOneHasDrained(@selection);
  }
 
  @selection = remove($drainedNode,@selection);

  print("Rebooting $drainedNode\n");
  my $status = rebootNode($drainedNode);
  print("status -- $status\n");

  my @nextOne = selectSome(1);
  if ($#nextOne == 0) {
    my $nextOne = $nextOne[0];
    print "Putting $nextOne offline\n";
    putOffline($nextOne);
    push(@selection,$nextOne);
  }
}
#-----------------------------------------
sub putOffline() {
  my $node = shift();
  open(TN,"/root/scripts/testnodes-exemptions.txt") or die("Could not open testnodes.exemptions.txt, $!\n");
  while(<TN>) {
    my $l = $_;
    chomp($l);
    $l =~ s/#.*//;
    $l =~ s/\s*//g;
    if ($node =~ /^$l$/) {
      print ("Node $node is already in testnodes-exemptions.txt\n");
      return;
    }
  }
  close(TN);
  open(TN,">>/root/scripts/testnodes-exemptions.txt") or die("Could not open testnodes.exemptions.txt, $!\n");
  flock(TN, LOCK_EX) or die "Could not lock /root/scripts/testnodes-exemptions.txt, $!";
  print (TN "$node # snakey.pl put this offline " . time() . "\n");
  close(TN) or die "Could not write /root/scripts/testnodes-exemptions.txt, $!";
}
#-----------------------------------------
sub remove() {
  my $drained = shift();
  my @poolOfNodes = @_;

  my @newSelection = ();
  foreach my $n (@poolOfNodes) {
    if ($n !~ /$drained/) {
      push(@newSelection,$n);
    }
  }
  die("None removed\n") unless($#newSelection == ($#poolOfNodes -1));
  return @newSelection;
}

#-----------------------------------------
sub checkIfOneHasDrained(@) {
  my @nodesToCheck = @_;
  foreach my $n (@nodesToCheck) {
    my $hadReport = 0;
    my $state = "";
    my $jobCount = 0;

    open(PBSNODES,"pbsnodes $n|");
    while(<PBSNODES>) {
      my $l = $_;
      chomp($l);
      if ($l =~ /state = (.*)/) {
        $state = $1;
        $hadReport = 1;
      }
      if (/jobs = (.*)/) {
        my $jobs = $1;
        my @jobs = split(/,/,$jobs);
        $jobCount = $#jobs + 1;
      }
    }
    close(PBSNODES);
   
    print("Result of check on $n: hadReport - $hadReport, state - $state, jobCount - $jobCount\n");
    if (($hadReport) && ($state eq 'offline') && ($jobCount ==0)) {
      return $n;
    }
  }
  return "";
}

#-----------------------------------------
sub selectSome($) {
  my $max = shift;
  my @some = ();
  for (my $ii = 0; $ii < $max; $ii++) {
    if (defined($nodesToDo[0])) {
      push(@some,shift(@nodesToDo));
    }
  }
  return @some;
}

#-----------------------------------------
sub checkOk(){
  my @nodes = @_;
 
  foreach my $n (@nodes) {
    my $actualNode = 0;
    my $state      = "";
    open(PBSNODES,"pbsnodes $n|") or die("Could not run pbsnodes, $!\n");
    while(<PBSNODES>) {
      if (/state = (.*)/) {
        $state = $1;
        $actualNode = 1;
      }
    }
    close(PBSNODES);
    if (! $actualNode) {
      die("Node $n was not an actual one!\n");
    }
    if ($state =~ /offline/) {
      die ("Node $n was already offline!\n");
    }
  }
  return @nodes;
}

#-----------------------------------------
sub initParams() {

  GetOptions ('h|help'       =>   \$parameter{'HELP'},
              'n:s'          =>   \$parameter{'NODES'} ,
              's:i'          =>   \$parameter{'SLICE'} ,
              );

  if (defined($parameter{'HELP'})) {
    print <<TEXT;
Abstract: A tool to drain and boot a bunch of nodes

  -h  --help                  Prints this help page
  -n                 nodes    File of nodes to boot
  -s                 slice    Size of slice to offline at once

TEXT
    exit(0);
  }

  if (!defined($parameter{'SLICE'})) {
    $parameter{'SLICE'} = 5;
  }

  if (!defined($parameter{'NODES'})) {
    die("Please give a file of nodes to reboot\n");
  }

  if (! -s  $parameter{'NODES'} ) {
    die("Please give a real file of nodes to reboot\n");
  }
}
#-----------------------------------------
sub rebootNode($) {
  my $nodeToBoot = shift();
  my $nodeToCheck = $nodeToBoot;
  my $pbsnodesWorked = 0;
  my $hasJobs        = 0;
  open(PBSNODES,"pbsnodes $nodeToCheck|");
  while(<PBSNODES>)  {
    if (/state =/) {
      $pbsnodesWorked = 1;
    }
    if (/^\s*jobs = /) {
      $hasJobs = 1;
    }
  }
  close(PBSNODES);
  if (! $pbsnodesWorked) { return 0; }
  if (  $hasJobs       ) { return 0; }

  open(REBOOT,"ssh -o StrictHostKeyChecking=no -o BatchMode=yes -o ConnectTimeout=10 $nodeToBoot reboot|");
  while(<REBOOT>) {
    print;
  }
  return 1;
}

----- post-snakey.pl ----------------------

#!/usr/bin/perl

use strict;
use Fcntl ':flock';
use Getopt::Long;

my %offlineTimes;

while ( 1 ) {
  %offlineTimes = getOfflineTimes();
  my @a=keys(%offlineTimes);
  my $count = $#a;

  if ($count == -1 ) {
    print("No work to do\n");
    exit(0);
  }
 
  foreach my $n (keys(%offlineTimes)) {
 
    my $uptime = -1;
    open(B,"ssh -o ConnectTimeout=2 -o BatchMode=yes $n cat /proc/uptime 2>&1|");
    while(<B>) {
      if (/([0-9\.]+)\s+[0-9\.]+/) {
        $uptime = $1;
      }
    }
    close(B);
    if ($uptime == -1) {
      print("Refusing to remove $n because it may not have been rebooted\n");
    }
    else {
      my $offlineTime = $offlineTimes{$n};
      my $timeNow = time();
      if ($timeNow - $uptime <= $offlineTime ) {
        print("Refusing to remove $n. ");
        printf("Last reboot - %6.3f  days ago. ", $uptime / 24 / 60 /60);
        printf("Offlined    - %6.3f  days ago.\n", ($timeNow - $offlineTime)  / 24 / 60 /60);
      }
      else {
        print("$n has been rebooted\n");
        open(B,"ssh -o ConnectTimeout=2 -o BatchMode=yes $n ./testnode.sh|");
        while(<B>) { }
        close(B);
        my $status = $? >> 8;
        if ($status == 0) {
          print("$n passes testnode.sh; will remove from exemptions\n");
          removeFromExemptions($n);
        }
        else {
          print("$n is not passing testnode.sh - $status\n");
        }
      }
    }
  }
  sleep 567;
}

#-----------------------------------------
sub getOfflineTimes() {
  my %offlineTimes = ();
  open(TN,"/root/scripts/testnodes-exemptions.txt") or die("Could not open testnodes-exemptions.txt, $!\n");
  while(<TN>) {
    if (/(\S+)\s+\# snakey.pl put this offline (\d+)/) {
      $offlineTimes{$1} = $2;
    }
  }
  close(TN);
  return %offlineTimes;
}

#-----------------------------------------
sub removeFromExemptions($) {

  my $node = shift();

  open(TN,"/root/scripts/testnodes-exemptions.txt") or die("Could not open testnodes-exemptions.txt, $!\n");
  my @lines = <TN>;
  close( TN );
  open(TN,">/root/scripts/testnodes-exemptions.txt") or die("Could not open testnodes.exemptions.txt, $!\n");
  flock(TN, LOCK_EX) or die "Could not lock /root/scripts/testnodes-exemptions.txt, $!";
  foreach my $line ( @lines ) {
    print TN $line unless ( $line =~ m/$node/ );
  }
  close(TN) or die "Could not write /root/scripts/testnodes-exemptions.txt, $!";
}

Tuesday, 15 April 2014

Kernel Problems at Liverpool

Introduction

Liverpool recently updated its cluster to SL6. In doing so, a problem occurred whereby the kernel would experience lockups during normal operations. The signs are unresponsiveness, drop-outs in Ganglia and (later) many "task...blocked  for 120 seconds" msgs in /var/log/m.. and dmesg.


Description

Kernels in the range 2.6.32-431* exhibited a type of deadlock when run on certain hardware with BIOS dated after 8th March 2010.

This problem occurred on Supermicro hardware, with these main boards:
  • X8DTT-H
  • X9DRT

Notes:

1) No hardware with BIOS dated 8th March 2010 or before showed this defect, even on the same board type.

2) The oldest kernel of the 2.6.32-358 range is solid. This is corroborated by operational experience with the 358 range.

3) All current kernels in the 2.6.32-431 range exhibited the problem on our newest hardware, and a few nodes of the older hardware that had had unusual BIOS updates.

Testing

The lock-ups are hard to reproduce, but after a great deal of trial and error, a ~90% effective predictor was found.

The procedure is to:

  • Build the system completely new in the usual way and 
  • When yaim gets to "config_users", use a script (stress.sh) to run 36 threads of gzip and one of iozone; a sketch of such a script is given below.

On a susceptible node, this is reasonably certain to make it lock up after a minute. The signs are unresponsiveness and (later) "task...blocked  for 120 seconds" msgs in /var/log/m.. and dmesg.
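The original stress.sh isn't reproduced here, but a rough sketch of the idea (36 gzip workers plus one iozone run; the exact commands and flags are my assumptions) would be:

#!/bin/bash
# stress.sh (sketch): 36 CPU/pipe-bound gzip loops plus one iozone pass
for i in $(seq 1 36); do
  ( while true; do dd if=/dev/urandom bs=1M count=64 2>/dev/null | gzip > /dev/null; done ) &
done
iozone -a -f /tmp/iozone.tmp > /var/tmp/iozone.out 2>&1
wait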

I  observed that if the procedure is not followed "exactly", it is unreliable as a predictor. In particular, if you stop Yaim and try again, the predictor is useless.

To test that, I isolated the config_users script from Yaim, and ran it separately along with the stress.sh script. Result: useless - no lock-ups were seen.

Note: This result was rather unexpected because the isolated config_users.sh script works in the same way as the original.





Unsuccessful Theories

A great many theories were tested and rejected or not pursued further (APIC problems, disk problems, BIOS differences, various kernels, examination of kernel logs, much googling, etc. etc.) Eventually, a seemingly successful theory was stumbled upon, which I describe below.

The Successful Theory

All our nodes had unusual vm settings:

# grep dirty /etc/sysctl.conf
vm.dirty_background_ratio = 100
vm.dirty_expire_centisecs = 1800000
vm.dirty_ratio = 100


These custom settings facilitate the storage of atlas "short files" in RAM. Basically, they force files to remain off disk for a long time, allowing very fast access.

The modifications had been tested almost exhaustively for several years on earlier kernels - but perhaps some change (or latent bug?) in the kernel had invalidated them somehow.

We came up with the idea that the issue originates in the memory operations that occur prior to Yaim/config_users. This would explain why anything but the exact activity created by the procedure might well not trigger the defect. We thought this could  tally with the idea of the ATLAS "short file" modifications in sysctl.conf. The theory is that these mods set up the problem during the memory/read/write operations (i.e. the asynchronous OS loading and flushing of the page cache).

To test this, I used the predictor on susceptible nodes, but without applying the ATLAS "short file" patch. Default vm settings were adopted instead.

Result

Very satisfying at last - absolutely no sign of the defect. As the ATLAS "short file" patch is not very beneficial given the current data traffic, we have decided to go back to the default "vm.dirty" settings and monitor the situation carefully.



Wednesday, 26 February 2014

Central Argus Banning at Liverpool

Introduction

Liverpool uses an ARGUS server, hepgrid9.ph.liv.ac.uk,  for user authentication from the CEs and WNs. A requirement came down from above to implement central banning and this is how we went about it. Most of this came from Ewan's TB_SUPPORT email (title: NGI Argus requests for NGI_UK) and from this description here: 
 
http://wiki.nikhef.nl/grid/Argus_Global_Banning_Setup_Overview 

Central Banning Architecture



The ban policies flow from the central WLCG server through the NGI one and down to the site. This is a feature of ARGUS.

Setup at Liverpool


When we build (or change) our ARGUS server, we use a script (argus.pol.sh) to load our argus policies from a file (argus.pol). The script looks like this now that we've added central banning:

#!/bin/bash
/usr/bin/pap-admin rap
/usr/bin/pap-admin apf /root/scripts/argus.pol

pap-admin add-pap ngi argusngi.gridpp.rl.ac.uk "/C=UK/O=eScience/OU=CLRC/L=RAL/CN=argusngi.gridpp.rl.ac.uk"
pap-admin enable-pap ngi
pap-admin set-paps-order ngi default
pap-admin set-polling-interval 3600

/etc/init.d/argus-pdp reloadpolicy
/etc/init.d/argus-pepd clearcache
touch /root/scripts/done_argus.pol.sh
 

The first few lines just load our standard site policies. The last bit flushes some buffers. The middle bit is the part you need.

Basically, it adds policies from the NGI ARGUS server. We've also reduced the polling interval. When you run the script, you'll connect the local ARGUS server to the NGI one, and it will periodically download the remote (central) banning policies.

Note: Ewan thinks the caching delay is too much - it was 4 hours. So we changed /etc/argus/pdp/pdp.ini, setting "retentionInterval = 21", i.e. 21 minutes.

After running the script, it's best to restart the Java daemons.
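In practice that boils down to something like this on the ARGUS box (assuming the retentionInterval line is already present in pdp.ini; uncomment or add it first if not):

sed -i 's/^retentionInterval.*/retentionInterval = 21/' /etc/argus/pdp/pdp.ini
/etc/init.d/argus-pdp restart
/etc/init.d/argus-pepd restart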

Testing

It's best to tell Ewan and Orlin about this as they can send tests over. To check if your site "looks" OK, try this:

pap-admin lp --all

And you should see the "remote" policies, e.g.

ngi (argusngi.gridpp.rl.ac.uk:8150):

resource ".*" BLAH BLAH BLAH



Friday, 3 May 2013

A thing of beauty: Digital R81

We needed to make space for a couple of new racks and decided to get rid of our 'workbench'. The workbench consisted of an old Digital VAX system which was mostly stripped of its innards to leave a few sturdy steel frames. Here are some nostalgic pics of the last remaining unit being gutted. Note the gorgeous circuit boards, so easily accessible. And yes, that last one is a 500MB hard drive (so I'm told). The motor seems more suited to a washing machine. They don't make 'em like they used to.

 

Monday, 24 September 2012

Manchester network improvements in Graphs

As I posted here and here we have upgraded the network infrastructure within the Manchester Tier2. Below are some of the measured benefits of this upgrade so far.

Here is the improvement in the outgoing traffic in the atlas sonar tests between Manchester and BNL after we upgraded the cisco blades and replaced the rack switches with the 10G ones.

Here instead is the throughput improvement after I enabled the 10Gbps interface on the perfsonar machine. The test case is Oxford, which also has the 10Gbps interfaces enabled.


And here are more general rates with different sites. The 10Gbps improvement is evident with sites that have enabled it.


The perfsonar tests have also helped to debug the poor atlas FTS rates the UK had with FZK (https://ggus.eu/ws/ticket_info.php?ticket=84008), in particular for Manchester (and Glasgow). We had already tried to investigate the problem last year with iperf within atlas, without much success because the measurements were not taken systematically. This year the problem was finally pinned on the FZK firewall, and the improvement given by bypassing it is shown below.


This is also reflected in the improved rates in the atlas sonar tests between Manchester and FZK, since the data server subnets now bypass the firewall as well.


Finally, here is the increased throughput in data distribution to/from other sites as an atlas T2D. August rates were down due to a combination of problems with the storage, but there has been a growing trend since the rack switches and the cisco blades were upgraded.


Thursday, 9 August 2012

10GBE Network cards installation in Manchester

This is a collection of recipes I used to install the 10GBE cards. As I said in a previous post we chose to go 10GBASE-T so we bought X520-T2. They use the same chipset as the X520-DA2 so many things are in common.

The new DELL R610 and C6100 were delivered with the cards already installed, although, because the DA2 and T2 share the same chipset, the C6100 were delivered with the wrong connectors, so we are now waiting for a replacement. For the old Viglen WNs and storage we bought additional cards that have to be inserted one by one.

I started the installation process with the R610 because a) they had the cards and b) the perfsonar machines are R610. The aim is to use these cards as primaries and kickstart from them. By default PXE booting is not enabled, so one has to get bootutil from the Intel site. What one downloads is, for some reason, a Windows executable, but once it is unpacked there are directories for other operating systems. The easiest thing to do is what Andrew has done: zip the unpacked directory and use bootutil from the machine itself, without fussing around with USBs or boot disks. That said, it needs the kernel source to compile, and you need to make sure you install the same version as the running kernel.

yum install kernel-devel(-running-kernel-version)
unzip APPS.zip
cd APPS/BootUtils/Linux_x86/
chmod 755 ./install
./install
./bootutil64e -BOOTENABLE=pxe -ALL
./bootutil64e  -UP=Combo -FILE=../BootIMG.FLB -ALL

The first bootutil command enables PXE; the second updates the firmware.
After this you can reboot and enter the BIOS to rearrange the order of the network devices to boot from. When this is done you can put the 10GBE interface MAC address in the DHCP configuration and reinstall from there.

At kickstart time there are some problems with the machine changing the order of the cards; you can solve that using ipappend 2 and ksdevice=bootif in the pxelinux.cfg files, as suggested in the RH docs. Thanks to Ewan for pointing that out.
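As a made-up example, the relevant pxelinux.cfg entry ends up looking something like this; ipappend 2 makes pxelinux pass BOOTIF=<mac> on the kernel command line, and ksdevice=bootif tells anaconda to kickstart over that interface (kernel, initrd and ks paths below are placeholders):

label wn-install
  kernel sl6/vmlinuz
  append initrd=sl6/initrd.img ks=http://ks.example.org/ks/wn.cfg ksdevice=bootif
  ipappend 2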

Still the machine might not come back up with the interface working. There might be two problems here:

1) X520-T2 interfaces take longer to wake up than their little 1GBE sisters. It is necessary to insert a delay after the /sbin/ip command in the network scripts. To do this I didn't have to hack anything; I could just set

LINKDELAY=10

in the ifcfg-eth* configuration files and it worked.

2) It is not guaranteed that the 10GBE interface comes up as eth0. There are several ways to deal with this.

One is to make sure HWADDR in ifcfg-eth0 is assigned the MAC address of the card the administrator wants, and not what the system decides. It can be done at kickstart time, but this might mean having a kickstart file for each machine, which we are trying to get away from.

Dan and Chris suggested this might be corrected with udev. The recipe they gave me was this:

cat /etc/udev/rules.d/70-persistent-net.rules
KERNEL=="eth*", ID=="0000:01:00.0", NAME="eth0"
KERNEL=="eth*", ID=="0000:01:00.1", NAME="eth1"
KERNEL=="eth*", ID=="0000:04:00.0", NAME="eth2"
KERNEL=="eth*", ID=="0000:04:00.1", NAME="eth3"


It uses the PCI device ID value, which is the same for the same machine types (R610, C6100...). You can get the ID values using lspci | grep Eth. Not essential, but if lspci returns something like Unknown device 151c (rev01) in the description, it is just the PCI database that is not up to date; use update-pciids to refresh it. There are other recipes around if you don't like this one, but this one simplifies the maintenance of the interface naming scheme a lot.

The udev recipe doesn't work if HWADDR is set in the ifcfg-eth* files. If it is, you need to remove it to make udev work. A quick way to do this in every file is:

sed -i -r '/^HWADDR.*$/d' ifcfg-eth*

in the post kickstart and then install the udev file.

10GBE cards might need different TCP tuning in /etc/sysctl.conf. For now I took the perfsonar machine settings, which are similar to something already discussed a long time ago.

net.core.rmem_max = 33554432
net.core.wmem_max = 33554432
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 87380 16777216
net.core.netdev_max_backlog = 30000
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_congestion_control = htcp


The effects of moving to 10GBE can be seen very well in the perfsonar tests.

Friday, 20 July 2012

Jobs with memory leaks containment

This week some sites suffered from extremely memory-hungry jobs using up to 16GB of memory and killing the nodes. These were most likely due to memory leaks. The user cancelled all of them before he was even contacted, but not before he created some annoyance.

We have had some discussion about how to fix this, and atlas so far has asked us not to limit memory because their jobs use, for brief periods of time, more than what is officially requested. And this is true; most of their jobs do this in fact. According to the logs the production jobs use up to ~3.5GB mem and slightly less than 5GB vmem. See the plot below for one random day (other days are similar).

To avoid killing everything, but still put a barrier against the memory leaks, what I'm going to do in Manchester is set a limit for mem of 4GB and a limit for vmem of 5GB.

If you are worried about memory leaks you might want to go through a similar check. If you are not monitoring your memory consumption on a per job basis you can parse your logs. For PBS I used this command to produce the plot above

grep atlprd /var/spool/pbs/server_priv/accounting/20120716| awk '{ print $17, $19, $20}'| grep status=0|cut -f3,4 -d'='| sed 's/resources_used.vmem=//'|sort -n|sed 's/kb//g'
 

The numbers are already sorted in numerical order, so the last one is the highest (mem, vmem) a job has used that day. atlprd is the atlas production group, which you can replace with other groups. Atlas user jobs have, up to a point, similar usage, and then every day you might find a handful of crazy numbers like 85GB vmem and 40GB mem. These are the jobs we aim to kill.

I thought the batch system was the simplest way, because it is only two commands in PBS, but after a lot of reading and a week of testing it turns out it is not possible to over-allocate memory without affecting the scheduling and ending up with fewer jobs on the nodes. This is what I found out:

There are various memory parameters that can be set in PBS:

(p)vmem: virtual memory. PBS doesn't interpret vmem as the almost unlimited address space. If you set this value it will interpret it for scheduling purposes as memory+swap available. It might be different with later versions but that's what happens in torque 2.3.6.
(p)mem: physical memory: that's your RAM.

When there is a p in front it means per process rather than per job.

If you set them what happens is as follows:

ALL: if a job arrives without memory settings, the batch system will assign these limits as the allocated memory for the job, not just as a limit the job must not exceed.
ALL: if a job arrives with memory resource settings that exceed the limits, it will be rejected.
(p)vmem,pmem: if a job exceeds the settings at run time it will be killed as these parameters set limits at OS level.
mem: if a job exceeds this limit at run time it will not get killed. This is due to a change in the libraries apparently.

To check how the different parameters affect the jobs, you can submit this csh command directly to PBS and play with the parameters:

echo 'csh -c limit' | qsub -l vmem=5000000kb,pmem=1GB,mem=2GB,nodes=1:ppn=2

If you want to set these parameters you have to do the following

qmgr
qmgr: set queue long resources_max.vmem = 5gb
qmgr: set queue long resources_max.mem = 4gb
qmgr: set queue long resources_max.pmem = 4gb

These settings will affect the whole queue so if you are worried about other VOs you might want to check what sort of memory usage they have. Although I think only CMS might have a similar usage. I know for sure Lhcb uses less. And as said above this will affect the scheduling.

Update 02/08/2012

RAL and Nikhef use a maui parameter to correct the over-allocation problem:

NODEMEMOVERCOMMITFACTOR         1.5

This will cause maui to allocate up to 1.5 times the memory there is on the nodes. So if a machine has 2GB of memory, a 1.5 factor allows 3GB to be allocated. The same goes for the other memory parameters described above. The factor can of course be tailored to your site.

On the atlas side there is a memory parameter that can be set in panda. It sets a ulimit on vmem on a per-process basis in the panda wrapper. It didn't seem to have an effect on the memory seen by the batch system, but that might be because forked processes are double-counted by PBS, which opens a whole different can of worms.

Thursday, 5 April 2012

The Big Upgrade in pictures

New Cisco blades, engines and power supplies

DELLs boxes among which new switches
Aerial view of the old cabling
Frontal view of the mess
Cables unplugged from the cisco
Cisco old blades with services racks still connected
New cat6a cisco cabling aerial view nice and tidy
Frontal view of the new cisco blades and cabling nice and tidy
Old and new rack switches front view
Old and new rack switches rear view
Emptying and reorganising the racks
Empty racks ready to be filled with new machines
Old DELLs cemetery
Old cables cemetery. All the cat5e cables going under the floor from the racks to the cisco, half of the cables from the rack switches to the machines, and all the patch cables in front of the cisco shown above have gone.
All the racks but two now have the new switches, but the machines are still connected with cat5e cables. Upgrading the network cards will be done in Phase Two, one rack at a time, to minimise service disruption.

The downtime lasted 6 days. Everybody who was involved did a great job and the choice of 10GBASE-T was a good one because the ports auto-negotiation is allowing us to run at 3 different speeds on the same switches: PDU 100Mbps, old WN and storage at 1Gbps, and the connection with the cisco is 10Gbps. We also kept one of the old cisco blades for connections that don't require 10Gbps such as the out-of-band management cables plus two racks of servers that will be upgraded at a later stage are still connected at 1Gbps to the cisco. And we finished perfectly in time for the start of data taking (and Easter). :)

Saturday, 31 March 2012

So long and thanks for all the fish


In 2010 we had already decommissioned half of the original, mythical 2000-CPU (1800 for us) EM64T Dell cluster that allowed us to be 4th of the top 10 countries in EGEE in 2007.

This year we are decommissioning the last 430 machines that served us so well for 6 years and 2 months. So... so long and thanks for all the fish.

Saturday, 14 January 2012

DPM database file systems synchronization

The synchronisation of the DPM database with the data servers' file systems has been a long-standing issue. Last week we had a crash that made it more imperative to check all the files, and I eventually wrote a bash script that makes use of the GridPP DPM admin tools. I don't think this should be the final version, but I'm quicker with bash than with python and therefore I started with that. Hopefully later in the year I'll have more time to write a cleaner version in python, based on this one, that can be inserted into the admin tools. It does the following:

1) Create a list of files that are in the DB but not on disk
2) Create a list of files that are on disk but not in the DB
3) Create a list of SURLs from the list of files in the DB but not on disk to declare lost (this is mostly for atlas but could be used by LFC administrators for other VOs)
4) If not in dry run mode proceed to delete the orphan files and the orphan entries in the DB.
5) Print stats of how many files were in either list.

Although I put in a few protections, this script should be run with care, and unless it is in dry run mode it shouldn't be run automatically AT ALL. However in dry run mode it will tell you how many files are lost, and that is a good metric to monitor regularly as well as when there is a big crash.

If you want to run it, it has to run on the data servers, where there is access to the file system. As it is now, it requires a modified version of /opt/lcg/etc/DPMINFO that points to the head node rather than localhost, because one of the admin tools used does a direct mysql query. For the same reason it also requires the dpminfo user to have mysql select privileges from the data servers. This is the part that would really benefit from a rewrite in python, perhaps using a proper API as the other tool does. I also had to heavily parse the output of the tools, which weren't created exactly for this purpose, and this could also be avoided in a python script. There are no options, but all the variables that could be options to customise the script with your local settings (head node, fs mount point, dry_run) are easily found at the top.
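For the mysql privileges part, something along these lines run against the head node's mysql does it; the data server hostname is a placeholder, dpm_db and cns_db are the usual DPM/DPNS databases, and the password should match whatever the admin tools are configured to use:

mysql -u root -p -e "GRANT SELECT ON dpm_db.* TO 'dpminfo'@'dataserver01.example.org' IDENTIFIED BY 'thedpminfopassword';"
mysql -u root -p -e "GRANT SELECT ON cns_db.* TO 'dpminfo'@'dataserver01.example.org' IDENTIFIED BY 'thedpminfopassword';"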

Creating the lists takes very little time, no more than 3 minutes on my system, but it depends mostly on how busy your head node is.

If you want to do a cleanup instead, the time is proportional to how many files have been lost and can be several hours, since it does one DB operation per file. The time to delete the orphan files also depends on how many there are and how big they are, but it should take less than the DB cleanup.

The script is here: http://www.sysadmin.hep.ac.uk/svn/fabric-management/dpm/dpm-synchronise-disk-db.sh

Wednesday, 30 November 2011

DPM upgrade 1.7.4 -> 1.8.2 (glite 3.2)

Last week I upgraded our DPM installation. It was a major change because I upgraded not only the DPM version but also the hardware and the backend mysql version.

I didn't take any measurements this time before and after. I knew that becoming an alpha site in atlas was taking its toll on the old hardware, and many of the timeouts were from gridftp, but there had been a reappearance of the mysql ones I talked about in previous posts, to the point that even restarting the service was hard.

[ ~]# service mysqld restart
Timeout error occurred trying to stop MySQL Daemon.

Stopping MySQL: [FAILED]

Timeout error occurred trying to start MySQL Daemon.


So I decided that the situation had become unsustainable and it was time to move to better hardware and software versions.

* Hardware: 2 cpu, 4GB mem, 2x250 GB raid1 -> 4 cores (HT on = 8 job slots), 24GB mem, 2x2TB raid1

There is no great mystery here: the old machine was OK when we had limited access, but the recent load was really too much for it even with all the tuning. Suspected bad blocks on the disks are possible, but no red LEDs nor hardware errors were reported by the machine.

* Mysql: 5.0.77 -> 5.5.10

Why mysql 5.5? Because InnoDB is the default engine and performance and instrumentation have been improved, on top of other things that we might actually start to use. A good blog article about the reasons to move is this one: 5 good reasons to upgrade to mysql 5.5.

MySQL 5.5 is not in EPEL yet, but I found this CentOS community site that has the rpms and the instructions to install them.

After the installation I also optimised the database, partly with what I had already done in July and partly by running a handy script, mysqltuner.pl. The latter helps with variables you might not even know about, and even if you know them it tells you if they are too small. You need to be patient and let a few hours pass before running it again.
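Running mysqltuner is just a matter of fetching the script and running it against the local server; the URL below is from memory, so adjust it if the script has moved:

wget http://mysqltuner.pl/ -O mysqltuner.pl
perl mysqltuner.pl    # asks for mysql admin credentials if needed and prints its suggestions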

* DPM: 1.7.4 -> 1.8.2

Why DPM 1.8.2 from glite 3.2? I would have gone for the UMD release, or even the EMI one, but glite 3.2 was moved to production earlier than those, and since I had been waiting for this release since at least April I didn't think twice when I saw the escape route. It was really good timing too, as it happened when I really couldn't postpone an upgrade any more. You can find more info in the release notes. Among other reasons to upgrade: srmv2.2 in 1.7.4 has a memory leak which wasn't noticeable while the load was contained, but for us it exploded in October and is the reason I had to restart it every two days over the past few weeks.

Below the steps I took to reinstall the head node

On the old head node

* Set the site in downtime, drain the queues and kill all the remaining jobs.

* Turn off all the dpm and bdii services on the old head node

* Make a dump of the current database for backup

mysqldump -C -Q -u root -p -B dpm_db cns_db > dpm.sql-20111125.gz

* Download dpm-drop-requests-tables.sql supplied by Jean Philippe last July

wget http://www.sysadmin.hep.ac.uk/svn/fabric-management/dpm/dpm-drop-requests-tables.sql

* Drop the requests tables. This step is really useful to avoid painful reload times, as I said in this other post about DPM optimisation, and because it drastically reduces the size of ibdata1 when you reload, which also has benefits (my ibdata1 was reduced from 26GB to 1.7GB). Still, you need to plan because it might take a few hours depending on the system. On my old hardware it took around 7 hours.

mysql -p < dpm-drop-requests-tables.sql

* Dump reduced version of the database

mysqldump -C -Q -u root -p -B dpm_db cns_db > dpm.sql-20111125-v2.gz


* Copy both to a WEB server where they can be downloaded from in a later stage.

* Update the local repository for DPM head node and DPM disk servers. Since it is still glite I just had to rsync the latest mirror to the static area.

On the new head node
* Install the new machines with a DPM head node profile. This was again easy: since it is still glite, no changes were required in cfengine.

* Most of the following is not standard, and I put it in a script. If you have problems with user IDs created by the avahi packages you can uninstall them with yum, removing all the dependencies, and let them be reinstalled by the bdii dependency chain. It should also work to uninstall them with rpm -e --nodeps, which leaves redhat-lsb (which is what the bdii depends on) untouched, but I haven't tried this last method. Here are the commands I executed:

# Get the dpm DB file
rm -rf dpm.sql-20111125-v2.gz*
wget http://ks.tier2.hep.manchester.ac.uk/T2/tmp/dpm.sql-20111125-v2.gz


# Install mysql5.5
rpm -Uvh http://repo.webtatic.com/yum/centos/5/latest.rpm
yum -y remove libmysqlclient5 mysql mysql-*
yum -y clean all

yum -y install mysql55 mysql55-server libmysqlclient5 --enablerepo=webtatic

service mysql stop

rm -rf /var/lib/mysql/*

# Get the local my.cnf
cfagent -vq

service mysqld start


# Install the DPM rpms
yum -y remove cups avahi avahi-compat-libdns_sd avahi-glib
yum -y install glite-SE_dpm_mysql lcg-CA


# Modify sql scripts for mysql5.5

cd /opt/lcg/share/DPM/
for a in create_dp*.sql; do sed -i.old 's/TYPE/ENGINE/g' $a;done
grep ENGINE *


# Run YAIM and upload old DB

cd

/opt/glite/yaim/bin/yaim -c -s /opt/glite/yaim/etc/site-info.def -n glite-SE_dpm_mysql


mysql -u root -p -C < /root/dpm.sql-20111125-v2.gz


# NECESSARY FOR THE FINAL UPDATES

/opt/glite/yaim/bin/yaim -c -s /opt/glite/yaim/etc/site-info.def -n glite-SE_dpm_mysql


* You will need to install the dpm-contrib-admintool rpm separately because it is not in the glite repository; it might be in the EMI one. Last time I heard it had made it to ETICS. If you can't find it, there's still the sysadmin repo version and related notes on the GridPP wiki (Sam or Wahid are welcome to leave an update on this one).

* To upgrade the disk servers I just updated the repository, upgraded the rpms and reran yaim.

Friday, 9 September 2011

cvmfs upgrade to 2.0.3

Last week I upgraded cvmfs on all the WNs to cvmfs-2.0.3. The upgrade for us required two steps.

1) Change of repository: since Manchester was the first to use the new atlas setup we were pointing to the CERN repository. The new setup has now become standard, so I just had to remove the override variable CVMFS_SERVER_URL from atlas.cern.ch.local. The file is distributed by cfengine, so I just changed it in cvs.

2) Rpms upgrade: I had some initial difficulties because I was following the instructions for an atlas T3 - which normally also work for a T2 - which suggested installing the cvmfs-auto-setup rpm. This rpm runs service cvmfs restartautofs, and the instructions also suggested rerunning that manually. On busy machines this causes the repositories to disappear and requires a service cvmfs restartclean, which wipes the cache and is not really recommended in production. In reality none of this is necessary, and a simple

yum -y update cvmfs cvmfs-init-scripts

is sufficient. I could just add the rpm versions in cfengine and that was enough. The change from one version to another happens at the first unmount; forcing this with a restartautofs is counterproductive (thanks to Ian for pointing this out).

Next week there should be a bug fix version that will take care of slow mount and some slow client tools routines on busy machines.

http://savannah.cern.ch/bugs/?86349
But since the upgrade procedure is so easy and the corrupted files problem

http://savannah.cern.ch/support/?122564

is fixed in cvmfs >2.0.2 I decided to upgrade anyway on Wednesday to avoid further errors in atlas and possibly lhcb.

NOTE: Of course I tested each step on a few nodes to check everything worked before rolling out with cfengine on all nodes. It's always good practice not to follow recipes blindly!

Wednesday, 6 July 2011

cvmfs installation

Last week, after a few months' delay, I finally installed cvmfs. Since 2002-2003 I have been advocating the use of a shared file system for the input sandbox with locally cached data. AFS was successfully used in grid and non-grid environments by BaBar users and is still used by local non-LHC users in Manchester for small work, so I'm pretty happy that a lightweight caching file system is now available for more robust traffic. This is a really good moment to install cvmfs, for two reasons:

1) Lhcb asked for it too.
2) Atlas has moved its condb files from the HOTDISK space token to cvmfs.

And it should drastically reduce errors from both NFS and SE load.

These are my installation notes:

* Install cernvm.repo: you can find it here, or you can copy the rpms into your local repository and install from there. I distribute the file with cfengine, but otherwise:

cd /etc/yum.repos.d/
wget http://cvmrepo.web.cern.ch/cvmrepo/yum/cernvm.repo


* Install the gpg key: yum didn't like the key and was giving errors. I don't know if the problem is only mine (possible); I've told the developers anyway, and in the meantime I had to remove the key check from the repo file and trust the rpms. But if you want to try it, it might work for you:

cd /etc/pki/rpm-gpg/
wget http://cvmrepo.web.cern.ch/cvmrepo/yum/RPM-GPG-KEY-CernVM


* Install the rpms. In the documents there is an additional rpm, cvmfs-auto-setup, which is not really necessary and was also causing problems due to some migration lines devised for upgrades. Other than that, it just runs a setup and a restart command that can be run by your configuration tool of choice. S. Traylen also suggested installing SL_no_colorls to avoid ls /cvmfs mounting all the file systems; that's why it's in the list.

yum install -y fuse cvmfs-keys cvmfs cvmfs-init-scripts SL_no_colorls

* Install configuration files. Below is what I added. For atlas the docs also mention a nightlies repository, but that's not ready yet and isn't going to work. The default QUOTA_LIMIT set in default.local can be overridden in the experiment configuration. For each of these files there is a .conf file and a .local; you should edit only the .local. If they are not there, just create them.
You need to override CVMFS_SERVER_URL for atlas, otherwise you don't get the new setup. In cern.ch.local I simply inverted the order of the servers to go to RAL first and then to the other two if RAL fails. I also removed CERNVM_SERVER_URL, which appears in cern.ch.conf; otherwise it goes to CERN first even though that variable isn't apparently defined anywhere else.

/etc/cvmfs/default.local
CVMFS_REPOSITORIES=atlas,atlas-condb,lhcb
CVMFS_CACHE_BASE=/scratch/var/cache/cvmfs2
CVMFS_QUOTA_LIMIT=2000
CVMFS_HTTP_PROXY="http://[YOUR-SQUID-CACHE]:3128"

/etc/cvmfs/config.d/atlas.cern.ch.local
CVMFS_QUOTA_LIMIT=10000
CVMFS_SERVER_URL=http://cvmfs-stratum-one.cern.ch/opt/atlas-newns

/etc/cvmfs/config.d/lhcb.cern.ch.local
CVMFS_QUOTA_LIMIT=5000

/etc/cvmfs/domain.d/cern.ch.local
CVMFS_SERVER_URL="http://cernvmfs.gridpp.rl.ac.uk/opt/@org@;http://cvmfs-stratum-one.cern.ch/opt/@org@;http://cvmfs.racf.bnl.gov/opt/@org@"
CVMFS_PUBLIC_KEY=/etc/cvmfs/keys/cern.ch.pub


* Create the cache space. By default it's in /var/cache. However I moved it to the /scratch partition which is bigger.

mkdir -p /scratch/var/cache/cvmfs2
chown cvmfs:cvmfs /scratch/var/cache/cvmfs2
chmod 2755 /scratch/var/cache/cvmfs2


* Run the setup. These are the commands the cvmfs-auto-setup would run at installation time. They also configure fuse although that's only one line added to fuse.conf.

/usr/bin/cvmfs_config setup
service cvmfs restartautofs

chkconfig cvmfs on
service cvmfs restart


* Some parameters need to change for squid. Below is what the documentation suggests; I tuned it to the size of my machine. For example the maximum_object_size and cache_mem were too big, and I checked which other parameters were already set to decide whether to change them.

collapsed_forwarding on
max_filedesc 8192
maximum_object_size 4096 MB
cache_mem 4096 MB
maximum_object_size_in_memory 32 KB
cache_dir ufs /var/spool/squid 50000 16 256


* Apply changes for Lhcb: VO_LHCB_SW_DIR needs to point to cvmfs. You can change it in YAIM and rerun it, or you can do as I've done (still making sure to change YAIM so that freshly installed nodes don't need this hack). With this change Lhcb is good to go.

sed -i.sed.bak 's%/nfs/lhcb%/cvmfs/lhcb.cern.ch%' /etc/profile.d/grid-env.sh
mv /etc/profile.d/grid-env.sh.sed.bak /root


* Apply changes for Atlas. A similar change to VO_ATLAS_SW_DIR is required, and you need to set an additional variable that is not handled by YAIM. For now I added it to grid-env.sh, but it would be better placed in another file not touched by YAIM, or a snippet should be added to YAIM to handle the variable. This is enough for the jobs to start using the software area. However you still have to contact the atlas sw team to do their validation tests and enable the condb use. They'll propose a long way and a short way; I took the short one because I didn't want to go into downtime and jobs were already running using the new setup.

sed -i.sed.2 's%"/nfs/atlas"%"/cvmfs/atlas.cern.ch/repo/sw"\ngridenv_set "ATLAS_LOCAL_AREA" "/nfs/atlas/local"%' /etc/profile.d/grid-env.sh
mv /etc/profile.d/grid-env.sh.sed.2 /root


* Again for Atlas, remove some installed .conf files which install a link in /opt that is not necessary anymore. The second file might not exist, but there is an atlas-nightly.cern.ch.conf. This will surely change in future cvmfs releases.

service cvmfs stop
rm /etc/cvmfs/config.d/atlas.cern.ch.conf
rm /etc/cvmfs/config.d/atlas-condb.cern.ch.conf
service cvmfs start


Update 12/7/2011: Using YAIM

cfengine only installs the rpms and the configuration files (*.local). All the rest is now carried out by a YAIM function I created (config_cvmfs). I put a tar file here. To make it work I also added a node description in node-info.d/cvmfs (also in the tar file) that contains it. In this way I don't have to touch any already-existing YAIM files and I can just add -n CVMFS to the YAIM command line we use to configure the WNs. It requires the ATLAS_LOCAL_AREA and CVMFS_CACHE_DIR variables to be set in your site-info.def.

CVMFS docs are here

Release Notes
Init Scripts Overview
Examples
Technical Report
RAL T1
Atlas T2/T3 setup
Atlas latest changes