Wednesday, 30 November 2011

DPM upgrade 1.7.4 -> 1.8.2 (glite 3.2)

Last week I upgraded our DPM installation. It was a major change because I upgraded not only the DPM version but also the hardware and the backend mysql version.

I didn't take any measurements before and after this time. I knew that becoming an alpha site in atlas was taking its toll on the old hardware and that many of the timeouts came from gridftp, but the mysql ones I talked about in previous posts had reappeared, to the point that even restarting the service was hard.

[ ~]# service mysqld restart
Timeout error occurred trying to stop MySQL Daemon.

Stopping MySQL: [FAILED]

Timeout error occurred trying to start MySQL Daemon.


So I decided that the situation had become unsustainable and it was time to move to better hardware and software versions.

* Hardware: 2 cpu, 4GB mem, 2x250 GB raid1 -> 4 cores (HT on = 8 job slots), 24GB mem, 2x2TB raid1

There isn't much of a why here: the old machine was fine when we had limited access, but the recent load was really too much for it even with all the tuning. Bad blocks on the disks were a possibility, but the machine reported no red LEDs nor hardware errors.

* Mysql: 5.0.77 -> 5.5.10

Why mysql 5.5? Because InnoDB is the default engine and performance and instrumentation have improved, on top of other features we might actually start to use. A good blog article on the reasons to move is this one: 5 good reasons to upgrade to mysql 5.5.

MySQL 5.5 is not in EPEL yet, but I found this CentOS community site that has the rpms and the instructions to install them.

After the installation I also optimized the database, partly re-applying what I had already done in July and partly running the handy mysqltuner.pl script. The latter helps with variables you might not even know about, and even for the ones you do know it tells you whether they are too small. You need to be patient and let a few hours pass before running it again.
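For reference, something like this is enough to fetch and run it; the download location is from memory, so check the project page if it has moved since:

wget -O mysqltuner.pl http://mysqltuner.pl/    # location at the time of writing, may have moved
perl mysqltuner.pl                             # run as root, rerun after a few hours of load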

* DPM: 1.7.4 -> 1.8.2

Why DPM 1.8.2 from glite 3.2? I would have gone for the UMD release or even the EMI one, but glite 3.2 was moved to production earlier than those, and since I had been waiting for this release since at least April I didn't think twice when I saw the escape route. The timing was really good too, as it came when I really couldn't postpone an upgrade any longer. You can find more info in the release notes. Among the other reasons to upgrade: srmv2.2 in 1.7.4 has a memory leak which wasn't noticeable while the load was contained, but for us it exploded in October and is the reason I have had to restart it every two days over the past few weeks.

Below are the steps I took to reinstall the head node.

On the old head node

* Set the site in downtime, drain the queues and kill all the remaining jobs.

* Turn off all the dpm and bdii services on the old head node

* Make a dump of the current database for backup

mysqldump -C -Q -u root -p -B dpm_db cns_db | gzip > dpm.sql-20111125.gz

* Download dpm-drop-requests-tables.sql supplied by Jean Philippe last July

wget http://www.sysadmin.hep.ac.uk/svn/fabric-management/dpm/dpm-drop-requests-tables.sql

* Drop the requests tables. This step is really useful to avoid painful reload times, as I said in this other post about DPM optimization, and because it drastically reduces the size of ibdata1 when you reload, which has its own benefits (my ibdata1 went from 26GB to 1.7GB). You still need to plan for it because it might take a few hours depending on the system; on my old hardware it took around 7 hours.

mysql -p < dpm-drop-requests-tables.sql

* Dump reduced version of the database

mysqldump -C -Q -u root -p -B dpm_db cns_db | gzip > dpm.sql-20111125-v2.gz


* Copy both to a web server they can be downloaded from at a later stage.

* Update the local repository for the DPM head node and DPM disk servers. Since it is still glite I just had to rsync the latest mirror to the static area.
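In practice the update is a plain rsync of the daily mirror into the static area the nodes point at; the paths below are placeholders, not our real layout:

rsync -av --delete /mirror/glite-3.2-latest/glite-SE_dpm_mysql/ /repo/glite-3.2/glite-SE_dpm_mysql/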

On the new head node
* Install the new machine with a DPM head node profile. This was again easy: since it is still glite, no changes were required in cfengine.

* Most of the following is not standard, so I put it in a script. If you have problems with the user IDs created by the avahi packages, you can uninstall them with yum, removing all the dependencies, and let them be reinstalled by the bdii dependency chain. It should also work to uninstall them with rpm -e --nodeps, which leaves redhat-lsb (what the bdii actually depends on) untouched, but I haven't tried this last method. Here are the commands I executed:

# Get the dpm DB file
rm -rf dpm.sql-20111125-v2.gz*
wget http://ks.tier2.hep.manchester.ac.uk/T2/tmp/dpm.sql-20111125-v2.gz


# Install mysql5.5
rpm -Uvh http://repo.webtatic.com/yum/centos/5/latest.rpm
yum -y remove libmysqlclient5 mysql mysql-*
yum -y clean all

yum -y install mysql55 mysql55-server libmysqlclient5 --enablerepo=webtatic

service mysql stop

rm -rf /var/lib/mysql/*

# Get the local my.cnf
cfagent -vq

service mysqld start


# Install the DPM rpms
yum -y remove cups avahi avahi-compat-libdns_sd avahi-glib
yum -y install glite-SE_dpm_mysql lcg-CA


# Modify sql scripts for mysql5.5

cd /opt/lcg/share/DPM/
for a in create_dp*.sql; do sed -i.old 's/TYPE/ENGINE/g' $a;done
grep ENGINE *


# Run YAIM and upload old DB

cd

/opt/glite/yaim/bin/yaim -c -s /opt/glite/yaim/etc/site-info.def -n glite-SE_dpm_mysql


zcat /root/dpm.sql-20111125-v2.gz | mysql -u root -p -C


# NECESSARY FOR THE FINAL UPDATES

/opt/glite/yaim/bin/yaim -c -s /opt/glite/yaim/etc/site-info.def -n glite-SE_dpm_mysql


* You will need to install the dpm-contrib-admintool rpm; it is not in the glite repository but it might be in the EMI one. Last time I heard it had made it to ETICS. If you can't find it there's still the sysadmin repo version and the related notes on the GridPP wiki (Sam or Wahid are welcome to leave an update on this one).

* To upgrade the disk servers I just updated the repository, upgraded the rpms and reran yaim.

Friday, 9 September 2011

cvmfs upgrade to 2.0.3

Last week I upgraded cvmfs on all the WNs to cvmfs-2.0.3. For us the upgrade required two steps.

1) change of repository: since Manchester was the first to use the new atlas setup we were pointing to the CERN repository. The new setup has now become standard, so I just had to remove the CVMFS_SERVER_URL override variable from atlas.cern.ch.local. The file is distributed by cfengine, so I just changed it in cvs.

2) rpms upgrade: I had some initial difficulties because I was following the instructions for atlas T3s - which normally work for T2s as well - and these suggested installing the cvmfs-auto-setup rpm. This rpm runs service cvmfs restartautofs, and the instructions suggested rerunning it manually too. On busy machines this causes the repositories to disappear and requires a service cvmfs restartclean, which wipes the cache and is not really recommended in production. In reality none of this is necessary and a simple

yum -y update cvmfs cvmfs-init-scripts

is sufficient. I could just add the rpm versions in cfengine and that was enough. The change from one version to the other happens at the first unmount; forcing this with a restartautofs is counterproductive (thanks to Ian for pointing this out).
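To check that a node has picked up the new version, a quick rpm query and a look at the mounts is enough:

rpm -q cvmfs cvmfs-init-scripts
mount | grep cvmfs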

Next week there should be a bug fix release that will take care of slow mounts and some slow client tool routines on busy machines:

http://savannah.cern.ch/bugs/?86349

But since the upgrade procedure is so easy and the corrupted files problem

http://savannah.cern.ch/support/?122564

is fixed in cvmfs >2.0.2 I decided to upgrade anyway on Wednesday to avoid further errors in atlas and possibly lhcb.

NOTE: Of course I tested each step on a few nodes to check everything worked before rolling out to all nodes with cfengine. It is always good practice not to follow recipes blindly!

Wednesday, 6 July 2011

cvmfs installation

Last week, after a few months' delay, I finally installed cvmfs. I have been advocating the use of a shared file system with locally cached data for the input sandbox since 2002-2003. AFS was successfully used in grid and non-grid environments by BaBar users and is still used by local non-LHC users in Manchester for small work, so I'm pretty happy that a lightweight caching file system is now available for more robust traffic. This is a really good moment to install cvmfs, for two reasons:

1) Lhcb asked for it too.
2) Atlas has moved its condb files from the HOTDISK space token to cvmfs.

And it should drastically reduce errors from both NFS and SE load.

These are my installation notes:

* Install cernvm.repo: you can find it here, or you can copy the rpms into your local repository and install from there. I distribute the file with cfengine, but otherwise:

cd /etc/yum.repos.d/
wget http://cvmrepo.web.cern.ch/cvmrepo/yum/cernvm.repo


* Install the gpg key: yum didn't like the key and was giving errors. I don't know if the problem is only mine (possible); I told the developers anyway, and in the meantime I had to remove the key check from the repo file and trust the rpms. But if you want to try it, it might work for you:

cd /etc/pki/rpm-gpg/
wget http://cvmrepo.web.cern.ch/cvmrepo/yum/RPM-GPG-KEY-CernVM
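For reference, with the key check disabled my cernvm.repo ends up looking more or less like this; the baseurl is indicative, take it from the CERN page rather than from here:

[cernvm]
name=CernVM packages
baseurl=http://cvmrepo.web.cern.ch/cvmrepo/yum/cvmfs/EL/5/$basearch/
enabled=1
gpgcheck=0
#gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CernVM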


* Install the rpms. The documentation lists an additional rpm, cvmfs-auto-setup, which is not really necessary and was also causing problems due to some migration lines devised for upgrades. Other than that it only runs a setup and a restart command that can be run by your configuration tool of choice. S. Traylen also suggested installing SL_no_colorls to avoid ls /cvmfs mounting all the file systems, which is why it's in the list.

yum install -y fuse cvmfs-keys cvmfs cvmfs-init-scripts SL_no_colorls

* Install the configuration files. Below is what I added. For atlas the docs mention a nightlies repository, but that's not ready yet and isn't going to work. The default QUOTA_LIMIT set in default.local can be overridden in the experiment configuration. For each of these files there is a .conf file and a .local file: you should edit only the .local one, and if it isn't there just create it.
You need to override CVMFS_SERVER_URL for atlas, otherwise you don't get the new setup. In cern.ch.local I simply inverted the order of the servers to get RAL first and then the other two if RAL fails. I also removed CERNVM_SERVER_URL, which appears in cern.ch.conf, otherwise it goes to CERN first even though that variable doesn't appear to be defined anywhere.

/etc/cvmfs/default.local
CVMFS_REPOSITORIES=atlas,atlas-condb,lhcb
CVMFS_CACHE_BASE=/scratch/var/cache/cvmfs2
CVMFS_QUOTA_LIMIT=2000
CVMFS_HTTP_PROXY="http://[YOUR-SQUID-CACHE]:3128"

/etc/cvmfs/config.d/atlas.cern.ch.local
CVMFS_QUOTA_LIMIT=10000
CVMFS_SERVER_URL=http://cvmfs-stratum-one.cern.ch/opt/atlas-newns

/etc/cvmfs/config.d/lhcb.cern.ch.local
CVMFS_QUOTA_LIMIT=5000

/etc/cvmfs/domain.d/cern.ch.local
CVMFS_SERVER_URL="http://cernvmfs.gridpp.rl.ac.uk/opt/@org@;http://cvmfs-stratum-one.cern.ch/opt/@org@;http://cvmfs.racf.bnl.gov/opt/@org@"
CVMFS_PUBLIC_KEY=/etc/cvmfs/keys/cern.ch.pub


* Create the cache space. By default it's in /var/cache. However I moved it to the /scratch partition which is bigger.

mkdir -p /scratch/var/cache/cvmfs2
chown cvmfs:cvmfs /scratch/var/cache/cvmfs2
chmod 2755 /scratch/var/cache/cvmfs2


* Run the setup. These are the commands the cvmfs-auto-setup would run at installation time. They also configure fuse although that's only one line added to fuse.conf.

/usr/bin/cvmfs_config setup
service cvmfs restartautofs

chkconfig cvmfs on
service cvmfs restart


* Some squid parameters need to change too. Below is what the documentation suggests; I tuned it to the size of my machine. For example maximum_object_size and cache_mem were too big, and I checked which other parameters were already set to decide whether it was worth changing them.

collapsed_forwarding on
max_filedesc 8192
maximum_object_size 4096 MB
cache_mem 4096 MB
maximum_object_size_in_memory 32 KB
cache_dir ufs /var/spool/squid 50000 16 256
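As a purely indicative example, on a box with only a few GB of RAM to spare for squid I would scale it down to something like the following rather than take the documentation values verbatim; the numbers here are illustrative, not a recommendation:

collapsed_forwarding on
max_filedesc 8192
maximum_object_size 1024 MB
cache_mem 1024 MB
maximum_object_size_in_memory 32 KB
cache_dir ufs /var/spool/squid 50000 16 256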


* Apply changes for Lhcb: VO_LHCB_SW_DIR needs to point to cvmfs. You can change it in YAIM and rerun it, or you can do as I've done (still making sure to change YAIM so that freshly installed nodes don't need this hack). With this change Lhcb is good to go.

sed -i.sed.bak 's%/nfs/lhcb%/cvmfs/lhcb.cern.ch%' /etc/profile.d/grid-env.sh
mv /etc/profile.d/grid-env.sh.sed.bak /root


* Apply changes for Atlas. A similar change to VO_ATLAS_SW_DIR is required, and you also need to set an additional variable that is not handled by YAIM. For now I added it to grid-env.sh, but it would be better placed in another file not touched by YAIM, or a snippet should be added to YAIM to handle the variable. This is enough for the jobs to start using the software area. However you still have to contact the atlas sw team so they can run their validation tests and enable the condb use. They'll propose a long way and a short way; I took the short one because I didn't want to go into downtime and jobs were already running with the new setup.

sed -i.sed.2 's%"/nfs/atlas"%"/cvmfs/atlas.cern.ch/repo/sw"\ngridenv_set "ATLAS_LOCAL_AREA" "/nfs/atlas/local"%' /etc/profile.d/grid-env.sh
mv /etc/profile.d/grid-env.sh.sed.2 /root


* Also for Atlas, remove some installed .conf files which create a link in /opt that is not necessary anymore. The second file might not exist, but there is an atlas-nightly.cern.ch.conf. This will surely change in future cvmfs releases.

service cvmfs stop
rm /etc/cvmfs/config.d/atlas.cern.ch.conf
rm /etc/cvmfs/config.d/atlas-condb.cern.ch.conf
service cvmfs start


Update 12/7/2011: Using YAIM

cfengine only installs the rpms and the configuration files (*.local). All the rest is now carried out by a YAIM function I created (config_cvmfs). I put a tar file here. To make it work I also added a node description in node-info.d/cvmfs (also in the tar file) that contains it. In this way I don't have to touch any already existing YAIM files and I can just add -n CVMFS to the YAIM command line we use to configure the WNs. It requires the ATLAS_LOCAL_AREA and CVMFS_CACHE_DIR variables to be set in your site-info.def.
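For the curious, the function is little more than the commands listed above wrapped in bash; a minimal sketch of what it can look like (an illustration, not the exact contents of the tar file):

config_cvmfs () {
  # cache area on the big partition, taken from CVMFS_CACHE_DIR in site-info.def
  mkdir -p ${CVMFS_CACHE_DIR}
  chown cvmfs:cvmfs ${CVMFS_CACHE_DIR}
  chmod 2755 ${CVMFS_CACHE_DIR}

  # run the cvmfs setup and make sure the service survives a reboot
  /usr/bin/cvmfs_config setup
  chkconfig cvmfs on
  service cvmfs restart

  # the real function also takes care of ATLAS_LOCAL_AREA in the grid environment
  return 0
}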

CVMFS docs are here

Release Notes
Init Scripts Overview
Examples
Technical Report
RAL T1
Atlas T2/T3 setup
Atlas latest changes

Wednesday, 22 June 2011

How to remove apel warnings and avoid nagios alerts

Quite a few sites have a few entries in APEL that don't quite match. They can appear with two messages:

OK [ Minor discrepancy in event numbers ]
WARN [ Missing data detected ]


They don't look good on the Sync page, and nagios also sends alerts for this problem, which is even more annoying.

The problem is caused by a few records with the wrong time stamp (StartTime=01-01-1970). These records need to be deleted from the local database and the periods where they appear republished with the gap publisher. To delete the records, connect to your local APEL mysql and run:

mysql> delete from LcgRecords where StartTimeEpoch = 0;
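A quick way to see how many records are affected - run it before the delete, and again afterwards to confirm they are gone:

mysql> select count(*) from LcgRecords where StartTimeEpoch = 0;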

Then rerun the gap publisher for each month where the entries appear. Finally rerun the publisher in missing-records mode to update the SYNC page, or wait for the next scheduled run if you are not impatient.

Thanks to Cristina for this useful tip she gave me in this ticket.

Tuesday, 14 June 2011

DPM optimization next round

After I applied 3 of the mysql parameter changes I talked about in this post I didn't see the improvement I was hoping for in the atlas job timeouts.

This is another set of optimizations I put together after some further searching.

First of all I started to systematically count the TIME_WAIT connections every five minutes. In the same log file I also correlated them with the number of concurrent threads the server keeps, mostly in sleep mode; you can get the latter by running mysqladmin -p proc stat or from within a mysql command line (a sketch of this logging appears further down). The number of threads was close to the default maximum allowed by mysql, so I doubled that in my.cnf

max_connections=200

then I halved the kernel time out for TIME_WAIT connections

sysctl -w net.ipv4.tcp_fin_timeout=30

the default value is 60 sec. If you add it to /etc/sysctl.conf it becomes permanent.

Finally I found this article which explicitly talks about mysql tunings to reduce connection timeouts: Mysql Connection Timeouts and I set the following

sysctl -w net.ipv4.tcp_max_syn_backlog=8192
sysctl -w net.core.somaxconn=512


again, add them to /etc/sysctl.conf to make them permanent; and in my.cnf I added

back_log=500

I based my numbers on 500 connections/s because that's what I observed when I did all this (at times I observed even larger numbers). Admittedly they are now stable at 330 connections per second, but we haven't had any heavy ramp-up since Saturday, only a mild one, and that didn't cause any timeouts. I'm waiting for a serious ramp-up as the definitive test. That said, since Saturday we haven't seen any timeout errors, not even the low background that was always present, so there is already an improvement.
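For completeness, the five-minute logging I mentioned above is little more than a couple of lines called from cron; a minimal sketch, with the log path and the credentials file as examples:

#!/bin/bash
# count TIME_WAIT connections and mysql threads, append to a log with a timestamp
# /root/.my.cnf holds the mysql credentials so this can run unattended from cron
LOG=/var/log/mysql-timewait.log
TW=$(netstat -ant | grep -c TIME_WAIT)
THREADS=$(mysqladmin --defaults-extra-file=/root/.my.cnf status | awk '{print $4}')
echo "$(date '+%F %T') TIME_WAIT=$TW mysql_threads=$THREADS" >> $LOG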

Update 16/06/2011

Today there was an atlas ramp from almost 0 to >1400 jobs and no time outs so far.

A few timeouts were seen yesterday, but they were due to authentication between the head node and a couple of data servers, which I will have to investigate. They are a handful though, nowhere near the scale observed before, and not due to mysql. I will keep things under observation for a while longer, just in case.

Friday, 10 June 2011

DPM Optimization

My quest to optimize DPM continues. Bottlenecks are like Russian dolls: they hide behind each other. After optimizing the data servers by increasing the block device read-ahead, enabling LACP on the network channel bonding and replicating the atlas hotdisk files, there is still a problem with mysql on the head node which causes timeouts.
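For the record, the read-ahead change on the data servers is a one-liner per block device; the device name and the value below are examples rather than the exact figure I used:

blockdev --getra /dev/sda      # current read-ahead, in 512-byte sectors
blockdev --setra 8192 /dev/sda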

When atlas ramps up there is often an increase of connections in TIME_WAIT; I have observed >2600 at times. The mysql database becomes completely unresponsive and causes the timeouts. Restarting the database makes the connections finally close and the database resume normal activity, but as usual a restart might alleviate the problem without being a cure. So I went on a quest. What follows might not alleviate my specific problem - I haven't tested it in production yet - but it certainly helps with another one: DB reloads.

Sam already wrote some performance tuning tips here: Performance and Tuning, most notably the setting of innodb_buffer_pool_size. After a discussion on the DPM user forum and some testing, this is what I'd add:

I set "DPM REQCLEAN 3m" when I upgraded to DPM 1.7.4 and this, after a reload, has reduced Manchester DB file size from 17GB to 7.6GB. Dumping the db took 7m34s. I then reloaded it with different combinations of suggested my.cnf innodb parameters and the effects of some of them are dramatic.

The default parameters should definitely be avoided: reloading a database with them takes several hours. Last time it took 17-18 hours; this time I interrupted it after 4.

With a combination of the parameters suggested by Maarten the time is drastically reduced; the most effective have been innodb_buffer_pool_size and innodb_log_file_size. Below are the results of the reload tests I made, in decreasing order of time. I then followed Jean-Philippe's suggestion to drop the requests tables. Dropping the tables took several minutes and was slightly faster with a single db file. After I dropped the tables and the indexes, ibdata1 shrank to 1.2GB and, using combination 4 below, it took 1m7s to dump and 5m7s to reload. With the one-file-per-table configuration reloading was slightly faster, but after I dropped the requests tables there was no difference; it is also balanced by the fact that deletion seems slower, and the effects are probably more visible when the database is bigger, so these small tests don't give any compelling reason for or against it for now.

These are the steps that help reduce the time it takes to reload the database (a consolidated my.cnf sketch follows the list):

1) Enable REQCLEAN in shift.conf (I set it to 3 months to comply with security requirements.)
2) set innodb_buffer_pool_size in my.cnf (I set it to 10% of the machine memory and couldn't see much difference when I eventually set it to 22.5%, but in production it might be another story, with repeated queries for the same input files)
3) set innodb_log_file_size in my.cnf (I didn't give this much thought; Maarten's value of 50MB seemed good enough. The existing InnoDB log files need to be removed and the database restarted for the new size to take effect; check the docs before doing this on a production system.)
4) set innodb_flush_log_at_trx_commit = 2 in my.cnf (although this parameter seems less effective during a reload it might be useful in production; 2 is slightly safer than 0).
5) Use the script Jean-Philippe gave me to drop the requests tables before an upgrade.
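Put together, the my.cnf additions look like this; the values are the ones from the tests below, not a universal recommendation, so size the buffer pool to your own memory:

[mysqld]
innodb_buffer_pool_size = 400M
innodb_log_file_size = 50M
innodb_flush_log_at_trx_commit = 2
#innodb_file_per_table    # optional, it made little difference in these tests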

Hopefully these will also help stop the timeouts.

Tests:

COMBINATION 1

innodb_buffer_pool_size = 400MB
# innodb_log_file_size = 50MB
innodb_flush_log_at_trx_commit = 2
# innodb_file_per_table

real 167m30.226s
user 1m41.860s
sys 0m9.987s

============================
COMBINATION 2
innodb_buffer_pool_size = 900MB
# innodb_log_file_size = 50MB
# innodb_flush_log_at_trx_commit = 2
# innodb_file_per_table

real 155m2.996s
user 1m40.843s
sys 0m9.935s

===========================
COMBINATION 3
innodb_buffer_pool_size = 900MB
innodb_log_file_size = 50MB
# innodb_flush_log_at_trx_commit = 2
# innodb_file_per_table

real 49m2.683s
user 1m39.137s
sys 0m9.902s
===========================
COMBINATION 4
innodb_buffer_pool_size = 400MB
innodb_log_file_size = 50MB
innodb_flush_log_at_trx_commit = 2 <-- also tested with 0 instead of 2, but it didn't change the time it took and 2 is slightly safer
# innodb_file_per_table

real 48m32.398s
user 1m40.638s
sys 0m9.733s
===========================
COMBINATION 5
innodb_buffer_pool_size = 900MB
innodb_log_file_size = 50MB
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table

real 47m25.109s
user 1m39.230s
sys 0m9.985s
===========================
COMBINATION 6
innodb_buffer_pool_size = 400MB
innodb_log_file_size = 50MB
innodb_flush_log_at_trx_commit = 2
innodb_file_per_table

real 46m46.850s
user 1m40.378s
sys 0m9.950s
===========================

Friday, 20 May 2011

BDII again

A couple of weeks ago I upgraded the site BDII and top BDII from a very old version without reinstalling, as described in this post. A few days ago I noticed that not everything was working as well as I thought: the BDII was reporting stale numbers in the dynamic attributes, causing a few problems, among which biomed submitting an unhealthy 12k jobs.

There were two reasons for this:

1) the unprivileged user that runs the BDII is not edguser anymore but ldap. Consequently there were some ownership issues on /opt/glite/var subdirectories and files. This was highlighted in /var/log/bdii/bdii-update.log by permission denied errors, which I overlooked for a bit too long. Permissions should be as follows: /opt/glite/var, /opt/glite/var/lock, /opt/glite/var/tmp and /opt/glite/var/cache should belong to root, and anything below them should belong to ldap. You can check if there is anything that doesn't belong to ldap by running

find /opt/glite/var/ ! -user ldap -ls


this will include the top directories above, which you can ignore.
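To fix what the check turns up, something along these lines does it; have a look at the list from the command above before handing anything to chown:

find /opt/glite/var/cache /opt/glite/var/tmp /opt/glite/var/lock -mindepth 1 ! -user ldap -exec chown ldap:ldap {} +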

2) bdii-update no longer uses glite-info-wrapper and glite-info-generic, which used to write the .ldif files in the same directory tree above. It now writes what it needs into databases under /var/run/bdii and a single new.ldif file, calling the scripts in /opt/glite/etc/gip/provider and /opt/glite/etc/gip/plugin directly. I upgraded from an older version, so the old providers weren't deleted and continued to be executed by bdii-update, and some of them still read what are now obsolete .ldif files under the /opt/glite/var/cache tree. I deleted all the .ldif files with an additional numeric extension under /opt/glite/var.
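Something like this finds them; list first, and delete only once happy with what the pattern matches (the pattern is simply what applied in my case):

find /opt/glite/var -name '*.ldif.[0-9]*' -ls
find /opt/glite/var -name '*.ldif.[0-9]*' -delete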

With these two changes, i.e. fixing the ownership of the directories and deleting the obsolete .ldif files (or the old providers, if one is sure which ones they are), the site bdii started to update the dynamic attributes correctly again.

Finally, a note on making reinstallation easier: in the previous post I suggested manually adding SLAPD=/usr/sbin/slapd2.4 to the newly installed /opt/bdii/etc/bdii.conf to change the slapd version. However an easier way to maintain the service, in case it needs reinstalling, is to add SLAPD=/usr/sbin/slapd2.4 to site-info.def, so that when YAIM runs it gets added to /etc/sysconfig/bdii and no manual step is needed if the machine is reinstalled.

Wednesday, 4 May 2011

BDII follow up

To decrease the need to restart the BDII, and following the discussion on tb-support, I decided to upgrade to openldap2.4. Since I was at it I also updated both glite-BDII_site and glite-BDII_top (the list of new rpms is below) to the latest repository split, since we still had the older common glite-BDII repo. The newest version of the BDII also has new paths for most things: for example some config files have moved to /etc/bdii, and /var/run/bdii is the new SLAPD_VAR_DIR. The repository setup is peculiar to Manchester, where we mirror the latest version every day but the machines pick up from a stable repository that is updated when needed.

1) rsync glite-BDII_site and glite-BDII_top from Glite-3.2-latest to Glite-3.2 stable

2) Added the rpms to the local external repository from the BDII_top RPMS.external dir, so they can be picked up also by BDII_site and, if need be, by the CEs and the SE.

3) Created new repo files and added them to cvs (a sketch of one is below the list)

4) Edited cf.yaim-repos to copy them

5) Installed manually (yum install) the rpms openldap2.4, openldap2.4-servers and their dependencies lib64ldap2.4_2 and openldap2.4-extra-schemas on the BDII_site. In the glite-BDII_top case they are pulled in as dependencies, so there is no need for this.
# This step can be added in cfengine at a later stage if needed.

6) mv /opt/bdii/etc/bdii.conf.rpmnew /opt/bdii/etc/bdii.conf
# Contains the pointer to the new bdii-slapd.conf which contains the new paths. bdii/slapd won't restart with the old bdii.conf.

7) Add SLAPD=/usr/sbin/slapd2.4 to the new /opt/bdii/etc/bdii.conf
# This can go in yaim post function if one really wants.

8) Rerun YAIM

9) Reduced the rate at which the cron job checks the bdii from every 5 to every 20 minutes. The top bdii seemed to take longer to rebuild, probably due to an expired cache causing a loop.
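For what it's worth, the repo files in step 3 are nothing special; a sketch of the site one, with the baseurl as a placeholder for our local stable mirror:

[glite-BDII_site]
name=glite 3.2 BDII_site (local mirror)
baseurl=http://ks.tier2.hep.manchester.ac.uk/T2/glite-3.2/glite-BDII_site/
enabled=1
gpgcheck=0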

Fingers crossed it will work and stop the BDII from periodically hanging.

New Site BDII RPMS

bdii-5.1.22-1
bdii-config-site-0.9.1-1
glite-BDII_site-3.2.11-1.sl5
glite-yaim-bdii-4.1.12-1

New Top BDII RPMS

bdii-5.1.22-1
bdii-config-top-0.0.9-1
glite-BDII_top-3.2.11-1.sl5
glite-yaim-bdii-4.1.12-1

Openldap2.4 RPMS

lib64ldap2.4_2-2.4.22-1.el5
openldap2.4-2.4.22-1.el5
openldap2.4-extra-schemas-1.3-10.el5
openldap2.4-servers-2.4.22-1.el5

UPDATE 20/

Thursday, 7 April 2011

Check BDII script updated

Yesterday it was the top BDII that stopped working rather than the site BDII. It crashed: the pid file was still there but the process was not running.

So I adjusted the script to use a different query, one that works at all levels of bdii (resource, site, top), looking for o=infosys rather than o=grid and some specific attribute.

I also looked at the bdii startup script, and it does a good job of cleaning up processes and lock/pid files in the stop function, so I just use service bdii restart whether the process is there or not; only the alert differs in the two cases.

New version is still in

http://www.sysadmin.hep.ac.uk/svn/fabric-management/processes/monitoring/testbdii.sh

Monday, 4 April 2011

Sharing scripts

In my Northgrid talk at GridPP I pointed out that we all do the same things but in slightly different ways, so I thought it would be good to resume the thread on sharing management/monitoring tools. I always thought building a repository was a good thing, and I still do.

I think the tools should be as generic as possible, but they do not need to be perfect. Of course if scripts work out of the box it's a bonus, but they might also be useful to improve local tools with additional checks one might not have thought about.

I'll start with a couple of scripts I rewrote last Monday to make them more robust:

-- Check the BDII:

http://www.sysadmin.hep.ac.uk/svn/fabric-management/processes/monitoring/testbdii.sh

The original script checked that a network connection existed; if it didn't, it restarted the bdii service.

The new version checks that slapd is responsive; if it isn't, it checks whether there is a hung process: if there is, it kills it and restarts the bdii, if there isn't it just restarts the bdii.
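The actual script is at the URL above; the logic boils down to something like this rough sketch (port, base and timeout are examples, and the later version queries o=infosys so it works at any bdii level):

#!/bin/bash
# is slapd answering? if not, kill any hung process and restart the bdii
if ! ldapsearch -x -LLL -h localhost -p 2170 -b o=grid -l 15 > /dev/null 2>&1; then
    if pgrep -f slapd > /dev/null; then
        pkill -9 -f slapd      # slapd is there but not answering: assume it is hung
    fi
    service bdii restart
fi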

-- Check Host Certificate End Date:

http://www.sysadmin.hep.ac.uk/svn/fabric-management/certificates/x509/check-host-cert-date.sh

The old version just checked whether the certificate had expired and sent an alert. Not very useful in itself, as it catches the problem when the damage is already done.

The new version still checks that, because it might be useful if machines have been down for a while, but it also starts to send alerts 30 days before the expiration date. Finally, if the certificate is not there it asks the obvious question: should you be running this script on this machine?
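The core of the 30-day check can be done with openssl alone; a minimal sketch of just that part, assuming the standard grid host certificate path:

HOSTCERT=/etc/grid-security/hostcert.pem
if [ ! -f "$HOSTCERT" ]; then
    echo "no host certificate found - should you be running this script on this machine?"
elif ! openssl x509 -checkend $((30*24*3600)) -noout -in "$HOSTCERT" > /dev/null; then
    echo "host certificate expires within 30 days (or has already expired)"
fi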