21 May 2012

158. Setting up ecce with qsub at An Australian University computational cluster

EDIT: this works for G09 on that particular cluster. Come back in a week or two for a more general solution (end of May 2012/beginning of June 2012).

I don't feel comfortable revealing where I work. But imagine that you end up working at an Australian university in, say, Melbourne. I do recognise that I'm giving enough information here to make it possible to identify who I am (and there are many reasons not to want to be identifiable -- partly because students can be mean and petty, and partly because I suffer from the delusion that IT rules apply to Other People, not me, and I have described ways of doing things you're not supposed to be doing on this blog).

Anyway.

My old write-ups of ECCE are pretty bad, if not outright inaccurate. Still, I presume that in spite of that you've managed to set up ECCE well enough to run jobs on the nodes of your local cluster.

Now it's time for the next level -- submitting to a remote site that uses SGE/qsub.

So far I've only tried this out with G09 -- they are currently looking at setting up nwchem on the university cluster. I'm not sure yet what the best equivalent of the "#$ -pe g03_smp2 2" switch would be for nwchem.

--START HERE --

EVERYTHING I DESCRIBE IS DONE ON YOUR DESKTOP, NOT ON THE REMOTE SYSTEM. Sorry for shouting, but don't go a-messing with the remote computational cluster -- we only want to teach ecce how to submit jobs remotely. The remote cluster should be unaffected.

1. Creating the Machine
To set up a site with a queue manager, start
ecce -admin

Do something along the lines of what's shown in the figure above (a screenshot of the machine set-up window that ecce -admin opens).

If you're not sure whether your qsub belongs to PBS or SGE, type qstat -help and look at the first line returned, e.g. SGE 6.2u2_1.
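
For example, run this on the remote cluster (it's read-only, so it doesn't count as messing with it):

qstat -help | head -1

which should return something like SGE 6.2u2_1. If you don't see SGE mentioned there, you're probably dealing with PBS/Torque and the queue template below will need adjusting.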

2. Configure the site
Now, edit your ecce-6.3/apps/siteconfig/CONFIG.msgln4 (local nodes go into ~/.ECCE but remote SITES go in apps/siteconfig -- and that's what we're working with here).

   NWChem: /usr/local/bin/NWCHEM
   Gaussian-03: /usr/local/bin/G09
   perlPath: /usr/bin/perl
   qmgrPath: /usr/bin/qsub
 
   SGE {
   #$ -S /bin/csh
   #$ -cwd
   #$ -l h_rt=$wallTime
   #$ -l h_vmem=4G
   #$ -j y
   #$ -pe g03_smp2 2

   module load gaussian/g09
    }
A word of advice -- open the file in vim (save using :wq!) or do a chmod +w on it first since it will be set to read-only by default.
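
For example, assuming ecce-6.3 is unpacked directly under your home directory (adjust the path otherwise):

cd ~/ecce-6.3/apps/siteconfig
chmod +w CONFIG.msgln4 msgln4.Q
vim CONFIG.msgln4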


3. Queue limits
The same goes for the next file, which controls various job limits, ecce-6.3/apps/siteconfig/msgln4.Q:
# Queue details for msgln4
Queues:    squ8

squ8|minProcessors:       2
squ8|maxProcessors:       6
squ8|runLimit:       4320
squ8|memLimit:       4000
squ8|scratchLimit:       0
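
If you want to sanity-check these numbers against what the queue actually allows, SGE can tell you -- run this on the cluster (again purely read-only):

qconf -sq squ8 | egrep 'slots|h_rt|h_vmem'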
4. Connect
In the ecce launcher-mathingy click on Machine Browser, and Set Up Remote Access for the remote cluster. Basically, type in your user name and password.

Click on Machine Status to make sure that it's connecting.

5. Test it out!
If all is well, you should be good to go.

157. Restarting gaussian (g09) job on an SGE system (qsub)

The Australian University I'm working at has a computational cluster where jobs are submitted using qsub. This post is more like a personal note to myself, but the point about resubmitting jobs may be of use to someone.

This page is useful reading for how these types of scripts with mixed shell and gaussian stuff work: http://www.gaussian.com/g_tech/g_ur/m_running.htm

0. The qsub header (Notes To Myself)

http://cf.ccmr.cornell.edu/cgi-bin/w3mman2html.cgi?qsub(1B)
#$ -S /bin/sh -- shell to use; can be csh, tcsh etc.
#$ -cwd -- execute in the current working directory.
#$ -l h_rt=12:00:00 -- maximum allowed run time (here 12 hours); -l takes a list of resource limits.
#$ -l h_vmem=4G -- memory limit (see http://www.biostat.jhsph.edu/bit/cluster-usage.html#MemSpec); should match %mem, e.g. 4000mb.
#$ -j y -- join: "declares if the standard error stream of the job will be merged with the standard output stream of the job". It creates *.o* and *.p* files with what would've been echoed in the terminal.
#$ -pe g03_smpX X -- parallel execution; X is the number of slots. The g03_smpX part seems to be the message passing interface (parallel environment), but I'm not entirely sure.

1. Setting up simple jobs
First create a standard template, let's call it qsub.header

#!/bin/sh
#$ -S /bin/sh
#$ -cwd
#$ -l h_rt=12:00:00
#$ -l h_vmem=4G
#$ -j y
#$ -pe g03_smp2 2
module load gaussian/g09
time G09 << END > g09_output.log
and save it in your home folder ~.

Create an input file, e.g. water.in, and put it in your work directory, e.g. ~/g09:


%chk=water.chk
%nprocshared=2
#P ub3lyp/6-31G* opt

water energy

0  1
O
H  1  1.0
H  1  1.0  2  120.0

Put them together:
cat ~/qsub.header > water.qsub
cat ~/g09/water.in >> water.qsub

Note that %nprocshared matches the two slots requested with #$ -pe g03_smp2 2, and that the here-document opened with << END is never explicitly closed -- the shell just reads to the end of the file (if your shell complains, add a line containing only END at the very end). The resulting water.qsub looks like this:

#!/bin/sh
#$ -S /bin/sh
#$ -cwd
#$ -l h_rt=12:00:00
#$ -l h_vmem=4G
#$ -j y
#$ -pe g03_smp2 2
module load gaussian/g09
time G09 << END > g09_output.log

%chk=water.chk
%nprocshared=2
#P ub3lyp/6-31G* opt

water energy

0  1
O
H  1  1.0
H  1  1.0  2  120.0
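
Submit it from the directory you want the job to run in (because of -cwd the output lands there) and keep an eye on it with qstat, e.g.:

qsub water.qsub
qstat -u $USER

Once the job starts, tail -f g09_output.log lets you follow the Gaussian output as it is written.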

2. Restarting jobs

Most likely your home folder is shared across the nodes via NFS.

To find out, submit
#!/bin/sh
#$ -S /bin/sh
#$ -cwd
#$ -l h_rt=12:00:00
#$ -l h_vmem=4G
#$ -j y
#$ -pe g03_smp2 2
pwd
ls -lah
tree -L 1 -d
to get some directory information about the nodes.
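
Because of the -j y flag, whatever those commands print ends up in the job's .o output file in the directory you submitted from, e.g. (the script name and job number here are made up):

cat check_nfs.sh.o12345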

Once you have that, just put the absolute path to your .chk file in your restart script, e.g.

#!/bin/sh
#$ -S /bin/sh
#$ -cwd
#$ -l h_rt=12:00:00
#$ -l h_vmem=4G
#$ -j y
#$ -pe g03_smp2 2
module load gaussian/g09
time G09 << END > g09_output.log

%chk=/nfs/home/hpcsci/username/g09/water.chk
%nprocshared=2
#P ub3lyp/6-31G* opt guess=read geom=allcheck



18 May 2012

156. Compiling nwchem 6.1 with internal libs, openmpi, python on debian testing/wheezy

I followed this thread: http://www.nwchem-sw.org/index.php/Special:AWCforum/st/id435/segmentation_fault--running_smal....html

See here for build with external blas: http://verahill.blogspot.com.au/2012/05/building-nwchem-61-on-debian.html

--Start here --
sudo apt-get install libopenmpi-dev python-dev

sudo mkdir /opt/nwchem
sudo chown ${USER} /opt/nwchem
cd /opt/nwchem
wget http://www.nwchem-sw.org/images/Nwchem-6.1-2012-Feb-10.tar.gz
tar xvf Nwchem-6.1-2012-Feb-10.tar.gz
mv nwchem-6.1 nwchem-6.1_internal
cd nwchem-6.1_internal/

Edit src/config/makefile.h and add -lz -lssl to the end of line 1914 (needed by python)
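
If you'd rather script that edit, a one-liner like this does it (check first that line 1914 is still the right line in your version of makefile.h):

sed -i '1914s/$/ -lz -lssl/' src/config/makefile.h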

Edit src/tools/GNUmakefile and add two lines as shown below (the else branch with --without-blas; line numbers were added by me for illustration only) -- this edit is needed to compile with the internal libs. Change
355 ifneq ($(BLAS_LIB),)
356     ifeq ($(BLAS_SIZE),4)
357         MAYBE_BLAS = --with-blas4="$(strip $(BLAS_LIB))"
358     else
359         MAYBE_BLAS = --with-blas8="$(strip $(BLAS_LIB))"
360     endif
361 endif
to
355 ifneq ($(BLAS_LIB),)
356     ifeq ($(BLAS_SIZE),4)
357         MAYBE_BLAS = --with-blas4="$(strip $(BLAS_LIB))"
358     else
359         MAYBE_BLAS = --with-blas8="$(strip $(BLAS_LIB))"
360     endif
361 else
362     MAYBE_BLAS = --without-blas
363 endif
Then continue as normal:


export LARGE_FILES=TRUE
export TCGRSH=/usr/bin/ssh
export NWCHEM_TOP=`pwd`
export NWCHEM_TARGET=LINUX64
export NWCHEM_MODULES="all python"
export PYTHONVERSION=2.7
export PYTHONHOME=/usr
export USE_MPI=y
export USE_MPIF=y
export USE_MPIF4=y
export MPI_LOC=/usr/lib/openmpi/lib
export MPI_INCLUDE=/usr/lib/openmpi/include
export LIBRARY_PATH=$LIBRARY_PATH:/usr/lib/openmpi/lib
export LIBMPI="-lmpi -lopen-rte -lopen-pal -ldl -lmpi_f77 -lpthread"
cd $NWCHEM_TOP/src
make clean
make nwchem_config
make FC=gfortran

**If you get errors relating to openssl etc. at the end of the compile, try building without python support: drop python from NWCHEM_MODULES and leave out the PYTHONVERSION and PYTHONHOME exports (i.e. the python-related parts above). You won't need to do make clean, so the rebuild will be quick.

Building takes ages

Edit your ~/.bashrc and add
export NWCHEM_EXECUTABLE=/opt/nwchem/nwchem-6.1_internal/bin/LINUX64/nwchem
export NWCHEM_BASIS_LIBRARY=/opt/nwchem/nwchem-6.1_internal/src/basis/libraries/
export PATH=$PATH:/opt/nwchem/nwchem-6.1_internal/bin/LINUX64
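
Once that's in place you can give the build a quick test run. The input below is just an arbitrary small water/SCF job (any small job will do), and it assumes nwchem and openmpi's mpirun are both on your PATH:

source ~/.bashrc
cat << EOF > ~/test_h2o.nw
start test_h2o
geometry units angstroms
  O   0.000   0.000   0.000
  H   0.000   0.000   0.958
  H   0.926   0.000  -0.239
end
basis
  * library 6-31g
end
task scf energy
EOF
mpirun -n 2 nwchem ~/test_h2o.nw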

