Update 21 May 2013:
The execution times can be improved considerably by setting
ARMCI_NETWORK=SOCKETS
They are still ca 30% longer than with 6.1.1, though, due to slower SCF convergence.
See http://www.nwchem-sw.org/index.php/Special:AWCforum/st/id834/Nwchem_6.3_running_2-5_times_slo....html
UPDATE 20 May 2013:
NWChem 6.3 is very slow compared to 6.1.1. A six-core run (out of eight available cores) took 121 s with 6.1.1 but 254 s with 6.3!
I observed this on Debian as well: 6.3 on Debian is five times slower than 6.1.1 (190 s vs 40 s, for example, at 8 cores in http://verahill.blogspot.com.au/2013/05/414-frequency-vs-cores-crude.html). Not sure why that is.
Original:
NWChem 6.3 is out now. Here's how to build it on ROCKS 5.4.3 (based on CentOS 5.6) for CPU-based calculations (currently only CCSD(T) can take advantage of GPU/CUDA anyway).
To build on debian, see http://verahill.blogspot.com.au/2013/05/424-nwchem-63-on-debian-wheezy.html
This assumes that you've got a proper build environment (gcc, fortran, openmpi) installed.
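Before starting, the toolchain can be sanity-checked with a small loop (a sketch; adjust the tool names if your site uses different MPI wrapper names):

```shell
# Report where each required build tool lives, or flag it as missing.
for tool in gcc gfortran mpif90 mpicc; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: $(command -v "$tool")"
    else
        echo "$tool: MISSING"
    fi
done
```

If anything is reported MISSING, install it through your package manager before continuing.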
Openblas:
I've added all users who do computations to the group compchem.
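If the compchem group does not exist yet, it can be created and populated like this (an administrative sketch; 'someuser' is a placeholder for each real user name):

```
# Create the group once, then add each computational user to it.
sudo groupadd compchem
sudo usermod -a -G compchem someuser    # repeat per user
```

Users need to log out and back in before the new group membership takes effect.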
sudo mkdir /share/apps/openblas
sudo chown $USER:compchem /share/apps/openblas
cd ~/tmp
wget http://nodeload.github.com/xianyi/OpenBLAS/tarball/v0.1.1
tar xvf v0.1.1
cd xianyi-OpenBLAS-e6e87a2/
wget http://www.netlib.org/lapack/lapack-3.4.1.tgz
make all BINARY=64 CC=/usr/bin/gcc FC=/usr/bin/gfortran USE_THREAD=0 INTERFACE64=1 1> make.log 2>make.err
make PREFIX=/share/apps/openblas install
cp lib*.* /share/apps/openblas/lib
sudo chmod 755 /share/apps/openblas -R
For later use with nwchem and ecce, add /share/apps/openblas/lib to /etc/ld.so.conf and do
sudo ldconfig
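Concretely, that can be done like this (a sketch; it assumes /etc/ld.so.conf is a plain file, as on CentOS 5 -- on some distributions you would instead drop a file into /etc/ld.so.conf.d/):

```
# Register the OpenBLAS library directory with the dynamic linker.
echo '/share/apps/openblas/lib' | sudo tee -a /etc/ld.so.conf
sudo ldconfig
# Verify that the linker cache now knows about the library:
ldconfig -p | grep openblas
```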
Put

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/share/apps/openblas/lib

in ~/.bashrc and/or queue files.
NWChem
sudo mkdir /share/apps/nwchem/
sudo chown $USER:compchem /share/apps/nwchem/
cd /share/apps/nwchem
wget http://www.nwchem-sw.org/download.php?f=Nwchem-6.3-src.2013-05-17.tar.gz
tar xvf Nwchem-6.3-src.2013-05-17.tar.gz
cd nwchem-6.3-src.2013-05-17/
cd src/
wget http://www.nwchem-sw.org/images/Iswtch.patch.gz
gzip -d Iswtch.patch
patch -p0 < Iswtch.patch
cd ../
export LARGE_FILES=TRUE
export TCGRSH=/usr/bin/ssh
export NWCHEM_TOP=`pwd`
export NWCHEM_TARGET=LINUX64
export NWCHEM_MODULES="all python"
export PYTHONHOME=/opt/rocks
export PYTHONVERSION=2.4
export USE_MPI=y
export USE_MPIF=y
export USE_MPIF4=y
export MPI_LOC=/opt/openmpi
export MPI_INCLUDE=/opt/openmpi/include
export LIBRARY_PATH=$LIBRARY_PATH:/opt/openmpi/lib:/share/apps/openblas
export LIBMPI="-lmpi -lopen-rte -lopen-pal -ldl -lmpi_f77 -lpthread"
export BLASOPT="-L/share/apps/openblas/lib -lopenblas -lopenblas_nehalem-r0.1.1 -lopenblas_nehalemp-r0.1.1"
export ARMCI_NETWORK=SOCKETS
cd $NWCHEM_TOP/src
export FC=gfortran
make clean
make nwchem_config
make FC=gfortran
cd ../contrib
./getmem.nwchem
sudo chmod 755 /share/apps/nwchem/nwchem-6.3-src.2013-05-17 -R
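If the build finished cleanly, the binary ends up under bin/ for the build target; a quick check (a sketch):

```
# The compiled binary lands under bin/<NWCHEM_TARGET>; confirm it exists and is executable.
ls -lh $NWCHEM_TOP/bin/LINUX64/nwchem
file $NWCHEM_TOP/bin/LINUX64/nwchem
```

If the file is missing, look through the end of the make output for the first error.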
Create a default.nwchemrc in /share/apps/nwchem
and put symlinks to it in the users' home directories, e.g.

nwchem_basis_library /share/apps/nwchem/nwchem-6.3-src.2013-05-17/src/basis/libraries/
ffield amber
amber_1 /share/apps/nwchem/nwchem-6.3-src.2013-05-17/src/data/amber_s/
amber_2 /share/apps/nwchem/nwchem-6.3-src.2013-05-17/src/data/amber_x/
amber_3 /share/apps/nwchem/nwchem-6.3-src.2013-05-17/src/data/amber_q/
amber_4 /share/apps/nwchem/nwchem-6.3-src.2013-05-17/src/data/amber_u/
amber_5 /share/apps/nwchem/nwchem-6.3-src.2013-05-17/src/data/custom/
spce /share/apps/nwchem/nwchem-6.3-src.2013-05-17/src/data/solvents/spce.rst
charmm_s /share/apps/nwchem/nwchem-6.3-src.2013-05-17/src/data/charmm_s/
charmm_x /share/apps/nwchem/nwchem-6.3-src.2013-05-17/src/data/charmm_x/
cd ~
ln -s /share/apps/nwchem/default.nwchemrc .nwchemrc
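To confirm that the install and the .nwchemrc resolution work end to end, a minimal single-point job can be run (the geometry, the file name h2o.nw, and the core count below are illustrative, not part of the build itself):

```
# h2o.nw -- minimal SCF single-point test job
start h2o
geometry units angstroms
  O  0.000  0.000  0.000
  H  0.757  0.586  0.000
  H -0.757  0.586  0.000
end
basis
  * library sto-3g
end
task scf energy
```

Run it with e.g.

mpirun -n 2 /share/apps/nwchem/nwchem-6.3-src.2013-05-17/bin/LINUX64/nwchem h2o.nw

A successful run ends with "Total SCF energy" in the output and no basis-library errors, which confirms the .nwchemrc paths are correct.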
RE: CUDA
Does the CUDA support only work with CCSD(T) for single-point energies? Will NWChem use CUDA for other types of jobs, for example optimizations?