I've previously struggled with Dalton 2.0-cam and given up. I somehow didn't know about Dalton 2011 at that point, but it turns out it's much easier to build. Well, I managed to build it on ROCKS/CentOS (gcc 4.1). I'm still working on the Debian version, which has a much newer gcc (4.7).
Before you get started, you may want to compile ATLAS as shown here:
http://verahill.blogspot.com.au/2012/09/rocks-543-atlas-and-gromacs-on-xeon.html
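In rough outline, building ATLAS with the full netlib LAPACK (which is what the -llapack link line further down implies) looks something like the sketch below. The version numbers, tarball locations and the /share/apps/ATLAS prefix are assumptions on my part, so follow the linked post for the actual steps:
# assuming the ATLAS and netlib LAPACK tarballs are already in ~/tmp,
# and that you can write to /share/apps/ATLAS (sudo mkdir + chown, as for dalton below)
cd ~/tmp
tar xjf atlas3.10.0.tar.bz2
cd ATLAS
mkdir build
cd build
../configure -b 64 --prefix=/share/apps/ATLAS --with-netlib-lapack-tarfile=$HOME/tmp/lapack-3.4.1.tgz
make build
make check
make install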
License:
First go to http://daltonprogram.org/licence/ and fill out the license agreement. Once that's done you'll get an automated email with a license form, which you should print, sign, scan and email to the address you're given. Once your form has been processed you'll be sent another email with a user name and password. I received my user name and password the next business day.
Go online and download the source file, Dalton2011_release_v0.tgz, and put it in ~/tmp.
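For example, assuming the browser dropped the file in ~/Downloads (adjust to wherever yours ends up):
mkdir -p ~/tmp
mv ~/Downloads/Dalton2011_release_v0.tgz ~/tmp/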
Sort out where you want your program to end up
sudo mkdir /share/apps/dalton
sudo chown $USER /share/apps/dalton
mkdir /share/apps/dalton/bin /share/apps/dalton/basis /share/apps/dalton/lsdalton
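Note that in the configure transcript below I ended up pointing the install directory at /share/apps/dalton/test rather than bin; if you do the same, create that directory too:
mkdir /share/apps/dalton/test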
Next,
cd ~/tmp
tar xvf Dalton2011_release_v0.tgz
cd Dalton2011_release/DALTON
./configure
and answer all the questions:
------------------------------------------------------------------
Configuring the DALTON Makefile.config and "dalton" run script
------------------------------------------------------------------
INFO: Operating system from 'uname -s' : Linux
INFO: Processor type from 'uname -m' : x86_64
No architecture specified, attempting auto-configuration:
This appears to be a -linux architecture. Is this correct? [Y/n]
--> Installing DALTON on a -linux computer
Note that 64-bit integers are desirable for Cholesky and very large
scale CI, otherwise the most important effect is that some files will be bigger.
If you choose 64-bit integers, be careful that any system library
routines (incl. MPI) also use 64-bit integers!
Do you want 64-bit integers? [y/N]
Do you want to install the program in a parallel MPI version? [Y/n]
-->WARNING: Makefiles for MPI architecture are difficult to guess
Please compare the generated Makefile.config with local documentation.
Checking for Fortran compiler ...
from this list: mpif90 mpiifort ifort pgf95 pgf90 gfortran g95
Compiler /opt/openmpi/bin/mpif90 found, use this compiler? [Y/n]
-->Compiler mpif90 found and accepted.
Is backend compiler gfortran ? [Y/n]
Checking for C compiler ...
from this list: mpicc mpiicc icc ecc pgcc gcc
Compiler /opt/openmpi/bin/mpicc found, use this compiler? [Y/n]
-->Compiler mpicc found and accepted.
Testing existence of libraries in this order:
libacml.a libmkl.so libmkl_p3.a libatlas.a libblas.a
Directory search list for libraries:
/state/partition1/home/me/tmp/ATLAS/build/lib /state/partition1/apps/ATLAS/lib /lib /usr/local/lib /usr/lib /usr/local/lib/ATLAS /lib64 /usr/lib64 /usr/local/lib64
Do you want to replace this with your own directory search list? [y/N]
Found /state/partition1/home/me/tmp/ATLAS/build/lib/libatlas.a, use it? [Y/n]
Found /state/partition1/apps/ATLAS/lib/libatlas.a, use it? [Y/n]
-->The following mathematical library(ies) will be used:
-L/state/partition1/apps/ATLAS/lib -llapack -llapack -lf77blas -latlas
DALTON uses almost 100 Megabytes of static
allocations, in addition to the dynamic allocation.
DALTON has the possibility to reserve an amount of static memory
for storing two-electron integrals in direct and parallel calculations
Storing some or all of the 2-el. integrals in memory will speed up
direct and parallel calculations (and in particular the latter).
NOTE: This will increase the static memory allocation used by DALTON
Would you like to activate the possibility of storing 2-el.int. in memory? [y/N]
How many MB to use for storing 2-el. integrals?
-->Program will be installed with 500 MB (65000000 words) used for storing 2-el. integrals
Maximum amount of work memory for dynamic allocations can be changed
at run time with the environment variable WRKMEM (in REAL*8 words = megabytes/8)
or by using the -M option to the run script: "dalton -M mb ..." (in megabytes).
We recommend at least 200 MB work memory,
larger for correlated calculations, but it should for maximum
efficiency NOT exceed available physical memory per CPU in parallel calculations.
How many MB to use as default for work memory (hit return for default of 1000 MB)?
-->Program will be installed with a default work memory of 900 MB (117000000 words)
-->Current directory is /home/me/tmp/Dalton2011_release/DALTON
Use default ../bin as installation directory for DALTON binaries and scripts? [Y/n]
Please enter another installation directory:
-->DALTON executable and script will be placed in /share/apps/dalton/test directory
-->Default basis set directory will be /home/me/tmp/Dalton2011_release/DALTON/../basis/
Use this directory as default basis set directory? [Y/n]
Please choose another default basis set directory (must end with /)
-->Default basis set directory will be /share/apps/dalton/basis/
I did not find /work, /scratch, /scr, or /temp. I will use /tmp
-->Job specific directories under $SCRATCH/$USER
-->will be used for temporary files when running DALTON
Use SCRATCH=/tmp as default root scratch space in "dalton" run script? [Y/n]
-->Creating Makefile.config ...
gfortran version 412 prc=x86_64
INFO: Compiling with 32-bit integers.
INFO: Make sure pre-compiled BLAS, MPI etc. libraries are also with 32-bit integers!!!
Proper 64-bit file access detected.
-->Creating the DALTON run-script in /share/apps/dalton/test
The configuration of DALTON has finished succesfully.
Check compiler flags etc. in Makefile.config and run "make" to get executable.
Regardless of what you answer, here's an example of the Makefile.config that I used. The key is to add -I../modules to INCLUDES and to delete -fbacktrace.
ARCH = linux
#
#
CPPFLAGS = -DVAR_GFORTRAN -DSYS_LINUX -DVAR_MFDS -D'INSTALL_WRKMEM=117000000' -D'INSTALL_MMWORK=65000000' -D_FILE_OFFSET_BITS=64 -DVAR_MPI -DGFORTRAN=412 -DIMPLICIT_NONE
F90 = mpif90
CC = mpicc
LOADER = mpif90
RM = rm -f
FFLAGS = -march=x86-64 -O3 -ffast-math -funroll-loops -ftree-vectorize
SAFEFFLAGS = -march=x86-64 -O3 -ffast-math -funroll-loops -ftree-vectorize
CFLAGS = -march=x86-64 -O3 -ffast-math -funroll-loops -ftree-vectorize -std=c99 -DRESTRICT=restrict -DFUNDERSCORE=1
INCLUDES = -I../include -I../modules
MODULES = -J../modules
LIBS = -L/state/partition1/apps/ATLAS/lib -llapack -llapack -lf77blas -latlas -L/opt/openmpi/lib -lmpi
INSTALLDIR = /share/apps/dalton/test
PDPACK_EXTRAS = linpack.o eispack.o gp_zlapack.o gp_dlapack.o
GP_EXTRAS =
AR = ar
ARFLAGS = rvs
# flags for ftnchek on Dalton /hjaaj
CHEKFLAGS = -nopure -nopretty -nocommon -nousage -noarray -notruncation -quiet -noargumants -arguments=number -usage=var-unitialized
# -usage=var-unitialized:arg-const-modified:arg-alias
# -usage=var-unitialized:var-set-unused:arg-unused:arg-const-modified:arg-alias
#
default : dalton linuxparallel.x
SAFE_FFLAGS_for_ifort = $(FFLAGS)
#
# Parallel initialization
#
MPI_INCLUDE_DIR =
MPI_LIB_PATH =
MPI_LIB =
#
#
# Suffix rules
# hjaaj Oct 04: .g is a "cheat" suffix, for debugging.
# 'make x.g' will create x.o from x.F or x.c with -g debug flag set.
#
.SUFFIXES : .F .F90 .c .o .i .g .s
.F.o:
$(F90) $(INCLUDES) $(MODULES) $(CPPFLAGS) $(FFLAGS) -c $*.F
.F.i:
$(F90) $(INCLUDES) $(MODULES) $(CPPFLAGS) -E $*.F > $*.i
.F.g:
$(F90) $(INCLUDES) $(MODULES) $(CPPFLAGS) $(SAFEFFLAGS) -g -c $*.F
.F.s:
$(F90) $(INCLUDES) $(MODULES) $(CPPFLAGS) $(FFLAGS) -S -g -c $*.F
.F90.o:
$(F90) $(INCLUDES) $(MODULES) $(CPPFLAGS) $(FFLAGS) -c $*.F90
.F90.i:
$(F90) $(INCLUDES) $(MODULES) $(CPPFLAGS) -E $*.F90 > $*.i
.F90.g:
$(F90) $(INCLUDES) $(MODULES) $(CPPFLAGS) $(SAFEFFLAGS) -g -c $*.F90
.F90.s:
$(F90) $(INCLUDES) $(MODULES) $(CPPFLAGS) $(FFLAGS) -S -g -c $*.F90
.c.o:
$(CC) $(INCLUDES) $(CPPFLAGS) $(CFLAGS) -c $*.c
.c.i:
$(CC) $(INCLUDES) $(CPPFLAGS) $(CFLAGS) -E $*.c > $*.i
.c.g:
$(CC) $(INCLUDES) $(CPPFLAGS) $(CFLAGS) -g -c $*.c
.c.s:
$(CC) $(INCLUDES) $(CPPFLAGS) $(CFLAGS) -S -g -c $*.c
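If you'd rather make those two edits from the command line, something like the following sed one-liners should do it (a sketch only; double-check the resulting Makefile.config before building):
cd ~/tmp/Dalton2011_release/DALTON
sed -i 's/-fbacktrace//g' Makefile.config
sed -i '/^INCLUDES/ s|$| -I../modules|' Makefile.config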
If all looks well, run make:
make
cd ../
cp basis/* /share/apps/dalton/basis
DO NOT RUN MAKE IN PARALLEL, i.e. no make -j3 or anything like that.
Add /share/apps/dalton/bin to your PATH, i.e. add a line saying
export PATH=$PATH:/share/apps/dalton/bin
to your ~/.bashrc and source it.
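For example:
echo 'export PATH=$PATH:/share/apps/dalton/bin' >> ~/.bashrc
source ~/.bashrc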
So far I haven't had much time to look at it, but here's the result of the 'short' test series:
./TEST -dalton /share/apps/dalton/bin/dalton short
[..]
#####################################################################
Summary
#####################################################################
THERE IS A PROBLEM IN TEST CASE(S)
prop_exci prop_vibg2 walk_vibave2 dftmm_1
date and time : Sun Nov 4 18:41:59 PST 2012
Here's what I found for each of the troublesome ones above:
prop_exci:
126: INFO from READIN: Threshold for discarding integrals was 1.00D-16
127: INFO from READIN: Threshold is reset to minimum value 1.00D-15
But otherwise it finished ok.
prop_vibg2:
SIROUT stat info, IST and IEND = 0 -1
IST or IEND out of bounds - probably no optimization in this run.
But otherwise it finished ok.
walk_vibave2:
3 informational messages have been issued by Dalton,
output from 'grep -n INFO' (max 10 lines):
549: *** SETSIR-INFO, time in NSETUP: 0.00 seconds.
2346: *** SETSIR-INFO, time in NSETUP: 0.00 seconds.
3691: *** SETSIR-INFO, time in NSETUP: 0.00 seconds
But otherwise it finished ok.
dftmm_1:
NOTE: 1 warnings have been issued.
Check output, result, and error files for "WARNING".
dftmm_1.tar.gz has been copied to /home/me/tmp/Dalton2011_release/DALTON/test
----------------------------------------------------------
2 WARNINGS have been issued by Dalton,
output from 'grep -n -i WARNING' (max 10 warnings):
711: NOTE: 1 warnings have been issued.
712: Check output, result, and error files for "WARNING".
I can't find the warning in the output, which otherwise looks like it finished ok.
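If you want to dig for it yourself, you can unpack the copied tarball somewhere and grep it (the archive layout here is a guess on my part):
mkdir /tmp/dftmm_1_check
cd /tmp/dftmm_1_check
tar xzf ~/tmp/Dalton2011_release/DALTON/test/dftmm_1.tar.gz
grep -rin warning .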
All in all, it looks very promising.
Note on running in parallel:
I had to do
mkdir /tmp/$USER
first.
In addition, when running I have to explicitly define my scratch directory:
dalton -t /tmp/$USER -N 4 myinput.dal myinput.mol
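Putting the two together, a minimal wrapper might look like this (the input file names are just placeholders):
#!/bin/bash
# make sure the dalton run script is on the PATH
export PATH=$PATH:/share/apps/dalton/bin
# dalton won't create the per-user scratch root for you
mkdir -p /tmp/$USER
# -t sets the scratch directory, -N the number of MPI processes
dalton -t /tmp/$USER -N 4 myinput.dal myinput.mol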
Other than those two things it's OK. My overall impression, though, is that things aren't very stable (some jobs crash, some don't).