30 May 2013

434. ECCE 6.4 in a 32 bit Debian 7 Virtual Machine by compiling, or in a 64 bit Debian 7 Virtual Machine by using pre-built binaries

!NOTE!
ECCE is only available prebuilt for 64 bit OS. You will NOT be able to install the binaries from EMSL on a 32 bit OS in your virtual machine.

!NOTE2! I use localhost in the set-up below. Be aware that this will block outside access: http://www.nwchem-sw.org/index.php/Special:AWCforum/st/id858/#post_3178


This post shows how to
1. compile ECCE for a 32 bit system
and
2. install the pre-built EMSL 64 bit binaries for a 64 bit system.

The reason I show both is that if you are running a 32 bit host system you might not be able to run a 64 bit guest OS.
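If you're not sure what your host can handle, here's a quick check on a linux host (these commands are generic and not specific to this guide):

uname -m                            # x86_64 means the host kernel is 64 bit
egrep -c '(vmx|svm)' /proc/cpuinfo  # non-zero means the CPU has VT-x/AMD-V, which virtualbox needs for 64 bit guests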

!NOTE!

This is essentially an update of an older post (http://verahill.blogspot.com.au/2012/06/ecce-in-virtual-machine-step-by-step.html) which describes how to install ECCE 6.3 on Debian 6.

Part of the reason for updating is this comment: http://verahill.blogspot.com.au/2012/06/ecce-in-virtual-machine-step-by-step.html?showComment=1369743102385#c8435880599103089732

If you are running a debian or redhat based linux distribution you should be able to install ecce natively without issue (e.g. http://verahill.blogspot.com.au/2013/01/325-compiling-ecce-64-on-debian-testing.html).

However, if you are using Windows or OS X you will probably want to install ecce inside a virtual machine, hence this post:


0. Install virtualbox
How you install virtualbox depends on your host system. I'm running debian wheezy AMD 64, so I did
cd ~/Downloads
wget http://download.virtualbox.org/virtualbox/4.2.12/virtualbox-4.2_4.2.12-84980~Debian~wheezy_amd64.deb
sudo dpkg -i virtualbox-4.2_4.2.12-84980~Debian~wheezy_amd64.deb
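(Optional) If you want USB passthrough to work later, you will probably also need to add yourself to the vboxusers group that the Oracle package sets up; something like

sudo adduser $USER vboxusers

followed by logging out and back in again should do it.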

Note that you are in no way obliged to use the Oracle-packaged version of virtualbox. I'm using it because I've had issues with the debian packages in the past when using very new kernels (that I've compiled myself). See e.g. http://verahill.blogspot.com.au/2013/05/419-talking-to-myself-in-public-dkms.html

See here for other versions of virtualbox: https://www.virtualbox.org/wiki/Linux_Downloads


1. Download the Debian 7 installation CD
If you are on a unix-like system you should be able to do something along the lines of
mkdir ~/tmp
cd ~/tmp
wget http://cdimage.debian.org/debian-cd/7.0.0/i386/iso-cd/debian-7.0.0-i386-netinst.iso
to download the 32 bit iso and

wget http://cdimage.debian.org/debian-cd/7.0.0/amd64/iso-cd/debian-7.0.0-amd64-netinst.iso
to download the 64 bit version. Please read the introduction to this post to learn about the consequences of installing a 32 bit system.
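Whichever image you pick, it doesn't hurt to verify the download. Assuming the checksum files are published in the same directory as the iso (which is normally the case for debian images), something along these lines should work:

wget http://cdimage.debian.org/debian-cd/7.0.0/i386/iso-cd/SHA256SUMS
sha256sum -c SHA256SUMS 2>/dev/null | grep netinst   # should report the netinst iso as OK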


2. Set up the virtual machine
(in this example I'm using the i386 version of debian, but the steps are identical for the amd64 version)

Start virtualbox and click on New:

Mount the CD



3. Install Debian 7 on the virtual machine
Click Start and the installation should start. (the steps are the same for the i386 and the amd64 versions)

Debian 7 (wheezy) is now installed. Log in.

At this point you can install whatever you want. I suggest installing the

  • LXDE desktop environment (I know that it's a tautology). 
  • openjdk-7-jdk (you might be able to get away with -jre)
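If you selected LXDE during the installation you're already done; otherwise, a minimal sketch of the corresponding apt commands (package names as in Debian 7):

sudo apt-get install lxde openjdk-7-jdk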





Reboot.


4a. 32 bit Virtual machine: Compile ECCE v 6.4
Note: you might have to go to http://ecce.emsl.pnl.gov/using/download.shtml and download the source in a browser if wget doesn't work.





mkdir ~/tmp
cd ~/tmp
wget http://ecce.emsl.pnl.gov/cgi-bin/help/nwecce.pl/ecce-v6.4-src.tar.bz2
tar xvf ecce-v6.4-src.tar.bz2
sudo apt-get install bzip2 build-essential autoconf libtool ant pkg-config libgtk2.0-dev libxt-dev csh gfortran openjdk-6-jdk python-dev libjpeg-dev imagemagick xterm
cd ecce-v6.4/
export ECCE_HOME=`pwd`
cd build/
./build_ecce

At this point you'll be going through the same steps as shown in this post: http://verahill.blogspot.com.au/2013/01/325-compiling-ecce-64-on-debian-testing.html (yesterday's testing is today's stable)
Checking prerequisites for building ECCE...

If any of the following tools aren't found or aren't the right version,
hit ctrl-c at the prompt and either find or install the tool before
re-running this script. The whereis command is useful for finding tools
not in your path.

Found gcc in: /usr/bin/gcc
ECCE requires gcc 3.2.x or 4.x.x
This version: gcc (Debian 4.7.2-5) 4.7.2
Hit return if this gcc is OK...
[..]
This works fine unless your site needs support for multiple platforms.

Finished checking prerequisites for building ECCE.

Do you want to skip these checks for future build_ecce invocations (y/n)? y
./build_ecce
Xerxes built
./build_ecce
Mesa OpenGL built
./build_ecce
wxWidgets built
./build_ecce
wxPython built
./build_ecce
Apache HTTP server built
./build_ecce
ECCE built and distribution created in /home/verahill/tmp/ecce-v6.4
cd ../
./install_ecce.v6.4.csh



and continue to section 5.

4b. 64 bit Virtual machine: Download ECCE v 6.4


mkdir ~/tmp
cd ~/tmp
wget http://ecce.emsl.pnl.gov/cgi-bin/help/nwecce.pl/install_ecce.v6.4.rhel5-gcc4.1.2-m64.csh
sudo apt-get install csh
csh install_ecce.v6.4.rhel5-gcc4.1.2-m64.csh

and continue to section 5.

5. Install ECCE
Main ECCE installation menu
===========================
1) Help on main menu options
2) Prerequisite software check
3) Full install
4) Full upgrade
5) Application software install
6) Application software upgrade
7) Server install
8) Server upgrade

IMPORTANT: If you are uncertain about any aspect of installing or running
ECCE at your site, please refer to the detailed ECCE Installation and
Administration Guide at http://ecce.pnl.gov/docs/installation/2864B-Installation.pdf

Hit enter at prompts to accept the default value in brackets.

Selection: [1] 3

Host name: [eccehost] localhost

Application installation directory: [/home/verahill/tmp/ecce-v6.4/ecce-v6.4/apps] /home/verahill/.ecce/apps

Server installation directory: [/home/verahill/.ecce/server]

ECCE v6.4 will be installed using the settings:
  Installation type: [full install]
  Host name: [localhost]
  Application installation directory: [/home/verahill/.ecce/apps]
  Server installation directory: [/home/verahill/.ecce/server]

Are these choices correct (yes/no/quit)? [yes]

Installing ECCE application software in /home/verahill/.ecce/apps...
  Extracting application distribution...
  Extracting NWChem binary distribution...
  Extracting NWChem common distribution...
  Extracting client WebHelp distribution...
  Configuring application software...
  Configuring NWChem...

Installing ECCE server in /home/verahill/.ecce/server...
  Extracting data server in /home/verahill/.ecce/server/httpd...
  Extracting data libraries in /home/verahill/.ecce/server/data...
  Extracting Java Messaging Server in /home/verahill/.ecce/server/activemq...
  Configuring ECCE server...

ECCE installation succeeded.

***************************************************************
!! You MUST perform the following steps in order to use ECCE !!

-- Unless only the user 'verahill' will be running ECCE,
   start the ECCE server as 'verahill' with:
     /home/verahill/.ecce/server/ecce-admin/start_ecce_server

-- To register machines to run computational codes, please see
   the installation and compute resource registration manuals at
   http://ecce.pnl.gov/using/installguide.shtml

-- Before running ECCE each user must source an environment setup script.
   For csh/tcsh users add this to ~/.cshrc:
     if ( -e /home/verahill/.ecce/apps/scripts/runtime_setup ) then
       source /home/verahill/.ecce/apps/scripts/runtime_setup
     endif
   For sh/bash users, add this to ~/.profile or ~/.bashrc:
     if [ -e /home/verahill/.ecce/apps/scripts/runtime_setup.sh ]; then
       . /home/verahill/.ecce/apps/scripts/runtime_setup.sh
     fi
***************************************************************
echo 'export ECCE_HOME=/home/verahill/.ecce/apps' >> ~/.bashrc
echo 'PATH=$PATH:/home/verahill/.ecce/server/ecce-admin/:/home/verahill/.ecce/apps/scripts/' >> ~/.bashrc
source ~/.bashrc
start_ecce_server 
/home/verahill/.ecce/server/httpd/bin/apachectl start: httpd started
[1] 4137
 INFO  BrokerService - ActiveMQ 5.1.0 JMS Message Broker (localhost) is starting
 INFO  BrokerService - ActiveMQ JMS Message Broker (localhost, ID:ecce64bit-35450-1369895708602-0:0) started
ecce
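Note that the two echo lines above simply put ECCE_HOME and the ecce scripts in your PATH by hand. If you'd rather follow the installer's own suggestion, sourcing the bundled runtime_setup.sh from ~/.bashrc should achieve the same thing (the path is the one reported by the installer output above):

echo 'if [ -e /home/verahill/.ecce/apps/scripts/runtime_setup.sh ]; then . /home/verahill/.ecce/apps/scripts/runtime_setup.sh; fi' >> ~/.bashrc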





Note that this is just the beginning. You should now compile nwchem and set up ecce to work with nwchem either locally or remotely.

This post shows one type of configuration: http://verahill.blogspot.com.au/2012/06/ecce-in-virtual-machine-step-by-step.html

But read the documentation and search this site and you'll find more examples.

433. Wine 1.5.31 on Debian

Here's a generic way of building Wine which works for 1.5.31 (as well as for 1.5.28 and everything in between).




See here for information about 3D acceleration using libGL/U with Wine: http://verahill.blogspot.com.au/2013/05/429-briefly-wine-libglliubglu-blender.html

Getting started:
If you already set up a chroot to build an earlier version (e.g. 1.5.28), you don't need to set up a new chroot to build 1.5.31. In that case, skip the set-up step below and instead re-enter your existing chroot like this:
sudo mount -o bind /proc wine32/proc
sudo cp /etc/resolv.conf wine32/etc/resolv.conf
sudo chroot wine32
su sandbox
cd ~/tmp

Setting up the Chroot
sudo apt-get install debootstrap
mkdir $HOME/tmp/architectures/wine32 -p
cd $HOME/tmp/architectures
sudo debootstrap --arch i386 wheezy $HOME/tmp/architectures/wine32 http://ftp.au.debian.org/debian/
sudo mount -o bind /proc wine32/proc
sudo cp /etc/resolv.conf wine32/etc/resolv.conf
sudo chroot wine32

You're now in the chroot:
apt-get update
apt-get install locales sudo vim
echo 'export LC_ALL="C"'>>/etc/bash.bashrc
echo 'export LANG="C"'>>/etc/bash.bashrc
echo '127.0.0.1 localhost beryllium' >> /etc/hosts
source /etc/bash.bashrc
adduser sandbox
usermod -g sudo sandbox
echo 'Defaults !tty_tickets' >> /etc/sudoers
su sandbox
cd ~/

Replace 'beryllium' with the name of your host system (it's just to suppress error messages).
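If you don't remember the name, you can get it by running this on the host (outside the chroot):

hostname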

Building Wine
While still in the chroot, continue (the :i386 suffixes are ok; since the chroot itself is 32 bit you don't strictly need them, but they do no harm):

sudo apt-get install libx11-dev:i386 libfreetype6-dev:i386 libxcursor-dev:i386 libxi-dev:i386 libxxf86vm-dev:i386 libxrandr-dev:i386 libxinerama-dev:i386 libxcomposite-dev:i386 libglu-dev:i386 libosmesa-dev:i386 libdbus-1-dev:i386 libgnutls-dev:i386 libncurses-dev:i386 libsane-dev:i386 libv4l-dev:i386 libgphoto2-2-dev:i386 liblcms-dev:i386 libgstreamer-plugins-base0.10-dev:i386 libcapi20-dev:i386 libcups2-dev:i386 libfontconfig-dev:i386 libgsm1-dev:i386 libtiff-dev:i386 libpng-dev:i386 libjpeg-dev:i386 libmpg123-dev:i386 libopenal-dev:i386 libldap-dev:i386 libxrender-dev:i386 libxml2-dev:i386 libxslt-dev:i386 libhal-dev:i386 gettext:i386 prelink:i386 bzip2:i386 bison:i386 flex:i386 oss4-dev:i386 checkinstall:i386 ocl-icd-libopencl1:i386 opencl-headers:i386 libasound2-dev:i386 build-essential
mkdir ~/tmp
cd ~/tmp
wget http://prdownloads.sourceforge.net/wine/wine-1.5.31.tar.bz2
tar xvf wine-1.5.31.tar.bz2
cd wine-1.5.31/
./configure
time make -j3
sudo checkinstall --install=no
checkinstall 1.6.2, Copyright 2009 Felipe Eduardo Sanchez Diaz Duran
This software is released under the GNU GPL.

The package documentation directory ./doc-pak does not exist.
Should I create a default set of package docs?  [y]:

Preparing package documentation...OK

Please write a description for the package.
End your description with an empty line or EOF.
>> wine 1.5.31
>>

*****************************************
**** Debian package creation selected ***
*****************************************

This package will be built according to these values:

0 -  Maintainer: [ root@beryllium ]
1 -  Summary: [ wine 1.5.31 ]
2 -  Name:    [ wine ]
3 -  Version: [ 1.5.31 ]
4 -  Release: [ 1 ]
5 -  License: [ GPL ]
6 -  Group:   [ checkinstall ]
7 -  Architecture: [ i386 ]
8 -  Source location: [ wine-1.5.31 ]
9 -  Alternate source location: [  ]
10 - Requires: [  ]
11 - Provides: [ wine ]
12 - Conflicts: [  ]
13 - Replaces: [  ]
Checkinstall takes a little while (in particular this step: 'Copying files to the temporary directory...').

Installing Wine

Exit the chroot
sandbox@beryllium:~/tmp/wine-1.5.31$ exit
exit
root@beryllium:/# exit
exit
me@beryllium:~/tmp/architectures$ 

On your host system
 Enable multiarch* and install ia32-libs, since you've built a proper 32 bit binary:

sudo dpkg --add-architecture i386
sudo apt-get update
sudo apt-get install ia32-libs

*At some point I think ia32-libs may be replaced by proper multiarch packages, but maybe not. So we're kind of doing both here.
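You can confirm that the i386 architecture really was registered (and hence that the 32 bit dependencies of the .deb can be resolved) with:

dpkg --print-foreign-architectures   # should list i386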

 Copy the .deb package and install it
sudo cp wine32/home/sandbox/tmp/wine-1.5.31/wine_1.5.31-1_i386.deb .
sudo chown $USER wine_1.5.31-1_i386.deb
sudo dpkg -i wine_1.5.31-1_i386.deb
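A quick sanity check once dpkg is done (assuming the package put wine in your PATH, which checkinstall-built packages normally do):

wine --version   # should report wine-1.5.31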


24 May 2013

432. NWChem 6.3 -- COSMO is now fast(er)!

I probably would've been more excited about this a year ago, back when I 'believed' in implicit solvation models (nothing's perfect, and we use what's practical, so they do fill a strong need; they just aren't very informative for a lot of systems), but it's still a Good Thing.

COSMO used numerical gradients in NWChem 6.1.1 and earlier versions, which made it horrendously slow in many cases, in particular if you needed to optimise a structure using implicit solvation. COSMO has been -- and still is -- the only implicit solvation model implemented in NWChem, so slow COSMO puts a bit of a spanner in the solvation energy works. Sometimes the calculation even refuses to converge at all.

In contrast, Gaussian has had a number of implicit solvation models implemented, ranging from the quick and dirty PCM, to slower (and better?) C-PCM and I-PCM.

So this is great news.

A quick example:


The test:
Here's a test job (the default cosmo parameters aren't realistic, but this is for testing purposes):
scratch_dir /scratch
start benzene

geometry units angstroms
  C  0.100  1.396  0.000
  C  1.209  0.698  0.000
  C  1.209 -0.698  0.000
  C  0.000 -1.396  0.000
  C -1.209 -0.698  0.000
  C -1.209  0.698  0.000
  H  0.000  2.479  0.000
  H  2.147  1.240  0.000
  H  2.147 -1.240  0.000
  H  0.000 -2.479  0.000
  H -2.147 -1.240  0.000
  H -2.147  1.240  0.000
end

basis
  H library "6-31+g*"
  c library "6-31+g*"
end

dft
  direct
end

cosmo
end

scf
  maxiter 999
end

task dft
Note that this is the same test job (plus cosmo, minus optimize) as shown here: http://verahill.blogspot.com.au/2013/05/430-briefly-crude-comparison-of.html
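To run the deck yourself, the exact command depends on how you built and named your NWChem binary; as a rough sketch (the binary name, core count and file names here are my assumptions, and scratch_dir must point to a directory that actually exists):

mkdir -p /scratch
mpirun -n 6 nwchem benzene_cosmo.nw | tee benzene_cosmo.out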

The results:
And here is what I see using NWChem 6.3 (w/ ACML 5.3.1, AMD FX 8150, 32 GB RAM):
6.1.1 19.4 seconds
6.3   14.3 seconds

The difference isn't significant (run times for such a short job vary too much for us to tell which version is really faster).

But when we change task dft to task dft optimize we get
6.1.1 Fails after 2600 seconds
6.3   128.3 seconds

6.3 churns through the steps pretty efficiently:
@ Step       Energy      Delta E   Gmax     Grms     Xrms     Xmax   Walltime
@ ---- ---------------- -------- -------- -------- -------- -------- --------
@    0   -230.09337488  0.0D+00  0.07376  0.01302  0.00000  0.00000      18.1
@    1   -230.10523734 -1.2D-02  0.00903  0.00231  0.03627  0.10509      45.7
@    2   -230.10619442 -9.6D-04  0.00491  0.00084  0.01898  0.06082      69.1
@    3   -230.10628696 -9.3D-05  0.00176  0.00030  0.00737  0.02428      93.3
@    4   -230.10629787 -1.1D-05  0.00023  0.00005  0.00219  0.00682     115.8
@    5   -230.10629827 -4.0D-07  0.00004  0.00001  0.00047  0.00136     128.2
@    5   -230.10629827 -4.0D-07  0.00004  0.00001  0.00047  0.00136     128.2
while 6.1.1 drags itself along for almost an hour:
@ Step       Energy      Delta E   Gmax     Grms     Xrms     Xmax   Walltime
@ ---- ---------------- -------- -------- -------- -------- -------- --------
@    0   -230.09389924  0.0D+00  0.07389  0.01306  0.00000  0.00000     691.4
@    1   -230.10680306 -1.3D-02  0.01081  0.00197  0.03065  0.10438    1378.3
@    2   -230.10690186 -9.9D-05  0.01000  0.00167  0.00231  0.00803    2092.2
before failing with
6:6:driver: task_gradient failed:: 0
(rank:6 hostname:neon pid:4536):ARMCI DASSERT fail. ../../ga-5-1/armci/src/common/armci.c:ARMCI_Error():208 cond:0
------------------------------------------------------------------------
There is an error related to the specified geometry
------------------------------------------------------------------------

Sure, the optimization takes 128 seconds instead of ca 44 seconds, but for anyone who's used NWChem with COSMO in the past, that's actually not too bad.

I ran another job to get a better feeling for how much longer COSMO vs no COSMO takes for optimization. Optimization of Arecoline (available in ECCE as a fragment) at rb3lyp/6-31+G* takes 2h 5 min with COSMO (33 optimization steps). Without COSMO it takes 37 minutes and uses 14 steps.
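In other words, COSMO costs roughly 3.8 minutes per optimisation step against about 2.6 minutes per step without it, so each individual step is only some 40% slower; most of the extra wall time comes from COSMO needing more than twice as many steps (33 vs 14) to converge.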