Note: you may want to install awscli and euca2ools. I didn't, so I don't actually know whether they are useful.
My instructions are quite rudimentary since I don't have much time to write these blog posts anymore. Hopefully there's enough information to get you through.
AWS
Either way, sign up for AWS. If you already have an Amazon ID I think you can use that. Go to https://aws.amazon.com/
Select Launch an Instance, pick the Ubuntu AMI, and do Review and Launch. I launched it as a t2.micro instance type, as it's free and sufficient for setting things up, but not for running jobs.
Hit launch, and create a new key pair. I called mine myfirstkeypair and saved the pem file in my ~/Downloads folder.
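ssh will refuse to use a key file that others can read, so you'll likely need to tighten the permissions first:
chmod 400 ~/Downloads/myfirstkeypair.pem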
In my Downloads folder:
ssh -i "myfirstkeypair.pem" ubuntu@ec2-11-222-33-444.us-west-2.compute.amazonaws.com
I then set a password in the ubuntu AWS image:
sudo passwd ubuntu
I added my id_rsa.pub to ~/.ssh/authorized_keys on the ubuntu AWS image to make logging in via ssh easier -- that way I won't need the pem file.
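One way to do that from your local machine, assuming your public key is in ~/.ssh/id_rsa.pub:
cat ~/.ssh/id_rsa.pub | ssh -i ~/Downloads/myfirstkeypair.pem ubuntu@ec2-11-222-33-444.us-west-2.compute.amazonaws.com 'cat >> ~/.ssh/authorized_keys'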
Set up Gaussian
I then connected with scp and uploaded my Gaussian files -- I went straight for EM64T G09D. It went quite fast, at 5+ MB/s:
scp E6L-103X.tgz ubuntu@ec2-00-111-22-333.us-west-2.compute.amazonaws.com:/home/ubuntu/E6L-103X.tgz
Once that was done, on the ubuntu AWS instance I did:
sudo apt-get install csh
sudo mkdir /opt/gaussian
cd /opt
sudo chown ubuntu gaussian -R
cd /opt/gaussian
cp ~/E6L-103X.tgz .
tar xvf E6L-103X.tgz
cd g09
csh bsd/install
echo 'export GAUSS_EXEDIR=/opt/gaussian/g09/bsd:/opt/gaussian/g09/local:/opt/gaussian/g09/extras:/opt/gaussian/g09' >> ~/.bashrc
echo 'export GAUSS_SCRDIR=/home/ubuntu/scratch' >> ~/.bashrc
echo 'export PATH=$PATH:/opt/gaussian/g09' >> ~/.bashrc
source ~/.bashrc
mkdir ~/scratch ~/jobs
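You can check that the environment is picked up correctly with
which g09
which should print /opt/gaussian/g09/g09.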
NOTE that you can't run any Gaussian jobs under a t2.micro instance. You will have to stop and relaunch as at least a t2.small instance, or the jobs will simply be 'Killed' (that's what is echoed in the terminal when you try to run).
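Once you've relaunched on a big enough instance, you can sanity-check the install with a minimal input file -- a hypothetical ~/jobs/water.in along these lines (a single-point HF calculation on water):
%chk=/home/ubuntu/scratch/water.chk
#p HF/3-21G

water single point

0 1
O
H 1 0.96
H 1 0.96 2 104.5

and run it with
g09 < ~/jobs/water.in > ~/jobs/water.out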
Note that if you terminate an instance it will be deleted. Stop the instance and then create a snapshot or an image from it to keep everything you've installed.
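If you did install awscli, creating the image can also be done from the command line -- something like this (the instance id is a placeholder):
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "g09-setup"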
Set up Slurm
You'll want a queue manager so that you can queue up several jobs to run in series. Also, you can set up your batch script so that it shuts down the instance when your job is done, to save money.
sudo apt-get update
sudo apt-get install slurm-llnl
Then set up /etc/slurm-llnl/slurm.conf along these lines:
ControlMachine=localhost
ControlAddr=127.0.0.1
MpiDefault=none
ProctrackType=proctrack/pgid
ReturnToService=2
SlurmctldPidFile=/var/run/slurm-llnl/slurmctld.pid
SlurmdPidFile=/var/run/slurm-llnl/slurmd.pid
SlurmdSpoolDir=/var/lib/slurm-llnl/slurmd
SlurmUser=slurm
StateSaveLocation=/var/lib/slurm-llnl/slurmctld
SwitchType=switch/none
TaskPlugin=task/none
FastSchedule=1
SchedulerType=sched/backfill
SelectType=select/linear
AccountingStorageType=accounting_storage/none
ClusterName=rupert
JobAcctGatherType=jobacct_gather/none
SlurmctldLogFile=/var/log/slurm-llnl/slurmctld.log
SlurmdLogFile=/var/log/slurm-llnl/slurmd.log
NodeName=localhost NodeAddr=127.0.0.1
PartitionName=All Nodes=localhost
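The NodeName line above doesn't tell Slurm how many CPUs or how much memory the node has; you can print the values slurmd detects and paste them into the config if you like:
sudo slurmd -C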
Slurm authenticates via munge, so generate a key for it:
sudo /usr/sbin/create-munge-key
Edit /etc/default/munge:
OPTIONS=--force
Then run
sudo service slurm-llnl restart
sudo service munge restart
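If everything came up properly, sinfo should show the node as idle -- something like:
sinfo
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
All          up   infinite      1   idle localhost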
Test using slurm.batch:
#!/bin/bash
#
#SBATCH -p All
#SBATCH --job-name=test
#SBATCH --output=res.txt
#
#SBATCH --ntasks=1
#SBATCH --time=10:00
srun hostname
srun sleep 60
and submit with
sbatch slurm.batch
Check the queue with squeue:
JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
    2       All     test   ubuntu  R       0:08      1 localhost
Benchmark:
#!/bin/csh
#SBATCH -p All
#SBATCH --time=9999999
#SBATCH --output=slurm.out
#SBATCH --job-name=benchmark
setenv GAUSS_SCRDIR /home/ubuntu/scratch
setenv GAUSS_EXEDIR /opt/gaussian/g09/bsd:/opt/gaussian/g09/local:/opt/gaussian/g09/extras:/opt/gaussian/g09
/opt/gaussian/g09/g09 < benchmark.in > benchmark.out
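Note that Gaussian only uses a single core unless the input asks for more, so for the benchmark to exercise the bigger instances the input file needs Link 0 lines like these (the values are just examples, matched to the instance):
%nprocshared=16
%mem=28GB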
Using the same opt/freq benchmark as in post 621:
c4.2xlarge   2 h 11 min   [1 h 20 min]    8 vCPU / 16 GB
c4.4xlarge   1 h 15 min   [44 min]       16 vCPU / 32 GB
c4.8xlarge   41 min       [25 min]       36 vCPU / 60 GB
It scales surprisingly well, although not perfectly linearly. That makes a smaller instance cheaper per job, so unless time is critical or you need the larger amount of memory, c4.8xlarge is not the first choice.
Dropbox:
You might want to use Dropbox to transfer files back and forth, especially finished job files (useful if you shut down the machine using a Slurm script as shown below).
cd ~ && wget -O - "https://www.dropbox.com/download?plat=lnx.x86_64" | tar xzf -
~/.dropbox-dist/dropboxd
This computer isn't linked to any Dropbox account...
Please visit https://www.dropbox.com/cli_link_nonce?nonce=0011223344556677889900aabbccddeef to link this device.
This computer isn't linked to any Dropbox account...
Open that link in a browser, then go back to the terminal.
wget -O - https://www.dropbox.com/download?dl=packages/dropbox.py > dropbox.py
sudo mv dropbox.py /usr/local/bin
sudo chmod +x /usr/local/bin/dropbox.py
dropbox.py autostart y
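You can check that the daemon is running and synced with
dropbox.py status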
Now, since you don't want to use up space unnecessarily (you're paying for it after all), exclude as many directories as possible. To exclude all existing dropbox dirs, do
cd ~/Dropbox
dropbox.py exclude add `ls -d */`
dropbox.py exclude add `ls *.*`
dropbox.py exclude list
Note that it can't handle directories with spaces in the name, so you'll need to polish the list by hand. Next create a directory where you want to run and store your jobs, e.g.
mkdir ~/Dropbox/aws_jobs
When you run a gaussian job, make sure to specify where the .chk files should end up, e.g.
%chk=/home/ubuntu/scratch/benchmark.chk
so that you don't use up space/bandwidth for your chk files (unless of course you want to).
Stop after execution:
Use a batch script along these lines:
#!/bin/csh
#SBATCH -p All
#SBATCH --time=9999999
#SBATCH --output=slurm.out
#SBATCH --job-name=benchmark
setenv GAUSS_SCRDIR /home/ubuntu/scratch
setenv GAUSS_EXEDIR /opt/gaussian/g09/bsd:/opt/gaussian/g09/local:/opt/gaussian/g09/extras:/opt/gaussian/g09
/opt/gaussian/g09/g09 < benchmark.in > benchmark.out
rm /home/ubuntu/scratch/*.*
sudo shutdown -h now
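If you run your jobs in ~/Dropbox/aws_jobs as suggested above, keep in mind that the output still needs time to sync before the machine goes down. A crude safeguard is to add a sleep before the shutdown line, e.g.
sleep 300
Polling dropbox.py status until it reports that it's up to date would be more robust, but also more fiddly.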