
Molecular Dynamics Simulation on massively parallel computers using LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator)

Objective

To study the behaviour of molecular dynamics using LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator), to visualize the motion of atoms using different tools, and to simulate the motion of thousands of atoms.

Introduction

LAMMPS stands for Large-scale Atomic/Molecular Massively Parallel Simulator.

LAMMPS is a classical molecular dynamics code that models an ensemble of particles in a liquid, solid, or gaseous state. It can model atomic, polymeric, biological, metallic, granular, and coarse-grained systems using a variety of force fields and boundary conditions.

LAMMPS runs efficiently on single-processor desktop or laptop machines, but is designed for parallel computers. It will run on any parallel machine that compiles C++ and supports the MPI message-passing library.

LAMMPS can model systems with only a few particles up to millions or billions.

LAMMPS is a freely-available open-source code which means you can use or modify the code however you wish. LAMMPS is designed to be easy to modify or extend with new capabilities, such as new force fields, atom types, boundary conditions, or diagnostics.

LAMMPS integrates Newton's equations of motion for collections of atoms, molecules, or macroscopic particles that interact via short- or long-range forces with a variety of initial and/or boundary conditions. For computational efficiency LAMMPS uses neighbor lists to keep track of nearby particles. The lists are optimized for systems with particles that are repulsive at short distances, so that the local density of particles never becomes too large. On parallel machines, LAMMPS uses spatial-decomposition techniques to partition the simulation domain into small 3d sub-domains, one of which is assigned to each processor. Processors communicate and store "ghost" atom information for atoms that border their sub-domain. LAMMPS is most efficient (in a parallel sense) for systems whose particles fill a 3d rectangular box with roughly uniform density.
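To make the time-integration loop concrete, here is a minimal C sketch of velocity-Verlet steps with a Lennard-Jones cutoff. This is illustrative only, not LAMMPS source code: for clarity it visits all O(N^2) atom pairs, which is exactly the cost that LAMMPS's neighbor lists and spatial decomposition avoid.

/* Minimal velocity-Verlet sketch (not LAMMPS source), LJ reduced units.
   Compile with: cc md_sketch.c -o md_sketch */
#include <stdio.h>

#define N 4            /* number of atoms (illustrative) */
#define DT 0.005       /* timestep */
#define RCUT 2.5       /* short-range cutoff */

static double x[N][3], v[N][3], f[N][3];

static void compute_forces(void)
{
    for (int i = 0; i < N; i++)
        f[i][0] = f[i][1] = f[i][2] = 0.0;
    for (int i = 0; i < N; i++)                 /* all pairs: O(N^2); */
        for (int j = i + 1; j < N; j++) {       /* LAMMPS uses neighbor lists */
            double d[3], r2 = 0.0;
            for (int k = 0; k < 3; k++) {
                d[k] = x[i][k] - x[j][k];
                r2 += d[k] * d[k];
            }
            if (r2 > RCUT * RCUT) continue;     /* repulsive-at-short-range LJ */
            double r6 = 1.0 / (r2 * r2 * r2);
            double fmag = 24.0 * r6 * (2.0 * r6 - 1.0) / r2; /* eps = sigma = 1 */
            for (int k = 0; k < 3; k++) {
                f[i][k] += fmag * d[k];
                f[j][k] -= fmag * d[k];
            }
        }
}

int main(void)
{
    for (int i = 0; i < N; i++)                 /* atoms on a line, 1.2 apart */
        x[i][0] = 1.2 * i;
    compute_forces();
    for (int step = 0; step < 1000; step++) {
        for (int i = 0; i < N; i++)             /* half kick + drift (mass = 1) */
            for (int k = 0; k < 3; k++) {
                v[i][k] += 0.5 * DT * f[i][k];
                x[i][k] += DT * v[i][k];
            }
        compute_forces();                       /* forces at new positions */
        for (int i = 0; i < N; i++)             /* second half kick */
            for (int k = 0; k < 3; k++)
                v[i][k] += 0.5 * DT * f[i][k];
    }
    printf("x0 = %g %g %g\n", x[0][0], x[0][1], x[0][2]);
    return 0;
}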


Here is a listing of some features of LAMMPS:

- runs on a single processor or in parallel
- distributed-memory message-passing parallelism (MPI)
- spatial-decomposition of simulation domain for parallelism
- open-source distribution
- highly portable C++
- optional libraries used: MPI and single-processor FFT
- easy to extend with new features and functionality
- runs from an input script
- syntax for defining and using variables and formulas
- syntax for looping over runs and breaking out of loops
- run one or multiple simulations simultaneously (in parallel) from one script
- build as library, invoke LAMMPS thru library interface or provided Python wrapper
- couple with other codes: LAMMPS calls other code, other code calls LAMMPS, umbrella code calls both

Literature Survey

LAMMPS depends on some libraries and compilers, so build those before installing LAMMPS on your machine. The next few pages explain the different compilers and libraries, how to install them, and why each is needed before building LAMMPS.

1. Intel ICC Compiler

Download it from http://www.intel.com/software/products/noncom

Intel C++ Compiler (also known as icc or icl) is a group of C and C++ compilers from Intel Corporation available for GNU/Linux, Mac OS X, and Microsoft Windows.

Intel supports compilation for its IA-32 and Intel 64 processors. Intel C++ Compiler supports both OpenMP 3.0 and automatic parallelization for symmetric multiprocessing. With the add-on capability Cluster OpenMP, the compiler can also automatically generate Message Passing Interface calls for distributed memory multiprocessing from OpenMP directives.

Intel C++ Compiler belongs to the family of compilers with the Edison Design Group frontend (like the SGI MIPSpro, Comeau C++, Portland Group, and others). The compiler is also notable for being widely used for SPEC CPU Benchmarks of IA-32, x86-64, and Itanium 2 architectures.

The Intel C++ Compiler is available in four forms. It is part of Intel Parallel Studio, the Intel C++ Compiler Professional Edition package, the Intel Compiler Suite package and the Intel Cluster Toolkit, Compiler Edition.

2. FFTW Library

FFTW is a comprehensive collection of fast C routines for computing the discrete Fourier transform in one or more dimensions, of both real and complex data, and of arbitrary input size. FFTW also includes parallel transforms for both shared- and distributed-memory systems.

FFTW is usually faster (and sometimes much faster) than all other freely-available Fourier transform programs found on the Net. For transforms whose size is a power of two, it compares favorably with the FFT codes in Sun's Performance Library and IBM's ESSL library, which are targeted at specific machines. Moreover, FFTW's performance is portable. Indeed, FFTW is unique in that it automatically adapts itself to your machine, your cache, the size of your memory, the number of registers, and all the other factors that normally make it impossible to optimize a program for more than one machine. An extensive comparison of FFTW's performance with that of other Fourier transform codes has been made. The results are available on the Web at the benchFFT home page.

In order to use FFTW effectively, one needs to understand one basic concept of FFTW's internal structure. FFTW does not use a fixed algorithm for computing the transform; instead it adapts the DFT algorithm to details of the underlying hardware in order to achieve the best performance. Hence, the computation of the transform is split into two phases. First, FFTW's planner is called, which "learns" the fastest way to compute the transform on your machine. The planner produces a data structure called a plan that contains this information. Subsequently, the plan is passed to FFTW's executor, along with an array of input data. The executor computes the actual transform, as dictated by the plan. The plan can be reused as many times as needed. In typical high-performance applications, many transforms of the same size are computed, and consequently a relatively expensive initialization of this sort is acceptable. On the other hand, if you need only a single transform of a given size, the one-time cost of the planner becomes significant. For this case, FFTW provides fast planners based on heuristics or on previously computed plans.
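As a minimal sketch of this plan/execute workflow, using the FFTW 2.x complex-transform API (the version this report installs; FFTW 3 later renamed these calls), one might write:

/* FFTW 2.x plan/execute sketch; link with -lfftw */
#include <stdio.h>
#include <fftw.h>

int main(void)
{
    const int n = 16;
    fftw_complex in[16], out[16];

    /* Planner phase: "learns" a fast algorithm for size n.
       FFTW_MEASURE times candidate algorithms; FFTW_ESTIMATE
       picks one quickly from heuristics. */
    fftw_plan plan = fftw_create_plan(n, FFTW_FORWARD, FFTW_ESTIMATE);

    for (int i = 0; i < n; i++) {   /* a unit impulse as test input */
        in[i].re = (i == 0) ? 1.0 : 0.0;
        in[i].im = 0.0;
    }

    /* Executor phase: the same plan can be reused for any number
       of transforms of this size. */
    fftw_one(plan, in, out);

    printf("out[0] = %g + %gi\n", out[0].re, out[0].im);
    fftw_destroy_plan(plan);
    return 0;
}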

Besides the automatic performance adaptation performed by the planner, it is also possible for advanced users to customize FFTW for their special needs. As distributed, FFTW works most efficiently for arrays whose size can be factored into small primes (2, 3, 5, and 7), and uses a slower general-purpose routine for other factors. FFTW, however, comes with a code generator that can produce fast C programs for any particular array size you may care about. For example, if you need transforms of size 513 = 19*3^3, you can customize FFTW to support the factor 19 efficiently.


FFTW can exploit multiple processors if you have them. FFTW comes with a shared-memory implementation on top of POSIX (and similar) threads, as well as a distributed-memory implementation based on MPI.

Installation and Customization of FFTW

Following is the description of the installation and customization of FFTW, the latest version of which may be downloaded from the FFTW home page.

As distributed, FFTW makes very few assumptions about your system. All you need is an ANSI C compiler (gcc is fine, although vendor-provided compilers often produce faster code). However, installation of FFTW is somewhat simpler if you have a Unix or a GNU system, such as Linux.

Installation on Unix

FFTW comes with a configure program in the GNU style. Installation can be as simple as:

./configure
make
make install

This will build the uniprocessor complex and real transform libraries along with the test programs. We strongly recommend that you use GNU make if it is available; on some systems it is called gmake. The "make install" command installs the fftw and rfftw libraries in standard places, and typically requires root privileges (unless you specify a different install directory with the --prefix flag to configure). You can also type "make check" to put the FFTW test programs through their paces. If you have problems during configuration or compilation, you may want to run "make distclean" before trying again; this ensures that you don't have any stale files left over from previous compilation attempts.

The configure script knows good CFLAGS (C compiler flags) for a few systems. If your system is not known, the configure script will print out a warning. In this case, you can compile FFTW with the command

make CFLAGS="<write your CFLAGS here>"

The configure program supports all the standard flags defined by the GNU Coding Standards; see the INSTALL file in FFTW or the GNU web page. Note especially --help to list all flags and --enable-shared to create shared, rather than static, libraries. configure also accepts a few FFTW-specific flags; the ones relevant here, --enable-type-prefix and --enable-float, are described below.

It is often useful to install both single- and double-precision versions of the FFTW libraries on the same machine, and we provide a convenient mechanism for achieving this on Unix systems.

When the --enable-type-prefix option of configure is used, the FFTW libraries and header files are installed with a prefix of `d' or `s', depending upon whether you compiled in double or single precision. Then, instead of linking your program with -lrfftw -lfftw, for example, you would link with -ldrfftw -ldfftw to use the double-precision version or with -lsrfftw -lsfftw to use the single-precision version. Also, you would #include <drfftw.h> or <srfftw.h> instead of <rfftw.h>, and so on.

The names of FFTW functions, data types, and constants remain unchanged! You still call, for instance, fftw_one and not dfftw_one. Only the names of header files and libraries are modified. One consequence of this is that you cannot use both the single- and double-precision FFTW libraries in the same program, simultaneously, as the function names would conflict.

So, to install both the single- and double-precision libraries on the same machine, use the following set of commands:

./configure --enable-type-prefix [ other options ]
make
make install
make clean
./configure --enable-float --enable-type-prefix [ other options ]
make
make install

To force configure to use a particular C compiler (instead of the default, usually cc), set the environment variable CC to the name of the desired compiler before running configure; you may also need to set the flags via the variable CFLAGS.
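For example (an illustrative invocation; the icc name and flags are assumptions to adapt to your own system):

CC=icc CFLAGS="-O2" ./configure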

3. MPI Library

MPI is a language-independent communications protocol used to program parallel computers. Both point-to-point and collective communication are supported. MPI is a message-passing application programmer interface, together with protocol and semantic specifications for how its features must behave in any implementation. MPI's goals are high performance, scalability, and portability. MPI remains the dominant model used in high-performance computing today.

MPI is not sanctioned by any major standards body; nevertheless, it has become a de facto standard for communication among processes that model a parallel program running on a distributed memory system. Actual distributed memory supercomputers such as computer clusters often run such programs. The principal MPI-1 model has no shared memory concept, and MPI-2 has only a limited distributed shared memory concept. Nonetheless, MPI programs are regularly run on shared memory computers. Designing programs around the MPI model (contrary to explicit shared memory models) has advantages over NUMA architectures since MPI encourages memory locality.

Although MPI belongs in layers 5 and higher of the OSI Reference Model, implementations may cover most layers, with sockets and TCP used in the transport layer.


Most MPI implementations consist of a specific set of routines (i.e., an API) directly callable from Fortran, C and C++ and from any language capable of interfacing with such libraries (such as C#, Java or Python). The advantages of MPI over older message passing libraries are portability (because MPI has been implemented for almost every distributed memory architecture) and speed (because each implementation is in principle optimized for the hardware upon which it runs).
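As an illustration of the C binding, here is a minimal, self-contained MPI program (generic MPI, not LAMMPS code) in which each process reports its own rank:

/* Minimal MPI example. Compile with mpicc; run with e.g. "mpirun -np 4 ./hello". */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);                   /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);     /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);     /* total number of processes */
    printf("Hello from process %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}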

MPI uses Language Independent Specifications (LIS) for calls and language bindings. The first MPI standard specified ANSI C and Fortran-77 bindings together with the LIS. At present, the standard has several popular versions: version 1.3 (shortly called MPI-1), which emphasizes message passing and has a static runtime environment, and MPI-2.2 (MPI-2), which includes new features such as parallel I/O, dynamic process management and remote memory operations. MPI-2's LIS specifies over 500 functions and provides language bindings for ANSI C, ANSI Fortran (Fortran90), and ANSI C++. Object interoperability was also added to allow for easier mixed-language message passing programming. A side-effect of MPI-2 standardization (completed in 1996) was clarification of the MPI-1 standard, creating MPI-1.2.

Note that MPI-2 is mostly a superset of MPI-1, although some functions have been deprecated. MPI-1.3 programs still work under MPI implementations compliant with the MPI-2 standard.

MPI is often compared with PVM, which is a popular distributed environment and message passing system developed in 1989, and which was one of the systems that motivated the need for standard parallel message passing. Threaded shared memory programming models (such as Pthreads and OpenMP) and message passing programming (MPI/PVM) can be considered as complementary programming approaches, and can occasionally be seen together in applications, e.g. in servers with multiple large shared-memory nodes.

4. gOpenMol

gOpenMol is a tool for the visualization and analysis of molecular structures and their chemical properties. The program uses the Tcl/Tk scripting engine and can thus be easily extended without modifying the kernel code. gOpenMol can also be extended by writing extensions using sharable objects (Linux/Unix) and dynamic data exchange (Windows) modules. Moreover, there is a set of programs and utility functions included in gOpenMol.

gOpenMol can be used for the display and analysis of:

- molecular structures and properties calculated with external programs
- molecular dynamics trajectories
- isocontour surfaces of grid data, such as molecular orbitals and electron densities
- cut planes through grid data sets; it can also be used to make short animations where a cut plane travels through a molecule's grid data

The program can also be used together with electrostatic potentials from programs like GaussianXX, GAMESS, Jaguar, UHBD (University of Houston Brownian Dynamics), AutoDock and the GRID programs.


Software

Several different input coordinate and binary trajectory file formats can be imported, displayed and analyzed in gOpenMol. Isocontour surfaces and cross sections of grid data can also be displayed in gOpenMol. The program has a graphical user interface (GUI) and an internal command line interpreter based on Tcl/Tk.

Hardware

gOpenMol is implemented using either the SGI OpenGL or MESA (http://www.mesa3d.org/) graphics libraries. When using the MESA graphics library it is possible to turn gOpenMol into a pure X-Windows application, where no extra graphics hardware is needed.

gOpenMol is currently supported on the following platforms. The primary development platforms Linux (Intel) and Windows are marked with [*]. Versions for the other platforms might be available at some time in the future.

- IBM AIX 5.2 PowerPC
- PC/Linux [*]
- SunOS 5.9 Sparc
- Windows 95/98/NT/2000/XP (WIN32) [*]

Supported file formats

gOpenMol supports the following molecule coordinate input formats...

- ADF output log file
- CHARMm/CHARMM
- Chemical Markup Language (CML)
- Chem3D binary file
- DL_Poly CONFIG file
- A frame from a trajectory
- Gamess DAT, IRC and LOG output files
- Gaussian formatted checkpoint file
- GROMACS
- GROMOS
- HyperChem
- Insight (car files)
- Mol2
- Mopac
- Mumod
- OpenMol (center binary file)
- PDB (Brookhaven Protein Data Bank format)
- Spartan binary file
- Tinker coordinate file
- Turbomole grid file
- UHBD qcd coordinate file
- Xmol xyz coordinate file
- GXYZ general xyz coordinate file
- YASP


... binary trajectory formats...

- Amber
- Cerius2
- CHARMm/CHARMM
- Discover
- DL_Poly
- GROMACS
- Gromos (mind you, this is the old GROMOS format)
- HyperChem (this format has not yet been tested with the latest version)
- MUMOD
- XPLOR
- YASP

... and ascii trajectory formats.

- DL_Poly trajectory format
- GROMOS96 trajectory format
- TINKER multi frame coordinate trajectory format
- XMOL xyz multi-step data sets

The included Xvibs program enables the display of vibrational modes from the GAUSSIANXX and GAMESS programs.

Extensive display of isocontour surfaces and cut planes for:

- orbitals, densities ... from the GAUSSIANXX set of programs
- Connolly type surfaces using the Probesurf program

Utility programs

The utility programs are located in the directory bin/ and the source code is available in the directory utility/, except for the MolPro program, which can be downloaded from the Internet. Some of the programs can be run through the menu Run in the GUI window, and the rest of the programs can be run in a DOS terminal.

Programs that can be run through the menu Run in the GUI window:

AutoDock2plt : Converts an AutoDock map file to a plt file.
ContMan : Contour (plt) files can be manipulated by adding or subtracting two contours.
Gamess2plt : Converts a Gamess "cube" PUNCH file to a plt file.
gCube2plt/g94cub2plt : Converts a cube file from the GaussianXX program into a binary format known by gOpenMol.
Jaguar2plt : Converts a Jaguar plot file to a gOpenMol plt file.
Join Gamess IRC files : Joins frames from an IRC calculation to an XMOL file.
Kont2plt : Converts a Grid data file to a formatted or an unformatted plt file.
Pltfile : This program can be used to format a binary plot file or to make a formatted file into binary. This is helpful if you want to move the plot file from one computer hardware to another.
Probesurf (Connolly) : Generates a Connolly type of surface around the molecular system.
Socket server/client : Bits and pieces to send structures between two gOpenMol programs.
TMole2plt : Converts a TurboMole grid file to a plt file.
UHBD2plt : Converts UHBD formatted and unformatted PHI grid files into the plt format used by the gOpenMol program.


Xvibs (conversion)  : Program to generate a multi structure XMOL file for animation from a variety of quantum chemistry programs. 

Programs that can be run in a DOS window:

ambera2b : Converts a formatted ascii AMBER trajectory into an unformatted one. gOpenMol can handle only unformatted AMBER trajectories.

Charmmtrj : This program makes the formatted/binary transformation for a CHARMM dynamics trajectory file.

Convert : Converts the output search results (FDAT) from the Cambridge Structural Database into separate PDB files.

MolPro : Converts a Molpro2002.1 cube file with one or more orbitals to a gOpenMol2.1 binary plt file and one crd file. At the moment, gradients or laplacians are not supported. The utility program can be downloaded from Lauri Lehtovaara's web page at http://www.cc.jyu.fi/~lauri/progs.html.

sybyl2amber : Converts a formatted ascii Sybyl trajectory into an unformatted AMBER one. 

Trajmerge : Merges two CHARMM trajectory files into one. 

xplor2charmm : Converts an XPLOR trajectory into a CHARMM trajectory. 

Installation of gOpenMol: Download the zipped file of gOpenMol from http://www.csc.fi/english/pages/gOpenMol/Downloads and then follow these steps to install it on your machine.

Windows version

1. Create a directory, for example, gopenmol.
2. Download the zip file to your temp directory.
3. Unzip the files into the gopenmol directory.
4. Go into the directory gopenmol and click the icon install.bat
5. Run the program by clicking the icon rungOpenMol.bat in the directory ..\gopenmol\bin

You can make a shortcut from the file rungOpenMol.bat to your Desktop the way you usually make shortcuts in Windows.

Other versions

1. Download and uncompress the tar file.
2. Untar the file (creates gopenmol/..)
3. Go into the directory gopenmol and type: ./install
4. Run the program through the script 'your_directory_structure/gopenmol/bin/rungOpenMol'


5. AtomEye Tool

AtomEye is a tool for taking snapshots of a molecular dynamics simulation. It supports only a few formats, namely PDB and CFG. So to visualize LAMMPS dump output, first convert it into one of these formats. The dump output can be converted into CFG format using an existing LAMMPS tool named lmp2cfg.

Download this tool from http://mt.seas.upenn.edu/Archive/Graphics/A/#download

Right-click on the link and "Save Target As..." into one of your directories, say with the name A. Now run the following commands:

chmod 755 A

./A

To check whether or not it is working, test it with a .cfg file, which can be obtained from the above link, or you can convert the LAMMPS dump output into CFG format using the LAMMPS tool lmp2cfg.

Making LAMMPS

When you download LAMMPS you will need to unzip and untar the downloaded file with the following commands, after placing the file in an appropriate directory.

gunzip lammps*.tar.gz
tar xvf lammps*.tar

Building LAMMPS can be non-trivial: we need to edit a makefile, choose compiler options, and optionally use additional libraries such as MPI and FFT.

We have to install MPICH and FFTW 2.

Editing a new low-level Makefile.ubuntu (for single ubuntu system)

Requirement: a Linux box and a C compiler 

I have used the following Makefile for building LAMMPS on a single Linux (Ubuntu) box:
***************************************************************************

SHELL = /bin/sh

# System-specific settings

CC =		mpicxx
CCFLAGS =	-O -DFFT_FFTW -DLAMMPS_GZIP -DMPICH_IGNORE_CXX_SEEK
DEPFLAGS =	-M
LINK =		$(CC)
LINKFLAGS =	-O
USRLIB =	-lfftw
SYSLIB =
ARCHIVE =	ar
ARFLAGS =	-rc
SIZE =		size

# Link target

$(EXE): $(OBJ)
	$(LINK) $(LINKFLAGS) $(OBJ) $(USRLIB) $(SYSLIB) -o $(EXE)
	$(SIZE) $(EXE)

# Library target

lib: $(OBJ)
	$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)

# Compilation rules

%.o:%.cpp
	$(CC) $(CCFLAGS) -c $<

%.d:%.cpp
	$(CC) $(CCFLAGS) $(DEPFLAGS) $< > $@

# Individual dependencies

DEPENDS = $(OBJ:.o=.d)
include $(DEPENDS)

Copy the above Makefile into lammps/src/MAKE and rename it Makefile.ubuntu. Now run the following command from the lammps/src directory:

make ubuntu

This will build the LAMMPS executable named lmp_ubuntu in the src directory.

Editing a new low-level Makefile.linux (for a parallel cluster system)

Requirements: a Linux system, the Intel ICC compiler, the MPICH library, and the FFTW library. So before building LAMMPS on the cluster system, we should install the above-mentioned libraries and compilers:

Intel ICC Compiler: 

This is already installed on the cluster system. If needed, download it from http://software.intel.com/en-us/articles/non-commercial-software-development/ and install it.

MPICH Library: 

If you want LAMMPS to run in parallel, you must have an MPI library installed on your platform. We need to specify where the mpi.h file (MPI_INC) and the MPI library (MPI_PATH) are found, and the library's name (MPI_LIB), in Makefile.linux, which is in the /lammps/src/MAKE directory.

On our cluster system, the paths are:


MPI_INC =	-DMPICH_SKIP_MPICXX -I/opt/mpich/include
MPI_PATH =	-L/opt/mpich/ch-p4/lib64
MPI_LIB =	-lmpich -lpthread

FFTW Library:

Download the FFTW library (version 2.1.5) from www.fftw.org, unzip and untar it, and run the following commands from within the /fftw directory:

./configure --enable-type-prefix [ other options ]

make

make install

make clean

./configure --enable-float --enable-type-prefix [ other options ]

make

make install

Suppose we want to install it in the /home/ankit/fftw directory; do this using the following commands:

./configure --prefix=/home/ankit/fftw --enable-mpi

make

make install

make clean

./configure --prefix=/home/ankit/fftw --enable-float --enable-mpi

make

make install

Set the path of the FFTW library in Makefile.linux as follows:

FFT_INC =	-DFFT_FFTW -I/home/ankit/fftw/include
FFT_PATH =	-L/home/ankit/fftw/lib
FFT_LIB =	-lfftw

 


My Makefile.linux for building on cluster system ***********************************************************************************************************

# linux = RedHat Linux box, Intel icc, Intel ifort, MPICH2, FFTW

SHELL = /bin/sh

# ---------------------------------------------------------------------
# compiler/linker settings
# specify flags and libraries needed for your compiler

CC =            icc
CCFLAGS =       -O
DEPFLAGS =      -M
LINK =          icc
LINKFLAGS =     -O
LIB =           -lstdc++
ARCHIVE =       ar
ARFLAGS =       -rc
SIZE =          size

# ---------------------------------------------------------------------
# LAMMPS-specific settings
# specify settings for LAMMPS features you will use

# LAMMPS ifdef options, see doc/Section_start.html

LMP_INC =       -DLAMMPS_GZIP

# MPI library, can be src/STUBS dummy lib
# INC = path for mpi.h, MPI compiler settings
# PATH = path for MPI library
# LIB = name of MPI library

MPI_INC =       -DMPICH_SKIP_MPICXX -I/opt/mpich/include
MPI_PATH =      -L/opt/mpich/ch-p4/lib64
MPI_LIB =       -lmpich -lpthread

# FFT library, can be -DFFT_NONE if not using PPPM from KSPACE package
# INC = -DFFT_FFTW, -DFFT_INTEL, -DFFT_NONE, etc, FFT compiler settings
# PATH = path for FFT library
# LIB = name of FFT library

FFT_INC =       -DFFT_FFTW -I/home/ankit/fftw/include
FFT_PATH =      -L/home/ankit/fftw/lib
FFT_LIB =       -lfftw

# additional system libraries needed by LAMMPS package libraries
# these settings are IGNORED if the corresponding LAMMPS package
#   (e.g. gpu, meam) is NOT included in the LAMMPS build
# SYSLIB = names of libraries
# SYSPATH = paths of libraries

gpu_SYSLIB =       -lcudart
meam_SYSLIB =      -lifcore -lsvml -lompstub -limf
reax_SYSLIB =      -lifcore -lsvml -lompstub -limf
user-atc_SYSLIB =  -lblas -llapack

gpu_SYSPATH =      -L/usr/local/cuda/lib64
meam_SYSPATH =     -L/opt/intel/fce/10.0.023/lib
reax_SYSPATH =     -L/opt/intel/fce/10.0.023/lib
user-atc_SYSPATH =


# ---------------------------------------------------------------------
# build rules and dependencies
# no need to edit this section

include Makefile.package

EXTRA_INC = $(LMP_INC) $(PKG_INC) $(MPI_INC) $(FFT_INC)
EXTRA_PATH = $(PKG_PATH) $(MPI_PATH) $(FFT_PATH) $(PKG_SYSPATH)
EXTRA_LIB = $(PKG_LIB) $(MPI_LIB) $(FFT_LIB) $(PKG_SYSLIB)

# Link target

$(EXE): $(OBJ)
	$(LINK) $(LINKFLAGS) $(EXTRA_PATH) $(OBJ) $(EXTRA_LIB) $(LIB) -o $(EXE)
	$(SIZE) $(EXE)
**************************************************************************************************************

Copy the above Makefile into the /lammps/src/MAKE directory and rename it Makefile.linux; now, from the directory /lammps/src, run the following command:

make linux 

This will build the LAMMPS executable (lmp_linux) in the /lammps/src directory.

Running LAMMPS

By default, LAMMPS runs by reading commands from stdin; e.g. lmp_linux < in.try. This means you first create an input script (e.g. in.try) containing the desired commands.

On a single processor:-

Copy the lmp_ubuntu or lmp_linux executable from /lammps/src and paste it into the folder containing in.try (the input script).

Now type the following command within the folder containing your input script (e.g. in.try):

./lmp_ubuntu < in.try ##for ubuntu

./lmp_linux < in.try  ## for linux

On more than one processor:-

Write a script for this as follows. Copy the content into a script.sh file and place it in the folder containing the input script (e.g. in.try) and the LAMMPS executable (lmp_linux).

*******************************************************************************************************

      1 #!/bin/bash
      2
      3 ############# Bash Submission Script for c5pc00  #############
      4 ##               Wed April 15 09:57:00 IST 2011
      5 ##################################################
      6
      7 ############ Enter no. of nodes (N) and total no. of processors (n) required (Maximum 4 Processors per Node). ###########
      8 ####### Do Not Remove comment(#)
      9 #SBATCH  -N 2 -n 8
     10 ####salloc -N 1
     11 ####salloc -n 4
     12
     13 ############ Export working directory (Do not modify) ############
     14 export work_dir=`pwd`
     15 cd $work_dir
     16 echo $work_dir
     17
     18 ############ create mpd ring, DO NOT modify this section unless you really know #############
     19 srun hostname >hosts
     20 mpdboot -n $SLURM_NNODES -v -f hosts
     21 echo `date`
     22
     23 ############ ONLY change the path of the executable (program) to your own application and necessary arguments ##########
     24
     25 #### VASP Gamma Point Calculations
     26
     27 mpiexec -l -machinefile hosts -n $SLURM_NPROCS $work_dir/lmp_linux < in.try
     28
     29 echo `date`
     30
     31 ############ exit the mpd ring and clean off the nodes (Do not change) ####################
     32
     33 mpdallexit
     34 mpdcleanup
     35
     36 ####`rm hosts`
     37
     38 exit
****************************************************************************************************************

Suppose we want to run the in.try file in the /lammps/example/rigid directory on two nodes; then all we need to do is edit the 9th and 27th lines:

9 #SBATCH  -N 2 -n 8
-> this means 2 nodes; as each node has 4 processors in our cluster system, 8 is the number of processors.

27 mpiexec -l -machinefile hosts -n $SLURM_NPROCS $work_dir/lmp_linux < in.try
-> this means we are using the lmp_linux executable to run the in.try input script.

Now run the following command from within the /lammps/example/rigid directory:

sbatch script.sh

This will give you a message like "sbatch: Submitted batch job 13761" and will create a 'slurm-13761.out' file. To view the screen output:

vi slurm-13761.out


Visualization of LAMMPS output

I have used the AtomEye tool for the visualization of LAMMPS output, which works as follows. When we execute our input script using ./lmp_ubuntu we get a dump file, which stores the LAMMPS output in .dump format. AtomEye works on the CFG format only, so we need to convert the dump file to CFG format, which can be done using a LAMMPS tool named lmp2cfg. Make an in.atomeye file, which should be:

cat in.atomeye

500

1

'dump.file' //dump.rigid in our case

1

6

1

26.98

'ao' 

Now run the following commands:

./lmp2cfg_exe < in.atomeye  // this will convert the dump file into CFG format

./atomeye 0001.cfg               //this will give you the snapshot of 0001.cfg

Note:- For visualization, the gOpenMol tool can also be used.


Input script structure:-

This section describes the structure of a typical LAMMPS input script. A LAMMPS input script typically has 4 parts:

1. Initialization
2. Atom definition
3. Settings
4. Run a simulation

(1) Initialization

Set parameters that need to be defined before atoms are created or read in from a file. The relevant commands are units, dimension, newton, processors, boundary, atom_style, atom_modify. If force-field parameters appear in the files that will be read, these commands tell LAMMPS what kinds of force fields are being used: pair_style, bond_style, angle_style, dihedral_style, improper_style.

(2) Atom definition

There are 3 ways to define atoms in LAMMPS. Read them in from a data or restart file via the read_data or read_restart commands; these files can contain molecular topology information. Or create atoms on a lattice (with no molecular topology), using these commands: lattice, region, create_box, create_atoms. The entire set of atoms can be duplicated to make a larger simulation using the replicate command.

(3) Settings

Once atoms and molecular topology are defined, a variety of settings can be specified: force field coefficients, simulation parameters, output options, etc.

Force field coefficients are set by these commands (they can also be set in the read-in files): pair_coeff, bond_coeff, angle_coeff, dihedral_coeff, improper_coeff, kspace_style, dielectric, special_bonds.

Various simulation parameters are set by these commands: neighbor, neigh_modify, group, timestep, reset_timestep, run_style, min_style, min_modify.

Fixes impose a variety of boundary conditions, time integration, and diagnostic options. The fix command comes in many flavors.

Various computations can be specified for execution during a simulation using the compute, compute_modify, and variable commands.

Output options are set by the thermo, dump, and restart commands.

(4) Run a simulation

A molecular dynamics simulation is run using the run command. Energy minimization (molecular statics) is performed using the minimize command. A parallel tempering (replica-exchange) simulation can be run using the temper command.


Input script for silver atoms above a graphite slab:-

1) Initialization:-

atom_style atomic

# Define what style of atoms to use in a simulation. This determines what attributes are associated with the atoms. In our case we have used atomic. The atomic style defines point particles.

boundary p p f

# Set the style of boundaries for the global simulation box in each dimension. A single letter assigns the same style to both the lower and upper face of the box. The style p means the box is periodic, so that particles interact across the boundary and can exit one end of the box and re-enter the other end. For style f, the position of the face is fixed; if an atom moves outside the face it may be lost. boundary p p p is the default. We have used boundary p p f, meaning the box is periodic in the x and y directions but fixed in the z direction.

dimension 3

# Set the dimensionality of the simulation. By default LAMMPS runs 3d simulations. To run a 2d simulation, this command should be used prior to setting up the simulation box. Since we are running a 3d simulation, we do not need to set this.

2) Atom definition:-

lattice command:-

lattice custom 0.456 a1 1.22800000 -2.12695839 0.00000000 a2 1.22800000 2.12695839 0.00000000 a3 0.00000000 0.00000000 6.69600000 & basis 0.00 0.00 0.25 basis 0.00 0.00 0.75 basis 0.33333333 0.66666667 0.25 basis 0.66666667 0.33333333 0.7500000

# Define a lattice for use by other commands. In LAMMPS, a lattice is simply a set of points in space, determined by a unit cell with basis atoms, that is replicated infinitely in all dimensions. The arguments of the lattice command can be used to define a wide variety of crystallographic lattices.

The lattice style must be consistent with the dimension of the simulation - see the dimension command. Styles sc or bcc or fcc or hcp or diamond are for 3d problems.

In our case we have to use 3d dimension. A lattice of style custom allows you to specify a1, a2, a3, and a list of basis atoms to put in the unit cell. By default, a1, a2 and a3 are 3 orthogonal unit vectors (edges of a unit cube). But you can specify them to be of any length and non-orthogonal to each other, so that they describe a tilted parallelepiped. Via the basis keyword you add atoms, one at a time, to the unit cell. Its arguments are fractional coordinates (0.0 <= x,y,z < 1.0), so that a value of 0.5 means a position half-way across the unit cell in that dimension.

For the graphite lattice we have four basis atoms, which we have set in the lattice command.

The default is lattice none.


region command:-

# This command defines a geometric region of space. Various other commands use regions. For example, a region can be filled with atoms via the create_atoms command.

region simu block 0 6 0 6 -2 4

# Defines a region with ID simu, of block style, with xlo = 0, xhi = 6, ylo = 0, yhi = 6, zlo = -2, zhi = 4. This is used by create_box.

region graphite block 0 6 0 6 -2 -1

# Defines a region with ID graphite, of block style, with xlo = 0, xhi = 6, ylo = 0, yhi = 6, zlo = -2, zhi = -1. We use this region for creating the slab of carbon atoms.

region mob block 0 6 0 6 -1 4

# Defines a region with ID mob, of block style, with xlo = 0, xhi = 6, ylo = 0, yhi = 6, zlo = -1, zhi = 4. We use this region for creating the silver atoms.

create_box command:-

create_box 2 simu

# (create_box N region-ID) This command creates a simulation box based on the specified region, so a region command must first be used to define a geometric domain. The argument N is the number of atom types that will be used in the simulation. We used 2 since we have two types of atoms: silver and carbon. simu is the region-ID defined in the region command, so the simu region will act as the simulation box.

create_atoms command:-

create_atoms 1 region graphite

# (create_atoms type style args keyword values) This command creates atoms on a lattice, or a single atom, or a random collection of atoms. A simulation box must already exist, which is typically created via the create_box command. Before using this command, a lattice must also be defined using the lattice command. For the region style, the geometric volume that is inside the simulation box and consistent with the region volume is filled. This command creates carbon atoms on the custom lattice in the graphite region.

Commands for formation of the silver atoms:-

lattice fcc 3.55
create_atoms 1 random 2 55 mob

# With the fcc lattice defined, 2 silver atoms of the same type are randomly created (random seed 55) in the region mob.


3) Settings:-

group command:-

# group ID style args

group boundary region graphite
group mobile region mob

#Identify a collection of atoms as belonging to a group. The group ID can then be used in other commands such as fix, compute, dump, or velocity to act on those atoms together. If the group ID already exists, the group command adds the specified atoms to the group. The region style puts all atoms in the region volume into the group.

set command:-

set atom mob type 2

#Set one or more properties of one or more atoms. Since atom properties are initially assigned by the read_data, read_restart or create_atoms commands, this command changes those assignments.

mass command:-

mass 1 1.0
mass 2 10.0

#Set the mass for all atoms of one or more atom types.

pair_style command:-

pair_style lj/cut 2.5

#Set the formula(s) LAMMPS uses to compute pairwise interactions

pair_coeff command:-

pair_coeff 1 1 1.0 1.0 1.0
pair_coeff 1 2 0.5 0.5 0.5
pair_coeff 2 2 1.0 1.0 1.0

#Specify the pairwise force field coefficients for one or more pairs of atom types. The number and meaning of the coefficients depend on the pair style.

neigh_modify command:-

neigh_modify exclude group boundary boundary

#The exclude group option turns off the pairwise interactions between atoms of the two specified groups. In this case it excludes the interaction of carbon atoms with other carbon atoms.


velocity command:-

velocity all create 10.0 4759669

#Set or change the velocities of a group of atoms in one of several styles. For each style, there are required arguments and optional keyword/value parameters. The create style generates an ensemble of velocities using a random number generator with the specified seed at the specified temperature.

fix command:-

fix 1 mobile nvt temp 10.0 10.0 3.0
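#Perform constant NVT time integration on the mobile group using a Nose/Hoover thermostat; temp 10.0 10.0 3.0 gives the start temperature, stop temperature, and temperature damping parameter.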

dump command:-

dump 1 all atom 500 dump.rigid

#Dump a snapshot of atom quantities to one or more files every N timesteps in one of several styles. Here a snapshot of all atoms is written to the file dump.rigid every 500 timesteps.
#dump 2 all atom 50 dump.rigid (a commented-out alternative that would dump every 50 timesteps)

timestep 0.01
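#Set the size of the integration timestep to 0.01 (in the current units).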

thermo 50
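#Print thermodynamic output (temperature, energies, pressure, ...) every 50 timesteps.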

4) Run a simulation

run 100000

#A molecular dynamics simulation is run using the run command
