
Re: [abinit-forum] Abinip and Intel Cluster Tools 2.0


  • From: mmgarcia@if.usp.br
  • To: forum@abinit.org
  • Subject: Re: [abinit-forum] Abinip and Intel Cluster Tools 2.0
  • Date: Thu, 29 Jun 2006 19:42:14 -0300

Quoting Xavier Gonze <gonze@pcpm.ucl.ac.be>:

> Dear Marcelo Garcia,
>
> Could you also send your input file?
> Did you succeed in running one of the parallel test cases
> found in Test_paral?
>
> Xavier
>
> On 28 Jun 2006, at 00:14, mmgarcia@if.usp.br wrote:
>
> > Hi.
> >
> > I am having problems with the abinip binary (ABINIT 4.6.5) built with
> > Intel Cluster Tools 2.0 (ICT). I always get the following error:
> > ----------------------------------------------------------------------------
> > ,Min el dens= 3.8631E-05 el/bohr^3 at reduced coord. 0.8906 0.0500 0.8594
> > ,Max el dens= 3.9231E-01 el/bohr^3 at reduced coord. 0.5906 0.5500 0.2188
> > -P-0000 leave_test : synchronization done...
> > suscep_stat : loop on k-points and spins done in parallel
> > rank 3 in job 1 nanomol07.lcca.usp.br_51942 caused collective abort of all ranks
> > exit status of rank 3: killed by signal 9
> > ----------------------------------------------------------------------------
> > I use Scientific Linux 4.1 (Red Hat EL 4 update 1 clone)
> >
> > My makefile_macros and the shell script used are listed below.
> >
> > Is anyone else using ICT 2.0?
> >
> > Thanks
> >
> > Marcelo M. Garcia
> >
> > ---------------------------- Shell script -------------------------
> > #!/bin/sh
> > #PBS -N ppvSlab
> > #PBS -l nodes=3:ppn=2
> > #PBS -q workq
> > #PBS -e SLAB.e
> > #PBS -o SLAB.o
> > # Environment variable definitions.
> > ABINIP=/home/mmgarcia/abinit-4.6.5/abinip
> > MPIRUN=/opt/intel/ict/2.0/mpi/2.0/bin/mpiexec
> > MPDBOOT=/opt/intel/ict/2.0/mpi/2.0/bin/mpdboot
> > MPDEXIT=/opt/intel/ict/2.0/mpi/2.0/bin/mpdallexit
> > MPDTRACE=/opt/intel/ict/2.0/mpi/2.0/bin/mpdtrace
> > JOB=ppvSlab
> > SCR=/scratch/mmgarcia/${JOB}
> > if [ ! -d ${SCR} ] ; then
> > echo "Criando diretorio de scratch"
> > mkdir ${SCR}
> > fi
> > HOME=/home/mmgarcia/Systems/${JOB}
> > # Prepare the scratch directory.
> > cd ${SCR}
> > cp -f ${HOME}/${JOB}.in .
> > cp -f ${HOME}/${JOB}.files .
> > rm -f ${JOB}.out
> > echo `pwd`
> > # Prepare the command line.
> > # PBS_NODEFILE lists which nodes were allocated.
> > echo PBS_NODEFILE
> > cat $PBS_NODEFILE
> > # Number of processors.
> > NPROCS=`wc -l < $PBS_NODEFILE`
> > # Number of nodes (in our case, half the number of processors).
> > echo NP
> > NP=`expr $NPROCS / 2`
> > echo $NP
> > # Create the file with the nodes that will receive the job.
> > cat $PBS_NODEFILE &> ${SCR}/pbsnodes.txt
> > # Start the daemons on the nodes.
> > echo MPDBOOT
> > ${MPDBOOT} -r ssh -n ${NP} -f ${SCR}/pbsnodes.txt
> > echo MPDTRACE
> > ${MPDTRACE} &> maquinas.txt
> > # Run the parallel job.
> > ${MPIRUN} -np ${NPROCS} ${ABINIP} < ${HOME}/${JOB}.files > ${JOB}.log
> > # Kill the daemons.
> > ${MPDEXIT}
> > # End of the MPI part.
> > # Copy the job's output file back to the correct directory.
> > cp -f ${SCR}/${JOB}.out ${HOME}
> > ---------------------------------------------------------------------------------
> >
> > ------------------- makefile_macros -------------------------------------------
> > # Machine type
> > MACHINE=P6
> > # Fortran optimized compilation
> > #FC=/opt/intel/compiler70/ia32/bin/ifc
> > F90DIR=/opt/intel/fc/9.1.032
> > #F90=${F90DIR}/bin/ifort
> > #FC=${F90DIR}/bin/ifort
> > F90=/opt/intel/ict/2.0/mpi/2.0/bin/mpiifort
> > FC=/opt/intel/ict/2.0/mpi/2.0/bin/mpiifort
> > FFLAGS=-FR -O3 -w -tpp7 -axW -static
> > FFLAGS_Src_2psp =-FR -O0 -w -tpp7 -axW
> > FFLAGS_Src_3iovars =-FR -O0 -w -tpp7 -axW
> > FFLAGS_Src_9drive =-FR -O0 -w -tpp7 -axW
> > FFLAGS_LIBS= -O3 -w
> > MKLDIR=/opt/intel/ict/2.0/cmkl/8.0.1/lib/32
> > #LIBS = ${MKLDIR}/libmkl_lapack.a ${MKLDIR}/libmkl_ia32.a ${MKLDIR}/libguide.a ${F90DIR}/lib/libsvml.a -lpthread
> > LIBS = -L ${MKLDIR} -lmkl_lapack -lmkl_ia32 -lguide -lpthread -lmkl -lvml -lsvml
> > # C preprocessor, used to preprocess the fortran source.
> > CPP=/lib/cpp
> > CPP_FLAGS=-P -traditional -D__IFC
> > # The cpp directive CHGSTDIO changes the standard I/O definition
> > # Uncomment the next line for this to happen.
> > #CPP_FLAGS=-P -DCHGSTDIO
> >
> > # C optimized compilation.
> > #CC=/opt/intel/cc/9.1.038/bin/icc
> > CC=/opt/intel/ict/2.0/mpi/2.0/bin/mpiicc
> > CFLAGS=-O
> >
> > # Location of perl. Used to generate the script fldiff, in ~ABINIT/Utilities.
> > PERL=/usr/bin/perl
> >
> > # List of machine-dependent routines
> > MACHINE_DEP_C_SEQ_SUBS_LIST=etime.o
> >
> > ####################################################################
> > # For the parallel version : MPICH / MYRINET
> >
> > # Compiler flags and definitions
> > FFLAGS_PAR= $(FFLAGS) -I /opt/intel/ict/2.0/mpi/2.0/include
> >
> > # List of machine-dependent routines
> > MACHINE_DEP_C_PAR_SUBS_LIST=etime.par
> >
> > # Location of the MPI library
> > #MPI_A=/usr/local/mpi-intel/lib/libmpich.a /usr/local/mpi-intel/lib/libfmpich.a
> > #-Vaxlib
> > #MPI_A=/opt/intel/ict/2.0/mpi/2.0/lib/libmpigi.a #/opt/intel/ict/2.0/mpi/2.0/lib/libmpiic.a #/opt/intel/ict/2.0/mpi/2.0/lib/libmpiif.a #/opt/intel/ict/2.0/mpi/2.0/lib/libmpigf.a #/opt/intel/ict/2.0/mpi/2.0/lib/libmpigc.a #/opt/intel/ict/2.0/mpi/2.0/lib/libmpi.a -Vaxlib
> > # Include blas, lapack, and any other libraries here
> > LIBS_PAR=$(LIBS) $(MPI_A)
> >
> > # This is a last line in makefile_macros ----------
> >
>

Mr. Xavier

The input file is below. I have not run the parallel tests yet; I think I
have to make a few modifications to the "Run" script due to the changes in
MPI-2 (mpdboot, etc.). So I tried to run tA.in as a regular job, but it is
still waiting in the queue.
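
For reference, this is roughly the hand-run sequence I have in mind for a
Test_paral case under Intel MPI 2.0. It is only a sketch and untested: the
mpd.hosts node list and the tA.files file are assumed to exist, and the
paths are the same as in my PBS script above.

#!/bin/sh
# Sketch of a hand-run Test_paral case under Intel MPI 2.0 (untested).
MPIDIR=/opt/intel/ict/2.0/mpi/2.0/bin
# Start the MPD ring on the two nodes listed in mpd.hosts (assumed file).
${MPIDIR}/mpdboot -r ssh -n 2 -f ./mpd.hosts
# Run the tA test case on 4 processes, reading the "files" file from stdin
# as in the PBS script.
${MPIDIR}/mpiexec -np 4 /home/mmgarcia/abinit-4.6.5/abinip < tA.files > tA.log 2>&1
# Shut the MPD daemons down.
${MPIDIR}/mpdallexit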

Thanks for your attention.

Marcelo Garcia
===============================================================
# PPV slab with 3 cells in the [100] direction
# we are considering 10 angstrom of vacuum
# Self-consistency only


#Definition of the unit cell
acell 34.2 6.05 6.54 angstrom # Lattice parameters with 10 angstrom of vacuum in the x direction
rprim 1.0000000000 0.000000000000 0.000000000000
0.0000000000 0.838670561983 -0.544639008264
0.0000000000 0.000000000000 1.000000000000

#SCF preconditioner
iprcel 45

#Definition of the k-point grid
kptopt 1 # Option for the automatic generation of k points, taking
# into account the symmetry
ngkpt 3 3 1
nshiftk 1
shiftk 0.5 0.5 0.5

#Exchange-Correlation functional
ixc 1 # LDA Teter Pade parametrization

#Definition of the planewave basis set
ecut 30.0 # Maximal kinetic energy cut-off, in Hartree

#Definition of the SCF procedure
nstep 3 # Maximal number of SCF cycles
toldfe 1.0d-6

#Definition of the atom types
ntypat 2 # There are two types of atoms
znucl 6 1 # The keyword "znucl" refers to the atomic numbers of the
          # possible type(s) of atom. The pseudopotential(s)
          # mentioned in the "files" file must correspond
          # to the type(s) of atom. Here, the two types are
          # Carbon and Hydrogen.


#Definition of the atoms
natom 84 # there are 84 atoms in the unit cell (48 C and 36 H), plus vacuum
typat 48*1 36*2 # The first 48 atoms are of type 1 and the next 36 of type 2,
                # that is, Carbon and Hydrogen respectively
xangst
8.354 -0.091 2.901 # C
7.686 1.027 2.358 # C
7.616 1.207 1.001 # C
8.192 0.284 0.100 # C
8.859 -0.834 0.643 # C
8.930 -1.014 2.001 # C
8.497 -0.315 4.298 # C
8.050 0.509 5.283 # C
4.449 2.791 4.543 # C
3.706 1.723 3.999 # C
3.585 1.576 2.641 # C
4.197 2.477 1.743 # C
4.940 3.545 2.286 # C
5.061 3.692 3.645 # C
4.600 3.011 -0.640 # C
4.046 2.257 0.347 # C
0.454 -0.091 2.901 # C
-0.214 1.027 2.358 # C
-0.284 1.207 1.001 # C
0.292 0.284 0.100 # C
0.959 -0.834 0.643 # C
1.030 -1.014 2.001 # C
0.597 -0.315 4.298 # C
0.150 0.509 5.283 # C
20.249 2.791 4.543 # C
19.506 1.723 3.999 # C
19.385 1.576 2.641 # C
19.997 2.477 1.743 # C
20.740 3.545 2.286 # C
20.861 3.692 3.645 # C
20.400 3.011 -0.640 # C
19.846 2.257 0.347 # C
16.254 -0.091 2.901 # C
15.586 1.027 2.358 # C
15.516 1.207 1.001 # C
16.092 0.284 0.100 # C
16.759 -0.834 0.643 # C
16.830 -1.014 2.001 # C
16.397 -0.315 4.298 # C
15.950 0.509 5.283 # C
12.349 2.791 4.543 # C
11.606 1.723 3.999 # C
11.485 1.576 2.641 # C
12.097 2.477 1.743 # C
12.840 3.545 2.286 # C
12.961 3.692 3.645 # C
12.500 3.011 -0.640 # C
11.946 2.257 0.347 # C
13.541 4.527 -2.523 # H
11.332 1.387 0.074 # H
13.114 3.880 -0.367 # H
9.465 -1.879 2.410 # H
7.212 1.763 3.017 # H
7.082 2.073 0.591 # H
7.524 1.434 5.007 # H
9.023 -1.240 4.574 # H
5.432 4.266 1.626 # H
3.214 1.002 4.660 # H
3.005 0.741 2.229 # H
9.334 -1.569 6.564 # H
5.641 4.527 -2.523 # H
3.432 1.387 0.074 # H
5.214 3.880 -0.367 # H
1.565 -1.879 2.410 # H
-0.688 1.763 3.017 # H
-0.818 2.073 0.591 # H
-0.376 1.434 5.007 # H
1.123 -1.240 4.574 # H
21.232 4.266 1.626 # H
19.014 1.002 4.660 # H
18.805 0.741 2.229 # H
1.434 -1.569 6.564 # H
21.441 4.527 -2.523 # H
19.232 1.387 0.074 # H
21.014 3.880 -0.367 # H
17.365 -1.879 2.410 # H
15.112 1.763 3.017 # H
14.982 2.073 0.591 # H
15.424 1.434 5.007 # H
16.923 -1.240 4.574 # H
13.332 4.266 1.626 # H
11.114 1.002 4.660 # H
10.905 0.741 2.229 # H
17.234 -1.569 6.564 # H
===========================================================================
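
For completeness, the ppvSlab.files file that the script feeds to abinip
has the usual ABINIT "files" format: main input file, main output file,
root for input wavefunctions, root for output files, root for temporary
files, and then one pseudopotential per znucl entry. The pseudopotential
file names below are placeholders for the ones I actually use:

ppvSlab.in
ppvSlab.out
ppvSlabi
ppvSlabo
ppvSlab
6c.pspnc
1h.pspnc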


