
Re: [abinit-forum] abinip crashes due to dilatmx statement


  • From: "Aldo Humberto Romero" <aromero@qro.cinvestav.mx>
  • To: forum@abinit.org
  • Subject: Re: [abinit-forum] abinip crashes due to dilatmx statement
  • Date: Wed, 18 Nov 2009 16:02:40 -0600 (CST)
  • Importance: Normal

Marc,

Did you try setting dilatmx to a larger value, as suggested during the run? It is
probably using the default; just include the following in your input:

dilatmx 1.2
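
As a rough sketch (none of the values below come from your run; the
cell-optimization and convergence settings are only placeholders to adapt to
your system), the relaxation part of the input could look like:

    optcell  2        # optimize cell shape and volume
    ionmov   2        # BFGS relaxation of atomic positions
    ntime    30       # maximum number of relaxation steps
    dilatmx  1.2      # let the lattice vectors grow up to 20% w.r.t. the input cell
    ecutsm   0.5      # smearing of the kinetic-energy cutoff, needed when optcell /= 0
    tolmxf   5.0d-5   # force tolerance for stopping the relaxation (Ha/Bohr)

dilatmx reserves a large enough planewave sphere at the start of the run so that
the basis stays adequate if the cell grows; the chkdilatmx error you quote
appears when the cell tries to change by more than that margin.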

Next time, please send your input file, or at least part of it, so that we can
try to find out what your problem is.

regards

-aldo.


-> Dear abinit users and developers,
->
-> abinip crashes during structural optimization if the dilatmx statement is set.
->
-> The error message is:
->
-> -P-0000
-> ================================================================================
-> -P-0000 initwf : disk file gives npw= 21962 nband= 333 for k pt
-> number= 1
-> -P-0000 initwf : 333 bands have been initialized from disk
-> -P-0000 leave_test : synchronization done...
-> wfsinp: loop on k-points and spins done in parallel
-> pareigocc : MPI_ALLREDUCE
-> -P-0000 leave_test : synchronization done...
-> wfsinp: loop on k-points done in parallel
-> -P-0000 - newkpt: read input wf with ikpt,npw= 1 21962, make
-> ikpt,npw= 1 25492
-> ERRRRRRRRRRR
-> -P-0000
-> -P-0000 leave_new : decision taken to exit ...
-> -P-0000 leave_new : synchronization done...
-> -P-0000 leave_new : exiting...
-> *** An error occurred in MPI_Barrier
-> *** after MPI was finalized
-> *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
-> [quadmachine2:29719] Abort after MPI_FINALIZE completed successfully;
-> not able to guarantee that all other processes were killed!
->
-> --------------------------------------------------------------------------
-> mpirun has exited due to process rank 0 with PID 29719 on
-> node quadmachine2 exiting without calling "finalize". This may
-> have caused other processes in the application to be
-> terminated by signals sent by mpirun (as reported here).
->
-> --------------------------------------------------------------------------
-> *** An error occurred in MPI_Barrier
-> *** after MPI was finalized
-> *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
-> [quadmachine3:29476] Abort after MPI_FINALIZE completed successfully;
-> not able to guarantee that all other processes were killed!
->
->
-> Commenting "dilatmx" runs fine through the first step and stops with
->
-> chkdilatmx: ERROR -
-> The new primitive vectors rprimd (an evolving quantity)
-> are too large with respect to the old rprimd and the accompanying
-> dilatmx :
-> this large change of unit cell parameters is not allowed by the
-> present value of dilatmx.
-> You need at least dilatmx= 1.000367E+00
-> Action : increase the input variable dilatmx.
->
-> Running the same calculation with abinis works fine.
->
-> Can somebody confirm this, or comment on why structural optimization
-> does not seem to be possible in parallel mode?
->
-> The system setup:
-> 1. abinit 5.8.4p
-> 2. Intel Core 2 Quad machines connected via GBit ethernet
-> 3. Kubuntu 9.04, OpenMPI 1.3.3
-> 4. Intel 11.1 compilers
->
-> Thanks in advance.
->
-> Kind regards,
-> Marc
->
-> --
-> ------------------------------------------------------------------------
-> Dipl.-Ing. Marc Saemann
->
-> Universitaet Stuttgart
-> Institut fuer Physikalische Elektronik ipe
-> Pfaffenwaldring 47
-> 70569 Stuttgart
-> Germany
->
-> room: 0.215
->
-> phone: +49-711-685-67142
-> fax: +49-711-685-67138
->
-> email: marc.saemann@ipe.uni-stuttgart.de


--

Prof. Aldo Humberto Romero
CINVESTAV-Unidad Queretaro
Libramiento Norponiente 2000
CP 76230, Queretaro, QRO, Mexico
tel: 442 211 9909
fax: 442 211 9938

email: aromero@qro.cinvestav.mx
aldorome@gmail.com
www: qro.cinvestav.mx/~aromero



