forum@abinit.org
Subject: The ABINIT Users Mailing List ( CLOSED )
List archive
- From: "Anglade Pierre-Matthieu" <anglade@gmail.com>
- To: forum@abinit.org
- Subject: Re: [abinit-forum] some tests on 5.2.3
- Date: Sat, 11 Nov 2006 09:16:50 +0100
Hi,
In fact, Prof. Gonze produces the reference test results for the distributed
version of ABINIT with a PGI compiler and without optimisation, not with
Intel's compiler. So there is no intrinsic reason to try
to reproduce the flags used by the ABINIT developers. That said, Yann
Pouillon usually implements, within the ABINIT "tricks", the best set of flags
for each compiler/architecture pair. So the default flags should be best; they work
around known compiler bugs and problems. However, the safest approach for
you is generally to use the flag set which results in the fewest
failed tests, because it is quite possible that we do not have
exactly the same platform you use.
If you want more safety, you should not rely fully on the automatic test
analysis. You'd better look at the output of each failed test and
check what the failing reason was. There are many reasons that may
produce a FAILED in the log even though you would consider
the test successful, or at least not a problem.
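The manual inspection described above can be sketched as a small shell loop; the summary file name and log format here are hypothetical illustrations, not ABINIT's actual test-suite layout:

```shell
# Hypothetical sketch: list failed tests from a test summary, then diff
# each one against its reference output by hand. "tests_summary.log" and
# the "Case_NN FAILED" format are assumptions for illustration only.
summary="tests_summary.log"
printf 'Case_01 succeeded\nCase_02 FAILED\nCase_03 FAILED\n' > "$summary"

# Collect the names of the failed cases for manual inspection.
failed=$(awk '/FAILED/ {print $1}' "$summary")
echo "Failed cases:" $failed

# Diffing each failed output against the reference file shows whether the
# failure is a real problem or only a harmless numerical deviation.
for c in $failed; do
    echo "inspect with: diff $c.out Refs/$c.out"
done
rm -f "$summary"
```

A large FAILED count can hide the fact that most diffs are tiny floating-point deviations, which is why looking at each diff matters more than the raw count.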
Best regards.
PMA
On 11/10/06, mperez@mpi-halle.mpg.de <mperez@mpi-halle.mpg.de> wrote:
Hi
I was wondering whether there is any point in optimising the code in
order to get fast runs.
Let me explain my point: I have installed 5.2.3 with MPI on an Opteron cluster,
for -O3 and -O0, using the ifort compiler (and gcc).
When I go through the "internal tests" (section 3, "How to make the
internal tests and the speed tests? (sequential version only)"), everything
is OK, but when I go through the "other tests" (section 4, "How to make the other
tests?"),
the number of FAILURES that I get is larger for -O0 (11 cases) than for
-O3 (6 cases).
Then, if I use -O2 with some extra optimisation, including the "safety
options", I get 6 failed cases, whilst using "implicit -O2 with no safety
options" I get only 3 failed cases (the best I've seen).
I imagine that the best way to make sure my installation is not too bad
is to use "exactly" the same compilation flags for ifort that were
used by the ABINIT team when building the binaries (for Opteron).
One final point: there are two tests in the "parall" section which FAIL
at every optimisation level I use: tests G and I.
Please let me know whether I should take optimisation seriously,
and how I should go about it.
thanks
Manolo
PS:
(a)"safety options":
-fltconsistency -IPF_fltacc -pc80 -mp1 -fp-model strict -prec-div -prec-sqrt
(b)"extra optimisation": -xW -tpp7
(c)"implicit -O2 without safety options": only library linking options =>
-i-static -Bstatic -static -lmpichf90 -lmpich -lpthread -lrt
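One way the flag sets above could be handed to a configure-style build is via the standard autoconf variables; whether ABINIT 5.2.3's build system honours FC/FCFLAGS exactly this way is an assumption of this sketch:

```shell
# Sketch of selecting one of the flag sets above for a configure-style
# build. FC and FCFLAGS are common autoconf precious variables; the exact
# option names accepted by the ABINIT 5.x build system may differ.
FC=ifort
# sets (a)+(b): -O2 with extra optimisation and a few of the safety options
FCFLAGS="-O2 -xW -tpp7 -fltconsistency -mp1 -prec-div -prec-sqrt"
# set (c) would instead leave FCFLAGS empty, keeping only the link options
echo "would run: ./configure FC=$FC FCFLAGS=\"$FCFLAGS\""
```

Repeating the build once per flag set and recording each run's FAILED count makes it easy to pick the set with the fewest failures, as suggested above.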
--
Pierre-Matthieu Anglade
- some tests on 5.2.3, mperez, 11/10/2006
- Re: [abinit-forum] some tests on 5.2.3, Anglade Pierre-Matthieu, 11/11/2006
- Re: [abinit-forum] some tests on 5.2.3, Xavier Gonze, 11/11/2006
- Re: [abinit-forum] some tests on 5.2.3, mperez, 11/15/2006
- Re: [abinit-forum] some tests on 5.2.3, Xavier Gonze, 11/11/2006
- Re: [abinit-forum] some tests on 5.2.3, Anglade Pierre-Matthieu, 11/11/2006