
Re: [abinit-forum] memory management in GW with 5.8.3


  • From: "Matteo Giantomassi" <Matteo.Giantomassi@uclouvain.be>
  • To: forum@abinit.org
  • Subject: Re: [abinit-forum] memory management in GW with 5.8.3
  • Date: Tue, 30 Jun 2009 16:19:16 +0200

> Hi,
>
> I have noticed that when running GW with MPI, using the "gwpara 2" and
> "gwmem 0" options, the memory distribution is such that, at least in
> some parts of the calculation, certain nodes carry most of the memory
> load (54%) while other nodes carry only a very small amount (3%).

Dear Manuel,

Which part of the code are you running: sigma or screening?
What is the size of the FFT mesh used in the GW part, and which
fftgw option are you using?
Could you post the output of

grep -A 5 setmesh logfile

and (in the case of sigma calculations) the size of
the inverse dielectric matrix:

grep npweps log

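For reference, these variables belong in the GW part of the input file;
a minimal illustrative fragment (the values below are examples only, not
a recommendation for your system) would be:

optdriver  4    # 4 = sigma run (use 3 for the screening run)
gwpara     2    # parallelism over bands/plane waves
gwmem      0    # keep neither wavefunctions nor screening in memory
fftgw      21   # selects the FFT mesh used in the GW part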

> This effect is not accompanied by a CPU-time imbalance between the
> different machines.
> I think this happens not only with 5.8.3 but also with previous
> versions.
> Is there any way of avoiding or controlling this? Perhaps with Open MPI?
>
> On the other hand, I would like to ask: have there been any important
> modifications of the GW code in 5.8.3 with respect to previous versions?

As far as the GW part is concerned, there are few noticeable changes:
many new features have been moved to 5.9.0 and will hopefully become
available in production when version 6.0 is released.
For the list of changes and new features available in 5.8.3, you
might have a look at:


http://www.abinit.org/downloads/documentation/helpfiles/for-v5.8/release_notes/release_notes.html

Best Regards
Matteo Giantomassi



