20-24 May 2019, Louvain-la-Neuve, Belgium.
To benefit from their natural synergy, the ABINIT developer (ABIDEV) workshop (20-22 May 2019) and the workshop on precision quantification in DFT (PQ-DFT) (23-24 May 2019) were organized back to back in Louvain-la-Neuve, Belgium. Together, the two events attracted more than 80 people, who attended the joint social dinner on the evening of Wednesday 22 May.
A brief account of the ABIDEV workshop is given first, followed by that of the PQ-DFT workshop. Links to the programs, the abstracts, the videos, and the lists of participants are given in the text. Group pictures are presented at the bottom.
Brief account of the ABIDEV workshop
The goal was to bring together the community of people actively working on the development of the free and open-source ABINIT software package (abinit.org, >2000 forum members), as well as some developers working on projects to which ABINIT is interfaced, e.g. the Materials Project, the second-principles DFT project (SPDFT), NOMAD, ELPA, etc.
The possible evolution of the package and its interfaces was discussed, the formalism behind the most recent implementations was presented, and short-term future developments were discussed and synchronized, along with long-term strategy and developments.
The program consisted of 38 presentations (invited and contributed; a link to the .pdf of the abstracts can be found at the top of the program page), 7 flash presentations (10 minutes each), and numerous discussion sessions, including a dedicated discussion morning, for a total of nearly 6 hours of discussion. The workshop had 63 participants.
Most noticeably, the following challenges have been the focus of specific sessions:
- The challenge of databases for materials discovery. Several ecosystems have recently been created to manage the huge databases produced by first-principles codes: Python frameworks, web portals, interfaces, etc. In this respect, the abipy framework (github.com/abinit/abipy) was particularly highlighted.
- The challenge of multi-scale simulations and reduced scaling. Creating databases of first-principles derivatives of the total energy and of the electronic structure close to the Fermi energy or band gap makes it possible to develop second-principles models with considerably reduced scaling, which can simulate millions of atoms cheaply and accurately. This is the focus of Multibinit, delivered with ABINIT, and of the above-mentioned SPDFT.
Other presentations and discussions centered on ABINIT itself, but also on current and future libraries, such as ELSI for HPC and Z2pack.
Validation and verification, as well as pseudopotentials, were addressed at the PQ-DFT event, held just after the ABINIT developer workshop.
Brief account of the PQ-DFT workshop
The goal of the workshop was to discuss the precision, accuracy, and uncertainty of density-functional theory calculations. The past few years have witnessed a growing interest in assessing the numerical precision of electronic-structure software. Ideas about this emerged in different places and different software communities (Lausanne, Gent, Louvain-la-Neuve, Lyngby, Washington DC, Paris, Berlin, …), whereas actual work often requires coordinated efforts. Meeting from time to time to brainstorm about broader trends and to define common goals and strategies is therefore meaningful. An early meeting of this kind in Paris (2014) proved essential for later progress. This topic is at a crossroads again, and it was the right time to think about the next set of common goals.
The thoughts and insights of the participants did not evaporate into thin air: they are archived on the PQ-DFT conference website for further use by the community. The program included 27 short talks and 7 lively discussion sessions, which have been condensed into nearly 7 hours of video, 7 discussion digest documents, and one summarizing action document, whose content is shared below. The link to the .pdf of the abstracts can be found at the top of the program page. The workshop had 45 participants.
Action: towards more diverse test sets. Future steps should focus on creating and documenting a series of relatively small test sets, each built for a specific feature/property/purpose. Having all-electron reference data for each set is mandatory. Users can adopt as many or as few of those sets as they wish for their benchmarking, depending on their needs. A tentative list of such sets could be:
- The initial Delta set (mainly because data from many other codes and methods are available to compare with)
- The Delta set for structural variety (fcc/bcc/simple cubic/diamond) (*)
- The Delta set for chemical variety (6 artificial oxides per element) (*)
- The Delta set for experimentally relevant crystals (handpicked list of simple binaries, every element appearing multiple times) (*)
- The GBRV set (combinations of various oxidation states and structures, for experimentally relevant crystals)
- Low-symmetry sets (molecules, surfaces, vacancies)
- A set with crystals where all atoms are randomly displaced, to inspect forces
- A set with all free atoms (to get cohesive energies for all other test sets)
- A set with all elementary crystal ground states (a completion of the initial Delta set), for formation energies of all other test sets
- A set with magnetic materials
When these test sets are used for the sole purpose of pseudopotential development, it would be sufficient to run them with a coarse k-mesh (this requires all-electron data on the same coarse k-mesh). Enforcing more strictly than before which k-mesh is to be used for every crystal in a set is a good strategy (e.g. one coarse k-mesh and one that is very dense, yet not unreasonably so).
(*) currently under construction at Ghent University, to be delivered during 2019.
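As background to the Delta-based sets above, the Delta gauge compares two equation-of-state curves E(V) for the same crystal, obtained with two codes or pseudopotentials, via the root-mean-square of their energy difference over the sampled volume range. A minimal numerical sketch follows; note that the official protocol fits a Birch-Murnaghan equation of state to each curve first, whereas here, as a simplifying assumption, the sampled curves are interpolated directly on a dense common grid, and all names are illustrative:

```python
import numpy as np

def delta_value(volumes, e1, e2, n=1000):
    """RMS difference between two E(V) curves over their volume range.

    volumes : sampled volumes per atom (same grid for both curves)
    e1, e2  : total energies per atom from the two codes/potentials
    Sketch only: the official Delta protocol fits a Birch-Murnaghan
    equation of state to each curve before integrating.
    """
    # Reference each curve to its own minimum, so that only the
    # difference in shape of the two curves contributes.
    e1 = np.asarray(e1) - np.min(e1)
    e2 = np.asarray(e2) - np.min(e2)
    v = np.linspace(np.min(volumes), np.max(volumes), n)
    d = np.interp(v, volumes, e1) - np.interp(v, volumes, e2)
    # Trapezoidal integration of d(V)^2, written out explicitly.
    integral = np.sum(0.5 * (d[:-1] ** 2 + d[1:] ** 2) * np.diff(v))
    return np.sqrt(integral / (v[-1] - v[0]))
```

Two identical curves give a Delta of zero; a stiffer or softer equation of state gives a positive value that grows with the disagreement, in the same energy units as the inputs.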
Action: workflow managers for benchmarking and convergence testing. It is recommended that the different workflow management packages on the market implement procedures (1) to carry out the benchmark tests for the codes they work with, and (2) to perform the convergence testing that one should do at the beginning of every new project. The former will make it easier to carry out decent benchmarks with many codes/methods. The latter will help ensure that at least a minimal amount of convergence testing is performed by all or most users.
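To illustrate the kind of convergence procedure meant here, a minimal sketch of an automated plane-wave cutoff scan is given below; the `run_scf` callable and the tolerance are hypothetical placeholders for whatever code and workflow manager is actually in use:

```python
def converge_cutoff(run_scf, cutoffs, tol=1.0):
    """Return the first cutoff whose total energy differs from the
    next, larger cutoff by less than `tol` (e.g. meV/atom).

    run_scf : hypothetical callable wrapping one SCF run of the code
              in use; takes a cutoff, returns the total energy.
    cutoffs : increasing sequence of cutoff energies to scan.
    """
    energies = [run_scf(c) for c in cutoffs]
    for i in range(len(cutoffs) - 1):
        if abs(energies[i] - energies[i + 1]) < tol:
            return cutoffs[i]
    raise RuntimeError("not converged within the scanned cutoffs")
```

A workflow manager implementing this once, per code, would spare every new user from scripting it by hand, which is precisely the point of recommendation (2).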
Action: databases and libraries. Concerted actions to stimulate the development of joint libraries (libxc, ESL, …) as well as databases (NOMAD, Materials Cloud, Materials Project, aflowlib, oqmd, Optimade, …) are very much needed.
Action: expert panels and recommendations. More effort should be made to render explicit the knowledge that is implicitly held and used by some experts in the field. This can be done by implementing this knowledge in workflow managers (particularly suitable for procedures), and/or by community expert panels that review the literature and formulate recommendations for procedures and user input choices (e.g. selecting the XC functional that is most appropriate for a given case). There might be a task for Psi-k here. The Psi-k 2020 conference could be a place to make progress on this.
Action: accuracy assessment for more properties and more functionals. Documenting the systematic bias and the random scatter in the prediction of properties with a given XC functional, compared to experimental values, remains a useful task. It should be performed for more properties and more functionals, on sets of crystals relevant to the property studied. This is a task for which collaboration with experimentalists is valuable (providing better and more extensive experimental data).
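The two quantities named in this action can be made concrete with a short sketch: the mean of the prediction errors measures the systematic bias of a functional for a property, while the standard deviation of those errors measures the random scatter around that bias (function and variable names are illustrative):

```python
import numpy as np

def error_statistics(predicted, experimental):
    """Bias and scatter of predicted property values vs experiment.

    Returns (mean error, sample standard deviation of the error,
    mean absolute error): the mean error quantifies the systematic
    bias of the functional, the standard deviation the random
    scatter around it.
    """
    err = np.asarray(predicted) - np.asarray(experimental)
    return err.mean(), err.std(ddof=1), np.abs(err).mean()
```

A functional with a large bias but small scatter can still be useful (its predictions can be corrected a posteriori), whereas a large scatter cannot be corrected for; reporting both numbers separately is therefore more informative than a single error figure.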