
CoGL Proxy Application

Analyze pattern formation in ferroelastic materials and test in situ visualization.

Figure: four time steps of a CoGL run.

CoGL is a meso-scale simulation proxy app used to analyze pattern formation in ferroelastic materials using the Ginzburg–Landau approach. It has been publicly released on the ExMatEx project GitHub repository. It is implemented using a data-parallel approach, making use of the PISTON framework, developed at Los Alamos as an extension of NVIDIA’s Thrust library. The same simulation code can be compiled to different backends (such as CUDA or OpenMP) to take advantage of various on-node accelerators (such as GPUs or multi-core CPUs). Extensions of this proxy app, not yet publicly released, include additional in situ visualization operators (such as isosurface) and a distributed implementation that allows it to be run across multiple GPUs or CPUs.

The figure above shows four selected time steps (3,000, 15,000, 25,000, and 200,000) from the CoGL simulation, with the grid points colored according to the deviatoric strain value, showing how the strain field evolves as a parent cubic lattice changes to a daughter tetragonal structure.

CoGL is a meso-scale simulation proxy app used to analyze pattern formation in ferroelastic materials using the Ginzburg–Landau approach. It has been publicly released on the ExMatEx project GitHub repository. It models transitions from a face-centered cubic parent phase to a body-centered tetragonal product phase driven by either a rapid decrease in temperature or an external deformation. By solving force balance equations that use a nonlinear elastic free-energy functional and also incorporate inertial and viscous forces, the strains are computed at each point on a regular three-dimensional grid. The code allows the study of nucleation and growth of phase changes under loading–unloading and heating–cooling protocols as a function of strain rate.
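In schematic form, the dynamics described above can be sketched as follows. This is a generic strain-dynamics form in our own notation, not necessarily the exact equations used in the code or in the Ahluwalia et al. paper:

```latex
\rho\,\frac{\partial^2 u_i}{\partial t^2}
  \;=\; \frac{\partial}{\partial x_j}\!\left(\frac{\partial F}{\partial \epsilon_{ij}}\right)
  \;+\; \eta\,\nabla^2 \frac{\partial u_i}{\partial t},
\qquad
\epsilon_{ij} \;=\; \tfrac{1}{2}\!\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right)
```

Here \(u\) is the displacement field, \(F\) is the nonlinear elastic free-energy functional (including Ginzburg gradient terms that penalize sharp strain variations), and \(\eta\) is a viscosity coefficient; the left-hand side is the inertial force and the \(\eta\) term is the viscous force mentioned above.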

The simulation code itself is relatively short (a few hundred lines of code), consisting mainly of gradient and Laplacian computations over the grid. Two versions of the code have been written: the original serial Fortran code written by the domain scientists, and a data-parallel implementation using the PISTON framework. PISTON is a portable, data-parallel framework developed at Los Alamos using NVIDIA’s Thrust library. It allows application developers to compile and run their code on different parallel accelerator and multi-core architectures, making efficient use of the available parallelism on each. This is accomplished by constraining the developer to writing algorithms using a limited set of data-parallel primitives, each of which is efficiently implemented for each target architecture.
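To give a sense of the kind of stencil operation that dominates such a code, here is a minimal serial sketch (our own illustration, not CoGL's actual source) of a second-order central-difference Laplacian over a regular 3D grid:

```cpp
#include <cstddef>
#include <vector>

// Minimal sketch: a second-order central-difference Laplacian over a
// regular 3D grid with spacing h. Grid values are stored in a flat
// vector indexed as x + nx*(y + ny*z); boundary points are left at zero.
std::vector<double> laplacian3d(const std::vector<double>& f,
                                std::size_t nx, std::size_t ny, std::size_t nz,
                                double h) {
    std::vector<double> out(f.size(), 0.0);
    auto idx = [=](std::size_t x, std::size_t y, std::size_t z) {
        return x + nx * (y + ny * z);
    };
    for (std::size_t z = 1; z + 1 < nz; ++z)
        for (std::size_t y = 1; y + 1 < ny; ++y)
            for (std::size_t x = 1; x + 1 < nx; ++x)
                out[idx(x, y, z)] =
                    (f[idx(x - 1, y, z)] + f[idx(x + 1, y, z)] +
                     f[idx(x, y - 1, z)] + f[idx(x, y + 1, z)] +
                     f[idx(x, y, z - 1)] + f[idx(x, y, z + 1)] -
                     6.0 * f[idx(x, y, z)]) / (h * h);
    return out;
}
```

Because every interior point is updated independently, this loop nest maps directly onto a data-parallel transform primitive, which is what makes the PISTON port natural.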

In the PISTON version of CoGL, the simulation is computed primarily using data-parallel transform primitives, in which, for example, each thread computes the gradient at one grid cell. In an extended version of this proxy app, not yet included in the public release, more complex isosurface, threshold, and cut surface visualization algorithms already implemented in PISTON can be applied to the strain field computed by the simulation and rendered in situ as the simulation runs. A further extension uses a distributed PISTON implementation, allowing the code to run across multiple GPUs or CPUs via an MPI-based backend layered on top of the shared-memory backends; the program can thus exploit parallelism both within and across nodes.
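The transform-primitive style can be illustrated with a small sketch. Here `std::transform` stands in for Thrust's `thrust::transform`; the names `RelaxStep` and its `dt` parameter are our own invention, not CoGL's API. With Thrust, the same functor could be dispatched unchanged to a CUDA or OpenMP backend:

```cpp
#include <algorithm>
#include <vector>

// Illustrative sketch of the data-parallel transform pattern: one functor
// invocation handles one grid point. The functor and its parameters are
// hypothetical, not taken from CoGL.
struct RelaxStep {
    double dt;  // time step (hypothetical parameter)
    double operator()(double strain, double force) const {
        // each "thread" advances the strain at one grid point
        return strain + dt * force;
    }
};

// Usage: update a whole strain field in one transform call. The backend
// (serial, OpenMP, CUDA) decides how the per-point work is parallelized.
void advance(const std::vector<double>& strain,
             const std::vector<double>& force,
             std::vector<double>& out, double dt) {
    std::transform(strain.begin(), strain.end(), force.begin(),
                   out.begin(), RelaxStep{dt});
}
```

Constraining the application to such primitives is what lets the identical simulation code target different accelerators.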

Preliminary testing on a single node has shown that the PISTON implementation compiled to an OpenMP backend and run on multiple cores scales well with the number of cores and outperforms the equivalent serial C++ code and especially the original Fortran code. The PISTON implementation compiled to a CUDA backend and run on a GPU improves performance even further. Furthermore, when the simulation is run on the GPU, rendering can be implemented at limited additional cost because all the data can be kept on the GPU using CUDA’s interop feature.

CoGL Publications

Papers

The following is the original paper that describes the physics and used the original Fortran implementation (it pre-dates ExMatEx and the PISTON implementation):

Rajeev Ahluwalia, Turab Lookman, and Avadh Saxena, “Dynamic strain loading of cubic to tetragonal martensites”, Acta Materialia, Volume 54, Issue 8, May 2006, Pages 2109–2120.

Presentations

Christopher Sewell, Li-Ta Lo, James Ahrens, “Portable Data-Parallel Visualization and Analysis in Distributed Memory Environments”, IEEE Symposium on Large-Scale Data Analysis and Visualization (LDAV), Atlanta, Georgia, October 2013 (LA-UR-13-23809)

Christopher Sewell, “Analysis and Visualization Capabilities on Multi-core/Many-core Emerging Architectures”, NNSA - CEA Collaboration Meeting, Santa Fe, New Mexico, June 2013 (LA-UR-13-23729)

Software

“CoGL: Ginzburg-Landau Proxy Application, Version 1.0” (September 2013), open source at GitHub.

Most of our code is released as open source; visit the ExMatEx GitHub site.