NLMech 0.1.0



Introduction

Welcome to the NLMech repository. In this project, we implement peridynamic models of fracture using meshfree and finite element discretizations. A brief overview of the equations is available here. NLMech has primarily served as a code for academic research (e.g., [1,2]); however, we plan to improve it further for large-scale usage. The plan also includes the development of a fully distributed solver that uses the HPX library to exploit asynchronous computation to its full potential. In [3], we discuss the structure of NLMech and the use of the HPX library for multithreaded computation. For a further list of publications based on this library, we refer to the publication list.

At present, the library consists of multiple material models such as

  • RNP - Regularized nonlinear potential model, implemented in the class RNPBond.
  • State-based peridynamics - An elastic state-based peridynamics model, implemented in the class ElasticState.

One of the main features of NLMech is the implementation of both explicit time discretization for dynamic problems (see FDModel) and implicit discretization for quasi-static problems (see QuasiStaticModel).
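Schematically, the two solver modes differ as follows; the notation below is ours, chosen for illustration, and does not correspond to identifiers in the code:

```latex
% Explicit (dynamic, FDModel): central-difference update of the displacement u
u^{n+1} = 2u^{n} - u^{n-1} + \frac{\Delta t^{2}}{\rho}\left( f_{\mathrm{pd}}(u^{n}) + b^{n} \right)

% Implicit (quasi-static, QuasiStaticModel): solve the nonlinear equilibrium
% equation at every load step n, e.g. by an iterative solver
f_{\mathrm{pd}}(u^{n}) + b^{n} = 0
```

Here \(f_{\mathrm{pd}}\) denotes the nonlocal (peridynamic) force density, \(\rho\) the mass density, \(b\) the external body force, and \(\Delta t\) the time step.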

Documentation and getting started

All source and header files are fairly well documented. We use doxygen to automatically generate the documentation of methods, classes, etc. For the complete documentation, follow this link.

We provide shell scripts to help with the installation of the dependencies and of NLMech itself. We also provide Docker images for a quick test of the library and to run the examples. In section Installation, we describe the dependencies, their installation, and building the NLMech code. In section Running NLMech, we discuss how to run the examples.

Installation

Build tools

The following build tools are needed to compile NLMech and its dependencies:

  • GCC compiler collection (gcc) > 4.9; gcc >= 8 is recommended
  • autoconf
  • wget
  • cmake
  • git

On Ubuntu you can install all build tools using the package manager:

apt-get install build-essential git wget cmake libssl-dev libblas-dev liblapack-dev autoconf freeglut3-dev

On Fedora you can install all build tools using the package manager:

dnf install @development-tools cmake git wget blas-devel lapack-devel freeglut-devel
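The compiler requirement above can be checked with a small POSIX-shell helper; `version_ge` is our own hypothetical helper, not part of NLMech:

```shell
#!/bin/sh
# version_ge A B: succeeds (exit 0) if dotted version A >= B.
# Uses sort -V (GNU coreutils) for natural version ordering.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Query the installed gcc; fall back to "0" if gcc is missing.
gcc_version=$(gcc -dumpversion 2>/dev/null || echo 0)

if version_ge "$gcc_version" 8; then
  echo "gcc $gcc_version: recommended version (>= 8)"
elif version_ge "$gcc_version" 4.9; then
  echo "gcc $gcc_version: meets the minimum (> 4.9), but >= 8 is recommended"
else
  echo "gcc $gcc_version: too old, need > 4.9"
fi
```

The same `version_ge` pattern works for checking cmake or any other dotted version string.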

Dependencies

We use cmake to build the code. We list the dependencies and how they are used in the code below:

  • CMake 3.16
    • To generate makefiles
  • Boost 1.75
    • Required to build HPX and YAML libraries
  • HPX 1.6.0
    • Provides methods for multithreading computation
  • Blaze 3.8
    • Required to build the BlazeIterative library
  • Blaze_Iterative master
    • Provides linear algebra support such as storage and inversion of stiffness matrix
  • gmsh 4.7
    • Our code directly interfaces with gmsh to use the input-output functions of gmsh
  • VTK 9.0
    • For read-write operations on .vtu type files
  • YAML-CPP 0.6
    • To parse .yaml input files

On Ubuntu you can install most dependencies using the package manager:

apt-get install libyaml-cpp-dev libvtk7-dev gmsh libboost-all-dev

Note that on Ubuntu you need to build HPX, Blaze, and Blaze_Iterative yourself, since no packages are available.

On Fedora you can install all dependencies using the package manager:

dnf install hpx-devel cmake blaze-devel vtk-devel yaml-cpp-devel gmsh-devel

Note that on Fedora you only need to build Blaze_Iterative yourself.

The following dependencies are optional, but recommended for large simulations:

  • PCL 1.11
    • Used in neighbor search calculation (KDTree)
  • Flann 1.9
    • Required to build PCL library

Building dependencies

Building the above dependencies can be quite a challenge. To help with this, we provide bash scripts for the Ubuntu 20.04 and Fedora operating systems: Bash scripts.

Further, we provide various Docker files.

In Scripts, we provide bash scripts to build individual dependencies, such as Blaze, VTK, and HPX, on HPC systems.

A more detailed version of the build instruction is available here.

:exclamation: We recommend using the same CMake version and the same compiler to build both HPX and NLMech.

Compiling library

Assuming all dependencies are installed at standard paths such as /usr/local, NLMech is built as follows:

git clone https://github.com/nonlocalmodels/NLMech.git
cd NLMech
mkdir build && cd build
# use 'OFF' instead of 'ON' to disable building the documentation, the tools,
# or the dependency on PCL
cmake -DCMAKE_BUILD_TYPE=Release \
-DEnable_Documentation=ON \
-DEnable_Tools=ON \
-DEnable_PCL=ON \
..
# compile the code using all available cores
make -j $(nproc) VERBOSE=1

In case certain libraries such as HPX, PCL, or Blaze_Iterative are installed at custom paths, you need to provide the correct paths to cmake as follows:

cmake -DCMAKE_BUILD_TYPE=Release \
-DEnable_Documentation=ON \
-DEnable_Tools=ON \
-DEnable_PCL=ON \
-DHPX_DIR=<hpx-path>/lib/cmake/HPX \
-DPCL_DIR=<pcl-path> \
-DYAML_CPP_DIR=<yaml-cpp-path> \
-DBLAZEITERATIVE_DIR=<blaze-iterative path> \
-DGMSH_DIR=<gmsh-path> \
..
make -j $(nproc) VERBOSE=1

Running NLMech

To quickly run the tests and examples, you can use the Docker image with the latest successful build of the main branch.

podman/docker pull diehlpk/nlmech:latest
podman/docker run -it docker.io/diehlpk/nlmech /bin/bash
cd /app/NLMech/examples/qsModel/1D
# Generate the mesh file
/app/NLMech/build/bin/mesh -i input_mesh.yaml -d 1
# Run the simulation
/app/NLMech/build/bin/NLMech -i input.yaml --hpx:threads=2

In the examples, we provide information on how to prepare a simulation input file using YAML.
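As a rough illustration, such an input file is a YAML document of key-value blocks. All key names below are hypothetical placeholders; the authoritative keys are those used in the shipped example input files (e.g., examples/qsModel/1D/input.yaml):

```yaml
# Hypothetical sketch of an NLMech-style input file.
# The real key names are defined by the shipped examples; do not copy verbatim.
Model:
  Dimension: 1
  Discretization_Type: finite_difference
Mesh:
  File: mesh.vtu
Material:
  Type: RNPBond        # e.g., the regularized nonlinear potential model
Output:
  Path: ./out
  Interval: 10
```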

Assuming you have built NLMech yourself, you can go to the build folder and run the executable as follows:

cd build
./bin/NLMech -i input.yaml --hpx:threads=n

The first argument, -i, specifies the input.yaml file, and the second argument, --hpx:threads, specifies the number of CPU cores HPX is allowed to use. If you do not specify a number there, all cores of the CPU are used to run the simulation. Note that the current version provides only shared-memory parallelism; however, we plan to add distributed-memory support to the code in the near future.

The one-dimensional quasi-static example is computationally inexpensive; therefore, we use it in the Docker example so that the simulation finishes quickly. For scaling tests, we recommend using any of the two-dimensional examples.

Trouble, issues, bugs

In case you find a bug in the library, want to contribute, or need a feature, please create a new issue.

Releases

The current stable version is the latest GitHub release. Development happens on the main branch. For more details, we refer to the Changelog file.

Code of conduct

We have adopted a code of conduct for this project.

Contributing

The source code is released under the license stated on GitHub. If you would like to contribute, we only accept pull requests under the same license. Please feel free to add your name to the license header of the files you added or contributed to. If possible, please add a test for your new feature using CTest. We adopted the Google C++ Style Guide for this project and use the clang-format tool to format the source code according to this style guide. Please run the format.sh script before you open a pull request.

Citing

In publications, please use our paper as the main citation for NLMech:

  • Diehl, P., Jha, P. K., Kaiser, H., Lipton, R., & Lévesque, M. (2020). An asynchronous and task-based implementation of peridynamics utilizing HPX—the C++ standard library for parallelism and concurrency. SN Applied Sciences, 2(12), 1-21.

In addition, please use our JOSS paper to reference the code:

  • Jha, P. K., et al. (2021). NLMech: Implementation of finite difference/meshfree discretization of nonlocal fracture models. Journal of Open Source Software, 6(65), 3020, 10.21105/joss.03020.

For more references, we refer to NLMech's publication list.

Acknowledgments

NLMech has been funded by:

  • Army Research Office Grant # W911NF-16-1-0456 to PI Dr. Robert Lipton (Professor at Louisiana State University). This grant supported Prashant K. Jha on a postdoctoral position from October 2016 - July 2019.
  • Canada Research Chairs Program under the Canada Research Chair in Multiscale Modelling of Advanced Aerospace Materials held by M. Lévesque; Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grants Program under Discovery Grant RGPIN-2016-06412.
  • We are grateful for the support of the Google Summer of Code program funding internships.

References

[1] Jha, P. K., & Lipton, R. (2019). Numerical convergence of finite difference approximations for state based peridynamic fracture models. Computer Methods in Applied Mechanics and Engineering, 351, 184-225.

[2] Jha, P. K., & Lipton, R. P. (2020). Kinetic relations and local energy balance for LEFM from a nonlocal peridynamic model. International Journal of Fracture, 226(1), 81-95.

[3] Diehl, P., Jha, P. K., Kaiser, H., Lipton, R., & Lévesque, M. (2020). An asynchronous and task-based implementation of peridynamics utilizing HPX—the C++ standard library for parallelism and concurrency. SN Applied Sciences, 2(12), 1-21.