Conference Paper. Slides. Media: ParallelForAll.
Typically, tensor contractions are reshaped so they can be evaluated with GEMM or other high-performance linear algebra primitives. We demonstrate that this explicit memory motion can easily dominate the cost of the tensor contraction itself.
The StridedBatchedGEMM primitive allows these contractions to be performed without explicit copies or transposes and can provide up to a 10x speedup over other libraries. The primitive is now available in cuBLAS 8.0.
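The computation a strided batched GEMM performs can be sketched in a few lines. This is a pure-Python illustration of the semantics, not the cuBLAS API: each batch entry is an independent GEMM on a slice of a larger tensor, so no copy or transpose is needed. The helper names are ours.

```python
# Pure-Python sketch (not the cuBLAS API) of what a strided batched GEMM
# computes: C[p] = A[p] @ B[p] for each batch index p, where each operand
# is a slice of a larger tensor rather than an explicitly copied matrix.

def gemm(A, B):
    """Plain matrix multiply on lists of lists."""
    m, k, n = len(A), len(B), len(B[0])
    return [[sum(A[i][l] * B[l][j] for l in range(k)) for j in range(n)]
            for i in range(m)]

def strided_batched_gemm(A_batch, B_batch):
    """C[p] = A[p] @ B[p] for every batch index p."""
    return [gemm(A, B) for A, B in zip(A_batch, B_batch)]

# Example: the contraction C[m,n,p] = sum_k A[m,k,p] * B[k,n] is a batch
# of P independent GEMMs, one per slice p of A -- no reshape required.
M, K, N, P = 2, 3, 2, 4
A = [[[float(m + k + p) for k in range(K)] for m in range(M)]
     for p in range(P)]                       # A[p][m][k]
B = [[1.0] * N for _ in range(K)]             # B[k][n]
C = strided_batched_gemm(A, [B] * P)          # C[p][m][n]
```

In the real primitive the per-batch slices are expressed as pointer strides into the original tensor, which is what avoids the copies.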
Code Repo. Book Chapter. Slides.
A generalized library designed to separate the research concerns involved in fast structured dense matrix algorithms.
The library will facilitate continued research in developing kernel expansions, in scheduling and parallel evaluation of tree-based methods, and in abstracted development applicable to a wide range of physics, graphics, and machine learning problems.
The library takes advantage of GPU acceleration when possible, and currently uses OpenMP parallelization for additional performance.
Code Repo. Presentation.
In this paper, we present a novel formulation of a kernel-independent fast multipole method.
The FMM has found many applications in the field of integral equations and boundary element methods, in particular by accelerating the solution of the dense linear systems arising from such formulations. Standard FMMs are derived from analytical expansions of the kernel, for example using spherical harmonics or Taylor expansions. In recent years, the range of applicability and the ease of use of FMMs have been extended by the introduction of black-box and kernel-independent techniques. In these approaches, the user need only provide a subroutine to numerically evaluate the interaction kernel. This allows the kernel to be changed with minimal changes to the computer program. This paper presents a novel kernel-independent FMM that retains diagonal multipole-to-local operators. The result is a significant reduction in computational cost, in particular when high accuracy is needed. The approach is based on an analytic extension of the kernel, Cauchy’s integral formula, and the Laplace transform. We present a numerical analysis of the convergence and numerical results for a multilevel one-dimensional FMM.
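The payoff of diagonal multipole-to-local operators can be seen from a toy cost comparison: a dense M2L applies a p×p matrix to the expansion coefficients (O(p²) work), while a diagonal one is an elementwise product (O(p)). The sketch below is illustrative only; the vectors and transfer values are placeholders, not the actual operators derived in the paper.

```python
# Toy illustration of why diagonal M2L operators are cheaper than dense
# ones. All values here are made up for illustration.

def dense_m2l(T, multipole):
    """Dense M2L: local[i] = sum_j T[i][j] * multipole[j]  -- O(p^2) work."""
    return [sum(T[i][j] * multipole[j] for j in range(len(multipole)))
            for i in range(len(T))]

def diagonal_m2l(t, multipole):
    """Diagonal M2L: local[i] = t[i] * multipole[i]  -- O(p) work."""
    return [ti * mi for ti, mi in zip(t, multipole)]

p = 4
multipole = [1.0, 2.0, 3.0, 4.0]
t = [0.5, 0.25, 0.125, 0.0625]   # placeholder diagonal transfer values
T = [[t[i] if i == j else 0.0 for j in range(p)] for i in range(p)]

# When the dense operator happens to be diagonal, both give the same result,
# but the diagonal form touches p values instead of p^2.
local = diagonal_m2l(t, multipole)
```

Since high accuracy means a large p, the O(p) versus O(p²) gap is exactly where the cost reduction mentioned above comes from.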
This research presents a number of algorithms to run the fast multipole method on graphics processors. The methods described in the paper are applicable to any FMM in which the multipole-to-local (M2L) operator is a dense matrix and the matrix is precomputed. This is the case, for example, in the black-box fast multipole method (bbFMM) of Fong et al., a variant of the FMM that can handle a large class of kernels.
In the FMM, two operators account for most of the computational cost, and an optimal implementation typically tries to balance them: the nearby-interaction calculation (P2P) and the M2L operation. We focus on the M2L. By combining multiple M2L operations and reordering the M2L's primitive loops so that CUDA threads can reuse or share common data, these approaches reduce data movement on the GPU. Since memory bandwidth is the primary bottleneck of these methods, significant performance improvements are realized. Four M2L schemes are detailed and analyzed in the case of a uniform tree.
The four schemes are tested and compared with an optimized, OpenMP-parallelized, multi-core CPU code. The accuracy of the GPU codes is found to be satisfactory, and they achieve over 200 Gflop/s on one NVIDIA Tesla C1060 GPU. This is compared against two quad-core Intel Xeon E5345 processors running at 2.33 GHz, with a combined single-precision peak of 149 Gflop/s. We also present benchmarks on an NVIDIA C2050 GPU (a Fermi processor) in single and double precision.
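The data-reuse idea behind the loop reordering can be sketched without any GPU code: on a uniform tree, many source/target box pairs share the same relative offset and hence the same precomputed M2L matrix, so grouping those interactions turns many matrix-vector products into one matrix-matrix product and the transfer matrix is read once instead of once per pair. A hedged pure-Python sketch (the function names are ours, not the paper's):

```python
# Sketch of the reuse idea behind the GPU M2L schemes: interactions that
# share the same precomputed transfer matrix T are batched so T's entries
# are loaded once (shared among CUDA threads) rather than once per pair.

def m2l_one_by_one(T, multipoles):
    """Apply T to each multipole vector separately (T re-read every time)."""
    return [[sum(T[i][j] * m[j] for j in range(len(m)))
             for i in range(len(T))]
            for m in multipoles]

def m2l_batched(T, multipoles):
    """Stack the multipole vectors and apply T once: a matrix-matrix product."""
    p, nvec = len(T), len(multipoles)
    return [[sum(T[i][j] * multipoles[v][j] for j in range(p))
             for i in range(p)]
            for v in range(nvec)]

T = [[1.0, 2.0], [3.0, 4.0]]
ms = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
# Both orderings compute the same local expansions; only data movement differs.
locals_ = m2l_batched(T, ms)
```

On hardware where memory bandwidth is the bottleneck, that difference in how often T moves through memory is what the performance figures above reflect.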
Presentation. Group Presentation. Source code.
The fast multipole method (FMM) has had great success in reducing the computational time required to solve the boundary integral form of the Helmholtz equation.
We present a formulation of the Helmholtz FMM using Fourier basis functions rather than spherical harmonics that accelerates some of the time-critical stages of the algorithm. With modifications to the transfer function in the precomputation stage of the FMM, the interpolation and anterpolation operators become straightforward applications of fast Fourier transforms and the transfer operator remains diagonal.
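With a Fourier basis, interpolation onto a finer sampling amounts to zero-padding the spectrum and transforming back (anterpolation is the adjoint: truncation). The sketch below illustrates this with a naive O(N²) DFT in pure Python purely for clarity; the actual algorithm uses FFTs, and the helper names are ours.

```python
import cmath

# Illustrative pure-Python version of Fourier interpolation: upsampling
# band-limited periodic data by zero-padding its spectrum. The O(N^2)
# DFT below stands in for the FFTs used in the real implementation.

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def fourier_interpolate(x, M):
    """Resample length-N periodic data onto M > N points by spectral zero-padding."""
    N = len(x)
    X = dft(x)
    h = N // 2
    # Insert zeros between the positive- and negative-frequency halves.
    Xpad = X[:h] + [0.0] * (M - N) + X[h:]
    # Rescale so sample values (not coefficients) are preserved.
    return [v * M / N for v in idft(Xpad)]

# Example: samples of cos(2*pi*n/8) interpolated from 8 to 16 points land
# exactly on cos(2*pi*n/16), since the data is band-limited.
coarse = [cmath.cos(2 * cmath.pi * n / 8).real for n in range(8)]
fine = fourier_interpolate(coarse, 16)
```

Anterpolation in this picture is the reverse operation: transform, discard the high frequencies, transform back, which is why both stages reduce to FFTs once the transfer function is set up accordingly.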
Real-time Video. Presentation. Source code.
The previous research on accelerating the assembly of finite element methods on GPUs is applied to a non-linear Neo-Hookean model for large time-step elastodynamics. To solve the non-linear system of equations, a Newton-Raphson method is used, which requires assembling and solving the linearized tangent system of equations at each iteration (approximately 4-6 iterations per time-step). In the linked paper we show that accelerating only the sparse matrix-vector multiplication via the GPU makes the assembly stage a severe bottleneck.
We show that by running each stage of the algorithm on the GPU, the assembly, solution, and visualization of a dynamic FEM problem can each benefit from speed-ups while avoiding expensive data transfers between the host and the GPU coprocessor, resulting in simulations of unprecedented speed and size. The linked demonstration video shows the application to a volumetric mesh of a hand consisting of 125,127 elements and 28,796 nodes, run in real time and interacting with the user.
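The per-time-step Newton-Raphson loop described above can be sketched on a scalar model problem. This is a hedged stand-in: in the real code each iteration assembles and solves a large sparse tangent system on the GPU, whereas here a 1x1 "system" plays that role.

```python
# Sketch of the Newton-Raphson iteration used within each time step:
# assemble the residual and linearized tangent, solve for the update,
# repeat until converged. A scalar nonlinearity stands in for the
# Neo-Hookean FEM system.

def newton_raphson(residual, tangent, u0, tol=1e-12, max_iter=20):
    u = u0
    for it in range(max_iter):
        r = residual(u)
        if abs(r) < tol:
            return u, it
        # "Assemble and solve" the 1x1 linearized system: K * du = -r
        du = -r / tangent(u)
        u += du
    return u, max_iter

# Model problem: solve u^3 - 8 = 0 (root u = 2) starting from u = 3.
u, iters = newton_raphson(lambda u: u**3 - 8.0, lambda u: 3.0 * u**2, u0=3.0)
```

The quadratic convergence of the iteration is why a handful of assemble-and-solve cycles per time step suffices, and also why the assembly cost matters so much: it is paid every iteration.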
Graphics processing units (GPUs) have had great success in accelerating many numerical computations. In this research, we apply their unique compute model to computations on unstructured meshes such as those in finite element methods. We present multiple approaches to assembling and solving sparse linear systems with NVIDIA GPUs and the Compute Unified Device Architecture (CUDA), and discuss strategies for efficient use of global, shared, and local memory, methods to achieve memory coalescing, the optimal choice of parameters, and the appropriate level of parallelism. The pros and cons of each technique are presented, along with how each can make efficient use of the GPU hardware resources.
This research was my undergraduate senior clinic project, which is similar to a thesis. Over the course of a year, I worked on an interdisciplinary team of four Harvey Mudd seniors -- Alan Davidson (CS), Tiffany Head (Math), Dana Mohamed (Math-Bio), and Liam Robinson (Math) -- in conjunction with Los Alamos National Laboratory to vascularize an existing multiscale tumor model. The model was composed of
- Extracellular model: Production, consumption, and diffusion of chemicals.
- Cellular model: Monte Carlo governed cell movement and growth.
- Intracellular model: Cell cycle, protein network, and protein expression.
Primarily, I researched and wrote a diffusion equation solver with time- and space-dependent source and sink terms that could be applied to systems with arbitrary Dirichlet boundary condition geometries in three dimensions. This diffusion model was composed of a backward Euler approximation solved with an adaptive Gauss-Seidel iteration.
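A minimal 1-D analogue of that solver shows the structure: backward Euler in time, with the resulting linear system relaxed by Gauss-Seidel sweeps, Dirichlet values held at the boundary, and a source term. This is a sketch for illustration; the actual solver was 3-D with arbitrary boundary geometries.

```python
# Minimal 1-D sketch of the diffusion solver described above: backward
# Euler in time, Gauss-Seidel iteration in space, Dirichlet boundaries.

def diffusion_step(u, dt, dx, D, source, left, right,
                   tol=1e-10, max_sweeps=10000):
    """Advance u one backward-Euler step of u_t = D u_xx + source."""
    n = len(u)
    r = D * dt / dx**2
    new = list(u)
    new[0], new[-1] = left, right            # Dirichlet boundary values
    for _ in range(max_sweeps):
        max_change = 0.0
        for i in range(1, n - 1):
            # Gauss-Seidel update of the implicit equation:
            # (1 + 2r) new[i] - r (new[i-1] + new[i+1]) = u[i] + dt * source[i]
            val = (u[i] + dt * source[i] + r * (new[i - 1] + new[i + 1])) / (1 + 2 * r)
            max_change = max(max_change, abs(val - new[i]))
            new[i] = val
        if max_change < tol:                 # stop sweeping once converged
            break
    return new

# Example: no source, boundaries held at 0 and 1; repeated steps relax
# toward the linear steady state u(x) = x.
n, dx, dt, D = 11, 0.1, 0.05, 1.0
u = [0.0] * n
for _ in range(200):
    u = diffusion_step(u, dt, dx, D, [0.0] * n, left=0.0, right=1.0)
```

The "adaptive" part of the original iteration (adjusting the sweep count or stopping criterion to the solution's behavior) is reduced here to a simple convergence tolerance.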
Linked above is a short summary of some of the research into quantum computing and quantum information I conducted with Professor Chris Stone over the summer of 2005. During that summer, we
- Researched and developed a framework for quantum computing languages.
- Investigated the requirements and restrictions of any quantum computing language: quantum operator and quantum bit manipulation, reversibility, state coherence, no cloning, etc.
- Investigated current quantum computing algorithms and looked at their correctness, optimizations, improvements, and generalizations. Included were the Deutsch-Jozsa, Grover, and Shor algorithms in addition to teleportation, encryption, and operator decomposition.
- Implemented some of these algorithms in various existing quantum computing simulation languages including QCL and Q-Language and also implemented small improvements to these languages.
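As a flavor of the algorithms studied, here is a small state-vector simulation of Deutsch's algorithm (the single-qubit case of Deutsch-Jozsa), written from the standard textbook description rather than from our QCL or Q-Language implementations: one oracle query decides whether f : {0,1} -> {0,1} is constant or balanced.

```python
import math

# State-vector simulation of Deutsch's algorithm on two qubits
# (query register + ancilla). Basis order: |00>, |01>, |10>, |11>,
# with qubit 0 the left bit.

H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def apply_1q(gate, state, qubit):
    """Apply a 1-qubit gate to a 2-qubit state vector."""
    new = [0.0] * 4
    for idx in range(4):
        bits = [(idx >> 1) & 1, idx & 1]
        for b in (0, 1):
            src = bits[:]
            src[qubit] = b
            new[idx] += gate[bits[qubit]][b] * state[(src[0] << 1) | src[1]]
    return new

def deutsch(f):
    # Prepare |0>|1>, then Hadamard both qubits.
    state = [0.0, 1.0, 0.0, 0.0]
    state = apply_1q(H, state, 0)
    state = apply_1q(H, state, 1)
    # Oracle U_f: |x, y> -> |x, y XOR f(x)> (a basis permutation).
    new = [0.0] * 4
    for idx in range(4):
        x, y = (idx >> 1) & 1, idx & 1
        new[(x << 1) | (y ^ f(x))] += state[idx]
    state = apply_1q(H, new, 0)
    # Measure qubit 0: outcome 0 means constant, 1 means balanced.
    p1 = state[2] ** 2 + state[3] ** 2
    return "balanced" if p1 > 0.5 else "constant"
```

A classical algorithm needs two evaluations of f to decide; the quantum version needs one, which is the phase-kickback idea the larger Deutsch-Jozsa and Grover algorithms build on.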