2019 IEEE High Performance Extreme Computing Conference (HPEC '19)
Twenty-third Annual HPEC Conference
24 - 26 September 2019
Westin Hotel, Waltham, MA USA
Thursday, September 26, 2019
High Performance Data Analysis 2
1:00-2:40 in Eden Vale C1/C2
Chair: Nikos Pitsianis / Aristotle University of Thessaloniki

An Interactive LiDAR to Camera Calibration
Yecheng Lyu, Lin Bai, Mahdi Elhousni and Xinming Huang (WPI)
Recent progress in automated driving systems and advanced driver assistance systems has shown that the combined use of 3D light detection and ranging (LiDAR) and cameras is essential for an intelligent vehicle to perceive driving scenarios. LiDAR-camera fusion systems require precise intrinsic and extrinsic transformations between the sensors. However, due to limitations of the calibration equipment and susceptibility to noise, existing methods tend to fail to find LiDAR-camera correspondences at long range. In this paper, we introduce an interactive LiDAR-to-camera calibration toolbox that estimates the intrinsic and extrinsic transform parameters. The toolbox automatically detects the corner of a planar board in a sequence of LiDAR frames and provides a convenient user interface for annotating the corresponding pixels in camera frames. Since the toolbox detects only the top corner of the board, there is no need to prepare a precise polygonal planar board or a checkerboard with areas of different reflectivity, as in existing methods. Furthermore, the toolbox uses genetic algorithms to estimate the transforms and supports multiple camera models, such as the pinhole and fisheye models. Experiments using a Velodyne VLP-16 LiDAR and a Point Grey Chameleon 3 camera show robust results. (An illustrative projection sketch appears below.)

Lossless Compression of Internal Files in Parallel Reservoir Simulation
Marcin Rogowski, Suha N. Kayum, Florian Mannuss (Saudi Aramco)
In parallel reservoir simulation, massive files are written recurrently throughout a simulation run. A method is developed to compress the distributed data to be written during the simulation run and to output it to a single compressed file. Several compression algorithms are evaluated on a range of simulation models. The presented method yields a 3x reduction in file size and a decrease in total application runtime. (An illustrative file-layout sketch appears below.)

Scalable Lazy-update Multigrid Preconditioners
Majid Rasouli, Vidhi Zala, Robert M. Kirby, Hari Sundar (Univ. Utah)
Multigrid is one of the most effective methods for solving elliptic PDEs. It is algorithmically optimal and robust when combined with Krylov methods. Algebraic multigrid (AMG) is especially attractive due to its black-box nature. This, however, comes at the cost of increased setup costs, which can be significant for systems where the system matrix changes frequently, making it difficult to amortize the setup cost. In this work, we investigate several strategies for performing lazy updates to the multigrid hierarchy in response to changes in the system matrix: delayed updates, value updates without changing the structure, process-local changes, and full updates. We demonstrate that in many cases the overhead of building the AMG hierarchy can be mitigated for rapidly changing system matrices. (An illustrative update-policy sketch appears below.)
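For the LiDAR calibration abstract above, the following is a minimal sketch of the pinhole projection and the reprojection-error fitness a genetic algorithm could minimize; the Euler-angle parameter encoding and all names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def project_lidar_to_image(points_lidar, R, t, K):
    """Project Nx3 LiDAR points (assumed in front of the camera) to pixels."""
    cam = points_lidar @ R.T + t        # LiDAR frame -> camera frame
    uvw = cam @ K.T                     # apply pinhole intrinsics
    return uvw[:, :2] / uvw[:, 2:3]     # perspective divide -> (u, v)

def euler_to_rotation(rx, ry, rz):
    """Z-Y-X Euler angles to a rotation matrix (illustrative encoding)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def reprojection_error(params, corners_lidar, pixels, K):
    """Mean pixel error; a genetic algorithm would minimize this fitness.

    params = (rx, ry, rz, tx, ty, tz), a hypothetical 6-DoF extrinsic encoding.
    """
    R = euler_to_rotation(*params[:3])
    uv = project_lidar_to_image(corners_lidar, R, np.asarray(params[3:]), K)
    return float(np.mean(np.linalg.norm(uv - pixels, axis=1)))
```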
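For the reservoir-simulation compression abstract, this sketch shows one way per-process blocks can be compressed independently and concatenated into a single file with an offset table, so individual records remain seekable. The zlib codec and the file layout are assumptions for illustration, not the paper's format.

```python
import struct
import zlib

def write_compressed_blocks(path, blocks):
    """Compress each per-process block and write one file with an offset table.

    Assumed layout: [block count][(offset, size) pairs][compressed payloads].
    """
    payloads = [zlib.compress(b, 6) for b in blocks]
    header_size = 8 + 16 * len(payloads)            # count + (offset, size) pairs
    with open(path, "wb") as f:
        f.write(struct.pack("<Q", len(payloads)))
        offset = header_size
        for p in payloads:
            f.write(struct.pack("<QQ", offset, len(p)))
            offset += len(p)
        for p in payloads:
            f.write(p)

def read_block(path, index):
    """Decompress a single block without reading the whole file."""
    with open(path, "rb") as f:
        f.seek(8 + 16 * index)                      # skip count, jump to table entry
        offset, size = struct.unpack("<QQ", f.read(16))
        f.seek(offset)
        return zlib.decompress(f.read(size))
```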
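For the lazy-update multigrid abstract, a schematic of the update policy: reuse the cached hierarchy and refresh only values when the sparsity pattern is unchanged, and rebuild otherwise. The toy two-level pairwise-aggregation coarsening stands in for a real AMG setup, which is where the amortizable cost lives.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def pairwise_prolongator(n):
    """Piecewise-constant interpolation over aggregated pairs (toy coarsening)."""
    rows = np.arange(n)
    return sp.csr_matrix((np.ones(n), (rows, rows // 2)), shape=(n, (n + 1) // 2))

class LazyTwoLevel:
    """Caches a two-level hierarchy for a CSR matrix; rebuilds on pattern change."""

    def __init__(self, A):
        self._full_setup(A)

    def _full_setup(self, A):
        self.pattern = (A.indptr.copy(), A.indices.copy())
        self.P = pairwise_prolongator(A.shape[0])    # expensive step in real AMG
        self.A = A
        self.Ac = (self.P.T @ A @ self.P).tocsc()    # Galerkin coarse operator

    def update(self, A):
        same_pattern = (np.array_equal(A.indptr, self.pattern[0])
                        and np.array_equal(A.indices, self.pattern[1]))
        if same_pattern:
            self.A = A                               # lazy path: keep P, refresh values
            self.Ac = (self.P.T @ A @ self.P).tocsc()
        else:
            self._full_setup(A)                      # structure changed: full rebuild

    def apply(self, r):
        """One two-level pass: Jacobi pre-smooth plus coarse-grid correction."""
        x = r / self.A.diagonal()                    # Jacobi sweep
        rc = self.P.T @ (r - self.A @ x)             # restrict the residual
        return x + self.P @ spsolve(self.Ac, rc)     # prolongate coarse correction
```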
IdPrism: Rapid Analysis of Forensic DNA Samples Using MPS SNP Profiles
Darrell O. Ricke, James Watkins, Philip Fremont-Smith, Adam Michaleas (MIT-LL)
Massively parallel sequencing (MPS) of large single nucleotide polymorphism (SNP) panels enables identification, analysis of complex DNA mixture samples, and extended kinship predictions. Computational challenges related to SNP allele calling, probability of random man not excluded (RMNE) calculations, and comparison of both reference and complex mixture samples against tens of millions of reference profiles were encountered and resolved when scaling up from thousands to tens of thousands of SNP loci. An MPS SNP analysis pipeline is described for rapid analysis of forensic deoxyribonucleic acid (DNA) samples across thousands to tens of thousands of SNP loci against tens of millions of reference profiles. This pipeline is part of the MIT Lincoln Laboratory (MITLL) IdPrism advanced DNA forensic system. (A generic comparison sketch appears below.)

A data-driven framework for uncertainty quantification of a fluidized bed
V M Krushnarao Kotteda, Anitha Kommu, Vinod Kumar (Univ. Texas El Paso)
We carried out a nondeterministic analysis of flow in a fluidized bed. The flow is simulated with MFiX, the National Energy Technology Laboratory's open-source multiphase fluid dynamics suite, which does not itself provide tools for uncertainty quantification. We therefore developed a C++ wrapper that integrates Dakota, an uncertainty quantification toolkit developed at Sandia National Laboratories, with MFiX. The wrapper exchanges uncertain input parameters and key output parameters between Dakota and MFiX. Because it is not feasible to obtain reliable statistics with MFiX integrated directly into Dakota (Dakota-MFiX), a data-driven framework is also developed. Data generated from Dakota-MFiX simulations, using Latin Hypercube sampling with a sample size of 500, is used to train a machine learning algorithm. The trained and tested deep neural network is then integrated with Dakota via the wrapper to obtain low-order statistics of the bed height and the pressure drop across the bed. (A sampling-and-surrogate sketch appears below.)
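The IdPrism pipeline's internals are not given in the abstract; the sketch below shows a generic bit-packed encoding with a vectorized popcount-style comparison, as one way a query profile can be screened against millions of reference profiles quickly. The encoding and metric are assumptions for illustration, not MITLL's method.

```python
import numpy as np

def pack_profiles(calls):
    """Pack an (n_profiles, n_loci) 0/1 allele-presence matrix into bits."""
    return np.packbits(calls.astype(np.uint8), axis=1)

def shared_allele_counts(query_bits, reference_bits):
    """Count loci where the query and each reference share a called allele.

    A bitwise AND plus a per-row bit count screens many references with a
    handful of vectorized operations.
    """
    both = np.bitwise_and(reference_bits, query_bits)   # broadcasts over rows
    return np.unpackbits(both, axis=1).sum(axis=1)

# Toy usage: 4 references x 16 loci, one query (random stand-in data).
rng = np.random.default_rng(0)
refs = rng.integers(0, 2, size=(4, 16))
query = rng.integers(0, 2, size=(1, 16))
counts = shared_allele_counts(pack_profiles(query), pack_profiles(refs))
print(counts)   # shared-locus count against each reference
```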
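A sketch of the sampling-plus-surrogate workflow the fluidized-bed abstract describes, with SciPy's Latin Hypercube sampler and a small scikit-learn network standing in for Dakota and the authors' deep neural network; the input names, ranges, and the synthetic response are invented for illustration, since each real sample would come from an MFiX run.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.neural_network import MLPRegressor

# Latin Hypercube design of size 500 over two uncertain inputs (hypothetical:
# particle diameter [m] and inlet gas velocity [m/s]).
sampler = qmc.LatinHypercube(d=2, seed=0)
X = qmc.scale(sampler.random(n=500), [1e-4, 0.2], [5e-4, 1.0])

# Synthetic stand-in for the MFiX response (bed height); a real study would
# run the solver for each sample instead of evaluating this function.
y = (0.3 + 50.0 * X[:, 0] + 0.05 * X[:, 1]
     + 0.01 * np.random.default_rng(0).standard_normal(500))

# Train a small neural-network surrogate, then estimate low-order statistics
# from cheap surrogate evaluations on a much larger design.
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, y)
Xmc = qmc.scale(qmc.LatinHypercube(d=2, seed=1).random(n=10000),
                [1e-4, 0.2], [5e-4, 1.0])
pred = surrogate.predict(Xmc)
print(f"bed height mean={pred.mean():.3f}, std={pred.std():.3f}")
```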