Enabling Very-Large Scale Earthquake Simulations on Parallel Machines
Yifeng Cui, Reagan W. Moore, Kim B. Olsen, Amit Chourasia, Philip J. Maechling, Jean B. Minster, Steven M. Day, Yuanfang Hu, Jing Zhu, Amitava Majumdar, & Thomas H. Jordan
Published 2007, SCEC Contribution #1115
The Southern California Earthquake Center initiated a major large-scale earthquake simulation effort called TeraShake. The simulations propagated seismic waves across a 600x300x80 km domain at 200 meter resolution, making them some of the largest and most detailed earthquake simulations of the southern San Andreas fault. The output from a single simulation may be as large as 47 terabytes of data spread across 400,000 files. Executing these large simulations requires high levels of expertise and resource coordination. We describe how we performed single-processor optimization of the application, optimization of its I/O handling, and optimization of execution initialization. We also examine the challenges presented by run-time data archive management and visualization. The improvements made to the application as it was recently scaled up to 40K BlueGene processors have created a community code that can be used by the wider SCEC community to perform large-scale earthquake simulations.
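As a rough illustration (ours, not from the paper), the quoted domain and resolution imply a mesh of about 1.8 billion grid points, which makes the multi-terabyte output volume plausible. The sketch below assumes a uniform grid and single-precision values; all variable names are ours.

```python
# Back-of-envelope check of the TeraShake problem size quoted above.
# Assumptions (ours, not from the paper): a uniform grid and
# 4-byte (single-precision) floats per stored value.

domain_km = (600, 300, 80)   # simulation domain extent, km
spacing_m = 200              # grid resolution, m

# Grid points along each axis: domain extent / grid spacing
dims = [int(d * 1000 / spacing_m) for d in domain_km]  # [3000, 1500, 400]
n_points = dims[0] * dims[1] * dims[2]                 # ~1.8 billion points

bytes_per_snapshot = n_points * 4  # one single-precision value per point
print(f"grid: {dims[0]} x {dims[1]} x {dims[2]} = {n_points:,} points")
print(f"one variable, one time step: ~{bytes_per_snapshot / 1e9:.1f} GB")
# Thousands of output time steps across several wavefield components
# bring a single run into the tens-of-terabytes range cited above.
```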
Key Words
parallel computing, scalability, earthquake simulation, data management, visualization, TeraShake
Citation
Cui, Y., Moore, R. W., Olsen, K. B., Chourasia, A., Maechling, P. J., Minster, J. B., Day, S. M., Hu, Y., Zhu, J., Majumdar, A., & Jordan, T. H. (2007). Enabling Very-Large Scale Earthquake Simulations on Parallel Machines. Advancing Science and Society through Computation: Lecture Notes in Computer Science series, 46-53. doi: 10.1007/978-3-540-72584-8.