TY - GEN
T1 - Scalable microstructure reconstruction with multi-scale pattern preservation
AU - Cang, Ruijin
AU - Vipradas, Aditya
AU - Ren, Yi
N1 - Funding Information:
This work is partially supported by NSF CMMI under grant No. 1651147. R. C. and Y. R. thank the startup funding from the Arizona State University. All source codes and datasets are available at https://github.com/DesignInformaticsLab/Material-Design.
Publisher Copyright:
© 2017 ASME.
PY - 2017
Y1 - 2017
N2 - A key challenge in computational material design is to optimize for particular material properties by searching an often high-dimensional design space of microstructures. A tractable approach to this optimization task is to identify an encoder that maps from microstructures, which are 2D or 3D images, to a lower-dimensional feature space, and a decoder that generates new microstructures from samples in the feature space. This two-way mapping has been achieved through feature learning, as common features often exist in microstructures from the same material system. Yet existing approaches limit the size of the generated images to that of the training samples, making them less applicable to designing microstructures at an arbitrary scale. This paper proposes a hybrid model that learns both common features and their spatial distributions. We show through various material systems that, unlike existing reconstruction methods, our method can generate new microstructure samples of arbitrary sizes that are both visually and statistically close to the training samples while preserving local microstructure patterns.
AB - A key challenge in computational material design is to optimize for particular material properties by searching an often high-dimensional design space of microstructures. A tractable approach to this optimization task is to identify an encoder that maps from microstructures, which are 2D or 3D images, to a lower-dimensional feature space, and a decoder that generates new microstructures from samples in the feature space. This two-way mapping has been achieved through feature learning, as common features often exist in microstructures from the same material system. Yet existing approaches limit the size of the generated images to that of the training samples, making them less applicable to designing microstructures at an arbitrary scale. This paper proposes a hybrid model that learns both common features and their spatial distributions. We show through various material systems that, unlike existing reconstruction methods, our method can generate new microstructure samples of arbitrary sizes that are both visually and statistically close to the training samples while preserving local microstructure patterns.
UR - http://www.scopus.com/inward/record.url?scp=85034649760&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85034649760&partnerID=8YFLogxK
U2 - 10.1115/DETC2017-68286
DO - 10.1115/DETC2017-68286
M3 - Conference contribution
AN - SCOPUS:85034649760
T3 - Proceedings of the ASME Design Engineering Technical Conference
BT - 43rd Design Automation Conference
PB - American Society of Mechanical Engineers (ASME)
T2 - ASME 2017 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, IDETC/CIE 2017
Y2 - 6 August 2017 through 9 August 2017
ER -