Abstract:
Flow field reconstruction is of great value in engineering applications such as super-resolution reconstruction and the inversion of experimental fluid-mechanics measurements. The existing training paradigms for flow field reconstruction include supervised learning and self-supervised learning. However, when these paradigms are applied to large-scale three-dimensional (3D) flow fields, the size of the target flow field is limited by hardware memory capacity, making it difficult to pre-train on the whole flow field efficiently. To address this bottleneck, this study proposes a Transformer-based two-dimensional (2D) local masked self-supervised learning method, taking the 3D flow field around a circular cylinder as a case study. By performing masked self-supervised pre-training on 2D slices of the 3D flow field, the model learns the underlying physical laws and spatial correlations of the 3D flow field and acquires zero-shot generalization capability, so that it can be applied directly to the downstream task of global 3D flow field reconstruction. The results show that the proposed method achieves global reconstruction of the 3D flow field using only 4% of the local 2D slice data for pre-training, with a mean relative error of 6.62% on the test set. In addition, multi-dimensional comparisons with supervised learning and global self-supervised learning show that the proposed method significantly reduces memory consumption and training time. The mean relative errors of the supervised and global self-supervised methods are 8.87% and 8.45%, respectively, confirming the higher reconstruction accuracy of the proposed method. The proposed approach offers a new route to efficient, low-resource AI modeling of 3D flow fields.
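The pre-training stage summarized above can be illustrated with a minimal sketch of masked self-supervised learning on local 2D slices. This is only an assumed PyTorch-style masked-autoencoder setup; the slice size, patch size, masking ratio, class name, and network dimensions below are illustrative assumptions, not the configuration reported in this work.

    # Minimal sketch (illustrative assumptions, not the paper's actual model).
    import torch
    import torch.nn as nn

    class SliceMaskedAutoencoder(nn.Module):
        """Masks random patches of a 2D flow-field slice and reconstructs them
        with a small Transformer encoder (MAE-style objective)."""

        def __init__(self, slice_size=64, patch=8, dim=128, depth=4, heads=4, mask_ratio=0.75):
            super().__init__()
            self.patch, self.mask_ratio = patch, mask_ratio
            self.n_patches = (slice_size // patch) ** 2
            self.embed = nn.Linear(patch * patch, dim)            # patch -> token
            self.pos = nn.Parameter(torch.zeros(1, self.n_patches, dim))
            self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
            layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, depth)
            self.head = nn.Linear(dim, patch * patch)              # token -> patch values

        def patchify(self, x):                                     # x: (B, H, W) slice of one flow variable
            B, H, W = x.shape
            p = self.patch
            x = x.reshape(B, H // p, p, W // p, p).permute(0, 1, 3, 2, 4)
            return x.reshape(B, -1, p * p)                         # (B, n_patches, p*p)

        def forward(self, x):
            patches = self.patchify(x)
            tokens = self.embed(patches) + self.pos
            # Randomly replace a fraction of tokens with a learned mask token.
            mask = torch.rand(tokens.shape[:2], device=x.device) < self.mask_ratio
            tokens = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(tokens), tokens)
            pred = self.head(self.encoder(tokens))
            # Reconstruction loss is computed only on the masked patches.
            return ((pred - patches) ** 2)[mask].mean()

    model = SliceMaskedAutoencoder()
    slices = torch.randn(16, 64, 64)      # a batch of local 2D slices cut from the 3D field
    loss = model(slices)
    loss.backward()

Computing the loss only on masked patches forces the encoder to infer the missing flow structure from surrounding context, which is the property that a local 2D pre-training stage of this kind exploits for downstream global reconstruction.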