Seismic velocity inversion plays a vital role in many applied seismology workflows. A series of deep learning methods have been developed that rely purely on manually provided labels for supervision; however, their performance depends heavily on large training data sets with corresponding velocity models. Because no physical laws are used in the training phase, it is usually challenging to generalize trained neural networks to a new data domain. To mitigate these issues, we embed a seismic forward-modeling step at the end of the network to remap the inversion result back to seismic data, and we train the neural network through a self-supervised loss, i.e., the misfit between the network input and output. As a result, we eliminate the need for many labeled velocity models, and physical laws are introduced when back-propagating gradients through the seismic forward-modeling step. We verify the effectiveness of our approach through comprehensive experiments on synthetic data sets, where self-supervised learning outperforms a fully supervised approach that has access to much more labeled data. The advantage is even more pronounced on a new data domain whose velocity models contain faults and more geologic layers. Finally, for unknown and more complex data, we develop a network-constrained full-waveform inversion (FWI) method. It refines the network's initial prediction by iteratively optimizing the network parameters rather than the velocity model itself, as in conventional FWI, and shows clear advantages in interface and velocity accuracy. With these measures (self-supervised learning and network-constrained FWI), our physics-driven self-supervised learning system mitigates the dependence on large labeled data sets, the absence of physical laws, and the difficulty of adapting to new data domains.
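The core idea of the self-supervised loss can be illustrated with a minimal sketch. This is not the paper's implementation: here a one-parameter "network" `w` maps a datum `d` to a model `m`, and a known linear forward operator `F(m) = a*m` (standing in for seismic forward modeling) remaps the model to data. The loss is the misfit between the re-modeled data and the input data, so no velocity labels are needed, and the physics of `F` enters through the chain rule when the gradient with respect to `w` is computed.

```python
def train_self_supervised(data, a, lr=0.01, steps=2000):
    """Toy self-supervised inversion: fit network parameter w so that
    F(network(d)) = a * (w * d) reproduces every input datum d."""
    w = 0.0  # network parameter, initialized far from the answer 1/a
    for _ in range(steps):
        grad = 0.0
        for d in data:
            m = w * d          # "network": data -> model
            d_pred = a * m     # forward modeling: model -> data
            residual = d_pred - d  # self-supervised misfit term
            # chain rule through the forward operator: dL/dw = 2*r*a*d
            grad += 2.0 * residual * a * d
        w -= lr * grad / len(data)
    return w

# With a = 3, the network should learn the inverse of F, i.e. w ≈ 1/3.
w = train_self_supervised([0.5, 1.0, 2.0], a=3.0)
```

The same structure carries over to the full method: replace `w` with the weights of a deep network, `F` with a wave-equation solver, and the analytic gradient with automatic differentiation back-propagated through the forward-modeling step.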