(a) The main structure of the self-supervised pretraining model, comprising three parts: a token embedding module at the front, followed by a hierarchical encoder–decoder, and a point reconstruction module.
The International Conference on Optimization and Learning (OLA2024), organized by RIT (Croatia) and the University of Lille (France), focuses on the future challenges of optimization and learning ...