TY - JOUR
T1 - Fast Gradient Computation for Gromov-Wasserstein Distance
AU - Zhang, Wei
AU - Wang, Zihao
AU - Fan, Jie
AU - Wu, Hao
AU - Zhang, Yong
JO - Journal of Machine Learning
VL - 3
IS - 3
SP - 282
EP - 299
PY - 2024
DA - 2024/09
DO - 10.4208/jml.240416
UR - https://global-sci.org/intro/article_detail/jml/23417.html
KW - Optimal transport
KW - Gromov-Wasserstein distance
KW - Fast gradient computation algorithm
KW - Fast algorithm
AB -
The Gromov-Wasserstein distance is a notable extension of optimal transport. In contrast to the classic Wasserstein distance, it solves a quadratic assignment problem that minimizes the pairwise distance distortion induced by transporting one distribution onto another, and can therefore compare distributions supported on different spaces. These properties make the Gromov-Wasserstein distance widely applicable in fields such as computer graphics and machine learning. However, computing the Gromov-Wasserstein distance and its transport plan is expensive. The well-known entropic Gromov-Wasserstein approach has cubic complexity, since matrix multiplications must be repeated to compute the gradient of the Gromov-Wasserstein loss; this is the key bottleneck of the method. Existing acceleration methods rely on sampling and approximation, which leads to low accuracy or incomplete transport plans. In this work, we propose a novel method that accelerates exact gradient computation via dynamic programming, reducing the complexity from cubic to quadratic. The original computational bottleneck is thus removed, and the entropic solution can be obtained in quadratic total time, which is nearly optimal. Furthermore, the method extends easily to several variants. Extensive experiments validate the efficiency and effectiveness of our method.
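For context, the sketch below (not part of the published abstract) spells out the entropic Gromov-Wasserstein objective and the gradient term whose repeated evaluation the abstract identifies as the cubic bottleneck. The symbols $C$, $\bar{C}$, $T$, $p$, $q$, $\varepsilon$ are introduced here for illustration, following the common squared-loss formulation of Peyré, Cuturi, and Solomon (2016); the paper's own dynamic-programming scheme is not reproduced.

\[
\mathrm{GW}_{\varepsilon}(C,\bar{C},p,q)
  \;=\; \min_{T \in \Pi(p,q)}\;
  \underbrace{\sum_{i,k=1}^{n} \sum_{j,l=1}^{m}
  \bigl(C_{ik}-\bar{C}_{jl}\bigr)^{2}\, T_{ij}\, T_{kl}}_{=\,\mathcal{E}(T)}
  \;-\; \varepsilon H(T),
\]
where $C\in\mathbb{R}^{n\times n}$ and $\bar{C}\in\mathbb{R}^{m\times m}$ are symmetric pairwise-distance matrices, $\Pi(p,q)=\{T\ge 0:\ T\mathbf{1}_{m}=p,\ T^{\top}\mathbf{1}_{n}=q\}$, and $H(T)=-\sum_{i,j}T_{ij}(\log T_{ij}-1)$. Using the marginal constraints, the gradient of the quadratic term admits the closed form
\[
\nabla\mathcal{E}(T)
  \;=\; 2\Bigl((C^{\circ 2}p)\,\mathbf{1}_{m}^{\top}
  + \mathbf{1}_{n}\,(\bar{C}^{\circ 2}q)^{\top}
  - 2\,C\,T\,\bar{C}^{\top}\Bigr),
\]
with $C^{\circ 2}$ denoting the entrywise square. The product $C\,T\,\bar{C}^{\top}$, recomputed at every Sinkhorn-type iteration, takes $O(n^{2}m+nm^{2})$ operations, i.e. cubic time when $n\approx m$; this is precisely the step that the proposed dynamic-programming gradient computation reduces to quadratic time.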