Only the parallel sparse matrix code was updated. This is used by e.g. LSCM and ABF unwrap, and performance seems about the same or better.

Parallel GEMM (dense matrix-matrix multiplication) is used by libmv, for example in libmv_keyframe_selection_test for a 54 x 54 matrix. However it appears to harm performance there: removing parallelization makes that test run 5x faster on an Apple M3 Max.

There has been no new Eigen release since 2021, but there is active development in master, which includes support for a C++ thread pool for GEMM. We could upgrade, but the algorithm remains the same, and looking at the implementation it does not seem designed for modern many-core CPUs. Unless the matrix is much larger, there is too much thread synchronization overhead. So enabling that thread pool does not seem useful for us.

Pull Request: https://projects.blender.org/blender/blender/pulls/136865
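A minimal standalone sketch (not Blender or libmv code) of how this could be measured: it times a small dense GEMM, comparable to the 54 x 54 case mentioned above, with Eigen's default OpenMP parallelization and again after restricting Eigen to one thread via Eigen::setNbThreads(). The iteration count and matrix contents are arbitrary assumptions; actual numbers depend on the machine and how Eigen was built.

#include <Eigen/Dense>
#include <chrono>
#include <iostream>

static double time_gemm(const int iterations)
{
  Eigen::MatrixXd a = Eigen::MatrixXd::Random(54, 54);
  Eigen::MatrixXd b = Eigen::MatrixXd::Random(54, 54);
  Eigen::MatrixXd c(54, 54);

  const auto start = std::chrono::steady_clock::now();
  for (int i = 0; i < iterations; i++) {
    /* Dense matrix-matrix product; uses Eigen's parallel GEMM path when enabled. */
    c.noalias() = a * b;
  }
  const auto end = std::chrono::steady_clock::now();
  return std::chrono::duration<double>(end - start).count();
}

int main()
{
  const int iterations = 100000; /* Arbitrary; chosen only to get a measurable runtime. */

  /* Default: when built with OpenMP, Eigen parallelizes GEMM across available cores. */
  std::cout << "parallel GEMM:        " << time_gemm(iterations) << " s\n";

  /* Restrict Eigen's internal parallelization to a single thread. */
  Eigen::setNbThreads(1);
  std::cout << "single-threaded GEMM: " << time_gemm(iterations) << " s\n";

  return 0;
}

For matrices this small, the single-threaded run is expected to win, which is the synchronization-overhead effect described above.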
Project: Eigen
Description: Template library for linear algebra: matrices, vectors, numerical solvers, and related algorithms
URL: http://eigen.tuxfamily.org/index.php?title=Main_Page
License: SPDX:MPL-2.0
Upstream version: 3.4.0
Local modifications: None
Copyright: Copyright (C) 2008-2010 Gael Guennebaud <gael.guennebaud@inria.fr>.
           Copyright (C) 2006-2008 Benoit Jacob <jacob.benoit.1@gmail.com>