
Add a new algorithm for the generalized eigenvalue problem #1235

Open
vlovero wants to merge 1 commit into Reference-LAPACK:master from vlovero:master

Conversation

@vlovero

@vlovero vlovero commented Apr 11, 2026

Currently, the algorithms used for the generalized eigenvalue problem have a few bottlenecks:

  1. The Hessenberg-triangular reduction step does not scale very well, and Givens rotations must be accumulated, which adds a lot of redundant work.
  2. The QZ algorithm can experience drastic slowdowns in the presence of infinite eigenvalues.

The first issue can be addressed using a different HT reduction approach from Steel et al. The second issue can be solved by identifying and deflating all of the infinite eigenvalues before the HT step.

These two bottlenecks are addressed in the xGGEV4 routines in this fork leading to 3-5x speedups when computing eigenvalues compared to xGGEV3.
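To see why infinite eigenvalues matter here: the generalized problem A v = λ B v has an infinite eigenvalue whenever B is singular (the homogeneous pair (α, β) has β = 0). A minimal sketch via scipy, which wraps the existing LAPACK driver, not the xGGEV4 routine from this PR:

```python
# Sketch: a singular B produces an infinite generalized eigenvalue.
# The pencil below has one finite eigenvalue (1.0) and one infinite one.
import numpy as np
from scipy.linalg import eig

A = np.array([[1.0, 0.0],
              [0.0, 2.0]])
B = np.array([[1.0, 0.0],
              [0.0, 0.0]])   # singular: det(B) = 0

w = eig(A, B, right=False)   # eigenvalues only
finite = w[np.isfinite(w)]   # filter out the infinite one
```

QZ has to chase such eigenvalues through the iteration unless they are deflated up front, which is where the slowdown (and the proposed fix) comes from.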

@thijssteel
Collaborator

This is great work!

Here are some initial comments; I'll try to give your code a more detailed read-through when I find some time.

  1. I think QZ really has three bottlenecks: the two you pointed out, plus the eigenvector calculation, which can also be problematic. It amazes me that you still get such speedups over xGGEV3 without modifying that routine.
  2. IterHT, while really fast, is not guaranteed to converge even if infinite eigenvalues are deflated (or at least, I'm not skilled enough to prove that it does.) I think that at the very least, you would need to detect this kind of convergence failure and report the error to the user. From a quick look at the PR, it doesn't seem like that is the case here? Maybe we can switch to the slower variant if that occurs?
  3. I never really investigated the impact of IterHT on the accuracy of the eigenvalues. I was still operating under "if the backward error is small, that is all the user can ask from us" at the time. It wouldn't surprise me if some users prefer the slower version if it gives more accurate results. In that sense, making a new routine in addition to xGGEV3 makes sense.
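The failure-detection suggestion in point 2 amounts to a try-fast, fall-back-to-safe pattern. A hypothetical sketch (`fast_reduce` and `safe_reduce` are stand-ins, not LAPACK routines; an actual Fortran implementation would signal failure through an INFO code rather than an exception):

```python
# Hypothetical sketch of the suggested fallback strategy: attempt the
# fast (but not provably convergent) HT reduction first, and rerun
# with the slow-but-safe variant if it reports a convergence failure.

class ConvergenceError(Exception):
    """Raised when the iterative HT reduction fails to converge."""

def reduce_with_fallback(A, B, fast_reduce, safe_reduce):
    try:
        return fast_reduce(A, B)     # e.g. the iterative HT reduction
    except ConvergenceError:
        return safe_reduce(A, B)     # e.g. the classical Givens-based HT

# Usage with dummy stand-ins:
def always_fails(A, B):
    raise ConvergenceError

def always_works(A, B):
    return "reduced"

result = reduce_with_fallback(None, None, always_fails, always_works)
```

The cost of the fallback is one wasted fast attempt in the failure case, which is acceptable if failures are rare.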

@vlovero
Author

vlovero commented Apr 11, 2026

I agree with the eigenvector computation and that's something I'm also working on improving! I've also been trying to prove convergence but haven't managed to come up with anything.

I still need to do a lot more testing for the accuracy, but in my initial tests, I've found that the xGGEV4 routine gives more accurate results most of the time, but can still struggle on a few edge cases, such as when the norms of A and B have very different magnitudes.
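One standard way to illustrate the norm-imbalance issue: scale both matrices to unit norm, solve the balanced pencil, and rescale the eigenvalues afterwards, since the eigenvalues of (A/a, B/b) are those of (A, B) times b/a. This is only an illustration of the edge case, not the scaling xGGEV4 itself performs:

```python
# Hedged sketch: when ||A|| and ||B|| differ by many orders of
# magnitude, one can solve a norm-balanced pencil and undo the
# scaling on the eigenvalues afterwards.
import numpy as np
from scipy.linalg import eig

A = np.array([[1e8, 0.0], [0.0, 2e8]])    # large norm
B = np.array([[1e-4, 0.0], [0.0, 1e-4]])  # tiny norm

na, nb = np.linalg.norm(A), np.linalg.norm(B)
mu = eig(A / na, B / nb, right=False)     # balanced pencil
lam = mu * (na / nb)                      # undo the scaling

direct = eig(A, B, right=False)           # unbalanced solve, for comparison
```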

I'm happy to run the method through any tests!

@thijssteel
Collaborator

Now that you say it like that, the improved eigenvalue accuracy actually makes a lot of sense. Based on a gut feeling, I think that might be because you have cases with a lot of infinite eigenvalues. And deflating those beforehand can very significantly improve the accuracy of all eigenvalues.

It's something that has been known for a while, but I guess that before this PR, there weren't a lot of publicly available tools to deflate those eigenvalues that aren't obvious from the nonzero structure of the matrices.
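A small sketch of the distinction made above: a zero row in B makes an infinite eigenvalue obvious from the nonzero structure, but B can also be singular with no zero entries at all, so the infinite eigenvalue is invisible until the pencil is actually factored:

```python
# Sketch: B below is rank-deficient (row 2 = 2 * row 1) even though
# every entry is nonzero, so the pencil (A, B) has a "hidden" infinite
# eigenvalue alongside the finite one (0.2).
import numpy as np
from scipy.linalg import eig

A = np.eye(2)
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # rank 1, yet no zero entries

w = eig(A, B, right=False)
# Depending on rounding, the hidden eigenvalue comes back as inf
# (beta exactly zero) or as an extremely large finite number.
hidden_infinite = bool(np.any(~np.isfinite(w)) or np.max(np.abs(w)) > 1e12)
```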
