Feature request: singularLocus efficiency #1899

8d1h opened this issue Jan 30, 2021 · 17 comments

@8d1h
Contributor

8d1h commented Jan 30, 2021

The method singularLocus in M2 is not very efficient. I recently looked at the file bir.m2, written by Giovanni Staglianò for his article, which offers the option of using Singular to carry out this computation. For example, here is a test case.

```
R = ZZ/11[x_0..x_3];
I = ideal apply(10, i->random_3 R);
elapsedTime isSmooth(I, Use=>Singular);   -- 0.0754029 seconds elapsed
elapsedTime isSmooth(I, Use=>M2);         -- 3.26353 seconds elapsed
```

Is it possible to make this an official feature, which would make singularLocus much more usable?
Update: The above example is a bit degenerate. Here is a better one.

```
I = Grassmannian(1,4,CoefficientRing=>ZZ/101);
R = ring I;
J = I + ideal(random_1 R, random_2 R);
elapsedTime isSmooth(J, Use=>Singular);  -- 2.72101 seconds elapsed
elapsedTime isSmooth(J, Use=>M2);        -- 18.1278 seconds elapsed
```
@DanGrayson
Member

No, we can't require Singular as a prerequisite for Macaulay2, but we could try to devise a faster algorithm.

@8d1h
Contributor Author

8d1h commented Jan 30, 2021

That's very reasonable.

Actually, I looked at the code for singularLocus; the most compute-heavy parts are the call to minors and the construction of a quotient ring. Both seem to be rather basic methods...

Also, I noticed that changing the strategy of minors to Cofactor seems to improve performance for this particular case (and several others). Though I'm not sure that this is always the case.

```
singularLocus(Ring) := QuotientRing => (R) -> (
     f := presentation R;
     A := ring f;
     c := codim(R,Generic=>true);
     A / trim (ideal f + minors(c, jacobian f, Strategy=>Cofactor)))
```
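For what it's worth, the two strategies can be timed head-to-head on the Grassmannian example above; a minimal sketch (timings will of course vary by machine):

```
-- compare the two determinant strategies on the same Jacobian matrix
I = Grassmannian(1,4,CoefficientRing=>ZZ/101);
R = ring I;
J = I + ideal(random_1 R, random_2 R);
c = codim J;
M = jacobian J;
elapsedTime minors(c, M, Strategy=>Bareiss);   -- the default strategy
elapsedTime minors(c, M, Strategy=>Cofactor);  -- often faster here
```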

@DanGrayson
Member

Notice also that forming a quotient ring involves computing a Groebner basis, which is perhaps not needed if all you want to do is to determine whether the Jacobian ideal is the unit ideal, especially in the homogeneous case. I'm not sure whether the code for I == 1 is optimized in that sense yet.
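As a rough sketch of that idea (isSmoothAffine is a hypothetical name, not an existing M2 function; this skips the quotient ring entirely and only asks whether the ideal generated by the Jacobian minors is the unit ideal, which is the relevant test in the affine case):

```
-- hypothetical sketch: decide smoothness without forming a quotient ring,
-- by asking only whether the Jacobian ideal is the unit ideal
isSmoothAffine = I -> (
    c := codim I;
    J := I + minors(c, jacobian I);
    J == 1)  -- smooth (in the affine case) iff J is the unit ideal
```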

@DanGrayson
Member

Anyway, if you come up with changes that are improvements, feel free to submit a pull request.

@8d1h
Contributor Author

8d1h commented Jan 31, 2021

Yes, I'm aware that forming a quotient ring computes a Gröbner basis. In fact this seems to be the bottleneck: quotient ring, dim, degree, etc., almost all the important methods use a Gröbner basis computation, and Singular appears to be far more efficient at it. Maybe the Gröbner basis engine needs to be upgraded entirely...

@mikestillman
Member

We are working on it :). The default GB algorithm being used here is not the best possible, unfortunately.

The routine is still often much less efficient than it needs to be. In particular, I believe it is computing the saturation of the singular locus, which is a computation-heavy step compared to the GB itself. On the other hand, @kschwede (and co-authors) has a Macaulay2 package called FastLinAlg which is designed to be smarter about singular loci. I don't think it is (yet) as good as the Singular code for smoothness, but it is a lot better (I believe) than the default singular locus routine.

@8d1h
Contributor Author

8d1h commented Jan 31, 2021

That would be great, thanks! I will take a look at FastLinAlg.

@kschwede
Contributor

kschwede commented Jan 31, 2021 via email

@8d1h
Contributor Author

8d1h commented Jan 31, 2021

Thanks for the comments! That's definitely an interesting example! Is this algorithm always faster than directly computing the singular locus, or are there certain use cases where it works better?

Also, I figured out that for the above example (or when we have a bunch of fat points) we can improve the performance of minors by using the Cofactor strategy. Even with that factor removed, Singular is still faster at computing the dimension of the singular locus (and at computing dimension and degree in general).

@kschwede
Contributor

kschwede commented Feb 1, 2021

In my experience, Bareiss is better when working with single-variable rings, or with homogeneous things in 2 variables. Otherwise Cofactor is typically better, unless the matrix has a lot of symmetry / is sparse.

Anyways, this algorithm does not compute the singular locus; it finds an ideal inside the Jacobian ideal. Many times that's good enough (it certainly is for verifying that something is nonsingular).

So what this algorithm does is try to compute only some of the minors of the Jacobian, check whether the partially computed singular locus satisfies whatever condition you care about (has codimension > N, for instance), and if not, repeat (typically computing more and more minors between each codim check). How it chooses which minors depends on the Strategy option.

The default is some heuristics / randomness. Another strategy (StrategyPoints) is to find some rational / geometric points in the locus where the partial minors (plus the original ideal) vanish, evaluate the matrix at those points, and then find some interesting minors of the evaluated matrix. In some cases finding points is slow, but in this example, mixing some points in with the heuristics seems to work quite well.

The algorithm also typically gives up long before all minors have been computed (in which case the function returns null), but you can also tell it not to give up if you'd like (make sure your strategy has some random component, as the heuristics can enter a loop). It does not recompute minors it has already computed.

In particular, this algorithm is massively (many orders of magnitude) faster than the default when affirmatively checking that something is smooth (or affirmatively verifying something about the codim of the singular locus), but it will typically be worse (although not hugely worse) when the answer is negative.

At some point in the future this should be multi-threaded to get the best of all worlds, but multi-threading was too unstable when I wrote it (I haven't played around with 1.17 yet).
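The loop described above might be sketched, in much-simplified form, as follows (partialMinorsCheck is a made-up name, and this is not the package's actual code; it just adds random c x c minors and certifies smoothness of a projective variety by checking that the running ideal has dimension at most 0, rather than using the heuristics):

```
-- illustrative sketch only: accumulate random c x c minors of the Jacobian
-- until the running ideal certifies that the singular locus is empty
partialMinorsCheck = (I, maxRounds) -> (
    c := codim I;
    M := jacobian I;
    J := I;
    for i from 1 to maxRounds do (
        rows := take(random toList(0 .. numrows M - 1), c);
        cols := take(random toList(0 .. numcols M - 1), c);
        J = J + ideal det submatrix(M, rows, cols);
        -- stop as soon as the partial singular locus is small enough
        if dim J <= 0 then return true);
    null)  -- give up, inconclusive (as the real algorithm does)
```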

@8d1h
Contributor Author

8d1h commented Feb 1, 2021

Wow, thanks a lot! This is really a great explanation!

@DanGrayson
Member

@kschwede -- Which algorithm are you referring to when you say "this algorithm"?

Does it do something like this: sort the rows and columns so that an entry with a minimal lead polynomial is in the upper left-hand corner, then do the same with the remainder of the matrix to put something small in position (2,2), and so on?

@kschwede
Contributor

kschwede commented Feb 1, 2021

@DanGrayson

I mean the heuristic algorithms. Yes, that is roughly what it does.

First, it chooses a new random Lex or GRevLex order.

Second, it chooses the smallest matrix entry (or the entry with the smallest term, depending on which strategy is being used) with respect to that order, eliminates that row and column, and repeats this second step until a submatrix of the right size has been selected.

For the GRevLex order, it also periodically multiplies a previously chosen entry (in a dummy matrix) by something of high degree, to guarantee that entry is not chosen in the next round.

Third, it computes the determinant of the minor we just picked and adds it to the running ideal.
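A rough, much-simplified sketch of that greedy selection (chooseMinorGreedy is a made-up name, and the total degree of each entry stands in for "smallest with respect to the random order"):

```
-- illustrative sketch (not the package's code): greedily pick a c x c
-- submatrix, using total degree of each entry as a crude stand-in for
-- "smallest with respect to a random monomial order"
chooseMinorGreedy = (c, M) -> (
    rows := toList(0 .. numrows M - 1);
    cols := toList(0 .. numcols M - 1);
    sel := for k from 1 to c list (
        cands := flatten for i in rows list for j in cols list (
            if M_(i,j) != 0 then {i,j} else continue);
        if #cands == 0 then return null;  -- no nonzero minor found this way
        best := cands#0;
        scan(cands, ij ->
            if first degree M_(ij#0, ij#1) < first degree M_(best#0, best#1)
            then best = ij);
        rows = delete(best#0, rows);  -- eliminate the chosen row...
        cols = delete(best#1, cols);  -- ...and column, then repeat
        best);
    det submatrix(M, sel / first, sel / last))
```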

@8d1h
Contributor Author

8d1h commented Feb 1, 2021

Eh here is something that must be a bug.

```
I = Grassmannian(1,4,CoefficientRing=>ZZ/3);
R = ring I;
J = I + ideal(random_1 R, random_2 R);
elapsedTime(S = singularLocus J; dim S;) -- 3.39217 seconds elapsed
elapsedTime dim singularLocus J;         -- 26.4575 seconds elapsed
```

@kschwede
Contributor

kschwede commented Feb 1, 2021

That is really strange.

> Eh here is something that must be a bug.
>
> ```
> I = Grassmannian(1,4,CoefficientRing=>ZZ/3);
> R = ring I;
> J = I + ideal(random_1 R, random_2 R);
> elapsedTime(S = singularLocus J; dim S;) -- 3.39217 seconds elapsed
> elapsedTime dim singularLocus J;         -- 26.4575 seconds elapsed
> ```

@DanGrayson
Member

That is strange! But it would be easy to find out what's going on: just set errorDepth to 0 and interrupt it while it's busy with all that junk to see (in the debugger) what it's doing.

@8d1h
Contributor Author

8d1h commented Feb 1, 2021

Well, the bug is really buried very deep and has nothing to do with the singular locus... The problem is with try: it gets slow when the value involved has no name to refer to.

```
I = Grassmannian(1,4,CoefficientRing=>ZZ/3);
R = ring I; J = I + ideal(random_2 R);
time try (singularLocus J+1)         -- used 11.6103 seconds
time try (S = singularLocus J; S+1)  -- used 1.10086 seconds
```

When computing the dimension, at some point there is a check using try, so it slowed down the entire computation...

Edit. I removed the lines in ringmap.m2, so dim singularLocus J no longer runs so slowly. But I don't know how to fix the problem with try...
