Friday, August 26, 2011

Awful Benchmarks

I don't like benchmarking.
The effect of the citation decorator is so subtle that it is hard to build a benchmark whose own overhead does not drown it out.

And benchmarking a decorated pass-function is somehow unsatisfying: the cheaper the computation, the higher the relative overhead of the decorator, so a pass-function is probably the worst case.

Nevertheless, here are some numbers:
  • calling a pass-function:
    100000 loops, best of 3: 110 ns per loop
  • calling the above function after decorating:
    100000 loops, best of 3: 295 ns per loop
  • calling a pass-function, that takes some parameters:
    100000 loops, best of 3: 399 ns per loop
  • calling the above function after decorating:
    100000 loops, best of 3: 796 ns per loop
  • defining a pass-function:
    100000 loops, best of 3: 66.4 ns per loop
  • decorating a pass-function:
    100000 loops, best of 3: 24.4 µs per loop
The numbers look quite good. While decorating is really slow (because inspect.getargspec is), calling is really fast.
The difference is less than twice the cost of a pass-function call (and that is the worst case).
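
To give an idea of where the per-call overhead comes from, here is a minimal sketch of such a timing. It is not the actual sage.citation code: it only assumes a cites-style decorator that records the citable item and forwards the call (the real decorator additionally inspects the signature via inspect.getargspec, which is what makes the decoration step itself slow):

import functools, timeit

used_citations = set()

def cites(item):
    # sketch of a citation decorator: remember `item`, then forward the call
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            used_citations.add(item)        # one set insertion per call ...
            return func(*args, **kwargs)    # ... plus one extra function call
        return wrapper
    return decorator

def plain():
    pass

@cites("libsingular")
def decorated():
    pass

# compare the per-call cost of the plain and the decorated pass-function
print(timeit.timeit(plain, number=100000))
print(timeit.timeit(decorated, number=100000))

The point of the sketch is only that the per-call work is one extra function call plus one set insertion.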

Michael had the idea to benchmark a polynomial addition:
sage: from sage.misc.sage_timeit import sage_timeit
sage: P.<x,y,z>=QQ[]
sage: f = x * y + z
sage: def add(a, b):
....:     return a + b
sage: from sage.citation import cites, citable_items
sage: @cites(citable_items.libsingular)
....: def add_citing(a, b):
....:     return a + b
sage: sage_timeit("add(f, x)", {'add':add, 'f':f, 'x':x},
....:     number=100000, preparse=False)
100000 loops, best of 3: 279 ns per loop
sage: sage_timeit("add(f, x)", {'add':add_citing, 'f':f, 'x':x},
....:     number=100000, preparse=False)
100000 loops, best of 3: 712 ns per loop
Although the call looks significantly slower, the loss of speed is acceptable. With more expensive computations (i.e. more complex polynomials), one will not be able to measure the difference any more.
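
One way to check that claim would be to rerun the same comparison with a bigger input, reusing add and add_citing from above (just a sketch; the polynomial and the loop count are arbitrary choices, and no timings are reproduced here):

sage: f_big = (x + y + z + 1)^10
sage: sage_timeit("add(f_big, f_big)", {'add': add, 'f_big': f_big},
....:     number=1000, preparse=False)
sage: sage_timeit("add(f_big, f_big)", {'add': add_citing, 'f_big': f_big},
....:     number=1000, preparse=False)
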
sage: from sage.misc.sage_timeit import sage_timeit
sage: P.<a,b,c> = PolynomialRing(QQ,3, order='lex')
sage: I = P.ideal([a,a+1])

# Without the example-usage applied (groebner_basis not decorated)
sage: sage_timeit("I.groebner_basis('libsingular:slimgb')",
....:     {"I":I}, number=100000, preparse=False)
100000 loops, best of 3: 2.09 µs per loop
sage: sage_timeit("I.groebner_basis('libsingular:slimgb')",
....:     {"I":I}, number=100000, preparse=False)
100000 loops, best of 3: 2.14 µs per loop

# With the decorator and citations.add applied
sage: sage_timeit("I.groebner_basis('libsingular:slimgb')",
....:     {"I":I}, number=100000, preparse=False)
100000 loops, best of 3: 2.06 µs per loop
sage: sage_timeit("I.groebner_basis('libsingular:slimgb')",
....:     {"I":I}, number=100000, preparse=False)
100000 loops, best of 3: 2.16 µs per loop
As you can see, the measurement error is much larger than the loss of speed. That indicates that the decorator is really fast.
If you are into Gröbner bases (I am not), you probably know that this is a rather trivial computation. Easy computations are bad for benchmarking results (see above).
Still, the results are very good.
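
To quantify that measurement noise, one could repeat the timing a few times and look at the spread between runs (a sketch using Python's standard timeit.repeat in the same session, so I is still defined; if the spread is larger than the gap between the decorated and the undecorated version, the slowdown is below the noise):

import timeit

# run the Groebner basis computation in several batches to estimate the noise
runs = timeit.repeat(lambda: I.groebner_basis('libsingular:slimgb'),
                     number=100000, repeat=5)
print(min(runs))
print(max(runs))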

For detailed information, please take a look at Bitbucket.

Citation Service

Thanks to Burcin Erocal, we now have a document describing the citation module.

Friday, August 12, 2011

Uploading ...

I just uploaded a few patches to sage-trac.

But first things first: Michael and I were at the University of Kaiserslautern from Tuesday through Thursday to visit Burcin Erocal. We had quite a nice time together and did a lot of coding.

The documentation coverage of the files I submitted to sage-trac is now 100%. That was quite tough, and we found way too many bugs during doctesting.


The design changed again, but only a little.

In short:
  • "citable_items" holds the citation information
  • "@cite" is the decorator that marks a function as using a citable item
  • the citations that were actually used are available through "citations"
All of them are available through "sage.citation" (see the short sketch below).
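
A rough usage sketch (the names follow the list above, but the exact call signature of @cite and the way to inspect "citations" are my assumptions, not taken from the module):

sage: from sage.citation import citable_items, cite, citations
sage: @cite(citable_items.libsingular)    # assumed signature: @cite(<citable item>)
....: def my_add(a, b):
....:     return a + b
sage: my_add(3, 4)
7
sage: # after the call, the libsingular item should show up in `citations`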

Detailed information is available in the docstrings.