Hacker News

Maybe. I did not set out to do a thorough benchmark - as I think is clear from the lack of rigour in my reply. Rather I was curious to find out how close Raku would be to Python expecting it to be rather slower. By chance the example picked by the OP - fibonacci - is best written in Raku via the `...` sequence operator and, yes, I would that is using hyper operators and internal concurrency to "cheat". I agree that Raku is not generally faster than Rust, or indeed Python. At this point in its evolution, there is still much work to be done on code optimisation.

When I say "haha" I mean <<that's quite funny, since by chance the code given in the OP is much faster in Raku _in this particular instance_, and that's quite apt since some higher level operators such as `...` are genuinely useful and, since they are core operations, can be more tightly optimised than general code. So actually (in addition to the main thrust that human speed is more important than code speed), it seems that our preconceptions are not always true>>

I am genuinely sorry that you find the Raku gibberish - I suppose I would find Malay gibberish, but that's a reflection on me, not on the inhabitants of Malaysia who use it every day.

Let me try a translation:

The sequence operator is spelled `...`. Here are a couple of basic examples:

  say 1,2,3 ... 10;    #(1,2,3,4,5,6,7,8,9,10) - an arithmetic sequence
  say 1,2,4 ... 16;    #(1,2,4,8,16) - a geometric sequence
  say 1,2,4 ... *;     #(1,2,4,8,16,32...) - lazy list with infinite length
The RHS value limits the last item in the output list. A `*` on the right means infinity (an unbounded, lazy sequence).
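
For readers more at home in Python, here is a rough analogue of those three examples using generators. This is only a sketch of the same idea (a lazily produced sequence with an optional end condition), not how Rakudo actually implements `...`:

```python
from itertools import islice, takewhile

def geometric(first, ratio):
    """Lazy, infinite geometric sequence, like `1,2,4 ... *` in Raku."""
    value = first
    while True:
        yield value
        value *= ratio

print(list(range(1, 11)))                                   # like 1,2,3 ... 10
print(list(takewhile(lambda n: n <= 16, geometric(1, 2))))  # like 1,2,4 ... 16
print(list(islice(geometric(1, 2), 6)))                     # first 6 of 1,2,4 ... *
```

The generator never builds the whole sequence; consumers pull values on demand, which is the essence of the laziness described above.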

You can see that this looks at the list of three items on the LHS and determines the remainder of the sequence from that. If you want to use it with a function, you can use the `*` (the "Whatever" star) to represent a parameter, like this:

  say (1, *+7 ... *)[1..4];    #(8 15 22 29)
You will note that I am using the regular `[]` index syntax with the range `1..4` to request only the items at indices 1 through 4 of the resulting lazy list.
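
In Python terms (again just a sketch, not Raku's mechanism), that is an infinite arithmetic sequence sliced at indices 1 through 4:

```python
from itertools import count, islice

# `(1, *+7 ... *)` as an infinite arithmetic sequence: 1, 8, 15, 22, 29, ...
seq = count(1, 7)
print(list(islice(seq, 1, 5)))   # indices 1..4 -> [8, 15, 22, 29]
```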

So, the final bit of the puzzle is when I have a dyadic function that takes two parameters (i.e. the previous value and the one before it), like this:

  say (0,1, *+* ... *)[10,20,30,40];
This is the fibonacci series.
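
A hedged Python equivalent of that one-liner, with a generator standing in for the `*+*` lambda:

```python
from itertools import islice

def fib():
    """Lazy fibonacci sequence, like `0, 1, *+* ... *` in Raku."""
    m, n = 0, 1
    while True:
        yield m
        m, n = n, m + n

wanted = (10, 20, 30, 40)
prefix = list(islice(fib(), max(wanted) + 1))   # materialise indices 0..40 once
print([prefix[i] for i in wanted])              # [55, 6765, 832040, 102334155]
```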

---

One last word is that the Raku `MAIN()` function is a very neat way to deploy a Raku script on the command line.

  sub MAIN(*@n) {
    say "$_ " ~ (0, 1, *+* ... *)[$_] for @n
  }
So the `*@n` parameter "slurps" all the indices provided on the command line into the array `@n`, which can then be used to index the sequence.

PS. I have tweaked this a bit to include printing the index and the result.

---

Even if this helps to de-gibberish the code, I understand that you may not like it. Personally I find it quite clean and easier to read than the Python code.



> I would [guess?] that [Raku(do)] is using hyper operators and internal concurrency to "cheat".

You missed a word. I've guessed it was the word "guess". :)

I haven't checked Rakudo's code but I'm pretty sure any performance optimizations of your code were not related to using hyperoperators or internal concurrency.

Here are two things I can think of that may be relevant:

* Rakudo tries to inline (calls to) small routines such as `* + *`. Wikipedia's page on inlining notes that when a compiler for a low level language (like Rust) succeeds in inlining code written in that language, it tends to speed the code up by something like ten percent or a few tens of percent, whereas in a high level language (like Raku) inlining can result in something like a doubling or, in extreme cases, a tenfold speed up. The difference is precisely because low level languages tend to compile to fast code anyway. So while this may explain why Raku(do) is faster than CPython, it can't explain your conclusion that Rust is half as fast as Raku. (I think you almost certainly made a mistake, but let's move on!)

* In your Raku code you've used `...`. That means all but the highest number on the command line are computed essentially for free, because sequences in Raku default to lazy processing, and lazy processing defaults to caching already-generated sequence values. So a single run passed `10 20 30 40` on the command line would call the `* + *` lambda roughly 40 times instead of 100 (10+20+30+40) times. That's roughly a doubling of speed right there.
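
The caching claim is easy to sketch in Python: count calls to a step function when indices are served from a growing cache (my own illustration, not Rakudo's internals):

```python
calls = 0

def step(m, n):
    """Stands in for the `* + *` lambda; counts how often it runs."""
    global calls
    calls += 1
    return m + n

cache = [0, 1]   # seed values, like the `0, 1` in `0, 1, * + * ... *`

def fib_at(i):
    """Extend the cached sequence only as far as index i requires."""
    while len(cache) <= i:
        cache.append(step(cache[-2], cache[-1]))
    return cache[i]

print([fib_at(i) for i in (10, 20, 30, 40)])   # [55, 6765, 832040, 102334155]
print(calls)   # 39 calls, not 10+20+30+40 = 100
```

Asking for index 40 after 10, 20 and 30 only extends the cache, which is where the claimed speedup comes from.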

So if Rakudo is doing a really good job of codegen for the fibonacci code, and you removed the startup overhead from your Raku timings, then perhaps, maybe, Raku(do) really is "beating" Rust because of the `...` caching effect.

I still find that very hard to believe, but it would certainly be worth having someone reasonably expert at benchmarking try to repeat and confirm your (remarkable!) result.

> At this point in its evolution, there is still much work to be done on code optimisation.

Understatement!

It took over a decade for production JVMs and JS engines to stop being called slow, another decade to start being called fast (but not as fast as C), and another decade to be considered surprisingly fast.

Rakudo's first production release came less than a decade ago. So I think that, for now, a reasonable near term performance goal (I'd say "by the end of this decade") is to arrive at the point where people stop calling Raku slow (except in comparison to C).

> Let me try a translation:

Let me have a go too. :) But I'll rewrite the code:

    sub MAIN                         #= Print fibonacci values.
        (*@argv)                     #= eg 10 20 prints 55 6765
    {
        print .[ @argv ]             # print fibonacci values.
          given
            0, 1, 1, 2, 3, 5,        # First fibonacci values.
            sub fib ( $m, $n )       # Fibonacci generator
                { $m + $n } ... Inf  # to compute the rest.
    }


yes, "guess" (thanks!)

well - try it on your own machine - the code is as noted above (tweaked to also print the input value)

then,

  > time raku fibo.raku 10 20 30 40
  
  10 55
  20 6765
  30 832040
  40 102334155
  raku fibo.raku 10 20 30 40  0.11s user 0.01s system 116% cpu 0.133 total
Caveat - this is absolutely not purporting to be a benchmark; it was as big a (pleasant and counterintuitive) surprise to me as to anyone.



