Archive for June, 2012

For the last week or so, I’ve been mostly fighting with the performance of the Rust compiler. The rustc performance graph from the RustBot page tells a lot of the story:

[Figure: performance graph for rustc, from the RustBot page]

The wildly varying performance is largely due to a change I made to move the code for addition on vectors out of the compiler and into the core library. The compiler used to generate highly optimized vector code. For example, when doing v += [x] (that is, appending a single element to a vector in place), the compiler would avoid allocating a temporary vector for the [x]. In effect, the compiler generated what amounts to an inline call to vec::push.
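To make the distinction concrete, here is a minimal sketch in modern Rust syntax (the 2012 syntax differed, and both function names here are made up for illustration):

    // What a naive `v += [x]` lowers to: allocate a temporary
    // one-element vector, then a fresh result vector combining the two.
    fn append_with_temporary(v: Vec<u32>, x: u32) -> Vec<u32> {
        let tmp = vec![x];                  // temporary allocation for `[x]`
        let mut out = Vec::with_capacity(v.len() + tmp.len());
        out.extend(v);                      // copy the original elements
        out.extend(tmp);                    // copy the single new element
        out
    }

    // What the old compiler special-cased the operator into:
    // an in-place push with no temporaries at all.
    fn append_in_place(v: &mut Vec<u32>, x: u32) {
        v.push(x);
    }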

My work involved adding an overloaded addition operator to Rust’s core library and changing the compiler to use that version instead of its generated one. The landing of this change corresponds to roughly build 70 on the graph, where there is a small but noticeable jump in the time rustc took. Because of the way we handle overloaded operators, the compiler must insert many more vector copies than it did before. It seemed too good to be true that I would lose only a small amount of performance.
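(To see where those copies come from: a library implementation of + on vectors necessarily looks something like the following modern-Rust sketch, where MyVec is a made-up stand-in type rather than the actual 2012 core-library code. Every use of the operator allocates a fresh vector and copies both operands into it.)

    use std::ops::Add;

    struct MyVec<T>(Vec<T>);

    impl<T: Clone> Add for &MyVec<T> {
        type Output = MyVec<T>;
        fn add(self, rhs: &MyVec<T>) -> MyVec<T> {
            // A fresh allocation plus two element-by-element copies:
            // exactly the work the compiler's special case avoided.
            let mut out = Vec::with_capacity(self.0.len() + rhs.0.len());
            out.extend_from_slice(&self.0);
            out.extend_from_slice(&rhs.0);
            MyVec(out)   // usage: let c = &a + &b;
        }
    }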

And it was too good to be true. It turns out that, through a series of rebases, I had managed to undo a small switch in is_binopable that controlled whether my new code ever got called. In the meantime, Sully had been doing a lot of work on the way Rust handles vectors, and he landed a change around build 90 in the performance graph above that caused Rust to start using the library version of vector append after all. This was more like the performance decrease I was expecting.

In order to fix this, I had to track down all the places in the compiler where we were using the addition operator on vectors and replace them with things like vec::push, vec::push_all, vec::append and vec::append_one. This was tedious work, and I did it in stages; each spot in the graph where the execution time improves is where I landed a couple more of these changes. Once this was finally done, rustc’s performance ended up slightly better than it was before. I believe this is because the library versions of these functions can elide some copies that rustc could not remove before. Sadly, the code is much uglier than it was. We are discussing the ability to specify modes on the self argument, which would let the compiler again avoid unnecessary copies when using the plus operator, making it just as fast as the underlying library functions.
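The rewrites themselves were mechanical. In today’s spelling (collect_fields is a made-up example, and the 2012 names were the vec::* functions listed above), the pattern was roughly:

    fn collect_fields(mut fields: Vec<String>,
                      extra: Vec<String>,
                      last: String) -> Vec<String> {
        // before: fields = fields + extra + [last];
        // (two temporary vectors, several element copies)

        // after: explicit in-place calls, no temporaries
        fields.extend(extra);   // was vec::push_all / vec::append
        fields.push(last);      // was vec::push / vec::append_one
        fields
    }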

The moral of the story is that copies can kill performance. Rust often warns when it inserts copies you may not have expected. Pay attention to these warnings and you will write much faster code.


Previously, I talked about a couple of ways to improve the message passing performance in Rust. The obvious bottleneck was that sending and receiving both involved taking a lock that was shared between all tasks. In order to remove this lock, I set out to write a new port/channel system that relied much less on the runtime library.

In order to do this, we first needed a variant of the atomic reference counter that allows us to share mutable data. We accomplish this by adding a mutex and condition variable. The mutex is a standard pthread mutex, while the condition variable is one we implemented ourselves so that the Rust scheduler is aware of it. Because we’re using a standard pthread mutex, this exclusive access is unsafe and must be used with care; if you’re not careful, you can deadlock Rust. Fortunately, the API for Rust’s low-level locks and condition variables makes it harder to accidentally hold a lock for an unbounded amount of time.
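In modern Rust, the same shape can be assembled from standard parts. Here is a rough sketch (the Exclusive name is illustrative, and std’s mutex and condition variable are not scheduler-aware the way the 2012 runtime’s were):

    use std::sync::{Arc, Condvar, Mutex};

    // A mutable value shared by atomic reference counting, guarded by
    // a mutex, with a condition variable for signalling between tasks.
    type Exclusive<T> = Arc<(Mutex<T>, Condvar)>;

    fn exclusive<T>(value: T) -> Exclusive<T> {
        Arc::new((Mutex::new(value), Condvar::new()))
    }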

Once we have this, ports and channels simply become a locked, atomically reference-counted queue (sketched below). There’s no more global lock, and things are far simpler because we need the Rust runtime only for mutual exclusion and condition variable signalling. This part didn’t take long to implement, and I was eager to try it out. I spent a little while writing up a new benchmark that would really show the benefit of avoiding the global lock, and when I went to run it, things crashed.
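(For concreteness, the structure looks roughly like this in modern Rust, with an illustrative name; Chan stands in for both the port and channel ends, and std primitives stand in for the scheduler-aware ones.)

    use std::collections::VecDeque;
    use std::sync::{Arc, Condvar, Mutex};

    // A port/channel pair collapses to one shared, lock-protected queue.
    struct Chan<T> {
        inner: Arc<(Mutex<VecDeque<T>>, Condvar)>,
    }

    impl<T> Chan<T> {
        fn new() -> Self {
            Chan { inner: Arc::new((Mutex::new(VecDeque::new()), Condvar::new())) }
        }

        fn send(&self, value: T) {
            let (lock, cvar) = &*self.inner;
            lock.lock().unwrap().push_back(value);
            cvar.notify_one();              // wake one blocked receiver
        }

        fn recv(&self) -> T {
            let (lock, cvar) = &*self.inner;
            let mut q = lock.lock().unwrap();
            while q.is_empty() {
                q = cvar.wait(q).unwrap();  // sleep until a sender signals
            }
            q.pop_front().unwrap()
        }
    }

    impl<T> Clone for Chan<T> {
        fn clone(&self) -> Self {
            Chan { inner: Arc::clone(&self.inner) }
        }
    }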

It turns out I had discovered a bug whereby vector addition could copy things that weren’t copyable. I saw two approaches to fixing it: patch trans (the Rust-to-LLVM translation pass), or move vector addition out of trans and into the library. The idea behind the second option is that vector addition would then go through the existing, well-tested function call path rather than a special codegen path for vector addition. This seemed like the best option overall, since the first felt more like a band-aid over the root cause: we have too much functionality duplicated in subtly different ways.

Thus, I set out to move vector addition to libcore. This exposed some subtle semantics issues around const vectors, but these were mostly not too painful to work out. My first working versions were too slow to finish building the compiler in a reasonable amount of time. I had expected this change to cost anywhere from 20% to 2x in performance, but thanks to some heroic optimization help from Niko, we got to the point where in some cases the new code even performs ever so slightly better than the old code. In the course of making these changes, I also discovered another bug that caused us to leak memory, and in the course of fixing that, I discovered a way to make Rust segfault. Ah, the life of a compiler writer.

At any rate, we have fixes to all of these bugs in the works, and things are working well enough to run a benchmark. This test creates a ring of tasks, each of which sends a message to its neighbor on one side and receives from the other. Here are the numbers for the old messaging system.

Sent 1000000 messages in 3.88114 seconds
  257656 messages / second
  3.88114 μs / message

And here are the numbers for the new system.

Sent 1000000 messages in 1.87881 seconds
  532253 messages / second
  1.87881 μs / message

As you can see, the new system is about 2.1x faster than the old one.
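For reference, here is the shape of the ring benchmark as a rough modern-Rust analogue, using std threads and mpsc channels in place of the old task and port/chan APIs (the constants are illustrative, not the configuration behind the numbers above):

    use std::sync::mpsc::channel;
    use std::thread;
    use std::time::Instant;

    fn main() {
        const TASKS: usize = 100;      // ring size (illustrative)
        const ROUNDS: usize = 10_000;  // messages sent by each task

        // One channel per task: task i receives on rxs[i] and
        // sends to its right-hand neighbor, task (i + 1) % TASKS.
        let (txs, rxs): (Vec<_>, Vec<_>) =
            (0..TASKS).map(|_| channel::<usize>()).unzip();

        let start = Instant::now();
        let handles: Vec<_> = rxs
            .into_iter()
            .enumerate()
            .map(|(i, rx)| {
                let tx = txs[(i + 1) % TASKS].clone();
                thread::spawn(move || {
                    for _ in 0..ROUNDS {
                        tx.send(i).unwrap(); // send to one neighbor...
                        rx.recv().unwrap();  // ...receive from the other
                    }
                })
            })
            .collect();

        for h in handles {
            h.join().unwrap();
        }

        let n = TASKS * ROUNDS;
        let secs = start.elapsed().as_secs_f64();
        println!("Sent {} messages in {:.5} seconds", n, secs);
        println!("  {:.0} messages / second", n as f64 / secs);
    }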

This new system doesn’t yet have the exact same features as the old system, and it needs some help with ergonomics. My next task will be to work on adding these missing things and hopefully be able to incrementally replace the old code with the new, faster version.

The code for this post isn’t yet in the main Rust tree, but it should be landing soon as we squash the bugs we found in the process of doing this new message passing system.

Today we’re going to take a brief look at message passing performance in Rust. Rust is already pretty quick on this front, but there are some really obvious possibilities for making it even faster. It looks like I’ll be spending most of the summer working on this, so I thought it’d be good to start out with some baseline performance numbers. Rust’s testsuite has a benchmark called msgsend.rs, a microbenchmark for message passing performance based on a benchmark from Paul Keeble that originally compared Erlang and Scala. Here’s a table summarizing the results:

Language        Messages per second   Comparison
Scala                       378,740       100.0%
Rust                        175,373        46.3%
Erlang (Bare)                78,670        20.8%
Erlang (OTP)                 76,405        20.2%

These numbers were generated on an Early 2011 MacBook Pro with a 2.3 GHz Intel Core i7, 8 GB of RAM, and OS X Lion 10.7.4. Don’t read too much into them; they are incredibly unscientific. Their only real use will be to check whether running the same benchmark for Rust in a few weeks yields a larger number. It’s also worth noting that my results disagree with the original author’s, who saw Erlang performing about six times faster than Scala. I suspect that Erlang’s message passing may not be as efficient on Mac OS X, but I do not know what system the tests were originally run on.

So, where do we go from here? I mentioned that there are some obvious first steps. The most obvious is that right now sending a message involves taking a global lock; in effect, we can send only one message at a time. Suppose we had four tasks, A, B, C and D, where A was sending to B and C to D. Intuitively, these two sends should be completely independent of each other, but this is not currently the case in Rust. This is one thing I hope to change.

It would be nice if we could do at least some of the communication without any locks at all, using a lock-free data structure. This is worth experimenting with, but we’ll have to wait for the benchmarks to show whether this is the best idea.

Somewhat orthogonal to message passing performance, I hope to be able to implement more of the communication system in Rust itself. When the communication system was first written, we had to put almost all of it in the runtime library that was written in C++. Rust is significantly more powerful now, and it seems like it would not take too much more work to write the communication system almost entirely in Rust.