Scala 2.9 parallel collections for MandelActors

Scala 2.9 is out, and one of the new features in this release is parallel collections. These make it almost trivial to run, for example, a foreach over a collection in parallel. I added a ParallelCollectionsRenderer to MandelActors, which looks very simple:

object ParallelCollectionsRenderer extends Renderer {
  override def render(sampler: Sampler, compute: Sample => Color, pixelBuffer: PixelBuffer) {
    for (batch <- sampler.batches.par; sample <- batch) pixelBuffer.add(sample, compute(sample))
  }

  override def toString = "ParallelCollectionsRenderer"
}

It looks almost the same as the SingleThreadRenderer. The only difference is the .par that I added to the collection of sample batches. Scala does the rest under the covers; it performs the loop over the batches in parallel, and the program runs just as fast as the other multi-threaded renderers. With VisualVM I can see that on my dual-core system it creates two threads to iterate over the collection.
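To get a feel for what .par does, here is a tiny self-contained sketch (the thread names and the order of the output will vary from system to system):

object ParDemo {
  def main(args: Array[String]) {
    // The elements are processed by multiple worker threads from the pool
    (1 to 8).par foreach { i =>
      println("element " + i + " processed on " + Thread.currentThread.getName)
    }
  }
}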

No need to program with actors or threads yourself anymore!

Specializing for primitive types

One interesting feature that was added to Scala in version 2.8 is specialization, using the @specialized annotation. First, a little background information.

Generics in Java and the JVM, and consequently also in Scala, are implemented by using type erasure. That means that if you have an instance of a generic class, for example List[String], then the compiler will throw away the information about the type argument, so that at runtime it will look like List[Object].
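You can see erasure at work with a quick check: two lists with different type arguments have the same runtime class.

// Erasure at work: a List[String] and a List[Int] have the same runtime class
val strings = List("a", "b")
val ints = List(1, 2)
println(strings.getClass == ints.getClass)  // prints: true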

In Java, primitive types are treated very differently from reference types. One of the things that you cannot do in Java is use primitive types as type arguments – for example, you cannot make a List<int>. If you want to create a list that holds integers, you’ll have to use the wrapper class Integer. A drawback of this approach is that each primitive int must be boxed into an Integer object, which takes up a lot more memory. I’ve done some tests and found that on my JVM a double takes up only 8 bytes, but a Double object takes up 24 bytes. Also, boxing and unboxing take processor time, which can become a serious bottleneck when you’re dealing with large collections of numbers.

In Scala, the distinction between primitive types and reference types is far smaller than in Java, and you can use types like Int directly as type arguments. But a List[Int] is still converted to a List[Object] at runtime because of type erasure, and since primitive types are not subclasses of Object on the JVM, Scala will still box the Ints into wrapper objects – which makes a List[Int] in Scala just as inefficient as a List<Integer> in Java.
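You can observe this boxing directly: the elements of a List[Int] turn out to be java.lang.Integer objects at runtime.

// The elements of a List[Int] are boxed into java.lang.Integer objects
val xs: List[Any] = List(1, 2, 3)
println(xs.head.asInstanceOf[AnyRef].getClass)  // prints: class java.lang.Integer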

The @specialized annotation and compiler support are meant to get rid of this inefficiency. Iulian Dragos explains it very clearly in this video. If you create a generic class and you use the @specialized annotation, for example like this:

class Container[@specialized(Int) T](value: T) {
  def apply(): T = value
}

the compiler will actually generate two versions of the class: the normal, generic one, in which the type parameter is erased, and a special subclass that uses the primitive type Int, without the need to box or unbox the value. You can see this when you compile the code using the -print option of the compiler:

~/Projects/boxing$ scalac -print Container.scala
[[syntax trees at end of cleanup]]// Scala source: Container.scala
package <empty> {
  class Container extends java.lang.Object with ScalaObject {
    <paramaccessor> protected[this] val value: java.lang.Object = _;
    def apply(): java.lang.Object = Container.this.value;
    <specialized> def apply$mcI$sp(): Int = scala.Int.unbox(Container.this.apply());  // #1
    def this(value: java.lang.Object): Container = {
      Container.this.value = value;
      Container.super.this();
      ()
    }
  };
  <specialized> class Container$mcI$sp extends Container {
    <paramaccessor> <specialized> protected[this] val value$mcI$sp: Int = _;
    override <specialized> def apply(): Int = Container$mcI$sp.this.apply$mcI$sp();
    override <specialized> def apply$mcI$sp(): Int = Container$mcI$sp.this.value$mcI$sp;  // #2
    override <bridge> <specialized> def apply(): java.lang.Object = scala.Int.box(Container$mcI$sp.this.apply());
    <specialized> def this(value$mcI$sp: Int): Container$mcI$sp = {
      Container$mcI$sp.this.value$mcI$sp = value$mcI$sp;
      Container$mcI$sp.super.this(scala.Int.box(value$mcI$sp));
      ()
    }
  }
}

In this output you see that there is a class Container, which is the regular generic version with erased type parameters, and a class named Container$mcI$sp, which is a subclass of the regular Container. That class is the version that’s specialized for Int. Notice that both classes have a method apply$mcI$sp(), which is called by their apply() methods. In the regular Container, this method (marked #1 above) unboxes the Object that’s in the container. But in the specialized subclass, it directly returns the Int without unboxing (marked #2).
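A short usage sketch: when the type argument is statically known to be Int, the compiler instantiates the specialized subclass and dispatches to the specialized method, so no boxing happens at all:

val c = new Container(42)  // the compiler actually instantiates Container$mcI$sp here
val i: Int = c()           // dispatches to apply$mcI$sp(), no boxing or unboxing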

However, life is not always so simple. Here is an example in which specialization is not as effective as you might think at first sight.

object Boxing01 {
  def clamp[@specialized(Double) T : Ordering](value: T, low: T, high: T): T = {
    import Ordered._
    if (value < low) low else if (value > high) high else value
  }

  def main(args: Array[String]) {
    val a = clamp(25.0, 13.0, 40.0)
    println(a)
  }
}

I wanted the clamp() method to be generic, so that I could use it on anything for which an Ordering is available, but I wanted to avoid boxing and unboxing when using it with Double. At first sight you’ll probably think that using @specialized will do the job here. But let’s look at the output of compiling this code with the -print option. Here it is (with some parts omitted):

// The normal, type-erased version
def clamp(value: java.lang.Object, low: java.lang.Object, high: java.lang.Object, evidence$1: scala.math.Ordering): java.lang.Object = if (scala.package.Ordered().orderingToOrdered(value, evidence$1).<(low))
  low
else
  if (scala.package.Ordered().orderingToOrdered(value, evidence$1).>(high))
    high
  else
    value;

// The specialized version
<specialized> def clamp$mDc$sp(value: Double, low: Double, high: Double, evidence$1: scala.math.Ordering): Double = if (scala.package.Ordered().orderingToOrdered(scala.Double.box(value), evidence$1).<(scala.Double.box(low)))
  low
else
  if (scala.package.Ordered().orderingToOrdered(scala.Double.box(value), evidence$1).>(scala.Double.box(high)))
    high
  else
    value;

Again, the compiler generated two versions of the clamp() method: the normal one that works on java.lang.Objects, and a version named clamp$mDc$sp() that’s specialized for Double. Take a closer look at the specialized version. You’ll see that it still contains calls to scala.Double.box() for every comparison. Using the @specialized annotation didn’t help at all here!

The reason for this is that the method orderingToOrdered() (defined in object scala.math.Ordered) and the trait scala.math.Ordered are not specialized. So, to call orderingToOrdered() and the < and > methods in trait Ordered, the Double must be boxed, and the return value must be unboxed. Instead of getting rid of the boxing and unboxing, all that we have achieved is that it happens somewhere else.
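One way around this is to make the entire call chain specialized. Here is a minimal sketch with a hand-specialized stand-in for Ordering (SpecOrdering is my own hypothetical trait, not something from the standard library), so that the comparisons work directly on the primitive type:

// A hand-specialized stand-in for Ordering (hypothetical, not in the standard library)
trait SpecOrdering[@specialized(Double) T] {
  def lt(a: T, b: T): Boolean
  def gt(a: T, b: T): Boolean
}

object Boxing02 {
  implicit object DoubleSpecOrdering extends SpecOrdering[Double] {
    def lt(a: Double, b: Double) = a < b
    def gt(a: Double, b: Double) = a > b
  }

  def clamp[@specialized(Double) T](value: T, low: T, high: T)(implicit ord: SpecOrdering[T]): T =
    if (ord.lt(value, low)) low else if (ord.gt(value, high)) high else value

  def main(args: Array[String]) {
    println(clamp(25.0, 13.0, 40.0))  // now stays on the specialized, unboxed code path
  }
}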

In Scala’s standard library (version 2.8.1), @specialized is used very sparingly – only a handful of traits and classes are specialized. It would be especially nice if the collection classes were specialized for the primitive types. A drawback of specialization is that it causes lots of extra classes and methods to be generated – if all collection classes were specialized for all primitive types, the Scala library would suddenly become about ten times as large. Hopefully, future versions of Scala will add more specialization to the library in the places where it really matters.

MandelActors

I recently bought the pre-print edition of the book Actors in Scala. At the moment the book is not yet complete; the last three chapters are missing and I found a number of minor mistakes, but it looks like this is going to be a useful book on actors.

To experiment with actors, I created a new hobby project: MandelActors. It’s a simple Mandelbrot fractal generator that uses Scala actors. MandelActors works as follows.

First of all, instead of simply mapping each pixel to one point on the complex plane and using that point in the Mandelbrot algorithm to compute the color of the pixel, MandelActors uses a more sophisticated way to generate samples and integrate them into an image, which makes it possible to produce more accurate, higher-quality images. A sampler generates one or more samples for each pixel. Each sample is mapped to the complex plane, a color is computed for it using the Mandelbrot algorithm, and the result is integrated into the output image using a reconstruction filter. The sampling and reconstruction filter code is borrowed from another hobby project, ScalaRay, and the ideas come from the book Physically Based Rendering – From Theory to Implementation. The sampler and reconstruction filters are implemented in Sampler.scala and Filter.scala.

The interesting things with regards to actors happen in the implementations of trait Renderer, in Renderer.scala. I implemented four different renderers:

  • SingleThreadRenderer – This renderer simply renders the image on a single thread. This is useful as a baseline for performance comparisons.
  • EventActorsRenderer – Uses event-based actors.
  • ThreadActorsRenderer – Uses thread-based actors.
  • ThreadsRenderer – Uses threads instead of actors.

One of the things to be aware of when working with Scala actors is that there are two kinds of actors: event-based actors and thread-based actors. Most of the tutorials and descriptions of actors I found on the web don’t explain this distinction very well, which is strange, because it’s very important to understand the difference.

If you’ve read something about Scala actors, then you’ll most likely already know that there are two ways that an actor can receive messages: by using receive or by using react.

When you use receive, the actor will wait for a message to arrive that matches one of the cases in the block that you pass to receive, blocking the thread that it’s running on. So in this model, each actor needs its own thread – these are thread-based actors. Thread-based actors are relatively heavyweight, because each thread takes up system resources (it needs its own stack frame, for example).

When you use react, things work in a very different way. What this does is register the block that you pass it as an event handler. The actor system has a pool of threads that call the event handlers when messages arrive for the actors. Here actors are decoupled from threads – unlike with thread-based actors, there can be as few or as many threads as is optimal for the system, independent of the number of actors. Event-based actors are much more lightweight than thread-based actors – you can easily create tens or hundreds of thousands of actors without using up too many system resources.
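Here is a minimal sketch that shows both styles side by side:

import scala.actors.Actor._

object ReceiveVsReact {
  def main(args: Array[String]) {
    // Thread-based style: receive blocks the actor's thread while it waits
    val threadBased = actor {
      receive { case msg => println("received: " + msg) }
    }

    // Event-based style: react registers the handler and releases the thread
    val eventBased = actor {
      react { case msg => println("reacted to: " + msg) }
    }

    threadBased ! "hello"
    eventBased ! "world"
  }
}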

In MandelActors, EventActorsRenderer creates a separate actor for each batch of samples. Depending on the configuration, it might create up to a few thousand event-based actors. ThreadActorsRenderer creates just a few thread-based actors and sends batches of samples to them in round-robin fashion, as sketched below.
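To give an idea of the round-robin approach, here is an illustrative sketch (this is not MandelActors’ actual code; Batch is a stand-in type):

import scala.actors.Actor
import scala.actors.Actor._

object RoundRobinSketch {
  case class Batch(samples: Seq[Double])  // stand-in for a batch of samples

  // A small, fixed pool of thread-based actors
  val workers: Seq[Actor] = for (i <- 0 until 4) yield actor {
    while (true) {
      receive { case Batch(samples) => () /* compute a color for each sample */ }
    }
  }

  private var next = 0

  // Send each batch to the next worker, cycling through the pool
  def dispatch(batch: Batch) {
    workers(next) ! batch
    next = (next + 1) % workers.length
  }
}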

If I had to choose, I’d go for event-based actors as my first choice, because they are simpler to program and you can leave things like scheduling completely in the hands of the actor system (which hopefully does it in a way that’s optimal for the hardware that you’re running the program on).

One thing I’d like to try out with MandelActors when I get the time for it is remote actors. (Unfortunately that’s in one of the chapters of Actors in Scala that’s still missing in the pre-print edition).

Besides the actors in Scala’s standard library, there are a number of other implementations of actors in Scala. Scalaz, Akka and Lift each have their own implementations. You can find a comparison of these together with standard Scala actors in this blogpost by Viktor Klang.

I don’t know why the various libraries have their own implementations of actors, but I did find this post on David Pollak’s motivations for moving away from the standard actors library. (When reading that, keep in mind that that post was written in May 2009, before Scala 2.8 – things have almost certainly changed since then). And here is a presentation about Scalaz actors.

A generic interpolate method using type classes

In a previous post I wrote about different interpolate methods in my program for linearly interpolating numbers and vectors. The methods for numbers and vectors are exactly the same, except for the types of the arguments and return value. So, of course, I wanted to write a generic interpolate method to avoid repeating myself.

For a type V to be usable by the interpolate method, it must satisfy two requirements:

  1. It can be multiplied by a number, resulting in a V.
  2. You can add two Vs together, resulting in a V.

I started with the idea of doing this with a structural type (despite the performance disadvantage that structural types suffer from because their methods are called via reflection – I just wanted to know if this could work in principle).

The requirements on the type can be described by a structural type directly, and then the interpolate method can be written in terms of the type V:

trait Container {
  type V = {
    def *(t: Double): V
    def +(v: V): V
  }

  def interpolate(t: Double, a: V, b: V): V = a * (1.0 - t) + b * t
}

That looks straightforward. But unfortunately, it doesn’t work…

<console>:8: error: recursive method + needs result type
           def +(v: V): V
                        ^
<console>:7: error: recursive method * needs result type
           def *(t: Double): V
                             ^

Note that the methods * and + in the type V refer to the type itself. Unfortunately, this is not allowed: structural types cannot be recursive, and that’s why you get these error messages. It’s impossible to use a structural type for the generic interpolate method.

On StackOverflow, oxbow_lakes put me on another track: you can do this with type classes. Type classes are an idea that comes from Haskell. The description on Wikipedia is a bit academic and abstract, but the idea is really simple: a type class is simply a type that specifies some property of other types (it classifies types). Have a look at these two type classes, for example:

// A Multipliable is something that can be multiplied with a T, resulting in an R
trait Multipliable[-T, +R] { def *(value: T): R }

// An Addable is something that you can add a T to, resulting in an R
trait Addable[-T, +R] { def +(value: T): R }

These two can be combined into a type class Interpolatable, which satisfies the two requirements that are necessary for the interpolate method:

trait Interpolatable[T] extends Multipliable[Double, T] with Addable[T, T]

The interpolate method can now be written using this type class as follows:

def interpolate[T <% Interpolatable[T]](t: Double, a: T, b: T): T = a * (1.0 - t) + b * t

So, now we have a generic interpolate method that works on anything that can be viewed as an Interpolatable[T] (note the view bound, specified using <%). Of course, you now have to tell Scala that a type you want to use this with can indeed be viewed as an Interpolatable[T]; this can be done with an implicit conversion. For example, for Double you can put the following implicit conversion somewhere so that it’s in scope:

implicit def doubleToInterpolatable(v1: Double) = new Interpolatable[Double] {
    def *(t: Double): Double = v1 * t
    def +(v2: Double): Double = v1 + v2
}
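And similarly for the Vector class from my ray tracer (a sketch, assuming the * and + methods on Vector described earlier):

implicit def vectorToInterpolatable(v1: Vector) = new Interpolatable[Vector] {
    def *(t: Double): Vector = v1 * t
    def +(v2: Vector): Vector = v1 + v2
}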

This all works, but note that we didn’t gain much with this approach. Although the interpolate method is generic, it’s still necessary to write an implicit conversion for each type that we want to use with the generic method – the repetition has simply shifted from the interpolate method itself to the implicit conversions. In principle, though, the approach is valuable; interpolate is just a simple example. For a much larger and more complicated method than interpolate, this type class approach would really be worth it.

Here’s an interesting presentation by Daniel Spiewak in which he also talks about type classes.

A sidestep to C++

Note that in C++, using templates, it’s very easy to write a generic interpolate function that works on any type that satisfies the requirements, and you don’t need to specify this for each type that you want to use:

template <class T>
T interpolate(double t, T a, T b) {
    return a * (1.0 - t) + b * t;
}

There’s a big difference between templates in C++ and generics in Scala or Java. A template in C++ is used to generate code for a concrete type at the moment it’s needed, and the compiler checks whether the code generated from the template is valid (which it is if the concrete type satisfies the requirements). For example, if you use the template on doubles, the compiler generates an instance of the template specifically for double:

double interpolate(double t, double a, double b) {
    return a * (1.0 - t) + b * t;
}

In C++ there is no need to explicitly specify what the * and + functions do for the types that you want to interpolate; they automatically fall into place in the generated code.

Strange limitation when importing from different scopes

Recently I ran into the following issue.

In one package object I have a method to linearly interpolate between numbers:

def interpolate(t: Double, a: Double, b: Double): Double = a * (1.0 - t) + b * t

And in another package object I have a method to linearly interpolate between vectors (where Vector is my own 3D vector class, which contains a * method to scale a vector with a number and a + method to add two vectors). It looks a lot like the method for numbers, but with different argument and return types:

def interpolate(t: Double, a: Vector, b: Vector): Vector = a * (1.0 - t) + b * t

I have another source file where I import both the package objects, so both interpolate methods are in scope. I’m trying to use one of the methods:

import org.jesperdj.scalaray._              // contains the interpolate method for numbers
import org.jesperdj.scalaray.vecmath._      // contains the interpolate method for vectors

val d: Double = interpolate(0.8, 1.5, 6.5)  // should call the first interpolate method

Unexpectedly, this doesn’t work – the Scala compiler (at least as of version 2.8.0.RC7) gives you the following error message:

Example.scala:19: error: reference to interpolate is ambiguous;
it is imported twice in the same scope by
import org.jesperdj.scalaray.vecmath._
and import org.jesperdj.scalaray._
                val d: Double = interpolate(0.8, 1.5, 6.5)
                                ^
one error found

Very strange – the methods have different signatures, and I’m clearly calling the one with Double arguments, so there is really no ambiguity here. If I just define the two methods in the same scope, it works without a problem:

object Example {
  def interpolate(t: Double, a: Double, b: Double): Double = a * (1.0 - t) + b * t
  def interpolate(t: Double, a: Vector, b: Vector): Vector = a * (1.0 - t) + b * t

  def main(args: Array[String]) {
    // No problem!
    val d: Double = interpolate(0.8, 1.5, 6.5)
  }
}

I entered a ticket in the Scala bug tracking system, but it turns out I’m not the first one to discover this: there was already another ticket describing the same issue. Unfortunately, the Scala team decided that this will not be fixed, because it would require changes to the Scala specification, which is a lot of work.
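In the meantime there are workarounds: you can call the method by its fully qualified name, or exclude the conflicting name from one of the imports. A sketch of both:

// Workaround 1: use the fully qualified name
val d1: Double = org.jesperdj.scalaray.interpolate(0.8, 1.5, 6.5)

// Workaround 2: hide the conflicting name in one of the imports
import org.jesperdj.scalaray._
import org.jesperdj.scalaray.vecmath.{interpolate => _, _}  // excludes vecmath's interpolate

val d2: Double = interpolate(0.8, 1.5, 6.5)  // no longer ambiguous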

You can do something similar in Java, with import static. I tried it out and in Java it works without a problem:

import static example.Scope1.*;
import static example.Scope2.*;

public class Example {
    public static void main(String[] args) {
        // No problem!
        double d = interpolate(0.8, 1.5, 6.5);
    }
}

(where Scope1 and Scope2 are two classes that contain the two public static interpolate methods).

Avoid structural types when pimping libraries

I came across this answer to a StackOverflow question by Mitch Blevins, about using the “Pimp My Library” pattern. Most Scala programmers will sooner or later discover this pattern. I’ve used it myself in my Scala ray tracer. I have a class Vector that represents a vector in 3D space. One of the methods it contains is a method to scale a vector uniformly with a factor:

final class Vector (val x: Double, val y: Double, val z: Double) {
        // ...
        
        // Scale a vector
        def *(f: Double) = new Vector(x * f, y * f, z * f)
}

So now I can write:

val a = new Vector(1.0, -1.0, 2.0)
val b = a * 1.5

// But I also want to be able to write this:
val c = 1.5 * a

Of course, the Double type doesn’t have a * method that takes my class Vector as an argument. To make it work, I wrote an implicit method:

// Implicit conversion for scaling vectors by multiplying a numeric type with a vector
implicit def implicitScaleVector[N <% Double](f: N) = new {
        def *(v: Vector) = v * f
}

Note that the implicit method returns an “anonymous object” that contains the actual * method, which calls the * method defined in class Vector to do the work.

Ok, good, that works. But there’s an interesting problem with this implementation that Daniel Spiewak points out in a comment to Mitch’s answer: he says that the new { ... } syntax causes the compiler to infer a structural type, so that the * method inside the “anonymous object” will be called via reflection – which is much slower than a regular method call.

I wanted to see that for myself, so I wrote a small test program. Let’s first see how long the * method in class Vector takes to run:

class Vector (val x: Double, val y: Double, val z: Double) {
        def *(f: Double) = new Vector(x * f, y * f, z * f)
}

object Test {
        def time(block: => Unit) = {
                val start = java.lang.System.nanoTime()
                block
                val time = java.lang.System.nanoTime() - start
                println("time: " + time / 1e6 + " ms")
        }

        def main(args: Array[String]) {
                time {
                        var v = new Vector(1.0, -1.0, 2.0)
                        for (i <- 0 to 1000000) v = v * 1.5
                }
        }
}

I ran this a few times; it takes approximately 130 ms on my system. Now let's try the original version with the new { ... } syntax:

object Test {
        implicit def implicitScaleVector(f: Double) = new {
                def *(v: Vector) = v * f
        }

        // ...
        
        def main(args: Array[String]) {
                time {
                        var v = new Vector(1.0, -1.0, 2.0)
                        for (i <- 0 to 1000000) v = 1.5 * v
                }
        }
}

That is indeed a lot slower: on my system this takes about 280 ms – more than twice as slow! This performance problem can be solved by avoiding the structural type:

object Test {
        class Result (f: Double) {
                def *(v: Vector) = v * f
        }

        implicit def implicitScaleVector(f: Double) = new Result(f)

        // ...
}

This indeed performs much better: it takes only about 140 ms – almost as fast as calling the * method in class Vector directly.

There is one thing I don't like about this, however: it needs an additional class that pollutes the namespace. I tried to fix this by declaring the class inside the method:

object Test {
        implicit def implicitScaleVector(f: Double) = {
                class Result (f: Double) {
                        def *(v: Vector) = v * f
                }

                new Result(f)
        }

        // ...
}

Unfortunately that doesn't work! This is just as slow (280 ms) as the original version with the new { ... } syntax. That's because class Result is not known to the caller of the implicit method, so the compiler still infers a structural type and the * method is still called via reflection.

This is an interesting gotcha to be aware of when pimping libraries or using structural types in other ways.

Scala on NetBeans 6.9

My favorite IDE is NetBeans, and I’m using the Scala plugin for NetBeans. It isn’t perfect yet – for example, sometimes it reports bogus errors in my Scala code or misses real errors, and the support for Scala isn’t as complete as it is for Java – but it’s usable. From people using other IDEs (most of them use Eclipse or IntelliJ IDEA from JetBrains) I hear that the Scala plugins for those IDEs also still have bugs and missing functionality.

Last week, NetBeans 6.9 was released, but there is no release of the Scala plugin for NetBeans 6.9 yet (at least not in the form of a ZIP file containing the necessary NetBeans modules that you can install in the IDE). At the moment you have to install the Scala plugins via the NetBeans nightly builds server. You can do that with the following steps:

  • Select Tools -> Plugins -> Settings
  • Click the “Add” button to add a new update center
  • Enter a name for the update center (for example “NetBeans Nightly Builds”)
  • Use this URL: http://deadlock.netbeans.org/hudson/job/nbms-and-javadoc/lastStableBuild/artifact/nbbuild/nbms/updates.xml.gz
  • Go to the tab “Available Plugins”
  • Click the “Reload Catalog” button
  • Search for “Scala”
  • Select the two plugins: “Scala Kit” and “Scala Runtime Library”
  • Click the “Install” button and follow the instructions, at the end it will prompt you to restart the IDE

After restarting:

  • Select Tools -> Plugins -> Settings
  • Uncheck the checkbox for NetBeans Nightly Builds

If you leave the nightly builds update center active, you’ll frequently get messages from NetBeans to update lots of modules. You probably don’t want these updates unless you want to use the latest development build of NetBeans, which I don’t recommend if you use NetBeans for your work – you never know if something will break, which will cost you time to roll back. (Of course, if you want to help with the development of NetBeans and you also want to get updates for the Scala plugin automatically, leave it active.)

Update 19 July 2010 – There’s now a release package of the plugin for NetBeans 6.9 and Scala 2.8.0 final: you can get it on SourceForge. The installation instructions are the same as for NetBeans 6.8.

Converting an Option[A] to an Option[B]

While working on a Scala hobby project, I wanted to convert an Option[A] into an Option[B]. I’ve noticed that it’s often straightforward to come up with an ugly way to do things in Scala, but with a little more thought it’s almost always possible to find a much more elegant and concise solution. I had initially written the following:

def intersect(ray: Ray): Option[Intersection] = {
  val opt = shape intersect ray    // Returns an Option[(DifferentialGeometry, Double)]
  if (opt.isEmpty) None else {
    val (dg, t) = opt.get
    Some(new Intersection(dg, t, this))
  }
}

This code is part of my Scala ray tracer (which isn’t yet publicly available – at the moment it’s too incomplete). It calls the intersect method on a shape, which returns an Option containing a pair of values: a DifferentialGeometry object with information about the intersection point on the surface of the shape, and a Double which is the distance of the intersection point along the ray.

The intersect method above is part of class Primitive, which represents an object in the scene – something that has a shape, a material and other properties. This method converts the Option[(DifferentialGeometry, Double)] into an Option[Intersection] – class Intersection represents an intersection between a primitive and a ray.

I wasn’t satisfied with the code above, and after looking at the Scala API documentation I discovered that it can be written much more concisely by using the collect method of class Option, like this:

def intersect(ray: Ray): Option[Intersection] =
  shape intersect ray collect { case (dg, t) => new Intersection(dg, t, this) }

The collect method takes a partial function as an argument. It returns None for inputs for which the partial function isn’t defined, and a Some(...) for inputs for which the partial function is defined. The partial function can be written as { case value => ... }. By writing the code this way, I don’t have to deal with checking, unpacking and packing the Option objects myself, which makes the code nice and clean.

Edit: Using map instead of collect is a little more efficient, as Brian Howard points out. See the comments for details.
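For reference, here is a sketch of the map-based version:

def intersect(ray: Ray): Option[Intersection] =
  shape intersect ray map { case (dg, t) => new Intersection(dg, t, this) }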