Warpcore Labs

Transitioning through time and space, one blog post at a time

Type Safe Decorators

In OOP there exists a design pattern for extending the functionality of methods via a form of composition, commonly referred to as the Decorator Pattern. You can find this design pattern in your dusty copy of the Gang of Four. In case you don’t have a copy handy, here’s the Wikipedia definition.

In object-oriented programming, the decorator pattern (also known as Wrapper,
an alternative naming shared with the Adapter pattern) is a design pattern that
allows behavior to be added to an individual object, either statically or
dynamically, without affecting the behavior of other objects from the same
class.

To model this design pattern in Scala, consider a user service that we want to decorate with caching and logging. We would first define a common interface as an abstract trait. We would then implement the base user service against that trait, and implement all of the decorators against the same trait. We would then compose them together via dependency injection. This would look something like this:

Decorator.scala
trait UserService {

  def getUser(username: String): Option[User]

}

object UserService {

  def apply(): UserService = {
    val dbService      = new DatabaseUserService(new Database())
    val cacheService   = new CacheUserService(new Cache(), dbService)
    new LoggingUserService(new Logger(), cacheService)
  }

}

class DatabaseUserService(db: Database) extends UserService {

  def getUser(username: String): Option[User] = db.get(s"user:$username")

}

class CacheUserService(cache: Cache, service: UserService) extends UserService {

  def getUser(username: String): Option[User] = {
    cache.get(username) orElse service.getUser(username)
  }

}

class LoggingUserService(logger: Logger, service: UserService) extends UserService {

  def getUser(username: String): Option[User] = {
    logger.info(s"Attempting to get user: $username")
    val user = service.getUser(username)

    if (user.isDefined) {
      logger.info(s"Successfully got user: $username")
    } else {
      logger.info(s"No such user: $username")
    }

    user
  }

}

In the decorator pattern, the composition happens at the value level when we inject the object to decorate into the decorator. Scala provides a means for us to do this at the type level through a feature known as Stackable Traits. For the most part, Stackable Traits are exactly what they sound like: traits that allow you to compose methods by stacking them on top of each other.

Stackable methods in Scala must be identified with the keywords abstract override before the def keyword. Yes, you read that right, and it’s not a typo. It seems like a weird use of the abstract keyword, but it is correct. This combination of keywords gives you access to a super reference in your method which points to the next class or trait in the stack.

Modeling our previous implementation of the decorator pattern using Stackable Traits instead would look something like this:

Stackable.scala
trait UserService {

  def getUser(username: String): Option[User]

}

object UserService {

  def apply(): UserService = {
    new DatabaseUserService with RiakDatabaseService
                            with CacheUserService
                            with MemcacheCacheService
                            with LoggingUserService
                            with LogglyLoggerService {}
  }

}

trait DatabaseUserService extends UserService with DatabaseService {

  def getUser(username: String): Option[User] = db.get(s"user:$username")

}

trait CacheUserService extends UserService with CacheService {

  abstract override def getUser(username: String): Option[User] = {
    cache.get(username) orElse super.getUser(username)
  }

}

trait LoggingUserService extends UserService with LoggingService {

  abstract override def getUser(username: String): Option[User] = {
    logger.info(s"Attempting to get user: $username")
    val user = super.getUser(username)

    if (user.isDefined) {
      logger.info(s"Successfully got user: $username")
    } else {
      logger.info(s"No such user: $username")
    }

    user
  }

}

Now we’ll know our decorators are implemented properly if our program type checks. In the example above we’re also leveraging abstract members via the Cake Pattern in lieu of constructor injection.

The stacked traits are executed from right to left, such that in our example the LoggingUserService will run first; its invocation of super.getUser() will pass control to the CacheUserService, whose invocation of super.getUser() will in turn pass control to the DatabaseUserService.
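To make that ordering concrete, here is a minimal, self-contained sketch (with illustrative names not taken from the example above) showing that the rightmost stackable trait runs first:

```scala
trait Greeter {
  def greet: String
}

trait BaseGreeter extends Greeter {
  def greet: String = "hello"
}

trait Shouting extends Greeter {
  // `abstract override` gives us access to `super`, the next trait in the stack
  abstract override def greet: String = super.greet.toUpperCase
}

trait Excited extends Greeter {
  abstract override def greet: String = super.greet + "!"
}

// Excited is rightmost, so it runs first, then Shouting, then BaseGreeter
val greeter = new BaseGreeter with Shouting with Excited {}
```

Here `greeter.greet` yields "HELLO!": Excited appends the exclamation mark after Shouting has upper-cased BaseGreeter’s "hello".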

The Composite Pattern and Monoids

When you hear someone mention the Composite Pattern you might have visions race through your head of the Gang of Four, enterprise development and old neckbeard programmers. Likewise, when you hear someone mention Monoid, visions of Haskell, complicated FP development and hipsters may race through your head. Given such jarringly different first impressions, what can the Composite Pattern possibly have in common with Monoids?

To begin to explore a possible relationship we must first start at definitions.

The Composite Pattern

In software engineering, the composite pattern is a partitioning design
pattern. The composite pattern describes that a group of objects are to be
treated in the same way as a single instance of an object. The intent of a
composite is to “compose” objects into tree structures to represent part-whole
hierarchies. Implementing the composite pattern lets clients treat individual
objects and compositions uniformly.

You can effectively model this in Scala by having an abstract trait as an interface and applying that trait both to a singular representation of an object and to a plural collection of the same object. Using a Raven object as the singular and a RavenFlock object as the plural, and having them both implement the abstract trait Bird, we would lay this out in Scala as follows:

Composite.scala
sealed trait Bird {

  def fly(vector: MovementVector): Bird

  def eat(food: Food): Bird

}

case class Raven(position: Coordinates, belly: Stomach) extends Bird {

  def fly(vector: MovementVector): Raven = {
    this.copy(position = position.move(vector))
  }

  def eat(food: Food): Raven = {
    this.copy(belly = belly.consume(food))
  }

}

case class RavenFlock(birds: List[Raven]) extends Bird {

  def fly(vector: MovementVector): Bird = {
    this.copy(birds = birds.map(_.fly(vector)))
  }

  def eat(food: Food): Bird = {
    this.copy(birds = birds.map(_.eat(food)))
  }

}

This implementation allows us to control entire flocks of ravens through the same interface we would use to control any singular raven. This is pretty useful, and it is the net effect of using the Composite Pattern.
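As a self-contained sketch of that uniformity (with Coordinates, Stomach and friends simplified away to a plain x position, since those types aren’t defined in the post), the same function can drive one raven or a whole flock:

```scala
sealed trait Bird {
  def fly(dx: Int): Bird
}

case class Raven(x: Int) extends Bird {
  def fly(dx: Int): Raven = copy(x = x + dx)
}

case class RavenFlock(birds: List[Raven]) extends Bird {
  // Flying a flock flies every raven in it
  def fly(dx: Int): RavenFlock = copy(birds = birds.map(_.fly(dx)))
}

// One function, written against the Bird interface, handles both cases
def migrate(bird: Bird): Bird = bird.fly(10)
```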

Monoid

In abstract algebra, a branch of mathematics, a monoid is an algebraic structure
with a single associative binary operation and an identity element. Monoids are
studied in semigroup theory as they are naturally semigroups with identity.
Monoids occur in several branches of mathematics; for instance, they can be
regarded as categories with a single object. Thus, they capture the idea of
function composition within a set.

So in essence, a Monoid is just an interface with one abstract value and one abstract method. The abstract method for a Monoid is the append operation, also referred to as mappend or simply aliased as the + operator. The abstract value for a Monoid is the identity value: the value you can append to any value and always get the original value back unmodified. In terms of integers, 0 is the identity value under addition because adding it to any number produces that same number. In terms of collections, the identity value is typically the empty collection because appending an empty collection to a collection produces the same collection unmodified.
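A quick sketch of both examples, integers under addition and lists under concatenation:

```scala
// Integers under addition: append is +, identity is 0
def addAppend(a: Int, b: Int): Int = a + b
val addIdentity: Int = 0

// Lists under concatenation: append is ++, identity is the empty list
def listAppend[A](a: List[A], b: List[A]): List[A] = a ++ b
def listIdentity[A]: List[A] = List()
```

Appending the identity on either side leaves the value unchanged, and both append operations are associative.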

So we can model this out in Scala with Raven objects and RavenFlock objects by adding an append (+) method that takes Raven and RavenFlock objects, joins them together and produces a new flock containing both of them.

Monoid.scala
sealed trait Bird {

  def +(other: Bird): Bird

}

case class Raven(position: Coordinates, belly: Stomach) extends Bird {

  def +(other: Bird): Bird = other match {
    case r: Raven      => RavenFlock(List(this, r))
    case RavenFlock(b) => RavenFlock(this +: b)
  }

}

case class RavenFlock(birds: List[Raven]) extends Bird {

  def +(other: Bird): Bird = other match {
    case r: Raven      => this.copy(birds = birds :+ r)
    case RavenFlock(b) => this.copy(birds = birds ++ b)
  }

}

This allows us to take either a singular Raven or a plural RavenFlock, add them together, and get back a composite of the two. The identity value in this case is an empty RavenFlock: RavenFlock(List()).
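We can spot-check the monoid laws with a simplified, self-contained version of the flock (Raven reduced to just a name, which is not part of the original model):

```scala
case class Raven(name: String)

case class RavenFlock(birds: List[Raven]) {
  def +(other: RavenFlock): RavenFlock = RavenFlock(birds ++ other.birds)
}

val empty = RavenFlock(List()) // the identity value
val a     = RavenFlock(List(Raven("huginn")))
val b     = RavenFlock(List(Raven("muninn")))
val c     = RavenFlock(List(Raven("corvus")))
```

Appending the empty flock on either side changes nothing, and + is associative, which is exactly what the monoid laws demand.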

Bringing it all together

An interesting observation I’ve made about the Composite Pattern and Monoids is that almost every time you have one, there’s almost always an opportunity for the other. Meaning every time you have a Monoid, you can almost always implement the Composite Pattern on the same type effectively for free, and vice versa.

Bringing our implementations together in Scala would look like this:

United.scala
sealed trait Bird {

  def fly(vector: MovementVector): Bird

  def eat(food: Food): Bird

  def +(other: Bird): Bird

}

case class Raven(position: Coordinates, belly: Stomach) extends Bird {

  def fly(vector: MovementVector): Raven = {
    this.copy(position = position.move(vector))
  }

  def eat(food: Food): Raven = {
    this.copy(belly = belly.consume(food))
  }

  def +(other: Bird): Bird = other match {
    case r: Raven      => RavenFlock(List(this, r))
    case RavenFlock(b) => RavenFlock(this +: b)
  }

}

case class RavenFlock(birds: List[Raven]) extends Bird {

  def fly(vector: MovementVector): Bird = {
    this.copy(birds = birds.map(_.fly(vector)))
  }

  def eat(food: Food): Bird = {
    this.copy(birds = birds.map(_.eat(food)))
  }

  def +(other: Bird): Bird = other match {
    case r: Raven      => this.copy(birds = birds :+ r)
    case RavenFlock(b) => this.copy(birds = birds ++ b)
  }

}

Now our Bird objects are both Monoids and implementations of the Composite Pattern. We can join various combinations of them together to produce aggregate versions and we can operate on the plurals through the same interface we would the singulars.

Other

This seems slightly wrong though. Should we ever allow a developer in our system to create, use or have an empty RavenFlock? It seems like an illegal state that should be made unrepresentable. Luckily for us, we can easily make the state of emptiness illegal by using Semigroups instead of Monoids; I’ll leave this as an exercise for the reader in the name of not derailing the intent of this post. In cases where it is viable to have empty collections floating around your system, Monoid is more than suitable.

Functors: Values With Context

Functors are a really important concept to understand if you plan on doing any programming in a strongly typed functional programming language. The thing is, they appear to be such a simple thing at first glance that most novices tend to dismiss them as fairly insignificant. After all, how useful can an interface with only one method be?

Other than implementing map (or fmap, depending on the language), a proper functor should also obey the two functor laws. These laws are the functor identity and functor composition laws, as defined below:

fmap id = id
fmap (g . h) = (fmap g) . (fmap h)

The combination of these laws ensures that your implementation of (f)map only changes the value without altering its context. So there it is, I dropped the magic “C” word: context. This is exactly what a functor is, a value with context.
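These laws can be spot-checked in Scala against Option (a sketch, not a proof; a few passing cases don’t establish the laws in general):

```scala
val g = (x: Int) => x + 1
val h = (x: Int) => x * 2

val some: Option[Int] = Some(21)
val none: Option[Int] = None

// Identity law: mapping the identity function changes nothing.
// Composition law: mapping g . h equals mapping h, then mapping g.
```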

A functor is a value itself that contains an inner value with an associated context. An example of this is the Maybe functor in Haskell. An instance of Maybe is a value that wraps an inner value that may or may not exist. Whether the inner value exists, and how (f)map behaves when invoked on a Maybe instance, is informed by the context.

This description of context has the feeling of a stateful thing, and that feeling is correct. Most typically, the state of a functor’s context is specified through its type. In the case of the Maybe functor, its two possible contexts are represented by the constructors Just a and Nothing, representing existence and non-existence respectively. It’s defined as follows:

data  Maybe a  =  Nothing | Just a

The function you supply to (f)map is said to operate on the inner wrapped value within the contextual bounds of the functor. In the case of Maybe, (f)map for Just a applies the supplied operation to the inner wrapped value and returns the result of the operation in a Just b, while (f)map for Nothing unconditionally does nothing and returns Nothing. The Functor instance is defined as follows:

instance Functor Maybe where
    fmap _ Nothing  = Nothing
    fmap f (Just a) = Just (f a)

Here’s a simple example of using Maybe to safely perform an age-based calculation on an age we may or may not have.

age   = Just 24
adult = fmap (\x -> x >= 18) age

Getting at the value through side channels can be seen as side-stepping contextual bounds, and can be a fairly dangerous thing to do.

An example of this danger can be found in Scala’s Option functor. Option is basically Scala’s version of Haskell’s Maybe. The dangerous part of the Option functor is the #get() method. Calling #get() on an Option returns the inner wrapped value if it exists; if not, an exception is thrown. The throwing of this exception is what I’m referring to as the dangerous behaviour.

The result of calling #get() on a None.

val nothing: Option[Int] = None
val value   = nothing.get // <-- Unconditionally throws an exception

The result of calling #get() on an Option.

def double(optional: Option[Int]): Int = {
  optional.get * 2 // <-- May or may not throw an exception
}

We can get safe access to the inner wrapped value by operating on it within the contextual bounds of the functor, eliminating all the boilerplate existence checking and exception handling we’d otherwise need, simply by depending on the context to do the right thing.

The safe way to double an Option[Int] with (f)map:

def double(optional: Option[Int]): Option[Int] = {
  optional map (_ * 2) // <-- Never throws and only operates on `Some`
}

In short, functors are like safe fluffy pillows for values, while non-contextual side-channel access is like playing Russian roulette with yourself.

Reduce vs Fold in Scala

For better or worse, in functional programming there exist fuzzy notions of what it means to fold over some input versus what it means to reduce over some input. The concrete implementations vary from language to language and even between libraries. Yet the underlying fundamental concept tends to hold true: you are taking some collection of N values and performing aggregation operations on it such that the end result is typically some value of size <= N.

Even in Haskell the fold family of functions is overloaded in the sense that it is capable of doing both what Scala calls reducing and folding. For example, take the following Haskell function that takes a list of integers and produces a sum value in terms of folding:

sumList :: [Int] -> Int
sumList = foldl (+) 0

In Haskell one might also implement a filterList combinator in terms of folding. The definition would resemble the following example:

filterList :: (a -> Bool) -> [a] -> [a]
filterList p = foldl (\ m n -> if (p n) then m ++ [n] else m) []

While the fold-derived operations sumList and filterList both take a value of N and perform aggregate operations on it such that the end result is a value of <= N, their type signatures tell a different story.

sumList takes a list of integers and produces a single integer, while filterList takes a predicate and a list and produces a subset of that list. Putting their type signatures next to each other highlights this important difference.

sumList :: [Int] -> Int
filterList :: (a -> Bool) -> [a] -> [a]

This is all possible because foldl’s type signature is quite generic: the accumulator type is independent of the element type, and all it requires is that the final value has the same type as the initial value. Its type signature is as follows:

foldl :: (a -> b -> a) -> a -> [b] -> a

Now this all becomes important when we take the type signatures of Scala’s implementations of folding and reducing into consideration. Both are as follows:

def foldLeft [B] (z: B)(f: (B, A) => B): B
def reduceLeft [B >: A] (f: (B, A) => B): B

A quick glance reveals some of the following differences:

  • foldLeft has a parameterized type of B while reduceLeft has a parameterized type of B >: A
  • foldLeft is a curried method whose first parameter group takes the fold’s initial value, while reduceLeft is not curried and doesn’t take an initial value at all
  • In reduceLeft the accumulator and the final return type (B) must be related to the item type (A), while in foldLeft B is unconstrained

So if reduceLeft doesn’t take an initial value as an argument, then what is its initial value? The initial value of reduceLeft is the first element of the collection. So if you have a List[Int] and you reduceLeft over it, you will only ever be able to produce an Int (or a supertype of Int) from that operation. foldLeft, however, is capable of producing any type you can pass in as the initial value.
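A small sketch of that difference:

```scala
val xs = List(1, 2, 3)

// reduceLeft is seeded by the first element, so the result stays an Int
val total = xs.reduceLeft(_ + _)

// foldLeft is seeded by an explicit initial value, so the result type can
// differ from the element type -- here a String
val joined = xs.foldLeft("")((acc, n) => acc + n.toString)
```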

So we can easily convert our sumList implementation from Haskell to Scala using reduceLeft like so:

def sum(inputs: List[Int]): Int = inputs.reduceLeft (_+_)

It’s impossible for us to implement filterList in terms of reduceLeft. However, we can easily implement filterList in terms of foldLeft like so:

def filter(inputs: List[Int], p: (Int) => Boolean) = {
  inputs.foldLeft(List[Int]()) { (m, n) =>
    if (p(n)) m :+ n else m
  }
}

Some interesting things about what can be represented in terms of reduceLeft and foldLeft relate to whether the element type of the iterable is a monoid or a semigroup. If the element type is a monoid, meaning it has a zero value and an append operation, you can implement reduceLeft in terms of foldLeft by seeding the fold with the zero value. But if it is only a semigroup, meaning it has an append operation but no zero value, there is no natural seed and some reduceLeft operations cannot be expressed directly as a foldLeft.
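Sketching the monoid case with Int addition, whose zero value is 0: the reduceLeft sum can be rewritten as a foldLeft seeded with that zero, which as a bonus also handles the empty list that would make reduceLeft throw:

```scala
def sumReduce(xs: List[Int]): Int = xs.reduceLeft(_ + _) // throws on List()
def sumFold(xs: List[Int]): Int   = xs.foldLeft(0)(_ + _) // returns 0 on List()
```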

Exhaustivity Checking

One of the really cool things about Scala that helps you write more reliable code is a nifty compiler feature called exhaustivity checking. In a nutshell, it’s the compiler’s way of letting you know if you forgot to handle any possible case when you’re doing pattern matching.

Take this sealed trait Operation and its concrete implementations as an example:

sealed trait Operation
case class Addition(left: Int, right: Int) extends Operation
case class Subtraction(left: Int, right: Int) extends Operation
case class Division(left: Int, right: Int) extends Operation
case class Multiplication(left: Int, right: Int) extends Operation

Making Operation a sealed trait means that any class that wants to extend it must be declared within the same file, preventing implementations of Operation from existing anywhere else.

So the moment the compiler does a first pass over the file containing the sealed trait, it knows every possible implementation of that sealed trait. In this case the only possible implementations of Operation are Addition, Subtraction, Division, and Multiplication.

Just like most other abstract and case class hierarchies in Scala, we can perform pattern matching on Operation as expected:

val operation: Operation = Addition(1, 3)
val result = operation match {
  case Addition(left, right)       => left + right
  case Subtraction(left, right)    => left - right
  case Division(left, right)       => left / right
  case Multiplication(left, right) => left * right
}

Now where things start to get interesting is when we forget to implement the case clause for one implementation of Operation in the pattern matching statement:

val operation: Operation = Addition(1, 3)
val result = operation match {
  case Addition(left, right)       => left + right
  case Division(left, right)       => left / right
  case Multiplication(left, right) => left * right
}

The previous code produces the following compiler warning:

adt.scala:11: warning: match may not be exhaustive.
It would fail on the following input: Subtraction(_, _)
    val result    = operation match {

How cool is that! The compiler is letting us know that we forgot to handle a specific implementation of Operation in the pattern match, and it’s telling us the implications!

Bind vs Curry in JS

As of ECMAScript 5 (JavaScript 1.8.5), JavaScript implements the bind() function, which can also be found in underscore.js as _.bind(). It allows you to bind a function to a this context as well as bind some predefined arguments. Binding predefined arguments is known in functional languages like Haskell as partial function application, and is closely related to currying.

So what’s wrong with bind

The problem with bind is that it has two majorly different use cases: context binding and partial function application. This makes reading code that contains bind() calls slightly less clear, because just reading the name of the function isn’t enough to tell you the intended goal of the operation. It can be used for context binding, partial function application or both.

Sounds like a Swiss army knife

Swiss army knife functions are something to be wary of when you intend to write concise and readable code. Looking at the function signature for bind, it seems like it was originally intended to be used just for context binding, with partial function application almost an afterthought. For this reason, when you use the bind function strictly for context binding it creates not only intuitive but also cleaner looking code. You’ll see what I mean when you try to use the bind function strictly for currying, as it creates the opposite effect.

/**
 * `bind` used strictly for context binding
 */
var base = {
    greeting: "Welcome",
    name:     "Joseph",
    greeter:  function() {
        console.log(this.greeting + " " + this.name);
    }
}

var asyncOp = function(callback) {
    // do async work then invoke the callback
    callback();
}

// nice clean and intuitive use of `bind`
asyncOp(_.bind(base.greeter, base));

/**
 * `bind` used strictly for currying
 */
var lines   = function(text)       { return text.split("\n"); },
    unlines = function(lines)      { return lines.join("\n"); },
    drop    = function(count, arr) { return arr.slice(count); };

var stripFirstLine = _.compose(
    unlines,
    _.bind(drop, undefined, 1), // note the `undefined` 
    lines
);

var text = "line1\nline2\nline3";
console.log(stripFirstLine(text));

We can do better than that

So the undefined parameter in the previous example is a staple WTF code sighting in JavaScript. We can avoid having to pass undefined, while making our code more concise, by using a custom implementation of curry. You can see how the function we composed becomes more concise in the following example.

/**
 * Custom curry implementation. There are probably cleaner ways to
 * implement `curry()` but this is just the one I use and it works
 * for me.
 */
// I like to create it as an underscore mixin
_.mixin({
    curry: function(func) {
        var applied = Array.prototype.slice.call(arguments, 1);
        return function() {
            var args = applied.concat(Array.prototype.slice.call(arguments));
            return func.apply(this, args);
        };
    }
});

var lines   = function(text)       { return text.split("\n"); },
    unlines = function(lines)      { return lines.join("\n"); },
    drop    = function(count, arr) { return arr.slice(count); };

var stripFirstLine = _.compose(unlines, _.curry(drop, 1), lines);

Conclusion

  • Use bind strictly to bind functions to contexts
  • Use an implementation of curry instead of bind to create partially applied functions.

The Pass Pattern

JavaScript is a programming language that has most of the killer functional programming features you can possibly bake into a C-syntax-styled language, with a hint of prototypal inheritance from the Self programming language. This presents you with an interesting multi-paradigm programming environment that, through better or worse practices, lets you choose which style to program in.

When I write JavaScript my inner geek gets all giddy, because it’s always the perfect excuse for me to exercise functional programming practices in a widely accepted mainstream language. I love functional programming in JavaScript so much that the library I get the most mileage out of, by a landslide, is underscore.js, a functional programming library for JavaScript. Even setting aside the fact that I use underscore.js both in client side JavaScript and server side in node.js, it still remains the library that I use the most in each environment independently.

Patterns and abstraction

In functional programming, whenever you find yourself typing a similar code pattern multiple times you should consider abstracting it. This is actually true in most programming languages, but it is something of a religious practice in functional programming. Functional languages like JavaScript have the advantage of higher-order functions and lexical closures, allowing you to abstract certain patterns better and more concisely than is possible in non-functional languages.

Error handling in node.js

As an awesomely consistent and well implemented set of APIs, all API calls in node.js that perform any kind of IO take a function callback to run on IO completion. This is a fundamental practice at the core of the node.js asynchronous evented model. A familiar sight to anyone who knows node is the pattern of arguments passed to the callback. Typically, asynchronous callbacks take one or two arguments: (error) or (error, value). The most important part for this article is that error is always the first argument passed to the callback.

The error passed as the first argument to any asynchronous callback represents any error that might have occurred during the asynchronous operation. error will be falsy (typically null) if no error was raised and non-falsy otherwise. In node.js you often have nested asynchronous requests, and a common approach to error handling is to pass errors back up the chain to be handled by a layer that knows what to do with them. This is demonstrated in the pseudo-code example below.

var express = require("express"),
    redis   = require("redis").createClient(),
    app     = express.createServer();

function getLatestNews(next) {
    redis.lrange("news", 0, 25, function(error, value) {
        if (error) {
            next("failed fetching news");
        } else {
            next(false, value);
        }
    });
}

app.get("/", function(req, res){
    getLatestNews(function(error, news) {
        if (error) {
            globalErrorHandler(error);
        } else {
            res.send(news);
        }
    });
});

app.listen(80);

There it is: the error-passing pattern. If you write node.js code you can probably find this pattern throughout your code base. If you can’t, it means you’re probably not handling errors, which is a big no-go.

The pattern can be written a couple of different ways

/**
 * strictly nested approach
 */
if (error) {
    // pass failure
    next(error);
} else {
    // pass success
    next(false);
}

/**
 * refactored to avoid deeply nested code
 */
if (error) {
    // pass failure and return
    next(error);
    return;
}

// do work
// pass success
next(false);

So the money-making question is: how do we factor out this repetitive pattern, being the ace functional programmers that we like to think we are?

Here’s the solution I came up with. I call it the pass pattern, but really it’s more of an abstraction.

var express = require("express"),
    redis   = require("redis").createClient(),
    app     = express.createServer();

function pass(next, after) {
    return function(error, value) {
        if (error) {
            after(error, value);
        } else {
            next(value);
        }
    };
}

function getLatestNews(next) {
    redis.lrange("news", 0, 25, pass(function(value) {
        next(false, value);
    }, next));
}

app.get("/", function(req, res){
    getLatestNews(pass(function(news) {
        res.send(news);
    }, globalErrorHandler));
});

app.listen(80);

So this requires you to set up a function called pass which acts as a closure generator for augmenting the asynchronous callback. The error checking gets dropped from the asynchronous callback, yet the error checking still takes place. The asynchronous callback itself is only run when no error is present, and as a byproduct we can safely drop the error argument from the callback’s argument list. Just as important, the after callback is automatically forwarded, or “passed”, the error if one is present.

The above example of pass is just introductory code. The full version of the pass abstraction I use in my code is as follows.

// I use underscore.js so much that I add it as a mixin to underscore
_.mixin({
    pass: function(next, after, customError, context) {
        return function(error) {
            var args = Array.prototype.slice.call(arguments, 1);
            if (error) {
                args.unshift(customError || error);
                after.apply(context, args);
            } else {
                next.apply(context, args);
            }
        }
    }
});

This version allows me to override error messages, so if I get back a DB failure error from the database while calling it from the registration form, I can override the message with something more meaningful to the user, and that will get passed along instead. My version also allows context binding so I can avoid some var self = this; annoyances.

Recommended Reading

Parallel Tests

Some people know them as A/B tests, and I think that’s really a shame. The name A/B testing implies your intent is to use the parallel testing suite of your choice to gather performance analytics, usually on content differences and variances in the UI. There is a huge value add in using A/B testing just for collecting performance analytics, so I’m not claiming that’s a misuse of parallel test suite implementations. However, the value adds of parallel testing don’t end with performance testing: it can be used to enhance your product release process, your QA process and even your development process.

Improving the release process

I've worked on web products where we've done production code releases at midnight once a week, and on ones that have one or more releases a day during peak hours. Depending on your past experiences, you may think that releasing during peak hours that often is just asking for the fail whale to strike on a daily basis. However, my experience with both was that the midnight releases, during hours of really light load, were the most chaotic and error-prone by an order of magnitude. So coming from that background, wouldn't releasing every day during peak hours be one of the most counterintuitive things ever?

There are no silver bullets, and the best way to slay a beast is with lots of lead bullets. I could talk about the value of hiring competent developers, using continuous integration and various release strategies to improve the release process. However, the lead bullet of this blog post is parallel testing.

When you're developing a new feature, you can start by wrapping both the new and old implementations in a parallel test, setting the exposure of the old implementation to 100% and the new implementation to 0%, with a manual override on your test user id so that your test account is presented with the new implementation. The feature can then be committed up the chain, and even to prod, with the same 100% to 0% ratio, and public-facing end users will not have access to it. At this point the only people with access to the new feature are typically developers, Q/A and various business owners inside the company, all of whom can test the new functionality in a production environment.

Once the new functionality has been pushed to production, sufficiently tested internally and vetted by all product owners, the release of the product is simply a toggle of the percentages in the parallel test suite from 100% old / 0% new to 0% old / 100% new. The parallel test wrapper can even be left in place for an indefinite amount of time, and you can roll the feature back and push it back out, all from a GUI/WebUI.
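As a minimal sketch of what that wrapper could look like: the ParallelTest shape, the hashCode bucketing and the exposure API here are all made up for illustration, not taken from any particular testing suite.

```javascript
// Illustrative parallel-test wrapper: picks old vs. new implementation
// per user, with a manual override for internal test accounts.
function ParallelTest(name, oldImpl, newImpl) {
    this.name = name;
    this.oldImpl = oldImpl;
    this.newImpl = newImpl;
    this.newExposure = 0;   // percent of users who see the new implementation
    this.overrides = {};    // userId -> true forces the new implementation
}

ParallelTest.prototype.run = function(userId) {
    if (this.overrides[userId]) return this.newImpl();
    // Hash the user id into [0, 100) so each user sees a stable variant
    var bucket = Math.abs(hashCode(userId)) % 100;
    return bucket < this.newExposure ? this.newImpl() : this.oldImpl();
};

function hashCode(str) {
    var h = 0;
    for (var i = 0; i < str.length; i++) {
        h = (h * 31 + str.charCodeAt(i)) | 0;
    }
    return h;
}

var test = new ParallelTest("new-signup",
    function() { return "old signup page"; },
    function() { return "new signup page"; });

test.overrides["dev-account"] = true;  // developers always see the new page
test.run("dev-account");               // -> "new signup page"
test.run("some-user");                 // -> "old signup page" (exposure is 0)
test.newExposure = 100;                // the "release": 0/100 flips to 100/0
test.run("some-user");                 // -> "new signup page"
```

The point of the sketch is the last three lines: shipping the feature is a data change to newExposure, not a code deploy, so rolling back is equally cheap.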

Improving the Q/A process

The Q/A process at the places I've worked has always been tightly coupled with the release process, so I've already inherently covered some of the ways parallel test suites can improve the Q/A process by talking about the release process; i.e., being able to test new features in production, in a manner not publicly visible to users, is a huge improvement to the Q/A process alone if you couldn't already do that before.

Another way parallel tests can improve the Q/A process is by allowing you to do selective releases. It's not uncommon for certain sites to have select users who are highly active, vocal or just in good contact with the business. You can do selective releases where only employees and those selected users get initial "exclusive" test access to a feature before it's released with 100% exposure. This is an easy and almost free way of crowd-sourcing part of your Q/A department to users you trust and have past relationships with.

Improve the development process

It first occurred to me that parallel tests can be used to improve the development process when I encountered an article by John Carmack about parallel implementations. Basically, whenever he wants to test an alternative implementation of a feature, instead of refactoring the old implementation in place or scrapping it, he wraps the old and new implementations in a parallel test that he can toggle in real time from a console.

This is great when you want to run benchmarks against alternate implementations, because there's minimal impact on the rest of the code, and if the new implementation turns out to be worse you can simply throw out the parallel test and restore the original implementation as it was, without having to dig through branches, tags or commits in version control.
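A hedged sketch of that setup: the sort functions and the toggle flag below are invented for illustration, the point is only that both implementations coexist behind one entry point and a runtime flag.

```javascript
// Carmack-style parallel implementations: both versions stay in the
// codebase and a console-style flag switches between them at runtime.
var useFastSort = false;  // flip from a debug console to compare implementations

function sortScoresOld(scores) {
    // original implementation: copy, then the built-in comparator sort
    return scores.slice().sort(function(a, b) { return a - b; });
}

function sortScoresNew(scores) {
    // experimental implementation (a stand-in insertion sort)
    var out = scores.slice();
    for (var i = 1; i < out.length; i++) {
        var v = out[i], j = i - 1;
        while (j >= 0 && out[j] > v) { out[j + 1] = out[j]; j--; }
        out[j + 1] = v;
    }
    return out;
}

// The single entry point the rest of the code calls
function sortScores(scores) {
    return useFastSort ? sortScoresNew(scores) : sortScoresOld(scores);
}

sortScores([3, 1, 2]);  // -> [1, 2, 3] via the old path
useFastSort = true;
sortScores([3, 1, 2]);  // -> [1, 2, 3] via the new path
```

Because callers only ever see sortScores, benchmarking the two paths or discarding the loser is a local change, which matches the "no digging through version control" benefit above.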

TL;DR

  • Parallel tests for releases
    • Enable/Disable features from a control panel instead of only source control
    • Incremental releases - bucket testing
  • Parallel tests for Q/A
    • Selective releases - crowd sourcing Q/A
    • Private beta tests in production before public release
  • Parallel tests for the development environment
    • Enables you to develop alternative implementations and toggle back and forth between them

See Also

Initial Blog Entry

Just checking this out.

   var createNinja = function()
   {
      // Private closure state: secret can read and update idk,
      // but nothing outside createNinja can touch it directly
      var idk = 8055;
      var secret = function()
      {
          return idk *= 1337;
      };

      return {
          watchANinjaGetDown: function()
          {
              // Guard against environments that have no console at all
              if (typeof console !== "undefined" && console.log) console.log("go ninja go ninja go!");
              return secret();
          }
      };
   };