javascript, sequence-generators

Difference between ES6 Generators and Array of Functions


When reading JavaScript blogs and articles I see a lot of interest in ES6 generators, but I fail to understand how they differ in essence from a sequence built with an array of functions. For example, the factory below takes an array of function steps and yields between steps.

function fakeGen(funcList) {
    var i = 0, context;
    return function next() {
        if (i < funcList.length) {
            return {value: funcList[i++](context)};
        } else {
            return {done: true};
        }
    };
}

What benefit am I missing and how do transpilers implement the magic in ES6?


Solution

  • @tophallen is right. You can implement the same functionality entirely in ES3/ES5. But not the same syntax. Let's take an example which will hopefully explain why the syntax matters.

    One of the main applications of ES6 generators is asynchronous operations. There are several runners designed to wrap generators which produce a sequence of Promises. When a wrapped generator yields a promise, these runners wait until that Promise is resolved or rejected, and then resume the generator, passing the result back or throwing an exception at the yield point using iterator.throw().
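Such a runner can be sketched in a few lines. This is only an illustration of the idea described above, not co's actual implementation, and the `run` name is made up:

```javascript
// Minimal sketch of a generator runner for promises.
// Resumes the generator with each resolved value, and injects
// rejections back at the yield point via iterator.throw().
function run(genFn) {
  return new Promise(function (resolve, reject) {
    var it = genFn();
    function step(method, arg) {
      var result;
      try {
        result = it[method](arg); // resume the generator
      } catch (err) {
        return reject(err); // an error escaped the generator
      }
      if (result.done) return resolve(result.value);
      // Wait for the yielded promise, then resume with its value,
      // or throw the rejection into the generator.
      Promise.resolve(result.value).then(
        function (value) { step('next', value); },
        function (err) { step('throw', err); }
      );
    }
    step('next', undefined);
  });
}
```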

    Some runners, like tj/co, additionally allow you to yield arrays of promises, passing back arrays of values.

    And here is the example. This function performs two url requests in parallel, then parses their results as JSON, combines them somehow, sends combined data to other url, and returns the (promise of an) answer:

    var createSmth = co.wrap(function*(id) {
      var results = yield [
        request.get('http://some.url/' + id),
        request.get('http://other.url/' + id)
      ];
      var jsons = results.map(JSON.parse),
          entity = { x: jsons[0].meta, y: jsons[1].data };
      var answer = yield request.post('http://third.url/' + id, JSON.stringify(entity));
      return { entity: entity, answer: JSON.parse(answer) };
    });
    
    createSmth('123').then(consumeResult).catch(handleError);
    

    Notice that this code contains almost no boilerplate: nearly every line performs one of the actions from the description above.

    Also notice the lack of error handling code. All errors, both synchronous (like JSON parsing errors) and asynchronous (like failed url requests) are handled automatically and will reject the resulting promise.

    If you need to recover from some errors (i.e. prevent them from rejecting the resulting Promise), or make them more specific, then you can surround any block of code inside a generator with a try..catch, and both sync and async errors will end up in the catch block.
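The mechanics of this can be shown without any runner at all, by driving the iterator by hand the way a runner would (a self-contained sketch; the names are illustrative):

```javascript
// One try..catch inside the generator handles both kinds of errors.
function* safeParse() {
  try {
    var body = yield 'pending-request'; // a runner would yield a promise here
    return { ok: true, data: JSON.parse(body) }; // JSON.parse may throw (sync)
  } catch (err) {
    return { ok: false, reason: String(err) };   // recover instead of rejecting
  }
}

// Async failure: the runner injects the error at the yield point.
var it = safeParse();
it.next(); // advance to the yield
var asyncResult = it.throw(new Error('request failed')).value;
// asyncResult.ok === false

// Sync failure: resuming with malformed input makes JSON.parse throw.
var it2 = safeParse();
it2.next();
var syncResult = it2.next('not json').value;
// syncResult.ok === false
```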

    The same can certainly be implemented using an array of functions and some helper library like async:

    var createSmth = function(id, cb) {
      var entity;
      // async.waterfall passes each step's results to the next step
      // (async.series would not forward them between tasks)
      async.waterfall([
        function(cb) {
          async.parallel([
            function(cb){ request.get('http://some.url/' + id, cb) },
            function(cb){ request.get('http://other.url/' + id, cb) }
          ], cb);
        },
        function(results, cb) {
          var jsons = results.map(JSON.parse);
          entity = { x: jsons[0].meta, y: jsons[1].data };
          request.post('http://third.url/' + id, JSON.stringify(entity), cb);
        },
        function(answer, cb) {
          cb(null, { entity: entity, answer: JSON.parse(answer) });
        }
      ], cb);
    };
    
    createSmth('123', function(err, answer) {
      if (err)
        return handleError(err);
      consumeResult(answer);
    });
    

    But that is really ugly. The better idea is to use promises:

    var createSmth = function(id) {
      var entity;
      return Promise.all([
        request.get('http://some.url/' + id),
        request.get('http://other.url/' + id)
      ])
      .then(function(results) {
        var jsons = results.map(JSON.parse);
        entity = { x: jsons[0].meta, y: jsons[1].data };
        return request.post('http://third.url/' + id, JSON.stringify(entity));
      })
      .then(function(answer) {
        return { entity: entity, answer: JSON.parse(answer) };
      });
    };
    
    createSmth('123').then(consumeResult).catch(handleError);
    

    Shorter and cleaner, but still more code than the version that uses generators, and still some boilerplate. Notice the .then(function(...) { lines and the var entity declaration: they do not perform any meaningful work.

    Less boilerplate (i.e. generators) makes your code easier to understand and modify, and much more fun to write. Those are some of the most important characteristics of any code. That's why many people, especially those used to similar concepts in other languages, are so ecstatic about generators :)

    Regarding your second question: transpilers do their transpiling magic using closures, switch statements and state objects. For example, this function:

    function* f() {
      var a = yield 'x';
      var b = yield 'y';
    }
    

    will be transformed by regenerator into this one (the output of Traceur looks very similar):

    var f = regeneratorRuntime.mark(function f() {
      var a, b;
      return regeneratorRuntime.wrap(function f$(context$1$0) {
        while (1) switch (context$1$0.prev = context$1$0.next) {
          case 0:
            context$1$0.next = 2;
            return "x";
          case 2:
            a = context$1$0.sent;
            context$1$0.next = 5;
            return "y";
          case 5:
            b = context$1$0.sent;
          case 6:
          case "end":
            return context$1$0.stop();
        }
      }, f, this);
    });
    

    As you can see, there is nothing magical here; the resulting ES5 is rather trivial. The real magic is in the code that generates that ES5, i.e. in the transpilers themselves, because they need to support all possible edge cases, and preferably do so in a way that produces performant output.
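Driving f by hand shows exactly what the switch-based state machine has to reproduce: each next(v) both resumes the generator and delivers v as the value of the pending yield, which is what context$1$0.sent captures in the transpiled output:

```javascript
function* f() {
  var a = yield 'x';
  var b = yield 'y';
}

var it = f();
it.next();    // { value: 'x', done: false } — runs to the first yield (case 0)
it.next('A'); // a = 'A' (the .sent value), yields 'y': { value: 'y', done: false }
it.next('B'); // b = 'B', generator finishes: { value: undefined, done: true }
```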

    UPD: here is an interesting article, dating back to 2000, that describes an implementation of pseudo-coroutines in plain C :) The technique that Regenerator and other ES6-to-ES5 transpilers use to capture a generator's state is very similar.