Saturday, 27 June 2015

Completely Routed

Routing Options in Shelf

This post is a journey through some of the main routing options in shelf. It is intended as a useful resource to help developers choose how to implement their routes for a particular task.

Routeception

backlog.io uses mojito (and therefore shelf_rest, shelf_route and shelf_bind) for routing. As such it provides a real world application that helps me drive how I evolve those packages.
However, backlog.io actually owes its existence to these packages. That’s because, as I was creating and evolving an increasing number of Dart packages, each of which lives in its own git repo with its own issue tracker, I ran into an issue (if you’ll excuse the pun). There was no easy way for me to prioritise what I needed to work on at any point in time.
What I needed, of course, was a way to create a backlog across these repos’ issues. To make things more interesting, some of the repos are in BitBucket and some in GitHub.
And so I created backlog.io to solve this exact use case. But, this post is about routing, so I’ll leave it for another post to talk about backlog.io itself.

Simplicity, Usability, Flexibility, Extensibility and anything else that ends in ility

When I first created shelf_route, I wanted to create a routing API that was:
  • easy to use
  • familiar
  • discoverable
  • a native shelf component
  • magic free
  • and extensible
By discoverable I mean two things: you can easily discover the features of the routing API by leveraging Dart’s awesome tooling for autocomplete, and if you find the root of the route hierarchy you can easily navigate through the entire set of routes.
Coming from the Java Spring world with lots of classpath scanning of routing components, I was disappointed at how difficult it could sometimes be to figure out what the routing hierarchy actually was.
As a native shelf component, I wanted it to deal directly with shelf’s classes like Request, Response, Handler and Middleware. This makes it very easy for people familiar with shelf to pick it up and be productive immediately.
Also, the absence of magic (mirrors etc) means that it is pretty easy to debug and reason about.
The downside, of course, is that this comes at the cost of a fair bit of boilerplate. So whilst shelf_route was kept magic free, it was always designed to allow other packages to reduce boilerplate by adding some more magic.

Just add Magic

Of course, reducing boilerplate is a good thing. That is essentially the goal of shelf_rest. It introduces some annotations plus conventions and then uses mirrors to significantly reduce boilerplate.
But enough talk. Let’s see how all this looks in code.

Routing Options

In this section I’ll walk through some of the different options you have for creating routes in shelf. To ground it in reality, I will base the discussion on a subset of the actual routes from backlog.io.
On startup, mojito prints out the routes. This is what that looks like for the subset of backlog.io routes I will talk about:
GET     ->  /ui
GET     ->  /api/v1/backlogs{?creator}
POST    ->  /api/v1/backlogs
GET     ->  /api/v1/backlogs/{backlogId}
PUT     ->  /api/v1/backlogs/{backlogId}
GET     ->  /api/v1/backlogs/{backlogId}/issues
PUT     ->  /api/v1/backlogs/{backlogId}/issues/{issueHash}
POST    ->  /api/v1/backlogs/{backlogId}/issues/bulk
GET     ->  /api/v1/users/oauth/github/requestToken
GET     ->  /api/v1/users/oauth/github/authToken
First up, you will notice that the routes for the UI and API are separate. This makes it easy to apply different middleware to them. For example, the UI resources are all public (accessed anonymously) and highly cacheable, whereas the API tends to be mostly private and often non-cacheable (at least not in a shared cache).

Shelf 101

For anyone not familiar with shelf, let me give a quick intro. To handle a request in shelf you need to create what is called (ironically) a Handler. A simple Handler looks like
var helloWorldHandler = (Request request) => new Response.ok('hello world');
In other words, a Handler is a function that takes a Request and returns a Response (or a Future<Response>).
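To make that concrete, here’s a minimal sketch of wiring such a handler up to an actual server with shelf_io (the host and port are arbitrary choices for the sketch, not backlog.io’s setup):

import 'package:shelf/shelf.dart';
import 'package:shelf/shelf_io.dart' as io;

void main() {
  // A Handler: Request in, Response (or Future<Response>) out.
  var helloWorldHandler = (Request request) => new Response.ok('hello world');

  // Serve it on an arbitrary host and port.
  io.serve(helloWorldHandler, 'localhost', 8080);
}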
Before I launch into code, I want to make the point that there is no right way to create your routes. Essentially you will trade off boilerplate for magic, so choose the approach you feel most comfortable with.
Also don’t feel like you need to use one approach for everything. Some tasks may lend themselves better to some approaches and other tasks to other approaches.
Often I start with one approach and evolve into others over time as the routes grow and the need to structure the code into separate units / files grows.
I will start from the least magic / most boilerplate options and finish up at the most magic / least boilerplate options.

Option 1: The Manual Way

So the most straightforward, zero-magic way is just to hand-code the routing logic.
  var backlogHandler = (Request request) async {
    if (request.url.path.startsWith('/ui') && request.method == 'GET') {
      return staticHandler(request);
    } else if (request.url.path == '/api/v1/backlogs' &&
        request.method == 'GET') {
      var creator = request.requestedUri.queryParameters['creator'];
      var backlogs = await _searchBacklogs(creator);
      var resultJson = _createBacklogJson(backlogs);
      return new Response.ok(resultJson);
    } else if (request.url.path.startsWith('/api/v1/backlogs/') &&
        !request.url.path.contains('/issues') &&
        request.method == 'GET') {
      var backlogIdStr = request.url.path.substring('/api/v1/backlogs/'.length);
      var backlogId = int.parse(backlogIdStr);
      var backlog = await _fetchBacklog(backlogId);
      var resultJson = _createBacklogJson(backlog);
      return new Response.ok(resultJson);
    }

    /* .... */

    else {
      return new Response.notFound('oops');
    }
  };
For the UI routes we use shelf_static.
Note: backlog.io actually uses pub serve (via shelf_proxy) in dev mode and shelf_static in production mode. This is an out-of-the-box feature of mojito and will be covered later.
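For reference, the staticHandler used in these examples could be created with shelf_static roughly like this (the directory and default document are assumptions, adjust to your layout):

import 'package:shelf_static/shelf_static.dart';

// Assumption: the built UI assets live in build/web.
var staticHandler =
    createStaticHandler('build/web', defaultDocument: 'index.html');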
For the backlog search we look the backlogs up in the database, turn them into JSON and return the result.
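The JSON helpers aren’t shown; a hypothetical _createBacklogJson, assuming the Backlog class exposes a toJson() method, might look like:

import 'dart:convert' show JSON;

// Hypothetical helper: encode a Backlog, or a list of them, as a JSON string
// for the Response body. Assumes Backlog has a toJson() method.
String _createBacklogJson(dynamic backlogOrBacklogs) => backlogOrBacklogs is List
    ? JSON.encode(backlogOrBacklogs.map((b) => b.toJson()).toList())
    : JSON.encode(backlogOrBacklogs.toJson());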

Option 2: Simple Flat Routing

We can improve on this a little by introducing shelf_route and implementing the routes in the most straightforward way. This reduces the need for manual checking of paths and methods, plus it will handle the path parameters like backlogId for us.
  var backlogHandler = router()
    ..add('/ui', ['GET'], staticHandler, exactMatch: false)
    ..get('/api/v1/backlogs{?creator}', (Request request) async {
      var creator = getPathParameter(request, 'creator');
      var backlogs = await _searchBacklogs(creator);
      var resultJson = _createBacklogJson(backlogs);
      return new Response.ok(resultJson);
    })
    ..get('/api/v1/backlogs/{backlogId}', (Request request) async {
      var backlogIdStr = getPathParameter(request, 'backlogId');
      var backlogId = int.parse(backlogIdStr);
      var backlog = await _fetchBacklog(backlogId);
      var resultJson = _createBacklogJson(backlog);
      return new Response.ok(resultJson);
    });

Option 3: Hierarchical Routes

Since many of the routes start the same we can set the routes up hierarchically and make it a little more DRY.
Note, whilst the examples so far have not included any middleware, in the real app there is middleware and in general many routes have the same middleware applied. Setting them up hierarchically allows the middleware to be applied to all the child routes, making this much DRYer too (see the middleware sketch at the end of this option).
  var backlogHandler = router()
    ..add('/ui', ['GET'], staticHandler, exactMatch: false)
    ..addAll(
        (Router r) => r
          ..get('{?creator}', (Request request) async {
            // ...
          })
          ..addAll(
              (Router r) => r
                ..get('', (Request request) async {
                  // ...
                })
                ..put('', (Request request) async {
                  var backlogJson = await request.readAsString();
                  var backlog = new Backlog.fromJson(JSON.decode(backlogJson));
                  // ...
                }),
              path: '{backlogId}'),
        path: '/api/v1/backlogs');
Here, I’ve omitted most of the handling for brevity. It is the same as the previous example.
Now we use the addAll method which creates a child router. We use this twice.
This first child router has a path of '/api/v1/backlogs' which is applied to all its routes. This includes the second child router, which has a path of {backlogId}.
If you look at the last put method you can see that it is contained inside the second addAll which is inside the first addAll. It inherits the '/api/v1/backlogs' path of the first and '{backlogId}' from the second, so this makes up the expected
PUT     ->  /api/v1/backlogs/{backlogId}
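On that middleware point: shelf_route lets you attach middleware as you build routers so that it only applies to that subtree. I won’t spell out those parameters here, but as a reminder of the generic shelf approach, here is a minimal sketch of wrapping a handler (such as the API part of the router) in middleware, using shelf’s built-in logRequests():

import 'package:shelf/shelf.dart';

// A minimal sketch, not specific to shelf_route: wrap any handler in
// middleware via shelf's Pipeline. logRequests() is shelf's built-in request
// logger; auth, caching headers, etc. slot in the same way.
Handler withApiMiddleware(Handler apiHandler) =>
    const Pipeline().addMiddleware(logRequests()).addHandler(apiHandler);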

Option 4: Take a REST

So far the routing has all been provided by shelf_route, which deliberately avoids using mirrors. This keeps magic to a minimum and makes it suitable for use in a browser, but there is only so much you can do to remove boilerplate.
Now it’s time to spice things up with a little magic. This magic will be available to us simply by importing shelf_rest rather than shelf_route. In other words
import 'package:shelf_rest/shelf_rest.dart';
instead of
import 'package:shelf_route/shelf_route.dart';
Actually, the previous two options could also have been implemented by importing shelf_rest rather than shelf_route, as shelf_rest is a drop-in replacement for shelf_route that supports all its functionality and then adds in some boilerplate-reducing magic.
First up, let’s get rid of extracting path variables and parsing them into other types like ints. And while we are at it, let’s get rid of decoding the JSON body and manually creating the Backlog object.
  var backlogHandler = router()
    ..add('/ui', ['GET'], staticHandler, exactMatch: false)
    ..addAll(
        (Router r) => r
          ..get('{?creator}', (String creator) async {
            // ...
          })
          ..addAll(
              (Router r) => r
                ..get('', (int backlogId) async {
                  // ...
                })
                ..put('',
                    (int backlogId, @RequestBody() Backlog backlog) async {
                  // ...
                }),
              path: '{backlogId}'),
        path: '/api/v1/backlogs');
At first it looks very similar, but if you look closely at the handler methods you will see that they no longer take a Request. Instead, the first one takes a String creator and the second an int backlogId.
The put also takes the Backlog object directly as we used the @RequestBody annotation to tell shelf_rest to parse the body from JSON into a Backlog object.
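For that binding to work, shelf_rest (via shelf_bind) needs a way to build a Backlog from the decoded JSON and, as we’ll see shortly, turn one back into JSON; a fromJson constructor and a toJson method are the usual pattern. Here is a sketch of such a model class; the fields are made up for illustration, not backlog.io’s real model:

// Hypothetical model: the field names are illustrative only.
class Backlog {
  final int id;
  final String name;
  final String creator;

  Backlog(this.id, this.name, this.creator);

  Backlog.fromJson(Map json) : this(json['id'], json['name'], json['creator']);

  Map toJson() => {'id': id, 'name': name, 'creator': creator};
}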

Option 5: Route Classes

As the number of routes grows, it can become quite unwieldy to keep them all in one big routing definition. There are many options here. You can simply split out sub-routes into separate functions, or you can take advantage of Dart’s function emulation (a class with a call method can be used like a function) and put the routes in separate classes.
class BacklogResource {
  call(Router r) => r
    ..get('{?creator}', (String creator) async {
      // ...
    })
    ..addAll((Router r) => r
      ..get('', (String backlogId) async {
        // ...
      })
      ..put('', (String backlogId, @RequestBody() Backlog backlog) async {
        // ...
      }), path: '{backlogId}');
}

void main() {
  var backlogHandler = router()
    ..add('/ui', ['GET'], staticHandler, exactMatch: false)
    ..addAll(new BacklogResource(), path: '/api/v1/backlogs');
}
Here we put the bulk of the routes in the BacklogResource class. Its call method means an instance can be used like a function, so addAll accepts it just like the inline (Router r) => ... functions we passed before.

Option 6: Route Classes with Separate Methods

Rather than including the handler functions directly in the route definition, let’s split them into separate methods. While we are at it, we will take advantage of another shelf_rest feature that allows us to give the call method a more meaningful name, such as createRoutes.
class BacklogResource {
  createRoutes(Router r) {
    r
      ..get('{?creator}', searchBacklogs)
      ..addAll((Router r) => r
        ..get('', fetchBacklog)
        ..put('', updateBacklog), path: '{backlogId}');
  }

  Future<List<Backlog>> searchBacklogs(String creator) async {
    // ...
  }

  Future<Backlog> fetchBacklog(String backlogId) async {
    // ...
  }

  Future<Backlog> updateBacklog(
      String backlogId, @RequestBody() Backlog backlog) async {
    // ...
  }
}
This gives you another benefit. Since the methods have been defined to return objects like Backlog, shelf_rest will do us a further favour: it will automatically turn them into JSON and populate the Response object for us. Yes, you can thank me later ;-)

Option 7: Route Annotations

Rather than using the fluent API of the Router class, we can put annotations on the handlers instead. These correspond directly to Router methods of the same name and have the same features.
class BacklogResource {
  @Get('{?creator}')
  Future<List<Backlog>> searchBacklogs(String creator) async {
    // ...
  }

  @Get('{backlogId}')
  Future<Backlog> findBacklog(String backlogId) async {
    // ...
  }

  @Put('{backlogId}')
  Future<Backlog> updateBacklog(
      String backlogId, @RequestBody() Backlog backlog) async {
    // ...
  }

  @AddAll(path: '{backlogId}/issues')
  IssueResource issues() => new IssueResource();
}
Note: I snuck in the next child resource, issues, so you can see the use of the @AddAll annotation.
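I won’t show the real IssueResource here, but a sketch that matches the issue routes listed at the top might look like the following (the Issue type and the handler bodies are placeholders):

// Hypothetical child resource covering /api/v1/backlogs/{backlogId}/issues.
class IssueResource {
  @Get('')
  Future<List<Issue>> fetchIssues(String backlogId) async {
    // ...
  }

  @Put('{issueHash}')
  Future<Issue> updateIssue(String backlogId, String issueHash,
      @RequestBody() Issue issue) async {
    // ...
  }
}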
Take a break. I don’t know about you but that is already a lot to absorb so I’m gonna take 5 mins for a coffee.
So much option…

Option 8: CRUD Time

A fair number of your routes will end up following a standard pattern: the normal CRUD operations. These four backlog routes are typical:
GET     ->  /api/v1/backlogs{?creator}
POST    ->  /api/v1/backlogs
GET     ->  /api/v1/backlogs/{backlogId}
PUT     ->  /api/v1/backlogs/{backlogId}
You search for existing representations with a GET taking query parameters to search on (creator here); you POST to create a new instance; you fetch a single instance by doing a GET with the id of the instance as part of the path; and so on.
In the interest of DRYness and consistency, shelf_rest has built-in support for these CRUD methods.
@RestResource('backlogId')
class BacklogResource {
  Future<List<Backlog>> search(String creator) async {
    // ...
  }

  Future<Backlog> find(String backlogId) async {
    // ...
  }

  Future<Backlog> update(
      String backlogId, @RequestBody() Backlog backlog) async {
    // ...
  }

  @AddAll(path: 'issues')
  IssueResource issues() => new IssueResource();
}
To use this feature, you first add a @RestResource annotation to the class, naming that pesky path variable (backlogId) that kept popping up in all the routes.
Next you either follow the standard naming convention for the methods (search, create, find, update, delete) or you use @ResourceMethod annotations to tell shelf_rest which of the CRUD operations your method implements.
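For example, the POST /api/v1/backlogs route from the listing would be covered by adding a method named create to the class above. A sketch, with the same placeholder body as the other methods:

// Following the naming convention, this handles POST /api/v1/backlogs.
Future<Backlog> create(@RequestBody() Backlog backlog) async {
  // ...
}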

Fancy Some HATEOAS

I won’t go into this in detail here, as the blog is already crazy long and I’ve likely lost most of you by now, but if you are a fan of HATEOAS (I am, and use it on backlog.io), then shelf_rest comes with support to help you create your links.
  Future<BacklogResourceModel> update(String backlogId,
      @RequestBody() Backlog backlog, ResourceLinksFactory linksFactory) async { ... }
Note: this is the real method signature for the update method.
Simply adding an argument of type ResourceLinksFactory will give you some methods to generate links for your resource.

Time for a Drink - Grab a Mojito

Up until now we have covered just core routing, which isn’t surprising as that is the purpose of the blog. However, firstly, routing alone doesn’t give you a running server, and secondly, you tend to want a whole bunch of other stuff like auth, static resource handling, logging and so on.
Shelf packages exist for many of these things, and shelf makes it easy to glue all that together with your router. However, it doesn’t hurt to get a little more out of the box. So you may as well chill and grab a mojito.
mojito bundles shelf_rest, so you get all of the options covered above out of the box with mojito too. But wait, there’s more…
Sorry no steak knives on offer, but bear with me.
To start with, instead of importing shelf_rest, we import mojito.
import 'package:mojito/mojito.dart';
Now we use a special method on mojito’s router called addStaticAssetHandler to handle the UI resources, and then we start an actual web server with app.start().
  var app = init();

  app.router
    ..addStaticAssetHandler('/ui')
    ..addAll(new BacklogResource(), path: '/api/v1/backlogs');

  app.start();
As alluded to earlier, addStaticAssetHandler will use pub serve in dev mode and serve from the filesystem in production mode.

OAuth 1 & 2 handlers

One thing we didn’t cover in the previous options was how we handle the OAuth routes:
GET     ->  /api/v1/users/oauth/github/requestToken
GET     ->  /api/v1/users/oauth/github/authToken
In backlog.io we handle these with shelf_oauth and mojito gives you this out of the box.
router.oauth.gitHub().addClient(
    githubClientIdFactory,
    oauthStorage,
    uiLoginCompletionUrl(GitProvider.GITHUB));

Connecting to GitHub is just a matter of providing your client credentials, how you want to store the OAuth data (e.g. memcache) and where you want the browser to be redirected with the obtained token once authentication is finished.

BitBucket and Google also have out-of-the-box support. Adding other providers is straightforward too.

That’s about enough for now. Let me know where you want more details and I’ll consider follow up blogs.

Mix and Match

One last note: pretty much all of the above options can be used at the same time. Mix it up as you wish.

Monday, 22 June 2015

Managing your Dart Projects like a Jefe


Even though it’s a little premature, as there are still several things to do before backlog.io comes out of beta, I thought I’d kick off a series of blogs on various aspects of how the app is built.

The first blog is about jefe, a tool I’ve just built to tame my project dependencies.



I recently decided that it was time to get more sophisticated with the way I manage the Dart projects behind backlog.io.

My project set up is split into two halves:
  1. The core bit that has no dependencies on appengine. This is where 99% of the code lives
  2. The appengine bit
For most of backlog.io’s brief history, these two parts were contained in two separate (somewhat randomly named) git repos. Each had several Dart projects. The structure was as follows:
  • gissue – the non appengine part
    • gissue_common
    • gissue_client
    • gissue_server
    • gissue_integration_tests
  • gitbacklog – the appengine part
    • gitbacklog_client
    • gitbacklog_server
    • gitbacklog_tool
Unsurprisingly, there are dependencies between these projects and these were all set up manually as path dependencies.

In turn the Dockerfile that was needed to deploy to appengine had to be carefully crafted to match.

So what’s wrong with that?


This is already somewhat complex to manage, but it gets worse when I need to change more things.

For example, I maintain several shelf packages and add features / fix bugs on these as needed for backlog.io.

So additionally my workspace might also include a structure like:
  • shelfish
    • mojito
    • shelf_route
    • shelf_bind
    • shelf_rest
    • shelf_auth
    • …. etc
and as you’d expect, whilst developing there are more path dependencies between these, as well as to the backlog projects.

Of course, my code also depends on several third party pub packages and at times I need to fork to add features or fix bugs on these.


All in all, this leads to lots of fiddling around with the dependencies in the pubspec.yaml files. As it’s a manual process, it is also error prone: it’s easy to get a dependency wrong, forget to remove path dependencies when you are done, and so on.

Automation to the Rescue



Clearly, the answer to alleviating the pain associated with managing these sets of related projects is to automate it.

So how do we go about that?

Pubspecs already capture the relationships between different Dart packages. However, how do we differentiate the packages we are working on from those we simply import as-is?

We need to make this distinction so that we can automate aspects that relate to only those packages that we are maintaining.

Since this is kinda augmenting the information in pubspec.yaml it seemed sensible to follow a similar style to capture these relationships.

And so jefe.yaml was born.

Breaking up is hard to do


So it was time to take the plunge and break up my projects into separate git repos and link them back together with my new yaml format.

First up, gissue. I created a file called jefe.yaml, which looks like:
name: gissue

projects:
  gissue_common: git@bitbucket.org:andersmholmgren/gissue_common.git
  gissue_client: git@bitbucket.org:andersmholmgren/gissue_client.git
  gissue_server: git@bitbucket.org:andersmholmgren/gissue_server.git
  gissue_integration_tests: git@bitbucket.org:andersmholmgren/gissue_integration_tests.git

The yaml is pretty straightforward. We are simply defining the set of projects that we want to manage as a group.

We can now do the same for the gitbacklog repo. The only additional concern is to capture that we also want to pull in the gissue group. We do that with a new key called groups:



name: gitbacklog

groups:
  gissue: git@bitbucket.org:andersmholmgren/gissue.git

projects:
  gitbacklog_client: git@bitbucket.org:andersmholmgren/gitbacklog_client.git
  gitbacklog_server: git@bitbucket.org:andersmholmgren/gitbacklog_server.git
  gitbacklog_tool: git@bitbucket.org:andersmholmgren/gitbacklog_tool.git

Firing it up


So we’ve defined which projects we want to work on and manage as a unit. Now what can we do with it?

First we need some tools

pub global activate jefe

Let’s see what el jefe can do

> jefe -h

Description:

  Manages a set of related Dart projects

Usage:

  jefe [options] <command> [<args>]

Options:

  -h, --help    Print this usage information.

Commands:

  install             Installs a group of projects                             
  init                Installs or updates a group of projects                  
  start               Sets up for the start of development on a new feature    
  finish              Completes feature and returns to development branch      
  release             Create a release of all the projects                     
  exec                Runs the given command in all projects                   
  set-dependencies    Set dependencies between projects                        
  completion          Tab completion for this command.                         

  See 'jefe help [command]' for more information about a command.

Installing sounds like a good place to start.

> jefe help install

Description:

  Installs a group of projects

Usage:

  jefe install [options] <git-uri>

    <git-uri>    The git Uri containing the project.yaml.    

Options:

  -d, --install-directory    The directory to install into
                             (defaults to ".")

  -h, --help                 Print this usage information.

OK so let’s install the gitbacklog source. Oh it helps to be me to do this step as you won’t have permission ;-)

> jefe install git@bitbucket.org:andersmholmgren/gitbacklog.git

Once it completes we now have the following:
  • gitbacklog_root
    • gissue_root
      • gissue
      • gissue_client
      • gissue_common
      • gissue_server
      • gissue_integration_tests
    • gitbacklog
    • gitbacklog_client
    • gitbacklog_server
    • gitbacklog_tool
So it’s essentially the same as before, except for the addition of the _root container directories, and the gissue and gitbacklog directories that contain the yaml files for the group definitions.

Moar Interesting


OK so we now have a way to checkout a collection of Dart projects with one command. Kinda useful but not super exciting.

Let’s move on to setting things up for the development of a new feature.

jefe start shiny_new_feature

So what just happened?

Firstly, let’s take a look at each project’s git workspace:

git branch
  develop
* feature/shiny_new_feature
  master

OK, so managing the git branching is at least as easy as when we had two repos (actually easier, as we can branch with one command). But not a huge win.

What else? Let’s take a look at the pubspec.yaml files. How about gitbacklog_client:


dependencies: 
  browser: '^0.10.0+2'
  polymer: '^0.16.0+7'
  gissue_client: 
    path: /mydir/gitbacklog_root/gissue_root/gissue_client
  gissue_common: 
    path: /mydir/gitbacklog_root/gissue_root/gissue_common


Correctly set up path dependencies! - noice.

And as a bit of icing on the cake, pub get was also run on all the projects for us (awthanks).

Now I gotta do some actual work


Sadly, jefe can’t write the code for the shiny new feature for me so I gotta do that myself.

OK so rather than have you wait around while I do that I’m just gonna go ahead and pretend I’ve written some code for the new feature and am ready to test it with the local appengine.

For that I have some grinder commands that integrate with jefe to automatically generate the Dockerfile and then fire up appengine. These live in the gitbacklog_tool project.

grind run

The grind task for generating the Dockerfile looks like

Future genDockerfile(GrinderContext context) async {
  final currentFeatureNameOpt = await (await jefeExecutor())
      .executeOnGraph(jefe.feature.currentFeatureName());

  // use pub serve if on a feature branch
  final usePubServe = currentFeatureNameOpt is Some;

  final genDocker = jefe.docker.generateDockerfile(
      'gitbacklog_server', 'gitbacklog_client',
      dartVersion: '1.9.3',
      environment: {'USE_PUB_SERVE_IN_DEV': usePubServe},
      exposePorts: [8080, 8181, 5858],
      entryPointOptions: ['--enable-vm-service:8181/0.0.0.0']);

  await (await jefeExecutor()).executeOnGraph(genDocker);
}

I provide the command with the names of my client and server projects and a bunch of other info that will go into the Dockerfile.

As I’m developing a feature, I want to run with pub serve and don’t particularly want to wait while docker creates all the images for my client and related projects, since I’m not going to use that anyway.

Well, jefe knows this, so it simply omits the client project and its dependencies. The Dockerfile looks like


FROM google/dart:1.9.3
RUN apt-get update
RUN apt-get install -y ssh
ADD ["id_rsa", "/root/.ssh/id_rsa"]
RUN ssh-keyscan bitbucket.org >> /root/.ssh/known_hosts
RUN ssh-keyscan github.com >> /root/.ssh/known_hosts
ADD ["gissue_root/gissue_common", "/Users/blah/gitbacklog_root/gissue_root/gissue_common/"]
ADD ["gissue_root/gissue_server", "/Users/blah/gitbacklog_root/gissue_root/gissue_server/"]
ADD ["gitbacklog_server/pubspec.yaml", "/Users/blah/gitbacklog_root/gitbacklog_server/pubspec.yaml"]
ADD ["gitbacklog_server/pubspec.lock", "/Users/blah/gitbacklog_root/gitbacklog_server/pubspec.lock"]
WORKDIR /Users/blah/gitbacklog_root/gitbacklog_server
RUN pub get
ADD ["gitbacklog_server", "/Users/blah/gitbacklog_root/gitbacklog_server/"]
WORKDIR /Users/blah/gitbacklog_root/gitbacklog_server
RUN pub get --offline
ENV USE_PUB_SERVE_IN_DEV true
EXPOSE 8080 8181 5858
CMD []
WORKDIR /Users/blah/gitbacklog_root/gitbacklog_server
ENTRYPOINT ["/usr/bin/dart", "--enable-vm-service:8181/0.0.0.0", "/Users/blah/gitbacklog_root/gitbacklog_server/bin/server.dart"]

Feature Complete


My shiny new feature is now complete, so it’s time to close it up.

jefe finish shiny_new_feature

If we now go back and look at the git branches and pubspec.yaml files we will notice that we are back on the develop branch and the path dependencies have been replaced by git dependencies referencing the current hash of those projects.

dependencies: 
  browser: '^0.10.0+2'
  polymer: '^0.16.0+7'
  gissue_client: 
    git: 
      ref: 942c4aa3b1fdaaadb8b95d66f02da38e01ac8ead
      url: git@bitbucket.org:andersmholmgren/gissue_client.git
  gissue_common: 
    git: 
      ref: d31aaca406c501a2a8e1bc147e3323044404d616
      url: git@bitbucket.org:andersmholmgren/gissue_common.git

This time when I run the server I want to include the client project in the Dockerfile and build it using pub build. This ensures that there are no differences from how it will run in production.

This time when we run

grind run

jefe sees that we no longer have any path dependencies, so it includes the client and optimises the paths. It also omits all the dependent projects, as they would just add weight and slow down the build and deploy. The Dockerfile now looks like

FROM google/dart:1.9.3
RUN apt-get update
RUN apt-get install -y ssh
ADD ["id_rsa", "/root/.ssh/id_rsa"]
RUN ssh-keyscan bitbucket.org >> /root/.ssh/known_hosts
RUN ssh-keyscan github.com >> /root/.ssh/known_hosts
ADD ["gitbacklog_server/pubspec.yaml", "/app/gitbacklog_root/gitbacklog_server/pubspec.yaml"]
ADD ["gitbacklog_server/pubspec.lock", "/app/gitbacklog_root/gitbacklog_server/pubspec.lock"]
WORKDIR /app/gitbacklog_root/gitbacklog_server
RUN pub get
ADD ["gitbacklog_server", "/app/gitbacklog_root/gitbacklog_server/"]
WORKDIR /app/gitbacklog_root/gitbacklog_server
RUN pub get --offline
ADD ["gitbacklog_client/pubspec.yaml", "/app/gitbacklog_root/gitbacklog_client/pubspec.yaml"]
ADD ["gitbacklog_client/pubspec.lock", "/app/gitbacklog_root/gitbacklog_client/pubspec.lock"]
WORKDIR /app/gitbacklog_root/gitbacklog_client
RUN pub get
ADD ["gitbacklog_client", "/app/gitbacklog_root/gitbacklog_client/"]
WORKDIR /app/gitbacklog_root/gitbacklog_client
RUN pub get --offline
RUN pub build
ENV USE_PUB_SERVE_IN_DEV false
EXPOSE 8080 8181 5858
CMD []
WORKDIR /app/gitbacklog_root/gitbacklog_server
ENTRYPOINT ["/usr/bin/dart", "--enable-vm-service:8181/0.0.0.0", "/app/gitbacklog_root/gitbacklog_server/bin/server.dart"]

Deploy All The Things



So we’ve tested that our feature is working as planned and want to get it to users.

First off we cut a release

jefe release

This bumps the versions of all the projects, merges to master and tags the commits.
Just to make sure all is as we expect, we can now run exactly the code that we will run in production

grind runProd

This uses a slightly different command to generate the production Dockerfile. The setup is pretty much the same as the previous one though, so I’ll omit it here. The resulting Dockerfile is now

FROM google/dart:1.9.3
RUN apt-get update
RUN apt-get install -y ssh
ADD ["id_rsa", "/root/.ssh/id_rsa"]
RUN ssh-keyscan bitbucket.org >> /root/.ssh/known_hosts
RUN ssh-keyscan github.com >> /root/.ssh/known_hosts
RUN git clone -q -b 0.12.0 git@bitbucket.org:andersmholmgren/gitbacklog_server.git /app/gitbacklog_server
WORKDIR /app/gitbacklog_server
RUN pub get
RUN git clone -q -b 0.15.5 git@bitbucket.org:andersmholmgren/gitbacklog_client.git /app/gitbacklog_client
WORKDIR /app/gitbacklog_client
RUN pub get
RUN pub build
ENV USE_PUB_SERVE_IN_DEV false
EXPOSE 8080 8181 5858
CMD []
ENTRYPOINT ["/usr/bin/dart", "--enable-vm-service:8181/0.0.0.0", "/app/gitbacklog_server/bin/server.dart"]

It is very similar to the previous Dockerfile, except it now clones the client and server repos to make sure that we are releasing off a tag committed to our remote repository.

Another grinder task does the actual deploy

grind deploy

Happy Days



This is making my life much easier when managing my codebase and I hope it will be helpful for others.

If so then you may want to consider contributing to jefe.

Da Future


One of the things I want to tackle soon is adding support for hosted packages. In that case a jefe release would also publish to pub and change the dependencies to hosted dependencies rather than git.
I may be able to automate some changelog tasks too (although that could be done for git projects too).
Let me know what you think.