Projects as Code
Dom De Re
8 mins
ecology, productivity, projects-as-code

Well, it's been a while between posts here, and we had said we would be posting more frequently.

So, we’re still working on it…

So we open sourced a new project last week. We call it ecology, and you can find the source code here.

Background Context

Hopefully it's clear from the Philosophy that we published (but haven't re-worded for clarity yet :( ) that at Irreverent, we aren't just concerned with making games; we are interested in building a culture and workplace where we enjoy making games.

That means that even if something slows down production of our games (even our very first one…), if it makes the experience of making games (or other things) a lot more enjoyable for us, we'll do it.

If you look at our GitHub account, it's pretty clear that we go with the "multi-repo" approach, as opposed to the "mono-repo" approach. Some companies use the multi-repo approach for their open source projects while keeping a mono-repo internally; we operate with multi-repos internally as well.

We do this because, to us, it embodies the FP (Functional Programming) spirit a little more. When we program in the small, FP lets us focus on individual functions much of the time, without having to worry about grand effects on mutable state. We like having that feeling when working on projects too, hence we like each repo having a small, self-contained focus. As close as we can get, anyway.

To be clear, multi/mono repos have nothing to do with FP as such; it's just that one of the things FP delivers is something we'd like our "project structure paradigm/model" to deliver as well.

Because of that property, we feel that multi-repos encourage a more modular approach. Again though, mono-repos don't preclude a modular approach, and neither is a clear winner. Ultimately it comes down to which one feels better to you, and which weaknesses you want to invest time in compensating for: long CI build times for mono-repos, or messy integrations with multi-repos, etc.

Anyway, this isn't a "multi repo vs mono repo" post; this is just for context. There's a more detailed post (by someone else) comparing the pros and cons of each approach here.

So, Ecology?

So that brings us to the point of this post: we are using the multi-repo approach and are looking at compensating for the problems that multi-repos have compared to the mono-repo approach.

This is the focus of the ecology project we open sourced last week.

Here are some specific goals, though it's not limited to these:

It still requires some documentation. Rather than starting from the "easily implement this by calling the one function we provide" approach, which offers convenience but lacks flexibility, we have started with the "combine these functions to do what you want" approach; we'll work towards also providing a single function that implements what we think is the least restrictive default, once we figure that out.

So when you want to create a new project (or multiple projects that combine to form a system):

  1. Raise a PR on your ecology project that adds the declaration of your project to the overall list of projects.
  2. Your teammates can discuss whether the project:
    • Is necessary
    • Is named appropriately
    • Is tagged appropriately
    • Is configured correctly
  3. If the project is deemed unnecessary, decline the PR; otherwise, once you merge it, ecology will:
    • Create the repositories on Bitbucket/GitHub
      • Perform any additional config (e.g. setting user group privileges)
    • Pull down the appropriate template and bootstrap it with the appropriate project details.
    • Add any CI config to the repo (e.g. bitbucket-pipelines.yml or .travis.yml)
    • Perform cloud side CI config (e.g. activate bitbucket pipelines, add SSH keys, set environment variables)
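The merge-triggered sync above could be sketched roughly like this. This is a minimal sketch with made-up action names that simply mirror the steps listed; none of them are ecology's real API:

```haskell
-- Stand-in project type; ecology's real type carries much more detail.
data Project = Project { projectName :: String }

-- Hypothetical IO actions, one per step of the sync described above.
createRepo, applyTemplate, pushCiConfig, configureCi :: Project -> IO ()
createRepo p    = putStrLn ("creating repo for " ++ projectName p)
applyTemplate p = putStrLn ("bootstrapping template for " ++ projectName p)
pushCiConfig p  = putStrLn ("adding CI config to " ++ projectName p)
configureCi p   = putStrLn ("activating cloud-side CI for " ++ projectName p)

syncProject :: Project -> IO ()
syncProject p = do
  createRepo p    -- create the Bitbucket/GitHub repository, set privileges
  applyTemplate p -- pull down the template, fill in project details
  pushCiConfig p  -- commit e.g. bitbucket-pipelines.yml or .travis.yml
  configureCi p   -- activate pipelines, add SSH keys, set env vars

main :: IO ()
main = syncProject (Project "ecology-test")
```

The point is just that the whole sequence runs from the merged declaration, with no manual clicking around in the Bitbucket/GitHub UI.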

Here’s an example of what a project that you add might look like:

exampleProject :: IrreverentEcologyProject
exampleProject =
  let
    ciVars :: [EcologyEnvironmentPair]
    ciVars = [
          EcologyEnvironmentPair
            (EnvironmentVariableName "PARAMFOO")
            (EnvironmentVariableReferenceValue "common/PARAMFOO")
        , EcologyEnvironmentPair
            (EnvironmentVariableName "PARAMBAR")
            (EnvironmentVariableReferenceValue "common/PARAMBAR")
        , EcologyEnvironmentPair
            (EnvironmentVariableName "PARAMBAR3")
            (EnvironmentVariableReferenceValue "common/PARAMBAR3")
        , EcologyEnvironmentPair
            (EnvironmentVariableName "PARAMOPEN")
            (EnvironmentVariableValue "In the repo")
        ]
  in EcologyProject
    (EcologyProjectName "ecology-test")
    (EcologyProjectDescription "A test repo for ecology to create")
    (pure I.DevProject)
    (EcologyProjectCI BitbucketPipelines $ awsCreds <> ciVars) -- awsCreds: shared CI vars defined elsewhere
    (EcologyTag <$> ["haskell"])
    (Username <$> ["domdere"])

The definition of EcologyProject is here

And because you are writing this in Haskell code, if you see repetitive patterns come up, you can write functions to simplify those instances, for example:

haskellProject
  :: T.Text -- ^ Name
  -> T.Text -- ^ Description
  -> I.IrreverentGitPlatform
  -> I.IrreverentProjectCategory
  -> [T.Text] -- ^ Tags
  -> [T.Text] -- ^ Experts
  -> IrreverentEcologyProject
haskellProject name desc plat cat tags' experts' = EcologyProject
  (EcologyProjectName name)
  (EcologyProjectDescription desc)
  (pure cat)
  (EcologyProjectCI NoCI [])
  (EcologyTag <$> tags')
  (Username <$> experts')

-- A simpler, more concise project expression
-- (the name, platform, category, and experts arguments below are
-- assumed for illustration; they were elided in the original)
ecologyExtras :: IrreverentEcologyProject
ecologyExtras = haskellProject
  "ecology-extras"
  "Additional extras for writing ecology APIs"
  I.GitHub
  I.DevProject
  ["ecology", "projects", "projects-as-code"]
  ["domdere"]

So to us, this is better than JSON or YAML. We'd be interested to see whether we could switch to processing S-expressions, though…

So unless your project requires other integrations (e.g. a DockerHub repo being created), you should have a new repo with build configuration in it, empty test-suites, and a green CI build, i.e. you should be ready to start writing code. It's still not instantaneous, but so far we find it's not enough to break the flow.

Maybe one day we will add an API allowing you to specify additional things to set up, like DockerHub repos, etc., but we might wait until we have the repo and CI experience nicely ironed out before that.


In addition to working towards solving the problems described above, we are experiencing other benefits.

The Future

There's a lot more we can do with that data.

I think we'll always want to declare the projects in "code" in a language like Haskell (i.e. not JSON/YAML/TOML/etc.), so that we can define functions for repetitive patterns.

This will in general be necessary for performing mass migrations on repos of various kinds.

Ecology would be useful for any scenario where you would want to iterate over a bunch of similar projects.

To support this we will likely add a feature to help iterate over projects with certain tags so you can perform some operations en masse.
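A feature like that might look something like the following sketch, using simplified stand-in types. The real EcologyProject carries more fields, and none of these helper names are ecology's actual API:

```haskell
newtype EcologyTag = EcologyTag String deriving (Eq, Show)

-- Simplified stand-in project type, for illustration only.
data Project = Project
  { projectName :: String
  , projectTags :: [EcologyTag]
  } deriving Show

-- Select the projects carrying a given tag, so an operation
-- can then be mapped over them en masse.
projectsWithTag :: EcologyTag -> [Project] -> [Project]
projectsWithTag t = filter (elem t . projectTags)

main :: IO ()
main =
  mapM_ (putStrLn . projectName) $
    projectsWithTag (EcologyTag "haskell")
      [ Project "ecology" [EcologyTag "haskell", EcologyTag "projects-as-code"]
      , Project "some-cpp-thing" [EcologyTag "cpp"]
      ]
```

With something along those lines, "run this against every Haskell repo" becomes an ordinary `map` over a filtered list.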

While we want to keep defining projects in Haskell rather than JSON, it will probably be nice to serialise the list into JSON as part of the sync process and store it somewhere. That way we can write a generic slack bot or something that can perform useful queries and do useful tasks using that data.
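A serialisation step like that could be quite small. The sketch below hand-rolls the JSON for illustration (a real implementation would presumably use a library like aeson), and the type is a simplified stand-in for the project data:

```haskell
import Data.List (intercalate)

-- Simplified stand-in for the project data; illustration only.
data ProjectSummary = ProjectSummary
  { psName :: String
  , psTags :: [String]
  }

-- Naive JSON rendering; `show` on a String adds the surrounding
-- quotes (and escapes), which suffices for simple names and tags.
toJson :: ProjectSummary -> String
toJson (ProjectSummary n ts) =
  "{\"name\":" ++ show n
    ++ ",\"tags\":[" ++ intercalate "," (map show ts) ++ "]}"

main :: IO ()
main = putStrLn (toJson (ProjectSummary "ecology-test" ["haskell"]))
-- prints {"name":"ecology-test","tags":["haskell"]}
```

Once the sync process dumps that JSON somewhere central, any tool in any language can query it without needing to evaluate the Haskell.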

I can see that data coming in very handy. For instance, whenever you want to upgrade your Haskell compiler, you can add the new target to all your Haskell CI builds very quickly and get a result back to see whether such a move breaks any of your projects.
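A compiler upgrade like that might be sketched as a pure function mapped over every project. Again these are simplified stand-in types (ecology's real CI config is richer), and the GHC version strings are just plausible examples:

```haskell
-- Simplified stand-in: a project's CI config is just a list of GHC versions.
data Project = Project
  { projectName   :: String
  , ciGhcVersions :: [String]
  } deriving Show

-- Add a new GHC target to a project's CI build matrix, if it isn't there yet.
addGhcTarget :: String -> Project -> Project
addGhcTarget v p
  | v `elem` ciGhcVersions p = p
  | otherwise                = p { ciGhcVersions = ciGhcVersions p ++ [v] }

main :: IO ()
main = do
  let ps = [ Project "ecology" ["8.0.2"], Project "ecology-extras" ["8.0.2"] ]
  mapM_ print (map (addGhcTarget "8.2.1") ps)
```

The subsequent sync would then push the updated CI config out to every repo, and the build results tell you which projects the new compiler breaks.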

So there is a lot that can be done with it; we're very excited to see where it goes.

See Also