Open Source Development Practices

Links and terms throughout were updated on June 6th, 2018 to reflect the current state of the Cesium project.

Cesium is the largest open-source project that I’ve contributed to in a substantial way. This is probably true for all of the current contributors. Many of us have worked together for years - developing APIs and even 3D APIs at that - but Cesium is our first take at a serious open source project. Here I’ll share many of our development practices, some we’ve done for years, others come from the great book, Producing Open Source Software, and many fall into both categories.

Communication

Currently, our core contributors all work for the same company, Analytical Graphics, Inc., but we strive for a diverse contributor community so we are careful to make communications public even if we are all physically located next to each other.

Design ideas for major features are put on the public roadmap, and then proposed on the public forum. Terrain is a good example (see the discussion). Even this blog was proposed on the forum. So far, not everything gets feedback; for example, folks must have really trusted us with the data-driven renderer and the dynamic texture atlas. Even if there's no feedback, the point is to give everyone a say if they want. We also use the forum to solicit feedback on work in progress. Good examples include terrain and Sandcastle.

Perhaps the bulk of our public communication is in public code reviews on pull requests. Just like the roadmap and forum are an archive of feature and design decisions, code review comments are an archive of more fine-grained decisions. More on them below.

Of course, not every word we say to each other is public. For example, we still help each other debug code in person, or even design a feature in person before discussing on the public forum, but all the major communication is public so that external folks shouldn’t feel like outsiders.

Code Reviews

I first read about code reviews in Code Complete. Research shows that different types of quality assurance, e.g., unit tests, system tests, code reviews, etc., find different types of bugs. Even though I was convinced that code reviews were a good thing, I didn’t get serious about them until this project.

Although some of the original Cesium code is in master without a review, all new code is reviewed, usually by several developers. The reviews can be detailed; for example, the pull requests for imagery layers, frustum culling, and the material system each have well over 100 comments.

I’ve been with Cesium since the start, and have written a lot of code, but I don’t expect any of my non-trivial pull requests to be merged without a change. A pull request is a request for feedback, and a reviewer shows interest in the work, and wants to help get it into master. We certainly bikeshedded a few reviews when we first started, but we are much better now at focusing on the meat and not on, for example, local variable names.

We try to minimize the time between opening a pull request and merging it. Simple pull requests are commonly merged on the same day, and even large pull requests can be merged in less than a week (for example, the multi-frustum). We want to get the code into master and get everyone using it. This, of course, comes at a cost - we spend a lot of time reviewing code.

Several small pull requests are preferred to one large one. Small pull requests are easier to review, and get merged faster. For example, we broke the data-driven renderer into several pull requests. However, for this to work, we have to be accepting of incremental improvements and control the scope of our reviews.

Finally, code reviews force developers to also act as testers. This removes the us-and-them mentality between developers and testers. Developers are testers, and testers are developers; we are all contributors.

We have a tips page with more ideas on code reviews.

Tests

For unit tests, we use Jasmine with some modifications for running and debugging individual tests. I didn’t spend much time evaluating JavaScript testing frameworks, but I was able to get Jasmine up and running quickly, and almost 3,000 tests later, we are pretty happy with it. We’ve focused on writing tests that run fast - because if they don’t, we won’t run them. On a decent machine, our current tests run in 15 seconds.
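To give a feel for what a Jasmine-style spec looks like, here is a minimal sketch. The function under test and its name are hypothetical, not part of Cesium's API, and the `describe`/`it`/`expect` stand-ins at the top are only there so the example runs on its own; in the real suite those globals come from the Jasmine framework.

```javascript
// Tiny stand-ins for Jasmine's globals so this sketch is self-contained.
// In a real test suite, the Jasmine framework provides these.
function describe(name, fn) { fn(); }
function it(name, fn) { fn(); }
function expect(actual) {
  return {
    toEqual: function (expected) {
      if (actual !== expected) {
        throw new Error('Expected ' + expected + ', got ' + actual);
      }
    }
  };
}

// A hypothetical function under test (not an actual Cesium function).
function clampLatitude(degrees) {
  // Clamp a latitude, in degrees, to the valid [-90, 90] range.
  return Math.min(90.0, Math.max(-90.0, degrees));
}

// The spec itself reads the same way a real Jasmine spec does:
// describe() groups related tests, and each it() is one test.
describe('clampLatitude', function () {
  it('leaves in-range latitudes unchanged', function () {
    expect(clampLatitude(45.0)).toEqual(45.0);
  });
  it('clamps out-of-range latitudes', function () {
    expect(clampLatitude(120.0)).toEqual(90.0);
    expect(clampLatitude(-120.0)).toEqual(-90.0);
  });
});
```

Each spec here is fast and self-contained, which is the property that keeps a few thousand of them running in seconds.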

We also focus on reliable tests, that is, tests that won’t create false failures. If a test fails, we want our code to be wrong; we don’t want to blame it on a poorly written test or a difference between systems. This means some tests are not as precise as they could be; for example, rendering tests usually render into a 1x1 canvas instead of doing full image compares.

Perhaps the most important thing we do about tests is we actually write and run them. A pull request needs to have tests for it to be merged, and reviewers run the tests before merging.

There’s still plenty of room for improving our tests, including a build-and-test server, the ability to flag different tests for different platforms, and eventually partitioning tests so that even when we have, say, 10,000 tests, we can still quickly run a 15-second smoke test that covers most of the code.

Documentation

For reference doc generation, we use jsdoc-toolkit with some modifications for styling and supporting GLSL. Just like tests, we only merge pull requests with reference documentation. We try to provide code examples for all non-trivial functions and cross-references where appropriate.
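As a rough illustration of that style, here is a sketch of a documented function using common JSDoc tags, including an `@example` and a cross-reference. The function and its companion `toDegrees` are hypothetical placeholders, not taken from Cesium's actual API.

```javascript
/**
 * Converts an angle from degrees to radians.
 *
 * @param {Number} degrees The angle, in degrees.
 * @returns {Number} The angle, in radians.
 *
 * @example
 * var radians = toRadians(180.0); // Math.PI
 *
 * @see toDegrees
 */
function toRadians(degrees) {
  return degrees * (Math.PI / 180.0);
}
```

A short, copy-pasteable `@example` like this is often the first thing a reader tries, which is why we ask for one on every non-trivial function.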

We still have a ways to go to improve our reference doc. A lot of the earliest code in Cesium still needs reference doc, and we could use more figures, or better yet, perhaps embedded Cesium.

In addition to reference doc, we have higher-level doc on building, architecture, etc. in our contributor documentation. We also give talks and write publications on Cesium, which is a fun way for contributors to get their work noticed.

Releases

We try to be light on process, and don’t want titles creating a chip on anyone’s shoulder, so we don’t have a formal release manager. Instead we document the release steps, and allow the task of creating the release to change hands from month to month. Any committer can create the release for a given month, and at any point, they can pass the responsibility to someone else, or someone else can ask for it. This spreads knowledge, avoids stratification, avoids a single point of failure, and is beautifully unstructured.

Like many other things, this was discussed on the forum first.