Starting at the End

This weekend, my sons and I were building a marble run. I began with a huge pillar so the balls would start off very high and have momentum for the full run. But then, right after I added the first slide, they started dropping the balls in and watching them slide down to nowhere and roll under the couch. When I asked them to wait, both boys, in unison, echoed a phrase I've taught them, one I'm quite proud they know: we have to test it as we go. I think they'll make great engineers one day. :)

They also reminded me to build for testability, so I set aside my pillar and started at the end of the marble run--with the marble deposit, adding pipes and bends while they pushed the balls through after every addition. This was great not only because they stayed engaged while we built, helping and testing, but also because it reminded me of an important aspect of engineering.

Over the last few weeks, an engineer at work has been making a pretty significant change to the test strategy in our pipeline. They started by creating a new pipeline, setting up the server component, then the client, and then the tests. However, none of it was working: the server and client repo checkouts were stepping on and deleting each other, the client wasn't talking to the server, and the tests weren't passing. Each step worked individually, but nothing integrated.

I sat down with them to work it from a different angle. I'd recently made similar changes in another pipeline, so I started with those: I copied and pasted that configuration into their pipeline for the new component. That gave me a working pipeline to test against, so every change could be validated as I made it. Working this way, I could figure out exactly how to check out the repos and end up with the right one in place--I already had one good checkout, so I only had to pull in what I needed from the other repo and confirm that still worked. The tests were already passing, which gave me a way to exercise the integration between the client and server and see what broke. The integration did fail, as expected, but now we knew exactly why, because the pipeline was working until it stopped, rather than not working until it did.
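
None of the actual pipeline configuration appears in this post, so here's only a minimal sketch of the idea in Python: give each repo its own directory so the checkouts can't step on each other, and rerun the already-passing tests after every change. The repo URLs and the test command are hypothetical stand-ins, not the real pipeline.

```python
import subprocess
from pathlib import Path

# Hypothetical repo URLs -- stand-ins for whatever the real pipeline checks out.
SERVER_REPO = "https://example.com/org/server.git"
CLIENT_REPO = "https://example.com/org/client.git"


def run(cmd, cwd=None):
    """Run a command and fail loudly, the way a pipeline step would."""
    subprocess.run(cmd, cwd=cwd, check=True)


def checkout(workspace: Path):
    # Each repo gets its own directory, so neither checkout clobbers the other.
    run(["git", "clone", SERVER_REPO, str(workspace / "server")])
    run(["git", "clone", CLIENT_REPO, str(workspace / "client")])


def test(workspace: Path):
    # Rerun the suite that was already passing in the copied pipeline; doing this
    # after every change tells you the exact moment something stops working.
    run(["python", "-m", "pytest", "tests/"], cwd=workspace / "client")


if __name__ == "__main__":
    ws = Path("workspace")
    ws.mkdir(exist_ok=True)
    checkout(ws)
    test(ws)
```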

Continuous integration and building for testability are two sides of the same coin--when we can continuously integrate our code, we can continuously test it. The frequency with which we can exercise a component in the place it was designed to be used is the frequency with which we can be certain our changes are working. The goal is to make those two synonymous--so that every functional change is integrated, tested, and shipped to production.