
 

The next section of the chapter is all about automation. We've talked about that before, but let's see what else Andrew and David have to say about it. They mention that automation is a way to ensure guaranteed repeatability. That sounds pretty straightforward, but it's really profound, isn't it?

Automation indeed guarantees repeatability, and it does so on many levels. Everywhere in the SDLC where human interaction is involved, mistakes can be made in a heartbeat. Computers, however, don't make mistakes; well, unless you write bugs into the code or have some shitty AI system making decisions. How many times has ChatGPT suggested APIs that simply do not exist? The hallucinations are the work of the spaghetti monster in the sky.

David and Andrew also put some emphasis on the differences between our dev boxes. Even when every team member has the exact same hardware and development environment, there will be differences, which can cause all kinds of issues, bugs and crashes. So even when you have automated everything, if each developer needs to run some pipeline on their own dev box, there will be differences in output. Yes… Yes, this is one of the major reasons Docker was invented! A Docker image never changes and will run exactly the same way every single time. For this reason it's really important to set up a proper CI/CD pipeline in the cloud or on-premises, and everyone should use it.
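
To make that concrete, here's a minimal sketch of a reproducible .NET build image. The project name and version tags are just placeholders; the point is that both the build and runtime toolchains are pinned, so every dev box and CI agent produces the same output.

```dockerfile
# Build inside a pinned SDK image so every machine compiles
# with the exact same toolchain, regardless of the host setup.
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY . .
RUN dotnet restore
RUN dotnet publish MyApp.csproj -c Release -o /out

# Run on the matching, equally pinned runtime image.
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY --from=build /out .
ENTRYPOINT ["dotnet", "MyApp.dll"]
```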

 

Compiling the Project

Next, David and Andrew touch upon some of the actions you should automate. First of all: compiling! Having an IDE compile your project after each edit will show you type errors and the like quickly. If you're using some shitty text editor for editing, you'll have to use the CLI manually to compile your bowl of spaghetti and find out in the process. Fortunately, the .NET ecosystem provides us with a lot of really wonderful tooling. Both Visual Studio and Rider are amazing, and VSCode is finally getting a little bit up to speed as well. Without this tooling you would be stuck in the stone age. Maybe you could do without compiling, but then you would definitely need some linter to check for errors, or employ a very strict TDD practice to cover your ass.
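
If you do live in the terminal, the .NET CLI can at least automate the edit-compile loop for you. A quick sketch, assuming a standard dotnet installation and a project in the current directory:

```sh
# One-off compile; surfaces type errors without an IDE.
dotnet build

# Or recompile and re-run the tests automatically on every file change.
dotnet watch test
```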

 

Generating Code

In order to fight the evils of duplication, the pragmatic approach would be to use custom code generators where possible. I agree to some degree here. Remember, we've also talked about the dangers involved with code generators and so-called evil wizards. When you do not understand the code these evil generators spit out, it will come back to bite you in the ass later on. Nonetheless, I think code generators serve their purpose. Heck, I've written a couple over the years. For example, I have my Sjwagmeister project that generates OpenAPI 3.0 compatible API clients. Currently it only supports JSON requests without any form of validation though. It's still a really early version and I've not really found the time or motivation to continue development on it. But yeah… Another code generator I wrote generated classes based on a custom communication protocol we made to communicate over HTTP. It basically generated partial classes that merely contained data, while another class contained the methods that could operate on that data. It was very similar to how Entity Framework works.
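
To give you an idea of the flavor, here's a minimal sketch of that partial-class trick. The MessageDefinition type, the protocol and the field names are all hypothetical; the real generator parsed our protocol spec, of course.

```csharp
using System;
using System.Linq;

// Hypothetical input: a message definition parsed from our custom protocol spec.
record MessageDefinition(string Name, (string Type, string Field)[] Fields);

static class MessageGenerator
{
    // Emits a partial class holding only data; hand-written logic lives
    // in the other half of the partial class, so regeneration is safe.
    public static string Generate(MessageDefinition def)
    {
        var props = string.Join(Environment.NewLine,
            def.Fields.Select(f => $"    public {f.Type} {f.Field} {{ get; set; }}"));

        return $@"// <auto-generated/> Do not edit by hand.
public partial class {def.Name}
{{
{props}
}}";
    }
}

class Program
{
    static void Main()
    {
        var def = new MessageDefinition("HeartbeatMessage",
            new[] { ("int", "SequenceNumber"), ("DateTime", "SentAtUtc") });
        Console.WriteLine(MessageGenerator.Generate(def));
    }
}
```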

 

Regression Tests

Next they explain how we can use our automation to run regression tests as well. I would argue that we can use automation to run any kind of test, except for manual tests of course, hence the name. We have many kinds of tests covering a large part of our package. Most of them, of course, are unit tests, but we also have component, integration and even visual tests. We have an in-house, custom-built visual test framework that compares a reference screenshot to the one taken during the test run. It's a pretty nice utility to have to cover those test cases that are just really hard, or impossible, to cover with integration tests. It does bring its own set of challenges however, and I'll give you a quick example: some shaders compile differently on different platforms and thus produce different outputs. So in some edge cases you might need platform-dependent tests as well. That's nothing new, but still something to keep in mind.
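
I obviously can't share our framework's internals, but the core idea boils down to something like this sketch: compare pixel buffers with a small tolerance, so platform and shader rounding noise doesn't fail the test.

```csharp
using System;

static class VisualDiff
{
    // Compares two RGBA pixel buffers of equal size. Returns true when the
    // fraction of pixels that differ beyond `channelTolerance` stays under
    // `maxMismatchRatio` — absorbing small platform/shader rounding noise.
    public static bool Matches(byte[] reference, byte[] actual,
        byte channelTolerance = 2, double maxMismatchRatio = 0.001)
    {
        if (reference.Length != actual.Length)
            return false; // different resolution: fail fast

        long mismatched = 0;
        for (int i = 0; i < reference.Length; i += 4) // R,G,B,A
        {
            for (int c = 0; c < 4; c++)
            {
                if (Math.Abs(reference[i + c] - actual[i + c]) > channelTolerance)
                {
                    mismatched++;
                    break;
                }
            }
        }

        double ratio = mismatched / (double)(reference.Length / 4);
        return ratio <= maxMismatchRatio;
    }
}
```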

 

Build Automation

The next subject they touch upon is build automation. There's nothing really new we can discuss here, other than the fact that they suggest producing final deliverables on CD-ROM through build automation, haha. Yeah, welcome to the 90's! They also mention the idea of making nightly builds. This is a nice idea, which I've done a lot in the past as well, but with modern practices it's just better to create builds when new changes are merged into master. You could still set a rule that a new build is only created when the minor version increases, but not for patches. We currently have a setup based on semantic git commits, and I really like this approach. By using sensible git commits you can prefix a commit message with feat:, chore:, test:, ci:, fix:, refactor: or docs:, and based on the prefix the build automation tool can decide whether to create a build or not. This way you can push changes to the repo with a chore, ci, test or refactor prefix without creating a build; only a feat or fix prefix creates one. These prefixes are really useful, and there are gh-actions to create your workflow based on them. Great stuff!
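
A rough sketch of that idea as a GitHub Actions workflow (not our actual pipeline; the workflow name and branch are placeholders):

```yaml
# Only commits prefixed with feat: or fix: that land on master produce a build;
# chore:, ci:, test:, refactor: and docs: commits are skipped.
name: build-on-feat-or-fix
on:
  push:
    branches: [master]

jobs:
  build:
    runs-on: ubuntu-latest
    if: startsWith(github.event.head_commit.message, 'feat:') || startsWith(github.event.head_commit.message, 'fix:')
    steps:
      - uses: actions/checkout@v4
      - run: dotnet build -c Release
```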

 

Web Site Generation

Another form of automation David and Andrew want to discuss is web site generation. What they mean here, however, is generating documentation and hosting it somewhere. I've quickly discussed this earlier. You can, without much effort really, compile some documentation from your project and host it on gh-pages. Doxygen is pretty nice and straightforward to use, and there are gh-actions available to push it to the pages of your project. Simple and easy.
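
Something like this sketch would do it. It assumes a Doxyfile in the repo root with OUTPUT_DIRECTORY set to docs, and it uses the community peaceiris/actions-gh-pages action to publish:

```yaml
# Regenerate the Doxygen docs on every push to master
# and publish them to the gh-pages branch.
name: docs
on:
  push:
    branches: [master]

jobs:
  docs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: sudo apt-get update && sudo apt-get install -y doxygen
      - run: doxygen Doxyfile   # writes HTML to docs/html
      - uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: docs/html
```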

 

Approval Procedures

Another great subject they talk about is approval procedures. This, right here, is exactly why David and Andrew are such visionaries. They talk about leaving special markers in the source code that say "needs review", like all those ugly ToDo comments in the code. They are describing the modern code review process here. And yes, I know that code review can become a major bottleneck in some teams. I think the current metric for PR approval in our team is about 6 hours or so, which is really fast. This includes any comments, discussions and rework! Don't get me wrong, there are always these occasional PRs that stretch an entire week. But most of the time these are either really impactful or they spark such a big discussion that some further investigation or rework is needed. Scope creep is still a real thing and might derail your magnificent PR process. But to get back to the book: David and Andrew give some tips about scanning your source code and producing a list to host on a website to do code review. Isn't that amazing!? That's GitHub, right? :p They should have patented that or something haha, they missed out. Nah, I think most engineers support the open source movement, but still, money must be made to support one's family.
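
For fun: their "scan your source for markers" tip is pretty much a one-liner these days. A sketch, assuming you tag lines with a hypothetical // REVIEW: marker:

```sh
# List every "needs review" marker with file and line number,
# ready to be turned into a report or a web page.
grep -rn "// REVIEW:" --include="*.cs" . > review-list.txt
```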


01010010 01110101 01100010 01100101 01101110
