#begin
David starts chapter five with another definition. This time it’s the definition of the word ‘feedback’: “The transmission of evaluative or corrective information about an action, event, or process to the original, or controlling, source.” Without feedback, there is no opportunity to learn. Without it, we can only guess rather than make informed decisions.
Feedback is crucial to an iterative workflow. It allows us to establish a source of evidence for our discussions. It’s essential in any system that operates in a changing environment. And since software is a very organic, ever-changing environment, we need to embrace it. It’s often said that in software, change is the only constant. And it’s true 😉
In the context of software there are different kinds of feedback. Let’s start with feedback in coding. How does this work in practice? Well, we talked about this in the previous section: TDD is a very nice way of achieving it. If you have some other kind of rapid feedback, like a powerful REPL (as in Clojure), you could use that as well. Yet you could end up without tests, which will most likely hurt you down the road. In the book, David proposes you use TDD to get an iterative and incremental workflow based on rapid feedback. Apart from feedback, TDD also provides confirmation that existing code still works, so it helps you maintain code as well.
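To make that rhythm concrete, here’s a minimal sketch of a TDD cycle in Python with pytest (the `Basket` class and its API are made up purely for illustration, and in reality the test and the production code would live in separate files):

```python
# Red: this test is written *before* the production code exists, so it fails first.
def test_total_sums_item_prices():
    basket = Basket()
    basket.add("apple", price_cents=40)
    basket.add("bread", price_cents=120)
    assert basket.total_cents() == 160


# Green: the simplest production code that makes the test pass.
class Basket:
    def __init__(self):
        self._items = []  # encapsulated; the test only touches the public API

    def add(self, name, price_cents):
        self._items.append((name, price_cents))

    def total_cents(self):
        return sum(price for _, price in self._items)
```

After green comes the refactor step, and the test you just wrote is the safety net confirming that existing behaviour still works.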
I think it’s pretty interesting that there is still a lot of pushback against TDD. I feel it has proven its worth over the past two decades, but people are still reluctant to use it. Probably because they think writing the test before the production code is weird. Some people practice it for a short while, only to find out it is actually pretty difficult to do proper TDD. I remember when I started with TDD and I just did not know how to get started writing a test for code that didn’t exist. But I encourage you to keep trying and practicing, because in the end it will show its value.
Next, David talks about feedback in integration. He says that with each commit to the repository, the continuous integration system evaluates his code in the context of the system and everyone else’s code. So there’s a new level of feedback, a deeper understanding of the system as a whole. If the CI pipeline returns with a green check mark, he knows his changes were solid and he can continue working. If not, he needs to dive back in to see what’s happening. David also emphasizes that it’s really important to dive into the code when the pipeline fails. Fix it as soon as possible.
You cannot leave a pipeline failing. As soon as the pipeline stops working, you (and your team) must get it running ASAP, because everyone is impacted by the fact that it’s not passing.
David then goes on to talk about CI vs. Feature Branching (FB). He says FB can be problematic in teams working on large systems. Merging code can become a huge problem and stall progress. This problem is known as merge hell, and I think everyone with some experience in software development has seen it. FB is still widely used in game development though. I’ve never worked on a project that does trunk-based (CI) development in the context of games. I think in games there is just too much that can go wrong. It’s not just code that can break ‘everything’; it’s collision detection, shaders, physics, animation glitches and all kinds of weird stuff. A CI pipeline simply cannot catch these issues. Sometimes there are big dirty merges, but I don’t see a nice way around it.
You should just keep your branches small; that’s the key to all of this. I believe that in environments that are ‘just code’, a trunk-based development approach can work very well.
But taking a CI approach in any modern project is definitely a must. Even when you are doing FB you can set up nice CI/CD pipelines for testing, building, deployment and other steps. We included a step to code sign our .exe file to mark it as coming from a trusted source. So even when your CI/CD practice is a bit skewed, you can still reap the benefits.
The next section is about feedback in design and David mentions TDD immediately. He says that if the tests are hard to write, there is something wrong with the design and the quality of the code. David considers the ease of writing tests to be a good indicator of the quality of the code. And I agree. I’ve noticed this as well.
When you practice TDD and you are able to quickly write new tests to cover all execution paths for existing and new code, the code is probably well designed. When I notice that I need to reach into the code by means of reflection, for example, it tells me the code needs to improve. When I was starting out with TDD I used a lot of reflection in my tests to reach into encapsulated methods or data. This is bad! Don’t do this, because it exposes the inner workings of your system and leads to brittle tests. Just test the public API, and when you notice you cannot test everything you want to, you just need to start refactoring more.
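Here’s a small, hypothetical illustration of that difference (the `RateLimiter` class is invented for the example). The first test pokes at private state, the Python equivalent of reflecting into encapsulated data, so it breaks as soon as the internals change; the second only exercises the public API and survives refactoring:

```python
class RateLimiter:
    """Hypothetical class under test: allows at most `limit` calls."""
    def __init__(self, limit):
        self.__limit = limit   # name-mangled 'private' state
        self.__count = 0

    def allow(self):
        if self.__count >= self.__limit:
            return False
        self.__count += 1
        return True


# Brittle: reaches into encapsulated state. Renaming __count or switching to a
# token-bucket implementation breaks this test even though behaviour is unchanged.
def test_brittle_reaches_into_internals():
    limiter = RateLimiter(limit=1)
    limiter.allow()
    assert limiter._RateLimiter__count == 1


# Better: only public behaviour is asserted, so the internals can change freely.
def test_through_the_public_api():
    limiter = RateLimiter(limit=1)
    assert limiter.allow() is True
    assert limiter.allow() is False
```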
David lists a couple of attributes he associates with good code:
- Modularity
- Separation of concerns
- High Cohesion
- Information Hiding (abstraction)
- Appropriate coupling
These hallmarks of quality also help you fight complexity. Prof. Ousterhout talks a lot about complexity creep as well. He also more or less mentions the exact same principles and so does Uncle Bob. These hallmarks are so widely known that I think every software engineer knows something about them.
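As a rough sketch of what a few of these hallmarks look like in practice (everything here is hypothetical, not an example from the book): the order logic below depends on a narrow abstraction instead of a concrete database, which gives you information hiding, appropriate coupling and an easy seam to test through.

```python
from typing import Protocol


class OrderStore(Protocol):
    """Narrow interface: callers never see how or where orders are stored."""
    def save(self, order_id: str, total: float) -> None: ...


class OrderService:
    """Cohesive: only order logic lives here; storage is someone else's concern."""
    def __init__(self, store: OrderStore):
        self._store = store  # coupled to the abstraction, not to a database

    def place_order(self, order_id: str, amounts: list[float]) -> float:
        total = sum(amounts)
        self._store.save(order_id, total)
        return total


# In tests a trivial in-memory fake is enough -- no database required.
class InMemoryStore:
    def __init__(self):
        self.saved = {}

    def save(self, order_id, total):
        self.saved[order_id] = total
```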
And the last type of feedback is feedback in architecture. David says that a more subtle form of feedback from iterative workflows is feedback concerning your system’s architecture. When working with an iterative workflow, testability and deployability become really high priorities.
David’s advice is to aim for software to be releasable at least every hour. I guess this is a bit difficult to do in gaming, but I see where he’s coming from. If you are able to reach this momentum you are in a really great spot.
He also says that testability can be a really big constraint here. If you cannot run your tests in parallel, or a single test takes more than one hour, you will never be able to reach the one-hour release time. The key here is to test in isolation. If, for example, you are building a micro-service app, you really need to be able to test each service individually. If you cannot do that, you don’t have a micro-service app but a distributed monolith. And I would rather have a regular monolith than a distributed monolith, because it has all the same pros and cons but can be released and deployed much more easily.
Maybe you can scale horizontally a bit more easily with a distributed monolith, but that’s about it.
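A minimal sketch of what testing a service in isolation could look like (the service names and numbers are invented): the shipping logic takes its downstream pricing dependency as a plain callable, so the test never needs to spin up the pricing service and the suite stays fast enough to run in parallel.

```python
from typing import Callable


def shipping_cost(weight_kg: float, price_per_kg: Callable[[], float]) -> float:
    """Pure shipping logic; the call to the pricing service hides behind `price_per_kg`."""
    base_fee = 2.50
    return base_fee + weight_kg * price_per_kg()


# In production `price_per_kg` would wrap an HTTP call to the pricing service.
# In the test we pass a stub, so this service is verified completely on its own.
def test_shipping_cost_without_the_pricing_service():
    assert shipping_cost(3.0, price_per_kg=lambda: 1.50) == 7.0
```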
Keeping to this one-hour release time is at the root of continuous delivery. It promotes modular, better abstracted and loosely coupled designs, guided by tests and deployability. This means we prioritize feedback in our development approach, and thus we are more effective at making architectural decisions.
David then starts the last section of the chapter, which is about preferring early feedback. He talks about TDD and CI/CD again, but we won’t repeat that 🙂 He does mention something new: the shift-left, or fail-fast, mentality. This is a popular mindset among CD and DevOps practitioners. It involves catching defects as early as possible, starting with compilation and unit tests. Only when those succeed do you move on to higher-level tests like acceptance, performance and security tests.
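A hedged sketch of that shift-left ordering, written as a small Python driver a CI job might run (the commands and stage names are placeholders, not from the book): the cheap checks run first and the pipeline stops at the first failure, so you never wait for slow acceptance or performance tests when a unit test is already red.

```python
import subprocess
import sys

# Cheapest feedback first, most expensive last (shift-left / fail fast).
# Substitute your own build, unit, acceptance, performance and security steps.
STAGES = [
    ("compile",     ["python", "-m", "compileall", "-q", "src"]),
    ("unit tests",  ["pytest", "tests/unit", "-q"]),
    ("acceptance",  ["pytest", "tests/acceptance", "-q"]),
    ("performance", ["pytest", "tests/performance", "-q"]),
    ("security",    ["pip-audit"]),
]

for name, command in STAGES:
    print(f"--- {name} ---")
    result = subprocess.run(command)
    if result.returncode != 0:
        print(f"Stage '{name}' failed; stopping the pipeline here.")
        sys.exit(result.returncode)

print("All stages green.")
```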
David says we also need feedback in product design. He starts by saying that lots of software engineers feel tension because we are paid to create useful features, not well-designed, testable and deployable software. Some people ask why they should do all this when it’s simply not required by the customer. I guess this is where the software craftsmanship community rears its head again.
But how do we know when the stuff that we create is good enough..? Well, you involve the customer(s) and ask for feedback on your product. You really need to close this feedback loop! I’ve been in ‘agile’ projects where the customer gave me some really disappointing feedback during the delivery of the very last sprint of a project. We were creating a VR simulation in which you were able to pick up large items and drag them across a room. I built the controls so you could pick up and drag items with one controller. If you picked things up with two controllers you were able to rotate them around the y-axis at the position of the controller. I hope this makes sense haha. The customer wanted it the other way around: one hand for rotation, two for position. But these controls had already been in the project for 2 or 3 months.
Which means they simply never really tested it. So there goes your agile process.
He then mentions a very concrete way to get feedback on product design, and that is telemetry. Adding telemetry to your system allows you to gather data about which features of your system are used and how they are used. This has gained so much popularity in the micro-service world over the past decade or so that all major logging systems support this thing called OpenTelemetry, which is an open-source standard for collecting telemetry data.
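To give a flavour of what that looks like, here’s a tiny sketch using the OpenTelemetry Python API (the tracer, span and attribute names are made up, and it needs the `opentelemetry-api` package; without an SDK and exporter configured the spans are simply no-ops):

```python
from opentelemetry import trace

# Illustrative names only; in a real system an exporter would ship these
# spans to your observability backend.
tracer = trace.get_tracer("checkout-service")


def place_order(item_count: int) -> None:
    # Every call produces a span, so you can later see *whether* and *how*
    # this feature is actually used.
    with tracer.start_as_current_span("place_order") as span:
        span.set_attribute("order.item_count", item_count)
        # ... actual business logic would go here ...
```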
Last year I wrote a high-throughput event bus to use in Unity3D, including some reactive APIs. I didn’t want to use UniRX since we did not want to add a dependency to our project and our use case was limited anyway. So I built a custom event bus in which I also added the option of adding telemetry. This made for a very cool system where, for example, a QA engineer could export their telemetry log when they found a bug, so a developer could simply play it back to reproduce the same application state.
Cool stuff, however it was more of a side effect of using some form of telemetry. Telemetry is about gathering feedback: how the system is used and for what. This was just a bonus feature of an event-driven app.
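I can’t show the real Unity implementation here, but here’s a rough Python sketch of the idea (all names are invented): an event bus that keeps a telemetry log of everything published, so an exported log can be replayed later to drive the application back into the same state.

```python
import json
from collections import defaultdict


class RecordingEventBus:
    """Toy event bus that also records a telemetry log of every published event."""

    def __init__(self):
        self._handlers = defaultdict(list)
        self.log = []  # the telemetry: every event, in publication order

    def subscribe(self, topic, handler):
        self._handlers[topic].append(handler)

    def publish(self, topic, payload):
        self.log.append({"topic": topic, "payload": payload})
        for handler in self._handlers[topic]:
            handler(payload)

    def export_log(self) -> str:
        """What a QA engineer would attach to a bug report."""
        return json.dumps(self.log)

    def replay(self, exported_log: str):
        """What a developer runs to rebuild the same application state."""
        for event in json.loads(exported_log):
            self.publish(event["topic"], event["payload"])
```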
The very last part of this chapter is about feedback in organization and culture. David boils this down to two approaches.
The first one comes from the Agile Manifesto itself: “individuals and interactions over processes and tools”. This is what made the move from big-ceremony practices like waterfall to agile possible. It’s deeply rooted in agile to work in small iterations, so we can do things on an individual level and have many more interactions. Agile is all about inspect and adapt.
The second approach is about the findings of the Google DORA group, which are less subjective and which we can apply more easily: the stability and throughput metrics we described earlier. Stability and throughput still aren’t perfect, but they do provide us with a more measurable means of tracking the situation. This feedback is invaluable as a fitness function for guiding our efforts forward.
Yet David mentions something interesting to end the chapter, and I’ll end with this quote: “If your stability and throughput numbers are good, your technical delivery is good. So if you are not successful with good stability and throughput, your product ideas or business strategy is at fault”.
This makes perfect sense if we take the stability and throughput metrics as a source of truth.
#end
01010010 01110101 01100010 01100101 01101110