In my previous blog I talked about misconceptions around the responsibilities of a Scrum Master (SM). I described how the SM is most often a dignified version of a project manager who focuses mostly on the 'people side' of Scrum and Agile. The 'technical side' of Scrum is often forgotten, not understood due to the SM's lack of technical knowledge and experience, or simply not considered important enough to spend time on.

In this blog I want to talk about the technical aspects of Agile: why they are important and why the SM must defend them. Without the technical practices, a Scrum project can turn into a big mess really quickly.

So, what are the technical practices of Agile and Scrum? Let's summarize them and then discuss them individually:

  • Test-Driven Development
  • Refactoring
  • Simple design
  • Pair programming

Those four are the original technical practices of Agile, but I would like to add one more which is:

  • Continuous integration and deployment

Now, why would I? Because CI pipelines are popular and battle-proven. There is a general consensus in the software industry that CI/CD is a good idea, and it fits perfectly well with the Agile and Scrum methodology. Also, with the hype around micro-services there is no (practical) way around CI/CD.

Test-Driven Development

Haha, yes.. this is quite a controversial topic among programmers. Many oppose it altogether, others have had a very bad experience with it, and on the opposite side many programmers swear by it. So what happened here? Let's first discuss what Test-Driven Development (TDD) is.

First of all, there are three simple rules of TDD:

  1. Do not write any production code until you have first written a test that fails due to the lack of that code.
  2. Do not write more of a test than is sufficient to fail – and failing to compile counts as a failure.
  3. Do not write more production code than is sufficient to pass the currently failing tests.
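
To make these rules concrete, here is a minimal sketch of what a couple of such micro-cycles leave behind, narrated in comments. The `Wallet` class and its tests are invented purely for illustration:

```python
# A few TDD micro-cycles, narrated in comments.
# (`Wallet` and its API are made up for this sketch.)

# RED: the tests below were written first; before the Wallet class
# existed, the file failed to even run, which counts as a failing
# test under rule 2.
class Wallet:
    def __init__(self):
        self._balance = 0          # GREEN: just enough code to pass

    def balance(self):
        return self._balance

    def deposit(self, amount):
        self._balance += amount    # added only after a test demanded it


def test_new_wallet_is_empty():
    assert Wallet().balance() == 0

def test_deposit_increases_balance():
    w = Wallet()
    w.deposit(10)
    assert w.balance() == 10

test_new_wallet_is_empty()
test_deposit_increases_balance()
```

Each comment marks roughly 15 seconds of work: a failing assertion, then the smallest change that makes it pass.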

Any programmer with just a little experience will think these rules are absurd. They result in a very short programming cycle of maybe 15 seconds.

The moment you start writing any test code, it will fail to compile, so you must write some production code, but ju(uuu)st enough to make the currently failing test pass. Only once it compiles and passes are you allowed to write more test code. You will constantly switch between writing production code and test code.

What could be the advantages of this 'horrible' programming experience, you might ask? What does TDD promise to deliver?

Well, first of all, if every member of the team works this way, all the code everyone is working on worked 15 seconds ago, since everything passed its tests.

Secondly, you will not be debugging as much as you do right now, because if you make a mistake you simply hit ctrl+z a couple of times and try again.

Third: all these tests you are writing are perfect (technical) documentation. It does not lie, it is always in sync with the application, and it is written in a language all programmers understand.

Fourth, somehow, passing all these tests is fun. It feels like you are accomplishing something. Every single time the light turns green, you get that slight hit of dopamine (ah yeah). Also, you know you do not have to test manually, since you have already tested it. This saves you a lot of time and energy; I can't remember any developer ever saying they enjoy manual testing, especially of their own code.

Another important thing here: every programmer has probably experienced a manager saying that they cannot merge their branch before writing tests, making them write tests after the fact. How much fun is that? You have already tested this code manually; why do you need to write automated tests now? Probably just to pass some rules set by the company. Then, while writing these tests, you notice that some code is very hard to test, since it is highly dependent on and woven into the system. Your solution is probably to not test it, since you tested it manually and verified that it worked (for now). This way, you leave a hole in the test suite. And if you are doing this, everyone is, and thus the test suite is useless. Have you ever worked on a project where everyone would snicker when the automated tests passed, because it did not mean anything? Yeah, that's bad and a total waste of time.

Fifth: when everyone on the team is programming like this, you will feel obligated to strive for completeness of the test suite. You will think longer about the problem you are trying to solve, trying to find all possible edge cases. This will greatly improve the quality of the code you are writing.

All this seems nice and perfect; however, there is a lot of resistance and disagreement among software developers with regard to TDD. I have heard many people complain that TDD is very slow and you only get the benefits 'in the long run'. Not complying with the rules of TDD all the time also makes them write the tests afterwards, which is not very fun. Then there is the test fragility problem, which happens when you treat the test suite as a second-class citizen in your system: you have a nicely designed production system, but your tests are not designed well. And, of course, developers with proven experience will tell you it is not necessary to write code this way. Lastly, there is the known problem of trying to do TDD when you don't know what you actually need to test. If you do not know what the production code is supposed to do, you simply can't write a test for it. These situations often arise in software that shows or calculates simulations: you can write tests for individual parts, yet not for the end result.

So what's my opinion? I think TDD is a very useful and powerful practice, and you should apply it whenever you can, but… there is always a but: I also think it is irrational to strive for 100% code coverage. Not that I disagree with the three rules of TDD, but they are often impractical in real-life situations. Everyone on the team needs to do TDD; everyone needs to accept that there is no other way to write production code than to write a failing test first. People also need to understand how to properly design the test code, or you will run into the test fragility problem even after the first iteration of your Agile project. In Kent Beck's book Test-Driven Development he argues that you need to test the behaviour of your system, not specific functions. So writing unit tests at the function level does not really make sense, since you are then testing the smallest, most intricate parts of your system. I think you can write as many unit tests as you need to be comfortable merging your feature into production, but you should delete the tests that are too specific to the implementation. With proper encapsulation, and not just in OOP, we have the ability to hide implementation details behind the behaviour of the system. With TDD we are interested in testing the behaviour, not all of its details. When you have many unit tests that pin down encapsulated implementation details, you run into the test fragility problem really soon, since you need to fix all those tests once you want to change the internals. This talk by Ian Cooper explains it perfectly.
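
As a sketch of the difference, assuming an invented `ShoppingCart` class (this is my example, not one from the talk): the first test pins behaviour and survives any rewrite of the internals, while the commented-out one pins implementation details and would break on the first refactoring.

```python
# Sketch: test the behaviour, not the implementation.
# (`ShoppingCart` is a made-up example.)

class ShoppingCart:
    def __init__(self):
        self._items = []           # internal detail, free to change

    def add(self, price, quantity=1):
        self._items.append((price, quantity))

    def total(self):
        return sum(p * q for p, q in self._items)

# Behaviour-level test: survives replacing the list with a dict
# or a running total, because it only uses the public API.
def test_total_reflects_added_items():
    cart = ShoppingCart()
    cart.add(10, quantity=2)
    cart.add(5)
    assert cart.total() == 25

# Fragile test (the kind to delete): it reaches into the private
# list, so changing the internals breaks it even though the
# observable behaviour is unchanged.
# def test_items_stored_as_list_of_tuples():
#     assert cart._items == [(10, 2), (5, 1)]

test_total_reflects_added_items()
```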

Refactoring

Ahhh, yes.. the dreaded word no manager ever wants to hear, but oh so familiar in an Agile or Scrum project. I think everyone who has ever worked with Agile or Scrum will remember those sprints whose sole purpose was to refactor and stabilize the code base. When managers and customers hear this word they might not know what it means at first, but as soon as they do, their reaction will be: so you want me to spend money on redoing something that already works? Hell no! As a developer you never ask to do refactoring; it is simply part of your job description and professional behaviour. You calculate it into the next user stories, for example. However, sometimes, when the impact is too large, it might be smart to involve the SM and PO. The way you do bigger refactorings is in simple, small steps, where you gradually improve the code. This process might take days, weeks or even months.

But how does the Scrum team, or any software development team for that matter, protect the code from the ever-growing technical debt induced by a rapid workflow? You continuously refactor the code; never, ever will you stop. You might implement a feature today, but due to requirement changes the implementation no longer makes sense in the next iteration. This forces you to refactor the old code as well, not simply hack the new requirements into the old feature.

Now, to refer back to TDD: refactoring is part of the TDD cycle. With TDD you depend on a failing test (red), a passing test (green) and, lastly, refactoring and cleaning the code while the light stays green. This small cycle is often called red-green-refactor. So when you practice TDD, continuous refactoring should not be a problem.
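
A tiny, made-up illustration of that refactor step: the test stays untouched and green the whole time, while the duplication is squeezed out of the production code.

```python
# The refactor step of red-green-refactor: with the tests green,
# clean up duplication and re-run them after every change.
# (Invented example.)

def test_greetings():
    assert greet_morning("Ada") == "Good morning, Ada!"
    assert greet_evening("Ada") == "Good evening, Ada!"

# The GREEN version had duplication:
# def greet_morning(name): return "Good morning, " + name + "!"
# def greet_evening(name): return "Good evening, " + name + "!"

# REFACTOR: extract the shared shape. The test above never changes
# and stays green throughout.
def _greet(part_of_day, name):
    return f"Good {part_of_day}, {name}!"

def greet_morning(name):
    return _greet("morning", name)

def greet_evening(name):
    return _greet("evening", name)

test_greetings()
```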

What I have experienced is that refactoring often does appear on the schedule. No one likes it being there, since it always feels like a waste of money, especially to the customer. I think that as a developer you refactor relentlessly, always questioning the current state of the code with regard to the user stories in the iteration, while keeping future tasks in mind as well. I also think code reviews are deeply related to this. Doing proper code reviews, checking for imperfections, duplication, weird control flows or abstractions and, of course, bugs, is a first guard before code ends up in the production environment.

So when you do this, will refactoring ever appear on the schedule again? I think so, yes… But why? Not in the unnecessary form in which it might appear now; but for large refactorings there might be items on the schedule, just for the sake of writing them down and making them known to the team and management. I recently watched the conference talk below by Philippe Kruchten, yes, the one known for the "4+1 architecture view model", about how to manage technical debt. His main takeaway was to always make technical debt known, and the first step is to put it on the schedule. He compared technical debt with software architecture: try to convince a manager or customer to spend time on software architecture instead of features. I think everyone can remember those discussions, right?

https://www.agilealliance.org/resources/videos/technical-debt-in-software-development/

Simple Design

Isn't this something everyone does? Why would anyone design software in a difficult way? Because people over-analyze and over-engineer things. Simple Design originates from eXtreme Programming and conforms to four rules:

  1. Pass all tests
  2. No duplication
  3. Fewest classes and functions
  4. Reveal the intent of the code

That's it. It seems simple, yet how often have you seen a problem totally blown out of proportion when you read the code?

Often when people talk about Simple Design they throw acronyms at you like YAGNI or KISS, so let's describe these first.

KISS

Keep It Simple, Stupid… I have always liked this one, since it is used in so many different contexts and fields of work. I first heard about KISS not in the software industry, but in a video about fitness and strength training. It always reminds me of KISS in this context, so let me explain. In strength training there are a couple of main (compound) exercises, like the squat, deadlift, pull-up/chin-up, bench press and overhead press (I might have missed your favorite). By simply doing these exercises you can achieve great results. These exercises are easy to learn (yet hard to master) and target many different muscles. By simply repeating these exercises you build strength and train coordination and muscle contraction. Once you have some experience you can start to throw in some more difficult exercises, like snatches for example. If you do not get the basics right and start doing difficult / weird / pointless exercises right away, I promise you will end up on social media monkeying around on some piece of equipment. When I see another video of some dude hanging upside down in the cable station trying to work his biceps, it always reminds me of KISS (it always makes me laugh though).

Now why would I tell this little anecdote? Well, when you deliver software with an overly complex implementation compared to the problem you are solving, the software team will look at you like you are that random dude flailing in the cable station. No one wants to be that guy… So always try to adhere to KISS and solve problems in a pragmatic manner. Try to stay true to the current design or known best practices. Sometimes you will find that some requirement is a total mismatch with the current design or architecture of the code. This will force you to think through the problem properly. It might even force you to re-implement some of the existing design; but when you do, try to conform to things like the enterprise design or design patterns. Do not blow up the problem. I will remind you of the following statement, known as Anderson's Law:

I have yet to see any problem, however complicated, which, when you looked at it in the right way, did not become still more complicated.

YAGNI

You Aren't Gonna Need It: this essentially boils down to programmers over-engineering things for some future benefit. However, the requirements to match that benefit might never come. This is a classic thing that drives the complexity of many software systems; I have seen it happen all too many times already. You are working in some iteration, solving an easy feature, but you expect the customer to expand the current feature in the future (maybe next iteration already) and try to anticipate it by implementing some overly complex solution. I am guilty of this too; I think everyone who has ever written software is, especially in an Agile context. It is all too easy to peek into the backlog and see whether there are stories in there that justify implementing something you do not need to solve the issue you are working on right now.

So how would you anticipate future features and still adhere to YAGNI? By setting up the correct and minimal amount of abstraction for extension in the future. This can simply be achieved by, yes, you guessed it, refactoring your current solution. You can extract classes or interfaces from the current code and build the feature on those instead of on some concrete implementation. So when you finally get the story that expands the current feature, you can simply re-implement or extend the interfaces.
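
A small sketch of that idea in code, with invented names: the abstraction is extracted only when the second implementation actually shows up, not a year in advance.

```python
# Sketch: no speculative plugin system up front. When the story that
# needs a second implementation arrives, refactor by extracting an
# interface and building on that. (All names are invented.)
from abc import ABC, abstractmethod

# Step 1 -- the original, concrete, YAGNI-compliant solution:
# class EmailNotifier:
#     def send(self, user, message): ...

# Step 2 -- the new story arrives ("also notify via SMS"), so we
# extract the abstraction *now* and re-home the old code under it:
class Notifier(ABC):
    @abstractmethod
    def send(self, user, message): ...

class EmailNotifier(Notifier):
    def send(self, user, message):
        return f"email to {user}: {message}"

class SmsNotifier(Notifier):
    def send(self, user, message):
        return f"sms to {user}: {message}"

def notify_all(notifiers, user, message):
    # Callers depend on the Notifier interface, not on any concrete class.
    return [n.send(user, message) for n in notifiers]
```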

Duplication

Every software developer has learned that code duplication is a code smell. Yet I bet you can think of some place in the code you are working on where code is simply copy-pasted around, maybe to 'save some time'. Duplication becomes a really big problem when it is done too often. When bugs arise in the duplicated code, you must track down all the duplicates and solve the same bug over and over again, and probably forget one in the process.

In my experience, the place I encounter the most duplication is where heavy use of enums (enumerations) is combined with large switch or if-else statements. A change in the enum requires you to track down all the usages and read the switches to check whether anything is broken. In my opinion, enums promote and breed bad design. Do they have their place? Yes, of course; let me explain. Enums can be very useful when communicating through some API, maybe over the web via JSON. An enum value might hint at some type; when you consume this JSON structure, you parse these enum values, perhaps in a factory class with a switch statement. The factory then instantiates the proper classes based on that enum. This way, the enum and its many switch statements do not propagate through your entire code base, but are simply isolated in that API communication / translation layer.
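
Here is a minimal sketch of that isolation, with an invented `ShapeKind` enum and shape classes: one factory owns the switch on the enum, and the rest of the code only ever sees polymorphic objects.

```python
# Sketch: confine the enum switch to the API translation layer.
# (`ShapeKind` and the shape classes are made up for illustration.)
import json
from enum import Enum

class ShapeKind(Enum):
    CIRCLE = "circle"
    SQUARE = "square"

class Circle:
    def __init__(self, size):
        self.size = size
    def area(self):
        return 3.14159 * self.size ** 2

class Square:
    def __init__(self, size):
        self.size = size
    def area(self):
        return self.size ** 2

def shape_from_json(payload):
    """The ONE place that switches on the enum."""
    data = json.loads(payload)
    kind = ShapeKind(data["kind"])
    if kind is ShapeKind.CIRCLE:
        return Circle(data["size"])
    if kind is ShapeKind.SQUARE:
        return Square(data["size"])
    raise ValueError(f"unknown kind: {kind}")

# Everywhere else just calls shape.area(); no switches propagate.
shape = shape_from_json('{"kind": "square", "size": 3}')
```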

Design patterns

Design patterns, aren't those a common good? Well, yes, if applied in the correct manner. Design patterns are really nice when you have a specific problem to solve and there is a known pattern for it. Does this mean you need to apply them everywhere you can? No, because they will complicate the code in places where you do not need them (yet). Another point to make here is that when you apply design patterns everywhere, the code probably does not comply with KISS or YAGNI. But to be fair, design patterns are mostly a good thing, because when you use them you speak a common language with other programmers. I think everyone should know the design patterns in the original GoF book, even if you are not programming in a (traditional) imperative OOP language. People sometimes say that design patterns are not needed anymore because we have these new awesome languages and they no longer apply. That is a load of BS… Maybe these are the same guys that have an army of singletons in their code.

Another thing I have come to learn over the past years is that design patterns aren't law; they are guidelines, or molds to shape your application. Knowing the common design patterns helps you understand the principles of OOP more clearly. The patterns teach you polymorphism and what good ideas in OOP look like in general. On top of that, each individual pattern has a different characteristic to help you where you need more flexibility, need to encapsulate an abstraction, or need less coupling.

So the key takeaway here is to learn all the design patterns, but be careful about sprinkling them all over your code. Apply them only when needed, to keep the design simple.
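
As a tiny, hypothetical illustration of that restraint: here the Strategy pattern is introduced only once a second pricing rule actually exists, and in its simplest idiomatic Python form (a plain callable) rather than a class hierarchy.

```python
# Sketch: reach for a pattern when the problem calls for it.
# With only one pricing rule, a plain function was enough; a second
# rule appears, so Strategy earns its place. (Names are invented.)

def regular_price(amount):
    return amount

def member_price(amount):
    return amount * 0.9   # members get 10% off

def checkout(amount, pricing_strategy=regular_price):
    # The strategy is just a callable; idiomatic Python needs no
    # extra interface or class hierarchy for this pattern.
    return pricing_strategy(amount)
```

Adding a third rule later means writing one new function, with no changes to `checkout`.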

Pair Programming

Another practice that has taken a lot of heat and is still quite controversial. I simply do not understand why pair programming is still not an accepted way of working, so let me try to clear things up. Let's start with the fact that pairing is optional! There are many problems a programmer can solve perfectly well alone, but every now and then there is going to be that one thing he or she simply cannot get their head around. Then it is very, very useful to have an extra pair of eyes. There are more advantages: pair programming can be used to educate and train (new) programmers, not just on the code but also on the workflow or domain language. It is also the perfect way of doing a code review! Why? Because both programmers have gone through the same thought process, they are likely to have the same understanding of the problem domain and of the solution presented in the code. They will remind each other to follow the correct practices. Additionally, pairing is a team building and coaching experience.

Why is pairing still controversial? Because managers often think it is a total waste of time and money: why put two (or even more) people on the same problem if one should be able to solve it as well; aren't these programmers professionals? My argument would be that pairing is part of my job description and professional behaviour. It is a way to take a shortcut through difficult problems. Why would I break my head on something when I could pair with someone who has more domain knowledge or experience in a certain area? They can teach me and transfer some knowledge, and maybe next time I will be able to solve a similar problem easily, on my own. Another thing is that once managers accept that pairing is a thing among programmers, they will sometimes create schedules to try and contain the amount of time spent on pairing. This is unacceptable; programmers choose to pair on an informal schedule, based on need! It's like telling the manager he can only spend 2 hours a week on meetings…

What does pair programming have to do with Agile? Personally, I think… not a whole lot. I think pairing is part of being a software developer, even outside an Agile work methodology. Historically, however, pair programming came forth from the eXtreme Programming and TDD community. Pair programming in combination with TDD is a very cool experience; if you have never tried it, you should. Let me quickly explain it:

You pair with another programmer to solve some problem. One of you starts by writing a test, and the other programmer has to make it pass. Once done, he writes the next test for you to solve. You keep switching between solving and writing a test until the problem is solved. The important part is that you never write a test to solve yourself, only for the other programmer. This forces you to keep the tests simple and to give them proper names.

Agile looks and feels a lot like eXtreme Programming, so it made sense to incorporate pair programming into Agile. Also, Agile was started as a movement by software programmers, not by managers, so they included pair programming because it made sense to do it.

Continuous integration and deployment

Ah yes, CI/CD seems to be the latest hype, but it is actually pretty old and has been part of Agile since the beginning. I just wanted to point this practice out specifically, since it was never highlighted as a practice of Agile. It has always been one, just not (even) in its final form. Since the beginning, Agile has meant that every iteration there would be a deployable piece of software, and it still does. CI/CD is just a modern take on that practice. Current technology allows us to deliver deployable software not just at the end of an iteration but, if you really want, on every, single, commit.

Continuous integration helps the team go fast and makes sure everything is up to date as quickly as possible. You do not want to keep your feature branch open for a long time, because then you will need to do a big merge. I am guilty of this too: spending an awful lot of time finally getting a feature done, then coming to the conclusion that the development environment changed greatly while I developed it. After that merge, my feature was broken due to all the changes, and thus it took even longer, with the danger of the cycle repeating itself. Do not forget that integration is also part of the feature you are working on when you estimate user stories. Now, there will be user stories that take a lot of time; how do you make sure you still do CI? By merging the development branch back into your feature branch, maybe after lunch and/or at the end of every day, depending on the progress made by your team. This way you stay pretty much up to date constantly.

A big part of CI and CD nowadays is the continuous build and deployment. I think the micro-service fad has really emphasized the importance of CD; without CI/CD, micro-services would be a pain in the ass and much less of a hype. I think it is a good thing that everyone now wants to practice CI/CD, since it is only humane to do so. No one wants to build the software manually, sitting there staring at some progress bar. It also reduces the room for human error, since it is all automated. One important thing here: when you practice CI/CD, the build should never break! Once it breaks, development must stop to investigate what went wrong and fix it a.s.a.p.

Having a proper CI/CD pipeline will also be really appreciated by your QA team. They will be able to test certain features before the end of an iteration, resulting in a shorter feedback loop. This is a good thing! QA can report bugs to the development team before things get into production, which reduces the number of open bug reports. Releases to production will also be far more stable because of it.

What happens without these practices?

First of all, Agile without the technical practices appears to become just a 'management' technique. The team will be able to move quickly for the first couple of iterations, but when you ask that team to estimate some feature a year later, they will give you some absurd answer, since the code base is one big mess.

I think everyone who has ever worked with Agile will be able to relate to this! At first everyone is happy and features are delivered at light speed. But as the project progresses, the requirements change and the architecture/design, if there is any, takes a beating due to the lack of technical practices and the rapid iteration time. The programmers start to get annoyed by the project, management and the customer. Communication suffers, and programmers try to cover for themselves by giving absurd estimations, since they can no longer foresee how much time simple things will cost. Ultimately, the project reaches some budget point where the customer needs to choose whether to take whatever is there or to throw more money at the project. If he decides to invest more, he will probably still not get what he expects, since this is a downward spiral.

So how do you make sure the technical practices are followed? Well, if you do Scrum there is a role whose sole purpose is to defend the process. I'm talking about the Scrum Master (SM), of course! The SM is responsible for defending the process and making sure ALL Scrum practices are followed. This includes the technical practices! In my previous blog I wrote about the modern role of the SM and how it is most often mixed with management tasks.

We really need to make sure that the SM or project lead knows these practices (even when he or she does not have a technical background). He needs to know about TDD and why we need automated tests instead of testing everything manually. He needs to know about refactoring: that everything in the code base is subject to change (to a certain degree, of course) when a new feature is implemented, as a way of managing technical debt. He should not discourage pair programming, since, first of all, it's none of his business, and second, it will definitely help programmers solve difficult problems more quickly. He should remind the team, if need be, to look for simple solutions instead of portraying problems as too complex. Keep the design simple! And lastly, to save everyone some time and energy, set up a CI/CD pipeline as quickly as possible. Only do this once you have committed to the tech stack, though! It makes no sense to spend time on a CI/CD pipeline if you are going to use a new tech stack or switch programming languages next iteration.

What if the project lead does not have a technical background? Well, make sure you appoint someone on the team who understands the technical practices to make sure they are followed. Let him coordinate with the project lead and defend the team.

Conclusion

So, I have talked about the technical practices, what they are and why they are important. I think the practices are an integral part of Agile, and without them the chances of a failing project are high (trust me, I've been there). I wanted to write these down first of all to remind myself… I think I/we often forget about them in the heat of battle. In my experience, the most difficult practice to follow consistently is TDD. The other practices have become second nature by now, but TDD is still too controversial to be accepted as a mainstream technique, I think. I often use TDD when implementing some totally new feature, but I rarely use it while extending or changing existing behaviour that has no (unit) tests. It is often far too difficult to write tests for something that was not designed to be tested. I am guilty of this; there are ways around it, but sometimes, due to time constraints, I still won't do it. I think that's something that needs attention.


01010010 01110101 01100010 01100101 01101110
