#begin

Next I want to discuss the chapter about testing (chapter 12). I’m a fan of applying TDD where it makes sense and adds value. This chapter is not about TDD but it is still very interesting to read how they viewed testing and made it a core activity within OOSE.

This chapter dives deeper into what testing actually is, what types of tests there are, how one should test, and why testing adds value to the quality of the product. Yeah, let’s go (and check it out)!

 

On Testing

The authors immediately kick this chapter off with a nice statement: “To test a product is relatively independent of the development method used”. I think this is a great observation, and very much the truth. Any product created with any development method will always require some form of testing, be it automated, manual and/or exploratory testing.

Testing is often split into two parts: verification, which tests whether the system is being built correctly, and validation, which tests whether the correct system is being built.

In OOSE, testing is integrated into all activities, and it can begin as soon as the analysis has started. The more it is integrated, the better.

Next they mention something very important: a crucial aspect of testing is your (your own, your team’s, your management’s) attitude towards it. We must be aware that testing takes time, and that the cost can be high and must be allowed for. Testing can take up 30% of development time, and in some cases even exceeds 50%. Testing is part of development and must be planned just like the analysis and construction of a product. It should (read: must) be described in the project plan; it should not be something that is scrambled together quickly at the end of the development phase.

Ha..ha.haaaa. Yeah, this makes so much sense; it’s logical, and every software developer knows this. Yet testing is still so often delayed as long as possible, pushed back as far as we can get away with, and then “scrambled together” at the end of the development phase or somewhere during alpha/beta. I think the authors make a great point about testing here. No matter whether you like TDD or BDD (they don’t mention these, because the concepts did not exist yet 30 years ago), you need to test as soon as possible, no matter your development method.

 

The purpose of testing

This section of the book starts off with some definitions of the types of “errors” there can be in a software product. They mention they used the 1983 standard of the IEEE:

  • Failure: The program misbehaves. So it’s a property of the system execution.
  • Fault: Exists in the program code, so it can be fixed by changing the code.
  • Error: A human action that resulted in a software failure; thus an error may lead to the system containing a fault, which makes the system fail.

We should therefore consider a test that finds many faults a successful test, and not the opposite. I think this is the mentality used in TDD: we intentionally write a failing test first, which is good, and then make it pass, which is even better.
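
To make that concrete, here is a minimal red-green sketch in JUnit 5 (the PriceCalculator class and its applyDiscount method are hypothetical names, purely for illustration):

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Step 1 (red): write this test first. It fails because applyDiscount
// does not exist yet; in the sense above, that failing test is a
// successful one.
class PriceCalculatorTest {

    @Test
    void tenPercentDiscountIsApplied() {
        PriceCalculator calculator = new PriceCalculator();
        assertEquals(90.0, calculator.applyDiscount(100.0, 0.10), 0.001);
    }
}

// Step 2 (green): the simplest implementation that makes the test pass.
class PriceCalculator {
    double applyDiscount(double price, double rate) {
        return price * (1.0 - rate);
    }
}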

So the purpose of testing is to find as many faults as we can before we ship the product. In some sense it is a destructive process since we must try our very best to show faults in the system.

A nice anecdote they mention as well is that for every third fault you correct, you introduce a new one. This is a well-known meme, but it is actually backed by research. I remember having to read a paper about this in my Software Evolution course during my master’s at university.

 

Test Types

Next the authors continue with an explanation of the different kinds of tests there are. They speak about unit testing, integration testing, regression testing, operation testing, full-scale testing, performance testing, stress testing, negative testing, requirement testing, ergonomic testing, documentation testing and finally acceptance testing.

When I read this section of the chapter, some forms of testing were unknown to me, or the difference between certain forms was not clear. I think we all know what unit testing and integration testing mean, but how about the others? I won’t go too far into detail, but will briefly mention each of them:

  • Regression testing: re-testing your system after you have made a change. You might, for example, run both your unit and integration tests to verify that your change did not break anything.
  • Operation testing: This is the act of testing your system’s stability under normal, expected conditions and checking whether it behaves correctly.
  • Full-scale testing: Testing your system with all parameters cranked up to their limits (yet not over) to see if it behaves correctly and, if needed, exits gracefully instead of flat out crashing on you.
  • Performance testing: Testing the processing capability of your system.
  • Stress testing: This form of testing pushes your system over its limits to see how it behaves and whether it exits gracefully. It’s much like the full-scale test, except that the limits are occasionally exceeded on purpose. Your system must survive these peaks of processing.
  • Negative testing: Stresses the system beyond what it is built for, pushing far over its limits. All special barriers and exit strategies must be tested.
  • Requirement testing: Tests that can be traced back directly to requirements. They could take the form of one of the earlier test types, like performance tests or full-scale tests.
  • Ergonomic testing: This basically tests your UI. It comes down to exploratory testing and is thus a manual activity. It checks whether the UI is consistent, menus are readable, and messages are visible and understandable. Does the system present a logical picture?
  • Documentation testing: Tests whether your documentation is up to date with your code. Much documentation does not reflect the current state of the system; these tests verify that yours does. I’m not sure how one would automate this, and it isn’t described in the book. I have a feeling this form of testing can take a lot of time to complete.
  • Acceptance testing: The last tests before it goes to the customer. These tests are often called Alpha or Beta tests. I think we all know what they mean.

All these forms of testing are actual testing activities that are performed, yet there is one more prevalent form of testing: code inspection, or code review. I think this has been integrated into our industry very well. I cannot stress enough how important the concept of code reviews is. Not just to keep up the quality of the system, but also to keep yourself educated about the parts of the system you did not work on. Reviewing someone else’s code gives you insight into parts of the system you did not actively write code for. This is a great way to stay in shape.

Next the authors move on to testing techniques and zoom in on unit and integration testing specifically. So let’s see what they have to say about them.

 

Unit Testing

Unit testing (UT) is the most primitive form of testing and should be done by the developers themselves. In traditional systems, unit tests refer to operations, but in OO code they can sit at a higher level. In OO, you might have to set up the correct state for the test, arranging all dependencies if needed, before you can actually test anything (heavy setup like this is a symptom of bad UT). This is why integration testing is often a smoother activity than UT.
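
To show what that setup looks like in practice, here is a minimal arrange/act/assert sketch in JUnit 5 (OrderService, InMemoryStock and their methods are made-up names for this example):

import java.util.HashMap;
import java.util.Map;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Hypothetical collaborator the unit under test depends on.
class InMemoryStock {
    private final Map<String, Integer> levels = new HashMap<>();

    void add(String sku, int amount) {
        levels.merge(sku, amount, Integer::sum);
    }

    int available(String sku) {
        return levels.getOrDefault(sku, 0);
    }
}

// Hypothetical unit under test.
class OrderService {
    private final InMemoryStock stock;

    OrderService(InMemoryStock stock) {
        this.stock = stock;
    }

    boolean place(String sku, int quantity) {
        return stock.available(sku) >= quantity;
    }
}

class OrderServiceTest {

    @Test
    void orderIsAcceptedWhenItemIsInStock() {
        // Arrange: set up the state and dependencies the unit needs.
        InMemoryStock stock = new InMemoryStock();
        stock.add("book-123", 5);
        OrderService service = new OrderService(stock);

        // Act: exercise the unit.
        boolean accepted = service.place("book-123", 2);

        // Assert: check the observable outcome.
        assertTrue(accepted);
    }
}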

Any form of testing will require you to set up certain test beds. Since we avoid testing in production (although this has changed a lot with the advent of micro-services and the fail-fast mentality), we need to simulate test environments where we can run our tests safely. This has changed so much lately, however, that I think we need to do both: run traditional tests and, where we can, run tests in production as well, like throwing the chaos monkey into the live system every now and then.

UT normally speaks of two methods: structural (white-box) testing and specification (black-box) testing. White-box testing means that the tester is aware of the internal implementation, while with black-box testing they are not. I think every developer knows what this means. The important part of this separation is that, for proper and complete unit testing, you need to do both, since they complement each other.
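
A tiny sketch of the difference, using a made-up clamp function: the black-box test is derived from the specification alone, while the white-box test deliberately exercises every branch of the implementation (JUnit 5 again):

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class Clamp {
    // Specification: the result always lies within [low, high].
    static int clamp(int value, int low, int high) {
        if (value < low) return low;
        if (value > high) return high;
        return value;
    }
}

class ClampTest {

    @Test
    void blackBox_fromTheSpecificationOnly() {
        // We only know the contract, not the code behind it.
        assertEquals(5, Clamp.clamp(5, 0, 10));
    }

    @Test
    void whiteBox_oneCasePerBranch() {
        assertEquals(0, Clamp.clamp(-3, 0, 10));  // branch: value < low
        assertEquals(10, Clamp.clamp(42, 0, 10)); // branch: value > high
        assertEquals(7, Clamp.clamp(7, 0, 10));   // branch: value in range
    }
}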

Another very important point is that each and every unit test should be designed with regression testing in mind. When I read this in the book I thought it was a very truthful statement, haha. This is often what goes wrong when people first start practicing TDD. They write many, many tests, then a refactoring happens or new requirements come in, and many tests break. They complain, whine about it, and probably delete the tests, leaving a hole in the test suite. So always remind yourself, and others, that unit tests are sensitive to change.

Now, in the context of OO and structural testing, we need to pay special attention to testing inherited classes and overridden functions. There are at least two reasons why you might need to re-test inherited classes and operations.

  1. The child class modifies instance variables which the inherited operation assumes to be immutable or to hold certain values.
  2. Operations in the parent call inherited and/or overridden functions implemented in the child.

Inheritance thus might result in more extensive testing depending on the implementations of both the parent and child classes.
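
A small sketch of both reasons (all class names are hypothetical): the parent’s total() calls fee(), which the child overrides, and the child’s constructor changes state the inherited code relies on, so tests written against Account prove nothing about SavingsAccount:

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class Account {
    protected double balance = 100.0;

    double fee() {
        return 1.0;
    }

    // Reason 2: the parent calls an operation the child may override.
    double total() {
        return balance - fee();
    }
}

class SavingsAccount extends Account {
    SavingsAccount() {
        // Reason 1: the child modifies state the inherited total() relies on.
        this.balance = 0.0;
    }

    @Override
    double fee() {
        return 0.0;
    }
}

class InheritanceRetestTest {

    @Test
    void parentBehaviour() {
        assertEquals(99.0, new Account().total(), 0.001);
    }

    @Test
    void inheritedOperationBehavesDifferentlyInTheChild() {
        // The same inherited total(), a very different outcome.
        assertEquals(0.0, new SavingsAccount().total(), 0.001);
    }
}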

Specification, or black-box, testing on the other hand focuses on what the unit solves instead of how it solves it. A great technique here is equivalence testing: you partition all possible inputs into equivalence classes and write one test per class, which drastically reduces the number of tests that need to be written while still covering the whole space of possible inputs.

For example, for integers we could test 0, 1, 2, 5, 1000, MaxInt and -1. This covers a couple of things: first of all we test boundary values like 0, -1 and MaxInt, and we test both even and odd numbers. There are probably more interesting integers to test depending on the function (maybe we have special behaviour for negative, odd numbers; then we need to add more cases to cover those).
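
Those values translate naturally into one parameterized test, with one representative per equivalence class (JUnit 5 params module; isNonNegative is a made-up function just to have something to test):

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;
import static org.junit.jupiter.api.Assertions.assertEquals;

class EquivalenceClassesTest {

    // Hypothetical unit under test.
    static boolean isNonNegative(int n) {
        return n >= 0;
    }

    @ParameterizedTest
    @CsvSource({
        "0, true",          // boundary: smallest non-negative value
        "1, true",          // smallest odd positive value
        "2, true",          // smallest even positive value
        "5, true",          // an arbitrary small odd value
        "1000, true",       // an 'ordinary' large value
        "2147483647, true", // boundary: MaxInt
        "-1, false"         // boundary: just below zero
    })
    void oneRepresentativePerEquivalenceClass(int value, boolean expected) {
        assertEquals(expected, isNonNegative(value));
    }
}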

 

Integration Testing

The purpose of integration testing is to test whether a combination of multiple units works together properly. This is often a subject of debate: if we do extensive unit testing, why would we need integration testing when everything has been tested already? Well, the answer lies in the fact that when units are combined we will find unforeseen faults. Maybe timing or threading issues arise, for example (so much fun!).

The authors mention that for integration testing we often need a special test environment, and that these tests are most likely done by the test team. I think it is interesting that they frame it as if some testing/QA team should write and execute the integration tests. That may be a valid option when using Cucumber or FitNesse for example, but those are more acceptance tests than integration tests. So I think integration testing is part of the developer’s job description.

A nice tip they give as well is to never start integration testing before unit testing has been completed, since faults in the underlying units will most likely hinder the integration tests. It’s also important to keep some kind of test log for all the tests you run. I think our modern testing tools do this very well automatically, so we don’t need to worry too much about it.

So how would one identify an integration test? Well, in the context of OOSE, a use case is a great place to start. Use cases inherently “integrate” many classes into one coherent flow. Since use cases are realized as control objects, as described in my short blog about sequence dialog structures, they act as a spider in a web, controlling multiple objects. So writing tests for a use case object is great for integration testing.

How would we test such a use case then? Well, we need to identify the different paths through the use case and test each of them. So we probably write basic and odd course tests, but also tests that validate that certain requirements are implemented, plus documentation tests. We also need to do some stress, full-scale and operation testing, and make sure we can run multiple use cases in parallel.
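
As a sketch, assuming a made-up withdraw-money use case: the control object coordinates the participating entity objects, and the integration tests cover the basic course plus one odd course (JUnit 5; every name here is hypothetical):

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Hypothetical entity object taking part in the use case.
class BankAccount {
    private double balance;

    BankAccount(double opening) {
        this.balance = opening;
    }

    double balance() {
        return balance;
    }

    void debit(double amount) {
        balance -= amount;
    }
}

// Hypothetical control object: the spider in the web coordinating the flow.
class WithdrawMoney {
    private final BankAccount account;

    WithdrawMoney(BankAccount account) {
        this.account = account;
    }

    boolean execute(double amount) {
        if (amount <= 0 || amount > account.balance()) {
            return false; // odd course: invalid amount or insufficient funds
        }
        account.debit(amount); // basic course
        return true;
    }
}

class WithdrawMoneyIntegrationTest {

    @Test
    void basicCourse() {
        BankAccount account = new BankAccount(100.0);
        assertTrue(new WithdrawMoney(account).execute(40.0));
        assertEquals(60.0, account.balance(), 0.001);
    }

    @Test
    void oddCourse_insufficientFunds() {
        BankAccount account = new BankAccount(10.0);
        assertFalse(new WithdrawMoney(account).execute(40.0));
        assertEquals(10.0, account.balance(), 0.001); // state untouched
    }
}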

 

Error analysis and test completion

When all the tests are done we need to analyze the results to check whether we have found new faults in the system. We also need to be sure that the tests have been executed correctly. Is there a fault in the test, the test data, the system, or even the test environment configuration? If the fault was not caused by the system under test, we need to correct it and run the test again.

When all tests are analyzed and completed, we need to make sure we restore the test environment for reuse next time. Documentation like test logs, experiences and other artifacts should be collected and filed properly. It is important that experiences are documented as soon as possible after testing has been completed, while they are still fresh in the tester’s mind.

 

Conclusion

Testing. A very important aspect of software development to me personally. However, the way they describe testing in this book does not resonate with me as much as I expected before I began reading it. I’m a fan of TDD and its application where it makes sense. TDD tries to solve many issues described in this book, like testing all courses through a unit. Anyone with some experience with TDD knows this might lead to what we call the “fragile test” problem. This arises when we do so much white-box testing that our tests become so dependent on the internals of a unit that, when it is refactored or updated with new features, all the tests break.

I think if you practice TDD correctly you can solve many of the issues presented in this chapter. Just by the fact that you write the tests first, your code will be testable. This is an issue they don’t address directly, but I guarantee that if you just write your functions and use cases without thinking about tests first, you will quickly find out that some parts of the system are untestable.

But enough about TDD.

I think this chapter covers a lot of ground and is definitely a great place to start. I did not discuss everything in it, so if you are interested in testing in particular I encourage you to read it for yourself. A very important point they make in this chapter is that testing is part of every step of the software development life-cycle. Even when you are in the analysis phase you want to include QA for their watchful eye, and when you have some use cases defined you want to validate them with your customer as a form of acceptance testing.

In my next blog I will do a full review of the book. Stay tuned!

 

#end

01010010 01110101 01100010 01100101 01101110
