#begin

At last! I’m finally writing my review of Object-Oriented Software Engineering: A Use Case Driven Approach (OOSE) by Ivar Jacobson et al. It took me a lot of time to read through the book. You know, some books are difficult to read because they are really information dense, but in this case it’s just the sheer volume of the book that makes it a long read. I’m not much of a speed reader, so I read every word, sentence, paragraph, section, page and chapter, and I actually read books cover to cover. It’s not often that I put a book away, unless it is obvious that all valuable information is in the first chapters and the rest is just filler content to satisfy the publisher’s needs.

But enough rambling about reading; I bought this book since it is praised a lot in the software craftsmanship community. Many of the plugin-based architectures stem from this book, and after reading it I can tell how they took inspiration from it. Uncle Bob and Ian Cooper often mention this book specifically in their (conference) talks. But also when, for example, Jimmy Bogard talks about his “Vertical Slice” architecture I can see the comparison with the concepts discussed in the OOSE book. What I’m trying to say is this: the concepts of this book have inspired and penetrated the software industry in many ways.

Some people also asked me: “Why are you reading this book? It was written such a long time ago and don’t we all know about use cases already?” I think that is a valid question, yet this book is a true classic. I think the concepts in this book have stood the test of time. I could compare it with another book, Structure and Interpretation of Computer Programs by Abelson and Sussman, in that sense. And yes, we probably all know about the horror that use cases can be, but the use cases described in the book are not as horrible as one might think. And then again, I would ask them: do you know what “a use case driven approach” means and how you can drive an architecture by use cases?

I really enjoyed reading this and I’ll now review the book. So without further ado: let’s go.

 

Structure of the book

So this book has 16 chapters which are divided into three parts. First we have the introduction, which consists of 5 chapters. The second part is all about the concepts of OOSE. This is the real content of the book and definitely the most interesting. The third part describes some applications of OOSE and contains case studies on how to introduce OOSE into your organization.

Since this is such a massive book, I will not describe each chapter individually. I will, however, split this review into the three parts I mentioned. For each part I will broadly describe the concepts discussed in its chapters.

 

Part 1: Introduction

Part 1 contains 5 chapters and is really, as the title says, an introduction to the book. The rationale for why the book was written is explained here, and it lays the groundwork for OOSE and object orientation.

 

Chapter 1: System development as an industrial process

This section of the book is all about the reasons why we need this book and why we need a development methodology.

Chapter 1 starts off by making the case that writing software is an industrial process. The only way to solve the software crisis is to write lots of software efficiently and effectively. The industry they compare software with is the construction industry. I think this is a comparison that is often made; we can see it in the terminology of “Software Architecture”, which comes from construction as well. Many industries, including construction, depend on methods, processes and tools to do their work.

Methods make explicit the step-by-step procedures to be applied to the architecture. A process provides the scaling-up of a method, and tools are provided to support all aspects of the enterprise. So to really transform software development into an industrial process, we need a method, processes and tools. And that is exactly what this book is all about!

Some other important aspects of construction are that it leaves room for creative design and that it requires long-term support. Requirements are often vague and problem solving is left to the experts, so out-of-the-box thinking is often appreciated in construction, as well as in software development. Long-term support is also a very important aspect of software development. As software architects we need to take that into account.

Based on these things the book makes the case that the authors have created a method, processes and tools to really industrialize software development. They call the method Object-Oriented Software Engineering (OOSE). OOSE is designed to produce a rational architecture, makes an essential contribution to long-term support, documentation and reuse, and places special emphasis on management of change.

Next the authors describe what the software development process would look like compared to construction. I will not describe this now since the rest of the book covers it in far more detail.

 

Chapter 2: The system life-cycle

This chapter dives deeper into the process of change in system development, reuse of source code, documentation and examination of the methodology.

All systems change during their life-cycle. Period. An industrial process should thus focus on these system changes. Therefore, the concept of software architecture must be the base of the system throughout all subsequent development. The architecture must be adaptable to support changes during the life-cycle of the system. A faulty architecture will result in serious consequences down the road.

Software development truly is an iterative process. It’s great that the authors made this case! Remember, the book was written in 1991; Agile and Scrum did not exist yet! System development is thus a process of progressive change. What is also particularly interesting is that they say it is insufficient to only describe the system from the designer’s point of view (being the software architect). It needs to be described for all parts of the enterprise!

They also state, very specifically, that in order to reach commercial success you need to limit the number of different versions you release and instead make sure that your software can be configured in different ways depending on the needs. There should be no need to re-write the entire thing when a new customer walks in through your front door asking for the version that fits his needs.

Prototyping is also a big element in OOSE since it is often difficult to determine how a system should work. Reasons may be technical or non-technical, or even both. That’s why we build a prototype: it is used to highlight certain properties of a system while other parts can be disregarded. This suggests that there may be multiple prototypes for a single application. For example, we might create a prototype just to verify that the basics of the architecture are good enough, and we might create a prototype to demonstrate how the user interface would look and even function. (Modern wire-framing software is really nice!) What is very important, though, is to communicate the objective of the prototype: you do not want to take a highly experimental prototype as iteration 1 of your end product. I have experienced this far too often, where we would hack some prototype together in a week, show it to the customer, and then the (project) manager decided we would take it from there, ignoring signals from the developers that it is absolutely not meant for further development. Starting with an unstable base project will lead to problems in the future, trust me!

Another big selling point of OOSE is that it promotes reuse. OO is often praised by people for promoting reusability of software. But how do we actually do it effectively and efficiently? OOSE tries to solve this with the concept of components, which are certain isolated and independent pieces of software that can actually be reused, and also by promoting changeability in software. If you only have to write the software once, and can then make multiple different releases by just tweaking configuration, you can reuse the thing many times over. This way, you can even reuse documentation such as handbooks, educational material and technical descriptions.

Next the authors describe what makes up a software development methodology. They hinted at this in chapter one: we need a method, some processes and tools. Those can be applied to any kind of architecture, but in the case of OOSE we want a use case driven architecture for its understandability, changeability, testability and maintainability.

A method is a planned procedure to reach a goal step by step. It’s based on a preconceived notion of the architecture of the working system and is formulated in terms of concepts based on that architecture. So a good method simplifies the development of systems with a particular architecture.

A process is a scaling-up of a method. So a process should focus on a specific system and describe how a product is handled during its lifetime. Processes can have subprocesses which are independent of other subprocesses. Development work can then be split up and carried out at different locations. Also included in the process is the notion of version control.

 

Chapter 3: What is object-orientation

So this chapter is basically an introduction to the concepts of OO. I’m not going to dive into detail very much here, since if you want to know more about OO you should read a dedicated blog or book. Nonetheless I will discuss some of the things written here on a high level.

It is often said that “OO models the world around us” and is there to fill the semantic gap between software and the real world. I think this point has been proven false many times over, although for newbies it is a great way to start with OO. But you quickly find out that many OO concepts have no real-world counterpart.

Prominent qualities of OO include:

  • Understanding: it’s easier to understand a system when its objects relate to reality.
  • Modifications: in good OO design, modifications tend to be local and not ripple down into every corner of the source code.

Next the authors talk about objects, types of relations, encapsulation, sending “stimuli” (i.e. calling functions on other objects), abstraction, classes and instances, polymorphism and inheritance. What is interesting, though, is that they talk extensively about how to use inheritance, and that sometimes it may be more useful to choose composition over inheritance. I agree with this and I think many other devs do as well. I think it’s really nice they made this observation back in 1991 already (not that these people were stupid or anything). They even talk about multiple inheritance in the book, which I do find fascinating.
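
To make the composition-over-inheritance point a bit more tangible, here is a minimal C# sketch under my own assumptions; the names (Report, IPrinter) are invented for illustration and are not taken from the book.

  using System;

  // Hypothetical example: a report that needs printing behaviour.
  // Inheritance would force Report to *be* a printer; composition lets it *use* one.
  public interface IPrinter
  {
      void Print(string text);
  }

  public class ConsolePrinter : IPrinter
  {
      public void Print(string text) => Console.WriteLine(text);
  }

  // Composition: Report holds a printer instead of inheriting from one, so the
  // printing strategy can be swapped without touching Report's type hierarchy.
  public class Report
  {
      private readonly IPrinter _printer;

      public Report(IPrinter printer) => _printer = printer;

      public void Publish(string contents) => _printer.Print(contents);
  }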

 

Chapter 4: Object-oriented system development

Chapter 4 is about how one would develop an object-oriented system and what methods and processes belong in such an effort. First they describe two existing methodologies: the waterfall, of course, and a more interesting one, the spiral model. I read about the spiral model by Boehm extensively during my master’s and I found it a really interesting one. It feels a lot like the agile methods we have today.

Next they talk about object-oriented analysis, whose purpose is to obtain an understanding of the application focused only on the system’s functional requirements. Object-oriented analysis contains the following activities:

  • Finding objects
  • Organizing objects
  • Describing how objects interact
  • Defining the operations of the objects
  • Defining objects internally

The goal of this analysis is that the objects identified here should be found in the design as well. This creates traceability among the different phases of the life-cycle. You may also identify objects that have already been implemented, which we call components. For example, you could find that you can reuse implementations of stacks and queues.

Next up is testing: the authors make the case that testing is part of every phase of the life-cycle and must start as soon as possible. We start at the lowest level, unit tests.

 

Chapter 5: Object-oriented programming

The last chapter of part 1 of the book dives deeper into object-oriented programming concepts and strategies. They say OO encourages reuse. I think many people try to sell OO this way. But as time has proven, OO has nothing to do with reuse; good design does. I know of a lot of very good, stable and reusable software that is written in a functional language like Clojure. Although, with OO and polymorphism you can surely build some very reusable software.

An OO language must possess the following concepts:

  • Encapsulation
  • Class and instance concepts
  • Inheritance
  • Polymorphism

The rest of the chapter deeply explains these four concepts. I’m not going to describe all this since I think there are way better sources available than my blog. The content in the book is really, really good so if you want to know, just read the book :).

What I do want to discuss is the fact that people often think that inheritance is a prerequisite for reuse. Yet, it is not! You can build really reusable components without inheritance. Another important aspect is to really understand what polymorphism is and how you can use it to promote dynamic binding of your components. Without dynamic binding you get a really rigid system which does not promote reuse.
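
As a small, hedged illustration of that last point, here is a C# sketch (made-up shapes, not from the book) where dynamic binding keeps the consuming code reusable: TotalArea works for any IShape written now or later.

  using System;
  using System.Collections.Generic;

  // The consuming code only knows this abstraction; the concrete behaviour is bound at runtime.
  public interface IShape
  {
      double Area();
  }

  public class Circle : IShape
  {
      public double Radius { get; init; }
      public double Area() => Math.PI * Radius * Radius;
  }

  public class Rectangle : IShape
  {
      public double Width { get; init; }
      public double Height { get; init; }
      public double Area() => Width * Height;
  }

  public static class AreaReport
  {
      // Reusable for any IShape, including ones written long after this method.
      public static double TotalArea(IEnumerable<IShape> shapes)
      {
          double total = 0;
          foreach (var shape in shapes) total += shape.Area();  // dynamic binding in action
          return total;
      }
  }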

 

Part 2: Concepts

Now, this is the core content of the book. Part 2 contains 7 chapters which describe each phase of the OOSE method and includes practices and concepts related to it like real-time and database specialization, components and testing.

 

Chapter 6: Architecture

I thought reading this was very fun and interesting at the same time. I like reading about software architecture since it often introduces a new point of view.

So what is architecture according to OOSE, and what would be the preferred architecture to implement a system? Well, first of all, since system development should support an organization, we should view it from the organization’s and the user’s perspective. This is really important if the software is being developed outside the organization which is going to use it.

Next the authors discuss some of the concepts used to define the architecture. They talk about object-oriented programming, which started with Simula back in the 1960s in Norway. They quickly found out that OO was very suitable for developing graphical user interfaces, which were difficult to design with traditional languages. Next is the concept of conceptual modeling, which came to life in the 1970s and of which analysis of information management systems and organizational theory are examples. The aim here is to create models of the system or organization that can be analyzed. In OOSE an analysis method is described that allows for expression of dynamic behaviour. The last concept is block design, which originates from LM Ericsson and was also developed in the 1960s. It is now a widespread technique in the telecommunication world. Block design describes how modules (blocks) are collections of data and programs and their mutual communication with “signals”, i.e. stimuli, i.e. function calls. Blocks in OO can be viewed as components or service packages.

 

Models

Next the authors talk about the different models in the OOSE method. This is where it gets interesting since these are directly related to the method.

  • Requirements model
  • Analysis model
  • Design model
  • Implementation model
  • Test model

The basic idea with these models is to capture right from the start all the functional requirements of the system from a user perspective. This is accomplished by the requirements model. The requirements capture all important usages of the system by a user.

When these requirements are gathered, they are structured in a logical manner, which is called the analysis model. In the analysis model we assume an ideal implementation environment. We do not burden ourselves with hardware, databases, the programming language or even whether the system is distributed. Why do this? Well, the software world has changed a lot, and very fast. We don’t want the design of the system to be coupled to the current state of the industry since it just changes too quickly.

We do take the implementation environment into consideration in the design model, which is next on the list. This model describes how the system is integrated with the database, the programming language of choice and distribution. When all of these decisions are refined, the next model can be created.

This is the implementation model, and is the actual code. So at this stage we start writing code to implement the system. Remember, we have already gathered all the requirements, designed an optimal system structure for an ideal world and then made a design model to adapt it to the implementation environment.

The last model we need to take care of is the test model, which aims at verifying the system. It mainly involves documentation and test specifications.

Now, a really important aspect of these models is that the terminology is the same in each of them. This creates traceability between the models. Also, if in a later model the names of objects are changed, we need to go back to the older models and change the names of the objects there as well. This way we can preserve traceability, and thus by no means are models final once they are delivered.

 

Architecture

So in the case of OOSE, system development requires different models of a software system. The goal of OOSE is to find a powerful modeling language, notation and technique for each of the models. In the book they define architecture as:

“The architecture of a method is the denotation of its set of modeling techniques. A modelling technique is generally described by means of semantics, syntax and pragmatics. By syntax we mean how it looks, semantics is what it means and pragmatics are heuristics and other techniques.”

Then this architecture is implemented through some development process. In reality this process is just a number of different waterfalls. The OOSE book described the waterfall method in an earlier chapter and I find it funny that they describe a development process this way. I think if you say the word waterfall today people will look at you weirdly. Nonetheless it is nice that they acknowledged this. So a process is just how a product should be developed and maintained. In the case of OOSE they do not describe the process by means of waterfall structures; instead they divide the development and management of a product into multiple processes which interact heavily.

The main processes here are analysis, construction, components and testing. The analysis process concerns the conceptual system we want to build. In the construction process we develop the system from the models created in analysis; it is highly related to the components process, whose aim is to develop and maintain components that arise from the construction process. We also have a testing process that aims at integrating the system and verifies whether it is ready for delivery.

 

Model Architecture

Since OOSE is an OO method, all models are built using objects, yet for each model developed there are different types of objects. The most important criterion for defining such objects should be that they must be robust to modifications and help the understanding of the system.

Let’s take a look at the different models and their objects.

 

Requirements model

The three elements that make up the requirements model are the use case model, interface descriptions and the problem domain model. Interface descriptions refer to the user interface in this context, not to the interface programming construct.

I think we all know the use case model. It uses actors and use cases. Actors represent what interacts with the system and a use case is a related sequence of actions. So by defining all the use cases and actors at a very early stage of the project we can quickly find out what the system is supposed to do. Also, we could spot potential new functionality and make sure we make our architecture adaptable. We would not necessarily implement it, since we don’t want to over-engineer things, but we could add an extra layer of abstraction for example.

In this way we get a system model that is use case driven. When we want to remodel the system, we change the behaviour of the use cases and involved actors. The architecture will be controlled by what the users wish to do with the system!

In order to support the use cases, we should create prototype (user) interfaces early on in the project. In the current day and age we have very nice wire-framing tools, like Figma for example. Using these during requirements elicitation and combining them with the use cases will give you a lot of insight into whether you are developing the correct system. Using the wire-frames we can simulate the behaviour of the use cases by showing how the screens flow together.

The last element is the problem domain model. I feel this relates closely to Eric Evans’s Domain-Driven Design. We want our object model to communicate with all stakeholders, even the non-technical ones. The stakeholders must all understand the conceptual meaning of the objects.

In a way, the requirements model aims to define the limitations of the system and specify its behaviour.

 

Analysis model

When the requirements model is approved by the orderers of the system we can start to develop the actual system. The development starts with developing the analysis model which aims to structure the system independently of the actual implementation environment. This means we focus on the logical structure of the system in a stable, robust, maintainable and extensible manner.

In the analysis model we capture information, behaviour and presentation. Information specifies the actual information held in the system in terms of local state, both long and short term. Behaviour specifies what will be implemented in the system and boils down to how the system changes state. Presentation is all about how the system is presented to the outside world.

In OOSE there are three types of domain objects coupled to these three dimensions (information, behaviour and presentation). There are entity objects, which represent the information space. They model the information that should be held over a longer period of time and typically survive a use case. For example, creating an order in a web shop creates an Order entity, which is not destroyed after the order is completed but is probably held somewhere in the system (a database) for later analysis, history or other purposes. Also, all behaviour that is naturally associated with this entity should be placed in the entity object.
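
To make the web shop example slightly more concrete, here is a minimal hypothetical sketch in C# of what such an entity object could look like; the fields and the Total() behaviour are my own invention, not the book’s.

  using System;
  using System.Collections.Generic;
  using System.Linq;

  // Entity object: information that survives the use case, plus the behaviour
  // that naturally belongs to it (here: computing its own total).
  public class Order
  {
      private readonly List<(string Item, decimal Price)> _lines = new();

      public Guid Id { get; } = Guid.NewGuid();
      public DateTime PlacedAt { get; } = DateTime.UtcNow;

      public void AddLine(string item, decimal price) => _lines.Add((item, price));

      public decimal Total() => _lines.Sum(line => line.Price);
  }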

Interface objects model the information that is dependent on the (user) interface of the system. Everything that has to do with the UI should be placed in such an object.

Last, there are control objects that model the functionality that is not naturally tied to any other object. This behaviour often involves operating on several different objects, doing some computations and then returning results to the interface object.
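
Continuing the hypothetical web shop example from above, here is a rough C# sketch of how the three object types could cooperate: the interface object translates the actor’s action, the control object coordinates, and the Order entity from the earlier sketch holds the information. All names are invented for illustration.

  using System;

  // Interface object: translates the actor's action into something the system understands.
  public class CheckoutScreen
  {
      private readonly PlaceOrderControl _control;

      public CheckoutScreen(PlaceOrderControl control) => _control = control;

      public void ConfirmButtonPressed(string item, decimal price)
      {
          var orderId = _control.PlaceOrder(item, price);
          Console.WriteLine($"Order {orderId} placed.");  // present the result back to the actor
      }
  }

  // Control object: behaviour not naturally tied to the entity or the interface object.
  public class PlaceOrderControl
  {
      public Guid PlaceOrder(string item, decimal price)
      {
          var order = new Order();   // the entity object from the earlier sketch
          order.AddLine(item, price);
          // ... pricing rules, persistence, etc. would be coordinated here ...
          return order.Id;
      }
  }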

 

Design model

The design model takes the analysis model one step further; here we do include the implementation environment in the system and thus we must adapt our analysis model to support reality. The design model can be regarded as a formalization of the analysis model so it fits into our implementation environment.

Objects used to model the design are called blocks. These describe the intention of how the code should be produced. A block normally aims at implementing an analysis object, yet they are not the same thing. Objects in the analysis model relate to blocks, but for implementation reasons we might need to split them up into multiple blocks, so we can keep a loosely coupled system for example. However, we must always keep traceability in the back of our minds. Blocks must be able to be traced directly back to objects in the analysis model. This way, if we make changes to blocks, we might have to make changes to the objects in the analysis model as well.

The design model thus brings us closer to the actual source code. The design model should be a drawing of how the source code should be structured and developed.

Interactions between blocks are modeled by stimuli. A stimulus is a simple trigger to execute logic in another block; in modern OO languages these are just function calls. To model these, we use interaction diagrams (sequence diagrams). These can quickly show how multiple blocks behave. When we have defined these diagrams we have enough information to start on the implementation model.

 

Implementation model

The implementation model is the actual source code of the system. We do not actually need an OO language, yet it is very convenient to have it since the previous models are all described in concepts of objects.

In terms of source code, it is desirable to have a class match a block described in the design model. We do this to promote traceability between the different models. This way it is very easy to match a certain block with a class in the source code and trace it back to the entity/interface/control object(s) in the analysis model. Nevertheless, a single block in the design model could represent multiple classes in source code. An example the authors give here is a single block that was using 17 classes for its implementation. But typically, one block will map onto about 5 classes.

When you have blocks that are modeled with multiple classes, there might be a component hidden there. Components can be viewed as building elements, ready to be placed in the implementation. But I’ll talk about components later since there is an entire chapter dedicated to them.

 

Test Model

The last model is the test model and it describes, you guessed it, the result of testing. The fundamental aspects of testing are mainly the test specification and the result. Tests can be performed by the developers themselves, like unit tests. However, there could also be a separate testing group for a project, like a QA department, for more intricate tests and exploratory testing.

This testing group should test for bugs and other faults with the system. But what is also important is to test all the requirements and make sure the user interfaces are implemented correctly. The requirement model I discussed earlier is thus verified by the testing process. I’ll talk more about testing in a later chapter.

 

Chapter 7: Analysis

Analysis? Why do you need it? Well, because during the analysis process we specify and define the system which is to be built. The resulting models describe what the system will do, and they form the basis for the system’s development. An important aspect of the analysis phase is that we model the system with no regard to the implementation environment. We want to make as ideal an analysis of the project as possible.

Why would we ignore the implementation environment during analysis? Because we want to guarantee that the ensuing architecture of the system will be based on the problem and not on conditions prevailing in the implementation environment.

During the analysis phase, two models will be developed. The first model is the requirements model which should make it possible to delimit the system and to define what functionality should take place within it. The second model is the analysis model. This model gives a conceptual configuration of the system, consisting of control objects, entity objects and interface objects to build a robust and extensible system.

Next the authors describe the sample project they are going to use in the further chapters to explain the concepts in the book. This is a really basic example, yet there are two case study chapters that focus on two additional far more complex examples.

To explain the concepts of OOSE we are going to model a recycling machine for cans, bottles and crates (of bottles). The machine has 2 buttons, start and receipt, three entries for cans, bottles and crates and two actors. One actor is the customer returning his/her items and the other is the administrator who needs to do repair/administrative tasks on the machine.

 

Requirements model

So the requirements model aims to delimit the system and define what functionality the system should offer. Sometimes this model is used as a contract between the developer and the orderer of the system. It is thus a view of what the customer wants from the system. A very important aspect of this model is that it must be readable for non-OOSE practitioners.

As I noted in the previous chapter, the requirements model consists of three elements: the use case model, the problem domain model and the user interface descriptions. Let’s talk about the use case model first.

 

The use case model

To identify the use cases of a system we need to start with the identification of the users of the system. Users of the system are called actors in the use case model. Important here is that an actor is simply a role that interacts with the system and it might even be another system. So an actor is not exclusively a person! Also, a single person might map onto multiple actors since a person can take on multiple roles.

A good starting point is to check why the system is being developed. Who are the actors the system is trying to help? These actors are called the primary actors. There are also often actors that need to maintain and support the system, which are called secondary actors. Secondary actors only exist so that the primary actors can use the system optimally.

Important here is that the primary actors govern the structure of the system, not the secondary!

Only after we (think we) have defined all actors do we start on the use cases themselves. Use cases are sequences of interactions performed on the system; they are specific ways to interact with the system. Each use case models a complete course of events initiated by an actor and specifies the interaction that takes place between the actor and the system.

To understand use cases we can view their descriptions as state transition graphs. This means that use cases have internal state! Transactions between the actor and the use case will change the state and progress through the (internal) state machine.
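
One way to picture that is to write the use case down as a little state machine. The C# sketch below uses the recycling machine’s “return items” course with states I made up myself; the book uses state transition graphs, not code.

  using System;

  // A use case viewed as a state machine; the states are invented purely for illustration.
  public enum ReturnItemsState { Idle, Accepting, PrintingReceipt, Done }

  public class ReturnItemsUseCase
  {
      public ReturnItemsState State { get; private set; } = ReturnItemsState.Idle;

      public void StartPressed()
      {
          if (State == ReturnItemsState.Idle) State = ReturnItemsState.Accepting;
      }

      public void ItemInserted()
      {
          // each transaction with the actor progresses the internal state machine
          if (State != ReturnItemsState.Accepting)
              throw new InvalidOperationException("Press start before inserting items.");
      }

      public void ReceiptPressed()
      {
          if (State == ReturnItemsState.Accepting) State = ReturnItemsState.PrintingReceipt;
      }

      public void ReceiptPrinted()
      {
          if (State == ReturnItemsState.PrintingReceipt) State = ReturnItemsState.Done;
      }
  }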

To find all use cases for an actor you might ask the following questions:

  • What are the main tasks of each actor?
  • Will the actor have to read/write/change any of the system information?
  • Will the actor have to inform the system about outside changes?
  • Does the actor wish to be informed about unexpected changes?

By going through all actors and defining their interactions with the system we can get a complete overview of the desired functionality of the system.

Also, in the use cases we might identify actions that are being repeated, like for example logging in. These actions can be grouped into abstract use cases which can be extended by others. Sometimes it is not obvious what functionality should be placed in another use case and what is just a variant of a use case. We often describe the basic course of a use case, and then the additional courses as variants.

This confusion often appears when the requirements are vague and thus this vagueness in the requirements specification becomes clear very early in the project. This is a great thing because now you have more than enough time to change them since it is at the beginning of the project.

Examples for when to extend use cases are:

  • To model optional parts of use cases
  • To model complex and alternative courses which seldom occur
  • To model separate sub-courses which are executed only in certain cases
  • To model that several different use cases can be inserted into a special use case, such as the login/logout mentioned before.

 

Interface descriptions

It is often appropriate to describe the (user) interfaces when making the use case model. We can add sketches of what the user will see when performing a use case. Modern wire-framing software can even make this fully interactive, and if you are doing web design it might even generate the HTML for you based on the wire-frames themselves.

The user interface descriptions are thus an essential part of the use case descriptions and should accompany them. Also, when designing the UI we want to include the customer as much as we can for input and have the user involved in the project. It is essential that the UI reflects the user’s logical view of the system. This is a fundamental aspect of human UI design: the consistency between the user’s conceptual picture of the system and the system’s actual behaviour.

When we have actors communicating with the system that are systems themselves we don’t need a UI. We might however, design communication protocols to model how the systems communicate with each other.

 

Problem domain objects

When I read this chapter I got a real Domain-Driven Design (DDD) vibe. It is exactly that, yet they give it a different name. Actually, this book was released more than a decade before the DDD book by Eric Evans, so he might have gotten his idea from this one. I encourage you to read the DDD book; it is great and I have enjoyed reading it as well. Maybe I’ll do a review somewhere in the future. But back to OOSE:

When we have some requirements, however vague they might be, we should start by defining some (problem) domain objects. We should start to develop a logical view of the system using these objects, which have a direct counterpart in the application environment and which the system should handle information about.

This problem domain model will come in very handy when defining the use case model. Another major benefit of having these objects is that they provide a very good way to communicate with the stakeholders of the system. Since the stakeholders can recognize the objects and concepts, this logical model can be used to discuss what the system does.

A benefit of having the domain objects defined in the requirements model is that you can use them in your logical model in the next phase; Analysis.

When we think we have defined all domain objects we can refine the use cases some more. Now we may be able to define proper abstract use cases which are to be extended by or used in other use cases. Note that we make a difference between an extends relation and a uses relation. With a uses relation we have abstracted out some common part shared by several use cases, whereas with an extends relation we insert an additional, optional course into an existing use case. An abstract use case can be compared to an abstract class, which cannot be instantiated without inheriting from it.

A general rule of thumb is the following, and I quote: “If the course to be extended is an independent course itself, and the course has very little to do with what it is inserted into, extend should be chosen. If, on the contrary, the courses are strongly functionally coupled, and the insertion must take place every time to obtain a complete course use should be chosen.”

So uses relations are found through extracting common sequences from several use cases, and extend relations are found when new courses are introduced or when extensions need to be made to use cases in specific cases.
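
A rough code analogy of the two relations, entirely my own and not the book’s notation (the book models these between use cases, not classes): a uses relation behaves like always invoking a shared, extracted course, while an extends relation inserts an optional course only in certain cases.

  using System;

  // "Uses": a common course extracted from several use cases and always inserted
  // to obtain a complete course.
  public class LoginCourse
  {
      public void Run() => Console.WriteLine("Verify the operator's credentials...");
  }

  public class ReturnItemsCourse
  {
      private readonly LoginCourse _login = new();

      public void Execute()
      {
          _login.Run();   // strongly coupled: the complete course needs it every time
          Console.WriteLine("Accept items and print a receipt.");
      }
  }

  // "Extends": an independent, optional course inserted from the outside only in
  // certain cases, without the base course knowing about it up front.
  public static class DailyReportExtension
  {
      public static void RunWithExtension(Action baseCourse, DateTime now)
      {
          baseCourse();
          if (now.Hour == 23) Console.WriteLine("Also compile the daily report.");
      }
  }

Usage would then look something like DailyReportExtension.RunWithExtension(new ReturnItemsCourse().Execute, DateTime.Now).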

 

Analysis Model

When the requirements model is done it should be signed off by the stakeholders of the system to get some consensus for its correctness so we can focus on the next step in the method; creating the analysis model. The focus of the analysis model is on the structuring of the system. We do this through the use of interface, entity and control objects. Each of these objects has its own purpose and will describe a specific aspect of the system. These objects were mentioned briefly in the previous chapter but now let’s discuss them in more detail.

 

Interface objects

Functionality described in the use cases that is directly dependent on or linked to the system environment will be placed in interface objects. Through these objects the actors communicate with the system. The job of the interface object is to translate the actor’s actions into events the system can process, and to translate events back towards the actor so he/she can understand what the system is doing. This means that interface objects offer bidirectional communication between the actor and the system.

Remember that in OOSE interface objects are simply the UI classes. So you might use some sort of Model-View-Presenter architecture (or any other UI-based pattern or architecture) to implement them. The presenter then talks to the rest of the system through means I will discuss soon.
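
Since Model-View-Presenter comes up here, a small hedged sketch of an interface object in that style; the deposit-value handling is a stand-in for what would normally be delegated to a control object, and all names are hypothetical.

  // Hypothetical Model-View-Presenter flavoured interface object.
  public interface IReturnItemsView
  {
      void ShowRunningTotal(decimal total);
  }

  public class ReturnItemsPresenter
  {
      private readonly IReturnItemsView _view;
      private decimal _total;

      public ReturnItemsPresenter(IReturnItemsView view) => _view = view;

      // Translate the actor's physical action into an event the system understands...
      public void CanInserted(decimal depositValue)
      {
          _total += depositValue;           // a real system would delegate this to a control object
          _view.ShowRunningTotal(_total);   // ...and translate the result back to the actor
      }
  }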

What should be obvious to all developers is that interface objects are often dependent on each other to solve certain problems. We often need multiple screens, windows or pop-ups to get things done. Acquaintance association (uses) relations are often the way to model this. So an interface object can send a trigger to start or show another interface object.

We can also see interface objects that act as aggregate objects and control multiple smaller interface objects. Think about list views and complex screens that contain multiple interesting UI components like grids, image sliders, date selectors and forms etc.

Another type of interface object that is very important and sometimes overlooked is the interface object that does not communicate with users, but with other systems. These interface objects are often expressed as communication protocols or APIs.

What is interesting here is that they describe that, since interfaces and these communication protocols live on the same level of abstraction, they are allowed to communicate directly and form acquaintance relations. This seems strange to me since an API or a communication protocol is often implemented as some sort of business logic requirement, not user interface. This would mean that you invoke business logic directly from your UI layer. So either I don’t really understand this part or they mixed up the responsibilities here. They even state that the interface object invoking the communication protocol must translate the information into discrete information the interface object can use.

Since interface objects are extracted from the use case descriptions they are often directly linked. This means that making changes to a use case often results in a change to the interface object as well, so changes to a use case stay local to the interface object. To identify which parts of a use case flow should be allocated to interface objects, we should look for the following units:

  • Which present information to the actor or request information from him
  • The functionality of which is changed if the actor’s behaviour is changed
  • Where a course is dependent on a particular interface type

Next the authors talk about the different types of control there can be in a system. I blogged about this part a couple of months ago since I found it particularly interesting, so I won’t describe it here; you can read that blog if you are interested.

 

Entity objects

Entity objects are used to model information that the system will handle over a long period of time. Typically it is this information that survives use cases and must be kept somewhere in the system, for example in a database. Entity objects are independent of use cases and control objects. Many entity objects found early on are obvious, since they are often found in the problem domain model; others are harder to find. But there is a trade-off: it is easy to model too many entity objects in the belief that more information is necessary. So try to model the information efficiently.

The needs of the use cases should be the guidelines for finding the entity objects so their existence can be motivated by the use case descriptions.

Information modeled inside entity objects is called attributes. So an entity object can hold multiple attributes of any given type. These attributes are all described as acquaintance relations, and cardinality can also be part of them. Sometimes it may be difficult to know whether attributes are actually attributes of an entity, or should be extracted out to model a new entity object.

The general rule for modeling information as entities or attributes is the following: if the information can and should be handled separately, it should be an entity. If the information is very strongly coupled to some other information and never used on its own, it should become an attribute. It is thus decisive how we use the particular information.

Functions that are often found on entity objects to manipulate the information within are the following:

  • Storing and fetching information
  • Courses that must be changed if the entity object is changed.
  • Creating and removing the entity object.

The last item on that list is one I don’t agree with, since it violates the Single Responsibility Principle (SRP). Creating and removing the entity should be handled by some factory, for example. If you want to read more about the SRP, I wrote a blog about it a while ago. You can read it here.
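
As a hedged sketch of that alternative, the creation rules could live in a small factory instead of inside the entity itself; the item kinds and deposit values below are invented for illustration, not taken from the book’s recycling machine example.

  using System;

  public class DepositItem
  {
      public string Kind { get; }
      public decimal DepositValue { get; }

      public DepositItem(string kind, decimal depositValue)
      {
          Kind = kind;
          DepositValue = depositValue;
      }
  }

  // The creation rules live here, so the entity object keeps a single responsibility.
  public static class DepositItemFactory
  {
      public static DepositItem Create(string kind) => kind switch
      {
          "can"    => new DepositItem(kind, 0.15m),
          "bottle" => new DepositItem(kind, 0.25m),
          "crate"  => new DepositItem(kind, 3.00m),
          _        => throw new ArgumentException($"Unknown item kind: {kind}")
      };
  }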

 

Control objects

Control objects model flow of control that is not naturally placed inside interface or entity objects. We take this approach since we want to isolate changes to the behaviour to as few objects as possible. If we had modeled the behaviour inside the entity and/or interface objects, we might need to touch a lot of them when we want to make a behavioural change.

So these control objects typically work as the glue that unites the objects in the system. Control objects often only live as long as the use case runs. So they are bound to the use cases and thus they are found directly from the use cases. Generally, we have only one control object per (abstract) use case.

The aim should be to tie only one actor to a control object, as changes to the behaviour of the actor are the reason why a use case changes. So when you have multiple actors using the same control object, difficulties might arise; when we only have one actor per control object, these changes will be isolated.

Behaviour that is often modeled in control objects includes transaction-related behaviour, control sequences specific to one or a few use cases, and functionality to isolate the entity objects from the interface objects. The control objects are thus the ones that unite courses of events and carry communication between objects.

 

Chapter 8: Construction

The construction phase in OOSE produces two models: the design model and the implementation model. Construction is thus divided into two parts, design and implementation. The design model is a further refined analysis model, yet with the implementation environment taken into consideration, and the implementation model is the actual source code of the system. Developing the design model consists of three steps:

  1. Identify the implementation environment
  2. Incorporate these conclusions and develop a first approach to a design model
  3. Describe how the objects interact in each specific use case

 

The design model

The design model further refines the analysis model. Here is where we actually explicitly define the interfaces of the objects and also the semantics of the operations. We will describe how issues like databases, programming language features and distribution will be handled.

A design model is composed of objects called blocks, which are thus design objects. These make up the actual structure of the design model and will later be implemented in source code.

Blocks will abstract the actual implementation. As mentioned earlier in this blog, a block might in the end be mapped to a single class, yet it’s often the case that a block is represented by multiple classes.

Initially, it might be useful to create a block for each of the objects in the analysis model to increase traceability. Note that in OOSE, the concept of traceability is bidirectional! The goal is to keep the structure found during analysis and not to violate it with unnecessary design. The design must be robust and logical.

 

The Implementation model

When we need to adapt the design model to the implementation environment we must first identify the technical constraints we are working under. Ideally these have been known in advance, even before creating the analysis model. Sometimes you are simply required to use a specific language, database, or specific web services since the company already has subscriptions or server time.

However, since these might still be volatile, we need to make sure the system is not directly dependent on them. It is preferable to handle these requirements in the same changeable way as all other requirements, except maybe for the chosen programming language; you can’t really prepare for switching from C# to Clojure, for example.

To be able to change things inside the target environment we need to encapsulate them into blocks. You might thus create blocks just to model the target environment.
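
A minimal sketch of what such an encapsulating block could look like in C#, reusing the hypothetical Order entity from earlier; the rest of the system depends only on the interface, so the concrete storage technology remains swappable.

  using System;
  using System.Collections.Generic;

  // Block encapsulating part of the target environment: callers never see the
  // concrete storage technology, only this interface.
  public interface IOrderStorage
  {
      void Save(Order order);
      Order? Load(Guid id);
  }

  // One possible realization; swapping in a real database later only touches this block.
  public class InMemoryOrderStorage : IOrderStorage
  {
      private readonly Dictionary<Guid, Order> _orders = new();

      public void Save(Order order) => _orders[order.Id] = order;

      public Order? Load(Guid id) => _orders.TryGetValue(id, out var order) ? order : null;
  }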

There are a couple of ways that the ideal design model might be changed:

  • To introduce new blocks in the design model which do not have any representation in the analysis model
  • To delete blocks from the design model
  • To change blocks in the design model (splitting and joining existing blocks)
  • To change the associations between the blocks in the design model.

Usually, adding blocks to handle the target environment is a good change, yet deleting blocks is suspicious. When you delete blocks to match the implementation environment it might be needed, but if you are deleting them just to change the logical structure you should change the analysis model first and see if it makes sense. Splitting and joining blocks can also be seen as suspicious changes.

To model our implementation we should also use interaction/sequence diagrams for our use cases. These diagrams describe how a use case will be realized in terms of the blocks used and their communication. They show us which blocks participate in a sequence of events and how the use case is realized. These diagrams also define the stimuli, including the parameters sent.

Next comes a very familiar section, since I already blogged about this part: sequence diagram structures. This section describes the structure of centralized and decentralized sequence diagrams. You can read my blog about those here.

Following this the authors describe that we also need to describe the interfaces of the block (public methods and properties) and internal states. To model the internal state we might use state transition diagrams and/or activity diagrams.

They even describe how to map concepts of this book onto programming language terminology, which I think is very practical and nice information to have. Personally I think it is a bit of common sense. But still, since OO was still rather new, this information was probably welcomed.

 

Chapter 9: Real-time specialization

From about this chapter onward, the concepts of OOSE start to be applied to different subjects. The first subject is real-time specialization. This boils down to advanced industrial real-time systems which depend heavily on (hardware) sensors and actuators to provide information to the system.

I think it is really nice they added this chapter, although it does not really relate to any of the work I do personally. I’m working in the information space, not high-tech (technical) software or embedded engineering. Yet it is cool they added this so I can get a better understanding of how OO concepts can be mapped to these kinds of systems too.

But what is a real-time industrial system exactly?

 

Classification of real-time systems

The book describes two major categories of real-time systems.

  1. Systems that have hard deadlines that must be met or catastrophe will occur. Think about software in aerospace, for example. Deterministic predictability of processes is an essential property of this kind of system.
  2. Non-hard systems where services are provided in real time; while important, a catastrophe will not occur when deadlines are not met or services are not provided immediately. Quality and performance of this kind of system is measured in terms of the services provided.

In these kinds of systems we should always identify critical/essential processes and non-essential processes. We must also categorize them as periodic, which means they run on a certain interval, or aperiodic, which means they run at arbitrary points in time, typically in response to events.

In general, with real-time systems, we are able to read state through sensors and affect the environment through actuators. These are “connected” to the external and/or internal environment. Real-time systems can be distributed or centralized and may optionally have some means of interface to control what is going on inside them.

Fundamental issues in real-time systems whether they are hard or non-hard include: the view of process, the means of communication and the method of synchronization.

For the view of process, the programming language might play an important role. For example, in telecommunications there is heavy use of Erlang, yet outside of this field there is not. Another example they provide in the book is Ada’s rendez-vous functionality, which is used for synchronizing concurrent processes, yet is well known for performance issues. The point they are trying to make here is that when the semantic gap between the implementation environment and the behavioural description of the system grows too large, (long-lasting) complexities are built into the system.

Second, in the area of software methods we can of course consider OOSE, or in modern times it will most likely be some derivative of agile, like Scrum. You need to be very careful that you really do focus on semantics and quality rather than other requirements. This is because, for real-time systems, clear semantics and high quality will keep the project afloat.

 

Analysis

Use cases specified during analysis provide a strong basis for capturing the real-time requirements of the system. From a very early stage it is possible to attach hard or non-hard requirements to specific use cases. An example of this might be: request X must be completed within 100 ms. By associating a time attribute with a sequence in the use cases we are able to document real-time requirements. Note that these requirements might be linked to periodic or aperiodic processes.
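
A simple hedged sketch of how such a time attribute could later be turned into an automated check; the 100 ms figure is just the example from above and HandleRequestX is a hypothetical placeholder.

  using System;
  using System.Diagnostics;

  public static class DeadlineCheck
  {
      // Verify that a request completes within its documented real-time requirement.
      public static void AssertCompletesWithin(TimeSpan deadline, Action request)
      {
          var stopwatch = Stopwatch.StartNew();
          request();
          stopwatch.Stop();

          if (stopwatch.Elapsed > deadline)
              throw new Exception(
                  $"Deadline missed: {stopwatch.ElapsedMilliseconds} ms > {deadline.TotalMilliseconds} ms");
      }
  }

  // Usage: DeadlineCheck.AssertCompletesWithin(TimeSpan.FromMilliseconds(100), () => HandleRequestX());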

We might also want to add information about possible concurrency, because, let’s be real, real-time systems are highly concurrent. There can be two flavors of this: the concurrency happens locally inside the use case, or it occurs between use cases. Hence, real-time requirements are often naturally attached to use cases.

By coupling and documenting the real-time requirements with the use cases we also promote traceability, which is very important in the OOSE process. This assists in verifying the requirements in a later development phase.

 

Construction

During construction we consider the real-time system requirements in relation to the target environment. Remember, during analysis we only make the best possible logical view of the system. It is essential that we prioritize how we can adapt to the implementation environment in such a way that it matches our analysis model.

Concurrency is not an uncommon feature of real-time systems, although there are systems out there that are not concurrent. Use cases can be mapped onto specific concurrent processes and it is essential to note that it is behaviour that provides the basis for this division and not the objects.

In modern (operating) systems we have the possibility to access shared memory space, through threading mechanisms or lightweight processes. This can help the realization of real-time concurrent systems a great deal. Synchronization of these processes should then be taken from the semantics provided by the (operating) system or programming language. For example, if I were to use C# I would make heavy use of task programming and async/await; if I used Clojure, I would use the clojure.core.async library based on channels; in the case of Kotlin, I would use coroutines; or maybe even Go and use its goroutines. You have to make sure you use the concurrency constructs native to the language because they are probably very well integrated with the OS you are running on. This will save you a lot of headaches.
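
As a minimal sketch of what that could look like in C# (interval and names made up, and no real-time guarantee of any kind implied): one periodic and one aperiodic activity expressed with tasks and async/await.

  using System;
  using System.Threading;
  using System.Threading.Tasks;

  public static class MachineProcesses
  {
      // Periodic process: runs on a fixed interval (a made-up 500 ms here).
      public static async Task PollSensorsAsync(CancellationToken token)
      {
          while (!token.IsCancellationRequested)
          {
              Console.WriteLine("Reading sensors...");
              await Task.Delay(TimeSpan.FromMilliseconds(500), token);
          }
      }

      // Aperiodic process: runs when an event arrives, not on a schedule.
      public static async Task HandleItemInsertedAsync()
      {
          Console.WriteLine("Classifying the inserted item...");
          await Task.Yield();   // hand control back; real work would be awaited here
      }
  }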

 

Testing and verification

Testing of real-time systems is extremely difficult :D. There are several reasons for this. One might be that you need to set up a test environment based on real hardware, which is expensive and may not be available for testing purposes. Another reason might be that you need to test specific deadlines in your system, yet it is very difficult to test these deadlines since the timing is difficult to express in tests. Yet another reason might be that testing for missed deadlines, on real hardware, might result in dangerous situations.

The authors describe another technique, which is more along the lines of model checking your entire system. This will be a very expensive way of testing, yet it will probably be the most complete form of testing you can do for such systems.

There is more to be said for testing, but there is an entire chapter dedicated to this so I won’t describe it all here.

 

Chapter 10: Database specialization

Interestingly, the authors chose to include an entire chapter dedicated to databases in a book about OO design. I think, based on the previous chapters of the book, this chapter was not strictly necessary. I think it is clear that entity, control and interface objects should not have any knowledge of the database. Yet I understand that it is included, since the database is historically a source of problems in OO systems.

The impedance problem is quickly noted in this chapter. The impedance problem is a well-known issue in OO systems that connect to some database. It refers to the fact that in the programming language we think in terms of objects, yet we need to translate them to table-oriented structures when we want to persist them in a database. This makes room for a number of interesting problems, like how to set up links between data.
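
A tiny sketch of the translation being described, with invented shapes: the object has to be flattened into table-shaped rows, and the link between the rows has to be set up explicitly with a foreign key.

  using System;
  using System.Collections.Generic;
  using System.Linq;

  // In the code we think in objects...
  public class Customer
  {
      public Guid Id { get; init; } = Guid.NewGuid();
      public string Name { get; init; } = "";
      public List<string> PhoneNumbers { get; } = new();
  }

  // ...but the database wants flat, table-shaped rows.
  public record CustomerRow(Guid Id, string Name);
  public record PhoneNumberRow(Guid CustomerId, string Number);

  public static class CustomerMapper
  {
      public static (CustomerRow Customer, List<PhoneNumberRow> Phones) ToRows(Customer customer) =>
          (new CustomerRow(customer.Id, customer.Name),
           customer.PhoneNumbers.Select(number => new PhoneNumberRow(customer.Id, number)).ToList());
  }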

They also talk a little bit about normalization but nothing really in depth which is ok for a book like this.

An interesting section, however, is how you would model inheritance in a table-oriented structure. This might be done by duplicating tables that represent the individual children of a specific class, or you might set up links to different tables to model the inheritance. I encourage you to read the book to find out exactly how.

 

Chapter 11: Components

Alright, now we come to a chapter that I already wrote a blog for. I will simply link to that blog so you can continue reading there. If you don’t, here is the key takeaway from this chapter: when we are developing OO systems and really want to industrialize them, we need to make the code reusable. Reusability can be reached by making components that specialize in certain tasks and making them independent of each other.

Again, I wrote about this chapter recently, you can read it here.

 

Chapter 12: Testing

Coincidentally, I also reviewed this chapter already. As the title says, this chapter is all about testing. In OOSE testing is a very important practice which is present in all stages of development. Tests need to be traceable through all models and bound to specific requirements. The chapter explains about different types of tests that should be run and verified against the system. You can read a more detailed blog about the testing chapter here.

 

Part 3: Applications

Now we have reached the part of the book that gives some practical examples and applications of OOSE. The first two chapters are case studies: one is an information system for warehouse management, the other a telecommunications switching system. Chapter 15 covers how to introduce OOSE into an organization and the last chapter compares OOSE to other popular development methods.

I will only give a brief review of these chapters, since it does not make much sense to describe them in detail when I would have to leave too much information out. I encourage you to read the chapters yourself so you can find out how OOSE works for yourself.

 

Chapter 13: Case study, warehouse management system

So chapter 13 involves a case study of how to apply OOSE to a warehouse management system. It is a simplified system of different warehouses, truck drivers, foremen and customers. They also describe how barcode scanners and such should be connected to the system. I think it is pretty funny how they predicted that in the future we might have portable computers with us to assist the business process. With the advent of smartphones, a lot has changed.

In the chapter they describe the use case of redistributing items within a warehouse as an abstract use case. Then they also describe manual redistribution, customer-ordered distribution and the insertion of new items into a warehouse.

Some interface mock-ups are shown, and use case models including their actors are described. The analysis model is described in terms of the three objects we talked about before: entity, control and interface objects. Following this, the design model is described in terms of blocks. I think it is particularly interesting to see how the traceability is shown within these models. They also show some interaction diagrams of certain parts of use cases, like their initialization and planning.

 

Chapter 14: Case study, telecommunication switching system

Next is another case study chapter that involves something more like a real-time system. It is all about telecommunication and the switches that control the incoming and outgoing lines. I have to admit that this is the only chapter I skimmed past, since I had already read a case study chapter and real-time systems are not something I work with in my day-to-day job. If they were, I would probably have speed-read the previous chapter and read this one.

 

Chapter 15: Managing object-oriented software engineering

In this chapter we take a deep dive into what it takes to introduce OOSE into an organization. No one will accept any delays caused by a new process being introduced, so it is essential to make the transition fast and painless. New processes are often introduced in new, small projects. At the time of writing, summer 1991, OOSE had been applied by the authors in about 15 projects of varying size, ranging from 3 to 50 man-years.

When you introduce a new method and process to an organization there are several factors that increase the chance of adoption and smooth the transition.

  1. Introducing a new development method must be supported by upper management since it can be a risky operation.
  2. The first project developed under the new method should not be exposed to too much attention.
  3. People working on the new project must support the change and have a positive feeling about it.
  4. Introduce the method before any tooling is chosen, since people might be biased by the tools.
  5. The new way of working must be integrated with prior routines.
  6. There must be reasonable expectations for the new project.
  7. Do not have high expectations of components and reuse.
  8. Select a real project that is important, yet without tight schedule and time constraints.
  9. Select a problem domain that is well known.
  10. Select people that are experienced with system development.
  11. Select a project manager with a high degree of trust in the task.
  12. Staff should work full-time on this project and not be disturbed by other projects.

When you choose to implement OOSE the authors propose a method that consists of three steps:

  1. Risk identification
  2. Risk valuation
  3. Managing the risks

Some risks that might be identified could be the paradigm shift that will need to happen in the company, or simply the new process that needs to be followed by everyone involved. New tools should also be considered a risk, and even the system that is going to be developed might be a risky project all by itself; that is why you should choose a project with a well-known problem domain. Last, the organization should be evaluated as a risk: are you on a tight schedule, is time to market critical, does the organization have realistic expectations?

Once you have identified these risks they should be valuated. You can rank them by probability and consequence; then you have a nice matrix showing which risks pose the biggest threats.

The last step is to manage the risks preferably with proactive measures since we don’t want to run into any identified or even unidentified consequences. I think these things are just common sense really.

Another thing you should do is gather data on important software quality metrics, like lines of code, cyclomatic complexity and function length. This can give you some insight into the quality of the software.

The authors talk about many more subjects in this chapter like organizing for product development, setting up the management, staffing and QA. I think you should check this out yourself if you are interested.

 

Chapter 16: Other object-oriented methods

The last chapter of this book is all about comparing OOSE to other popular OO development methods at the time. They compare OOSE to the following:

  • Object Oriented Analysis (OOA), by Coad and Yourdon
  • Object Oriented Design (OOD), by Booch
  • Hierarchical Object Oriented Design (HOOD)
  • Object Modeling Technique (OMT)
  • Responsibility Driven Design (RDD)

I’m not going to describe these comparisons since I think this is potentially a great subject for a future blog. If you cannot wait and want to read them, just read the book 😉

 

Conclusion

Alright, that’s a wrap! We have come to the end of the review and oh boy was it a long drag haha. Writing this reminded me of my days in university where I would summarize entire books so I could study them quicker. So if you or someone you know ever needs to read this book, point them towards this blog because it can probably save them some time. However, I barely scratched the surface of what is in the book.

I’m going to give this book a nice 5/5 stars since it really does live up to expectations. I see why this is a revered book in the software craftsmanship community now. It gives a great development method that tackles both the people and technical side of things. It promotes reuse and component architecture and gives special care to testing. These are all very important subjects when it comes to software craftsmanship.

I think, in my next personal project, I will keep some kind of dev blog and use the OOSE method to develop it. I think it might be interesting to see how that would all play out.

But for now, thanks for reading this blog, it was a big one. Cya.

#end

01010010 01110101 01100010 01100101 01101110
