#begin
The last, and really interesting, topic of this chapter is estimation. I think I've said this before, but estimation is one of those topics where, however much advice you have already consumed, whenever you consume it again you gain new insights. Estimation is such a difficult thing to do, and yet it's a big part of our jobs. So let's see what David and Andrew have in store for us in this chapter; I honestly can't remember.
They start off by saying that, intuitively, we might judge whether something is feasible or practical. We have some sense of what is even remotely possible, but how long such a thing might take to build or implement is still very difficult to gauge. So you can use estimation to avoid surprises.
Next they offer a really undervalued piece of advice, I think, and I'll just quote this little section directly: "To some extent, all answers are estimates. It's just that some are more accurate than others. So the first question you have to ask yourself when someone asks for an estimate is the context in which your answer will be taken. Do they need high accuracy, or are they looking for a ballpark figure?"
I mean, isn't that great advice? Context indeed plays a very big role in how estimates are made. Because if they require high accuracy, it's probably something critically important tied to business decisions. But if a rough estimate is enough, they're probably not betting the company on it. So I think this is really great advice.
A simple example: your manager asks you how large the install size of your Unity3D app will be. Depending on the context, they might require a different degree of accuracy. If you are deploying a standalone game on Windows/Mac/Linux, the estimated size might not be all that important, but if you deploy for mobile or WebGL, install sizes start to become more impactful, so a higher degree of accuracy is required.
The authors also mention something really interesting next. They claim that the units in which you express your estimates can make people believe you are closer to target. For example, if you say that something takes roughly 30 days to develop, they will translate that into: "OK, that's about 4 weeks, or 1 month." But they will also think that if things go south, it will be on the order of days, maybe a week. If instead you estimate roughly in months, they might think that if things go bad, weeks may go by. So estimating in days gives them the perception that you have higher accuracy.
Next they provide a little table with some cool information I would like to include as well. If your estimate is about 1 to 15 days, you speak, or scale, in days. If your estimate is in the range of 3 to 8 weeks, they suggest you speak in weeks. If your estimate is on the order of 8 to 30 weeks, you speak in months, and if your estimate is 30 or more weeks, you should think very carefully before giving an estimate at all. Haha. An example they provide here: if you estimate something to be about 125 working days, which is 25 weeks, you might tell them it's about 6 months. So choose the units of your answer to reflect the accuracy you intend to convey. Really great stuff if you ask me!
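That table is simple enough to sketch as a little helper. This is just a toy, assuming the book's day ranges plus my own conversion factors (5 working days per week, roughly 4 weeks per month), so treat the rounding as illustrative:

```python
def suggest_unit(estimate_days):
    """Pick a reporting unit that conveys the accuracy you intend.

    Thresholds follow the book's table; the 5-days/week and
    4-weeks/month conversions are my own assumption.
    """
    weeks = estimate_days / 5  # working days per week (assumed)
    if estimate_days <= 15:
        return f"about {round(estimate_days)} days"
    if weeks <= 8:
        return f"about {round(weeks)} weeks"
    if weeks <= 30:
        return f"about {round(weeks / 4)} months"
    return "think hard before giving an estimate"

# The book's 125-working-day example comes out as months:
print(suggest_unit(125))  # -> about 6 months
```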
So where, then, do estimates come from? All estimates are based on a model of the problem. The authors also give us their number one piece of advice in regard to estimation, which is: ask someone who has already done it. Haha, yes, this is of course very true and something I do all the time when there's an opportunity to do so. You can really leverage someone else's past experience. Don't think this is wrong; this is just good teamwork.
So how, then, do we really estimate? Well, the first step is to build an understanding of what's being asked. You need to understand the context, scope and domain, and often this is implicit in the question. What does that mean exactly? Something being implicit means people expect you to know it based on prior knowledge, and this extends to the context, scope and domain. As with my estimation example for the install size: you probably know whether you are deploying for mobile or not, so you can provide an estimate on the scale of MB vs. GB, for example.
But the (mental) model you use to estimate can really differ depending on the question you are trying to answer. If you are estimating a feature, you might create a mental model of the application, data flow, gameplay and maybe even the review process in your company. But if you are estimating a project, you might want to include the (formal) steps your organization uses for development. Do you need sign-offs from management, for example? If those are generally done at the start of every month, you need to account for that in your estimate. And yes, I've seen this happen! I was once asked to estimate something, but then had to wait until the end of the year, since that's when all the excess budget is spent on important but urgent things.
The authors also say that model building is creative and can lead to lots of knowledge in the long term. When you create mental models of things, it often leads you to discover underlying patterns and processes. I think one of the people most widely known for using mental models in his work is Albert Einstein. He famously used thought experiments to reason his way toward relativity and quantum physics.
But building a model introduces inaccuracies into the estimation process. Simplicity often wins over accuracy in these kinds of models, which can be beneficial: that simplicity allows you to rethink the model and improve accuracy. Think about Prof. Ousterhout's practice of designing it twice, or more. When you create multiple mental models, maybe with drastically different designs, you can see the trade-offs and then choose the best-informed solution.
One way of improving the accuracy of your estimate is to break your model up into components. Once you have a model, you can decompose it into smaller, more manageable parts. You could separate a model into UI, gameplay, web and database logic, for example. Then you can zoom in on each component individually and subdivide it even further if you think that will help the estimate.
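The arithmetic side of that decomposition is trivial, but it's worth seeing where the padding goes. A minimal sketch, with made-up component names and day counts, and an integration margin that is purely my own assumption:

```python
# Hypothetical per-component estimates in working days (invented numbers).
components = {
    "UI": 3,
    "gameplay": 8,
    "web": 4,
    "database": 2,
}

subtotal = sum(components.values())

# Pad the total rather than each part: errors in individual parts tend to
# partially cancel, while integration work grows with the whole. The 20%
# margin here is an arbitrary placeholder, not a recommendation.
total = subtotal * 1.2

print(f"{subtotal} days of parts, ~{total:.0f} days with integration margin")
```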
A common side effect of subdividing a model into components is that each component has parameters; at least in your mental model you can assign parameters. The interesting part of having these parameters is that you can give them a name and a value. You can then extend your mental model to accommodate these parameters and see how they affect it. You can also purposely introduce errors through these parameters; this way you can think about error handling very early on in the process. A parameter in a game, for example, might be the number of enemy units on screen. If you expect multiple thousands, you will be better off taking an ECS approach using DOTS, but if it's just a couple, a classic GameObject approach will be sufficient.
And by the way, at Unite this year, just a couple of weeks ago, it was announced that DOTS will be integrated into the 2023 Unity3D engine by default and will no longer be in preview.
But we will come back to the topic of DOTS in a later episode. What I will do is put this really pragmatic, pun intended, video in the show notes, where some dude on YouTube shows you how to use the latest version. I checked it out a while ago and I have to say the design of DOTS finally looks usable. It has gone through many generations, and I just couldn't be bothered using it before, since it changed too much.
So by assigning the units-on-screen parameter a number, we quickly determined it's either a DOTS approach or a classic GameObject one. Now, since DOTS is new to many of us, our estimate will go up, but we are not sure by how much, since this is unknown territory. In this case we need to drill down further and attach estimates to each step. What behaviour do these units need to have? How about rendering? Do the units have behaviour that is incompatible with DOTS's systems approach? And, going completely overboard, could a compute shader be an alternative?
Compute shaders are special kinds of programs written in shader code that exploit the concurrency of the GPU to do calculations. These shaders are not used for rendering, but purely for computing things. They can be really cool and very useful, but their use cases are limited.
But to get back to the book. The next tip the authors give is the great idea of keeping track of your own, or your team's, estimates. Make sure you write them down so you can check afterwards how close you were. This allows you to hone your skill at estimating things, and after a while you will become more comfortable with estimation.
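If you do write them down, even a tiny log gives you a calibration factor. A minimal sketch, with an invented history of estimates vs. actuals; the "multiply your next raw estimate" idea is my own framing, not a formula from the book:

```python
import statistics

# (task, estimated days, actual days) -- all values invented for illustration.
history = [
    ("login screen", 3, 5),
    ("save system", 8, 9),
    ("enemy AI", 10, 16),
]

# How far off were we, on average?
ratios = [actual / estimate for _, estimate, actual in history]
factor = statistics.mean(ratios)
print(f"on average you under-estimate by a factor of {factor:.2f}")

# Sanity-check the next raw estimate against your own track record.
raw = 6
print(f"a raw estimate of {raw} days probably means ~{raw * factor:.1f} days")
```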
And well, let me tell you, this is easier said than done. What I've come to realize is that I sometimes over-estimate just to cover my ass. This is not on purpose, but I'm often dealing with lots of uncertainty, and some bad experiences in the past tend to add those extra hours/days/weeks to an estimate. I'd rather be safe than sorry. But I'll always try to communicate this uncertainty to the person asking for the estimate.
Next they give us some more interesting tips. They say the best way to find the timetable of a project is to actively work on it. This sounds like a paradox, they say, and I agree. But what they mean is that if you work iteratively on a project, you can adjust and fine-tune estimates as you go. The estimate becomes clearer as the project progresses, and thus you are in a better position to answer questions about requirements, risk analysis, design and integration.
And the very last tip they give in the book is what they call the best answer to anyone asking for an estimate, and I'll quote: "I'll get back to you." And they are totally right. This gives you some space to breathe and some time to think about it. But remember the communication advice from chapter 1! If you tell someone you will get back to them, you really have to do it!
So that's it for chapter 2 of The Pragmatic Programmer! I do want to mention one last thing about estimation, which is some really great advice Uncle Bob gives in one of his keynotes at the YOW! conference. I've listened to this talk far too many times, but it's just too good. In it, Uncle Bob talks about his strategy for doing estimations. He also mentions work breakdown structures, just as we talked about in this episode. But he has a specific formula for providing an estimate.
The key here is to never, ever give a concrete date! Always provide a range of dates! His strategy is to give a best-case scenario, which you have only about a 5% chance of actually hitting, a worst-case scenario, which you are about 95% sure to make, and a nominal-case scenario, which has about a 50% chance of succeeding. This gives you a range like: best case 5 days, nominal case 8, worst case 14. This distribution can then be communicated to management, so they can actually do the work they were hired for: managing.
This approach is the three-point estimation technique behind PERT, which stands for Program Evaluation and Review Technique. I really recommend you check out this talk on YouTube. It's a lot of fun and you will definitely learn about estimation. Also, if I remember correctly, there is an entire section dedicated to this subject in Uncle Bob's Clean Coder book.
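The classic PERT arithmetic turns those three numbers into an expected duration and a spread. A minimal sketch using the standard beta-distribution approximation from PERT (the 5/8/14-day figures are the example range from above):

```python
def pert(best, nominal, worst):
    """Three-point estimate, standard PERT approximation.

    The expected value weights the nominal case 4x; the standard
    deviation turns the range into a spread rather than a single date.
    """
    expected = (best + 4 * nominal + worst) / 6
    stddev = (worst - best) / 6
    return expected, stddev

# Best case 5 days, nominal 8, worst 14:
mu, sigma = pert(5, 8, 14)
print(f"expect ~{mu:.1f} days, give or take {sigma:.1f}")  # ~8.5 +/- 1.5
```

Notice the expected value (8.5 days) lands above the nominal 8: the long tail on the pessimistic side pulls it up, which is exactly the honesty you want to hand management.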
#end