Tuesday, June 20, 2017

The Mathematics of Gender Bias

A couple of weeks ago, at Music City Agile, I attended a session on combating biases presented by Neem Serra. One of her slides showed data from simulations of how gender bias during promotions affects the number of women at different levels of an organization. This seemed simple to model and, hopefully, prove out. Around the same time, my wife became more active in the Women In Leadership events that my company regularly organizes. I have definitely experienced a raising of consciousness about the obstacles faced by women in IT and wanted to put some numbers behind this newfound consciousness.

Most of the conversation about gender bias centers on the pay gap. That is a legitimate concern, and a good deal has been written about it. This post explores a slightly different question - what is the effect of gender bias in promotions through the hierarchy of an IT organization? To answer it, we are going to set up a very simple mathematical model. We start with the assumption that the organization has 5 levels of hierarchy, i.e., line-level folks are at level 1 and C-Level execs at level 5. Let us also say that in this organization, every couple of years, 10% of the workforce at every level gets promoted to the next level. We begin with the case where the lowest level of the hierarchy has 50,000 women and 50,000 men. For the first run of our model, we will assume there is no (0%) gender bias during promotions. So, our starting model has the following assumptions -
  • Organization has 5 levels of hierarchy. 
  • 50,000 Women and 50,000 Men at the entry/line(first) level.
  • 10% of employees at every level receive promotions.
  • 0% of promotion decisions show gender bias
The resulting numbers would look like this -

Level Women Men Promotions Percentage Of Women Percentage Of Men
1 50000 50000 10000 50% 50%
2 5000 5000 1000 50% 50%
3 500 500 100 50% 50%
4 50 50 10 50% 50%
5 5 5 1 50% 50%
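The model is simple enough to sketch in a few lines of Python. This is an illustrative simulation, not the code behind the original slides; the promotion rule and the way the bias parameter is applied are assumptions inferred from the description in this post.

```python
# Illustrative sketch of the promotion model described above. The bias
# parameter is an assumption inferred from this post, not the original slides.
def promotion_pipeline(women, men, levels=5, promo_rate=0.10, bias=0.0):
    """Return a list of (women, men) headcounts, one tuple per level."""
    rows = [(women, men)]
    for _ in range(levels - 1):
        promotions = (women + men) * promo_rate
        women_share = women / (women + men)
        # Women's share of promotions is reduced by a factor of (1 - 2*bias).
        # At a 50/50 split this is the same as shifting `bias` of all
        # promotions from women to men.
        women = promotions * women_share * (1 - 2 * bias)
        men = promotions - women
        rows.append((women, men))
    return rows

# No bias: 50/50 at every level, matching the first table.
for level, (w, m) in enumerate(promotion_pipeline(50_000, 50_000), 1):
    print(f"Level {level}: {w:>7.0f} women, {m:>7.0f} men "
          f"({w / (w + m):.0%} women)")
```

With `bias=0.0` the split stays at 50/50 all the way up, which is exactly the base-case table above.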

What we see in this case is that when there is no bias involved, the percentage of women at every level equals the percentage of men. With every round of promotions, the same number of women and men get promoted. Let us change our model to include some degree of bias. We will start with a 10% gender bias in promotion decisions involving women, which works out to bias in 5% of promotions overall. In other words, 5% of the promotions that should have gone to women are actually handed to men due to conscious or subconscious bias. We are going to factor the bias in for promotions at every level. This time our model has that one parameter changed, and hence the following assumptions -
  • Organization has 5 levels of hierarchy. 
  • 50,000 Women and 50,000 Men at the entry/line(first) level.
  • 10% of employees at every level receive promotions.
  • 5% of promotion decisions show gender bias
This has the following effect on the results -
Level Women Men Promotions Percentage Of Women Percentage Of Men
1 50000 50000 10000 50% 50%
2 4500 5500 1000 45% 55%
3 405 595 100 40% 60%
4 36 64 10 36% 64%
5 3 7 1 33% 67%

The percentage of women at every subsequent level gets worse. By the time we reach the fifth level, only about a third of the positions are held by women. The change is drastic compared to the base model with no bias. This should not be a surprise - in many organizations, the ratio of women in the highest echelons is lower than at entry-level positions. If we change the bias to affect 10% of promotion decisions instead of 5%, we get the following results -

Level Women Men Promotions Percentage Of Women Percentage Of Men
1 50000 50000 10000 50% 50%
2 4000 6000 1000 40% 60%
3 320 680 100 32% 68%
4 26 74 10 26% 74%
5 2 8 1 20% 80%

Once we change the bias to 10%, by the time we get to the fifth level of the organization, the percentage of women drops dramatically, to close to 20%. Having 5 levels of hierarchy is actually a great simplification for most organizations. Usually, there are 7 or more levels separating the C-Level executives from the line-level employees. The more levels we add, the more drastic the decrease in percentage becomes.

Another simplification in our modeling is the assumption of equal numbers of men and women in the industry. According to CNET, the percentage of women working in the tech industry is about 30%. Our models, till now, have assumed an equal ratio of men and women (at least at the starting level). What if we change the number of level-one employees to reflect this? Let us also assume bias is a factor in 10% of promotions. Our model setup is now the following -
  • Organization has 5 levels of hierarchy. 
  • 30,000 Women and 70,000 Men at the entry/line(first) level.
  • 10% of employees at every level receive promotions.
  • 10% of promotion decisions show gender bias
This renders the following results -
Level Women Men Promotions Percentage Of Women Percentage Of Men
1 30000 70000 10000 30% 70%
2 2400 7600 1000 24% 76%
3 192 808 100 19% 81%
4 15 85 10 15% 85%
5 1 9 1 12% 88%
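Running this last scenario through a small simulation reproduces the table above. The promotion rule below (women's share of promotions scaled by 1 - 2 × bias at every level) is an assumption inferred from the numbers in this post, not a published formula.

```python
# Illustrative reproduction of the 30/70, 10%-bias table above. The
# promotion rule is an assumption inferred from the numbers in this post.
def simulate(women, men, levels=5, promo_rate=0.10, bias=0.10):
    rows = [(women, men)]
    for _ in range(levels - 1):
        promotions = (women + men) * promo_rate
        share = women / (women + men)
        # Women's share of promotions shrinks by a factor of (1 - 2*bias).
        women = promotions * share * (1 - 2 * bias)
        men = promotions - women
        rows.append((women, men))
    return rows

for level, (w, m) in enumerate(simulate(30_000, 70_000), 1):
    print(f"Level {level}: {w:8.0f} women ({w / (w + m):.0%})")
```

Rounded to whole people, the women per level come out to 30000, 2400, 192, 15 and 1, matching the table (the 12% at level 5 comes from the unrounded 1.23 women out of 10 promotions).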

The number of women at the top level of this type of organization drops drastically, to 12%. Incidentally, research at Paysa.com suggests that the percentage of female C-Level executives is close to 12%, and the percentage of female tech directors (roughly level 3 in our model) is close to our model's 19%.

Our model's numbers matching the industry numbers is partly coincidental. We have made numerous assumptions in our modeling: the 10% promotion rate, the same bias at every level and the number of levels in the organization are all assumptions. Still, these numbers point to the existence of gender bias in the tech industry. Based on our simplifications, it seems that about 10% of promotion decisions are affected by this bias, and you can see what a drastic effect that has on an organization's makeup.

It would be interesting to see how individual organizations stack up. How consistent is the percentage of women as you move up the rungs of the corporate ladder? How about your organization? Does the ratio of women to men seem to get smaller the further up you go? The culture of an organization is a product of the culture of the individuals that make it up. As individuals, it is up to us to educate ourselves and reduce the role of bias in our own thinking. Neem Serra, whose session inspired me to play with the mathematics of bias, also mentioned Harvard's Project Implicit. Project Implicit helps you test for different kinds of biases. So far, I have been brave enough to test myself only for gender bias and am pleased to say that the results said I was unbiased in that regard. Take the tests for yourselves and see where you land on the bias spectrum. If there is work to be done, maybe spending some time with the folks you might hold a bias against would help.

A final note - This mode of modeling (applying bias as a percentage) and interpreting the results (tracking changes in percentages of members at every level) can be applied to any type of bias. This does not have to be restricted to gender and can be expanded to test for biases with regards to race, ethnicity, sexual orientation etc.

Thursday, June 1, 2017

Hackathons : Paving The Desire Path

Desire Path

There is a very interesting urban planning concept called the "Desire Path". The idea is pretty simple. People walking through a park will take the shortest or easiest routes, regardless of whether those routes are paved or not. Eventually, due to erosion from the foot traffic, the path becomes visible. The designers can choose to ignore this or make the path official by paving it. Paving the desire path is a great design decision (most of the time). That is probably the reason the Lean and Agile UX communities have embraced this concept. If users keep going to the same part of your app over and over again, and it takes 4 clicks to get there, provide them a shortcut in the next release that makes it one click. Twitter making hashtags and @ mentions official features is another example. As Twitter users started communicating through hashtags and @ mentions, Twitter saw the desire path and not only made it official, but enhanced their use.


What does this have to do with hackathons?

My company has a great tradition of holding two hackathons per year. We call these events 48 hours, as developers have 2 days to build anything they want. The projects can be product enhancements, internal tools, automation of processes, even apps to decide where to go for lunch. The results are often amazing. It is incredible how creative and productive developers can get when left to their own devices. Entire features have been built in 2 days that eventually made it into the product. In fact, 48 hours is looked upon as a great way to get product innovation and ideas from the folks who are knee-deep in the code every day.

In our latest hackathon, a few developers from one of our teams submitted and completed 48 hours projects. One of the projects even won an award at the closing ceremony. A small team of developers was able to build an award-winning feature in 2 days. Here is the interesting part, though - the average time it takes their team to complete a feature is over 200 days. In fact, 2 days is close to a tenth of the time it takes the team to complete individual user stories. Some of this difference is due to the time spent exercising quality control for a production-quality feature. I would argue, though, that most of it is due to the processes that the team follows.

During hackathons, developers are forging a desire path. This is the ideal process they would like to follow, instead of the process that has been paved for them.

During hackathons, developers are forging a desire path. This is the ideal process they would like to follow, instead of the process that has been paved for them. What is interesting is that our teams have a lot of liberty to adjust and change their processes. Many a time, the paved process path has been created by the team itself. They have full control over branching strategy, testing policies, definitions of done and, to a good extent, the technologies they use. For some reason, though, the hackathon desire path does not match the paved daily process path.

Paving The Desire Path

In my previous life, I am proud to have led a team that won every hackathon that it presented in. You can see their happy faces in the picture below. 


I cannot take any credit for their success, as I did not write a single line of code or test for these projects. But I did notice that they were doing a few things differently from our "paved" process. Each time, I would pick one of those things - for example, using Slack to collaborate before our company itself had adopted it - and make it a part of our daily process. The team (as enlightened as they were) would catch on to the trend and start incorporating other things that worked for them into their daily work. This often meant removing unnecessary process. I clearly remember our prototype and product UI code merging and becoming the same soon after a hackathon where our UX designer had actively paired with one of our developers for two days. The team (with a slight push from me) paved the desire path that they were travelling.

Hackathons for me, as a manager, became not just celebrations of how good our developers were, but also retrospectives on our process. 

This does not mean that the two processes became the same. They just became more alike. Hackathons for me, as a manager, became not just celebrations of how good our developers were, but also retrospectives on our process. We as a team, sometimes consciously and other times subconsciously, learned from our hackathons, both in terms of the processes we should use (or discard) and the technologies we should use (or discard). Our hackathons were often proofs of concept that, with some modifications, made it into the product itself. They allowed our engineers to play with the technologies that get the job done the fastest and then incorporate them into our product.

Since our 48 hours event is very developer-focused, the managers and team leads usually continue doing their daily jobs as usual. Because not everyone is participating, "official work" still happens - stand-ups, meetings and other day-to-day activities go on for the teams. I think this is a great missed opportunity. Team leads and managers should be observing the team's behaviour during these hackathons. They should then encourage the team to behave in similar ways during their everyday work. Any organization that invests in hackathons should look at them as a learning opportunity and a retrospective on its process. What process do the developers follow when they are most productive and have cycle times that are 100 times shorter? How far is our current process from that, and what tweaks can we make to bring the two closer?

Hackathons reveal your team's desire path. Make it official! Pave It!

Thursday, April 6, 2017

Is Your Car Lying To You - Continuous Reforecasting

How often do you forecast whether your project is going to finish on time or not? Once, at the beginning of the project? Every month? Every two weeks? Every day? How about every 15 minutes? For the teams I work with, we forecast our success probabilities every 15 minutes. That might seem like overkill, but we will get back to it a little later in this post. First, a real-life anecdote about re-forecasting.

Through some planning and a lot of luck, work and home, for me, are in the same city. This means two things. First, on most days, my entire day is spent in a 2-mile radius. Second, almost all my driving during the week is city driving. Those two things are true on weekdays but not on Sundays. Playing in a regional sports league means traveling beyond my 2-mile comfort zone. It also means a good amount of driving on the highways.

My farthest drive is to a city on Florida's east coast called Port St Lucie. It is almost exactly 100 miles from my home in the idyllic suburb of Weston, Florida. Recently, on a Sunday morning, as I was preparing to leave, the dashboard display told me that I had enough gas to travel 120 miles. The option was to either fill up right away or on my way back from the ground, so that I had enough gas to make the round trip. I decided on the latter. Somehow, upon reaching my destination a hundred miles away, the projected range had changed from 120 to 98 miles. I had traveled 100 miles but, by the original projection, had only used the gas required to travel 22 miles. From the outside, it might seem that either this was a miracle of physics or my car had been lying to me when I started my journey.

It seems that the engineers at BMW (apart from creating some beautiful sports cars) have figured out what many project managers, scrum masters and team leads have had a hard time figuring out. It does not matter how good your initial estimate or expectation is, as you gain more information you have to adjust your projection and re-forecast. The computer in the car, as I was on my trip, discovered that I was burning fuel at a much different rate than I had during my recent trips. There were good reasons for this -
  • City driving was replaced by highway driving - a lot less stop and go.
  • I take full advantage of sports mode on a daily basis but decided not to, as the remaining miles were really close to the trip distance.
  • I usually drive in the city with the convertible top down, which increases wind resistance; on the highway, I keep the top up.
As the trip progressed and all these factors affected the consumption rate of the fuel, the car's computer re-projected how many miles the fuel in the tank would last. Regardless of the initial estimate of the computer, it took the new information available into account and gradually adjusted its projections. Most likely, the projection algorithm was not altered, but the underlying data model used by the computer was. The projections were reactive to every mile traveled. Every bit of new information was consumed by the car to re-forecast the range the car would be able to cover.
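A hypothetical sketch of what such continuous reforecasting looks like: project the remaining range from a rolling window of recent fuel consumption, updating after every mile. The window size and mileage figures here are invented for illustration; real car computers are certainly more sophisticated.

```python
from collections import deque

# Hypothetical range reforecaster. The window size and the mileage
# figures in the usage below are invented for illustration.
class RangeForecaster:
    def __init__(self, window_miles=50):
        # Rolling window of gallons burned per mile, newest last.
        self.recent = deque(maxlen=window_miles)

    def record_mile(self, gallons_used):
        self.recent.append(gallons_used)

    def projected_range(self, gallons_left):
        if not self.recent:
            return None  # no data yet, no honest forecast
        avg_burn = sum(self.recent) / len(self.recent)
        return gallons_left / avg_burn

f = RangeForecaster()
# City driving: poor mileage (say 20 mpg, i.e. 0.05 gallons per mile).
for _ in range(50):
    f.record_mile(0.05)
print(f.projected_range(6.0))   # ~120 miles projected

# Highway driving: better mileage (say 35 mpg) gradually replaces the
# city samples in the window, and the projection climbs.
for _ in range(50):
    f.record_mile(1 / 35)
print(f.projected_range(6.0))   # ~210 miles projected
```

The projection algorithm never changes; only the data feeding it does, which is exactly the behavior described above.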

The NOAA (National Oceanic and Atmospheric Administration) does the same thing when forecasting hurricanes. They adjust the projected path of a hurricane every time they receive new information. They don't just issue a single forecast when a hurricane forms and leave it unaltered. In what is a matter of life and death, the NOAA reacts to every new piece of information and re-adjusts its forecasts for the direction of the hurricane.

Mapping applications realize that their accuracy is greatly dependent on having the latest information and using it to update their predictions. The same is true when you are forecasting end dates for projects or releases.

The same is true for any "Maps" application that provides directions and an estimated time of arrival. Google Maps, Waze, Uber, Lyft, etc. all reforecast continuously. Not only do they reforecast when they have new information, they actively seek out information to make those forecasts. Mapping applications realize that their accuracy is greatly dependent on having the latest information and using it to update their predictions. The same is true when you are forecasting end dates for projects or releases.

Now, back to why we run forecasts for teams every 15 minutes. In an organization of 30 teams, the assumption is that, at any given time, at least one of those 30 teams has new information that has caused them to change scope or move a date. We want the result of those decisions to be available to the teams as soon as they make the changes. In fact, we want the changes to be visible to anyone who is interested, not just the team. That is why, just like BMW re-projecting with every mile that goes by, we want to re-forecast our range of possibilities continuously. New information shows up every day, all day, and we should reforecast as soon as it shows up.
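One cheap way to run forecasts this often is a Monte Carlo simulation over a team's recent throughput. The sketch below is illustrative only; the throughput samples are invented, and this is not our actual tooling.

```python
import random

# Monte Carlo "when will it be done?" sketch. The throughput samples in
# the usage below are invented; a real run would pull each team's
# latest data every 15 minutes.
def forecast_weeks(remaining_items, weekly_throughput, trials=10_000, rng=None):
    """Return the 85th-percentile number of weeks to finish the work."""
    rng = rng or random.Random(42)  # fixed seed for repeatable output
    outcomes = []
    for _ in range(trials):
        done, weeks = 0, 0
        while done < remaining_items:
            # Sample one simulated week from historical throughput.
            done += rng.choice(weekly_throughput)
            weeks += 1
        outcomes.append(weeks)
    outcomes.sort()
    return outcomes[int(trials * 0.85)]

# 30 stories left; recent weekly throughput samples from the team's history.
print(forecast_weeks(30, [2, 3, 3, 4, 5, 6]))
```

Because the simulation takes seconds, there is no reason not to re-run it every time a team's backlog or throughput data changes.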

The feedback loop from the new information to actionable forecasts should be as small as possible...Forecast, early and forecast often

We want to know as early as possible when our initial assumptions have been invalidated. That is the crux of the problem. Our first projections have a lot of assumptions built into them. The more unstable our system and its rules (sports mode vs comfort mode, convertible top up vs down), the greater the chances that our assumptions will be invalidated as the project (or trip) makes progress. The variability in our systems that causes this is unavoidable; what is avoidable, though, is turning a blind eye to it.

A large part of being Agile is building feedback loops. It is about seeking out new information and then reacting to it. Our systems (organizations, teams, cars, hurricanes etc.) are continuously providing us with new information. The feedback loop from the new information to actionable forecasts should be as small as possible. That is the major reason why we don't wait two weeks or a month to re-forecast and instead do Continuous Reforecasting.

Our initial estimate, unless we are already running a very stable system (more on that another time), is going to be inaccurate. We - whether "we" means the team or the business - need to accept that if we want to live in the real world. As we make progress, get new information and the cone of uncertainty begins to narrow, we will be able to make better forecasts. Re-forecasting when new information becomes available is not just a good thing to do, but the responsible thing to do. The same rule that applies to code check-ins - do them early and often - applies to forecasting as well. Forecast early and forecast often.

Friday, March 31, 2017

The Micromanagement Disease

One of the many advantages of being an Agile Coach is that I get to interact with folks across the organization. I get to observe how developers work and how the folks leading and/or managing them behave. My organization has managers and directors of varying degrees of maturity. They also possess their own unique styles of management. In a development org where there are over 50 folks in management positions, invariably you find the world's most popular management anti-pattern - Micromanagement.


Our mature Agile implementation, which empowers teams to manage themselves, acts as a great deterrent to micromanagement. That being said, having an Agile organization does not mean micromanagement cannot exist. Practices that are individual-focused rather than team-focused are clear symptoms of micromanagement. These symptoms include -
  • Stand-ups that are status reports as opposed to opportunities for the team to collaborate.
  • Focus on individual metrics over team metrics
  • Multiple status requests all day, every day
  • Manager/Lead needs to be in every meeting and ratify every decision
  • There is an "Owner" for the process and for architecture
  • Team has no involvement in making commitments, leads/managers make them
  • Product Architecture is dictated to developers and only "Architects" can change/question it
  • Team members do not make suggestions for improvement out of fear or apathy
There are many more symptoms, but the last one on this list shows that the disease is in its most advanced stage. When the folks doing the actual work feel that they have no business voicing concerns or suggesting improvements, the management battle, and conceivably the project, is already lost. After all, the people doing the work are the ones who know the problems they are facing, and they are often the best at coming up with ideas to remedy those problems. Another important thing to note - micromanagement is not limited to managers. It extends to, and is often exhibited to a greater extent by, architects and tech leads. These folks have earned their stripes and seen enough code from other developers to firmly believe that they can do things better. If you are "living with" an architecture, your architect is probably a micromanager.

When folks doing the actual work, feel that they have no business voicing concerns or suggesting improvements, the management battle and conceivably the project is already lost.


Managers and Architects are not bad people. Most are trying to do what they truly believe is in the best interest of the products and the organization they are involved in. Why, then do they exhibit the negative symptoms of micromanagement? I believe that there are two underlying causes for the symptoms.

  1. Fear that employees will not do the right thing
  2. Fear that upper management will not do the right thing
When managers are afraid their direct reports, or the people they are leading, are not going to do the right thing, the symptoms of micromanagement emerge. Managers might believe this for a couple of reasons. The employees might not yet have convinced the manager that they are not 'lazy' and can actually make progress without multiple status reports. Or the manager might have been a great 'doer' who was promoted and firmly believes that she/he can solve problems better than the people currently doing the work. In either case, the result is the same. The manager maintains the firm belief that the project will only be successful if they are intimately involved in every detail and decision. They are only doing what they believe is the right thing to do, regardless of how unhealthy it might be for the overall system.

When mid-level managers believe that their superiors do not have their back and will not do the right thing by supporting them, they, in turn, transfer that lack of trust downstream. Since the middle managers are being asked for regular updates and are not being trusted, they do the same to the people they are leading. There are definitely cases where middle managers absorb the pressure and don't transfer it downstream; those managers, in my opinion, are the ones who should be taking higher positions in organizations. When a middle manager's boss does not trust them, they also feel they have to prove their indispensability. This leads managers to be more hands-on and involved in the details, and hence to not live up to their own potential. They respond to the lack of trust by becoming micromanagers who need to have all the details available at all times in order to answer their superiors' questions on the spot. When managers (especially inexperienced ones) believe that their superiors will not do the right thing, they themselves start doing the wrong things.


I wish I could say that I know the cure, but the cure has to be very context-specific. If our diagnosis is correct, there is a trust issue somewhere in the organization. Lack of trust, more often than not, also manifests itself as fear. Fear to innovate, fear to try process improvements, fear to change or abandon a prescribed architecture, even fear to speak up at times. How we get rid of that fear depends a lot on the organization and the willingness of the people that make it up to try.

ask the question - "Do you feel that we are doing the right things and do you feel you have the freedom to change the things we are doing wrong?"

One way would be to establish and promote a culture of trust, where everyone is encouraged and trusted to try to improve things every day. An approach would be to start at the line level and ask the question - "Do you feel that we are doing the right things, and do you feel you have the freedom to change the things we are doing wrong?" If the answer is no, find out whether it is just the direct manager that has the trust issue, or whether it is systemic and goes up the chain to the highest levels. Very few will argue against the idea that fear and lack of trust are bad. Hopefully we can, through some tough conversations, drive consensus that we are going to trust the people we have hired. More so, when the symptoms of micromanagement emerge, we will diagnose where they are coming from and apply the appropriate cure to re-establish trust and remove fear from the equation.

Follow Up

We believe that it is our job, as managers, to make the project successful. When in truth, it is our job to help our employees make their project successful.

No one sets out to be a bad manager or a micromanager. There are conditions that lead some of us in that direction. For many of us, it is the fact that we ourselves were such good "doers" that we do not trust our employees to do things well without our help. We believe that it is our job, as managers, to make the project successful. When in truth, it is our job to help our employees make their project successful. If we trust and empower our employees to do the right thing, not just the current project, but any project the team undertakes will have a high probability of success.

Tuesday, March 21, 2017

Turning Our #NoEstimates Game Up A Notch

In an earlier post, I described how one of my teams went down the #NoEstimates route. We had pretty good success on this path. As a manager and a coach, it validated for me that how long it took for a story to get done had little correlation with the size or complexity of the story. In an organization with 30 development teams, we were definitely not the first team to stop estimating stories. We had other firsts under our belt, but that was not one of them. Very soon, though, there was not a single product development team using story point estimates. It was the end of story point estimation but, as much as I would like to tell you otherwise, it was not the end of estimation.

First, a quick description of our work breakdown structure. We take long-term strategic initiatives and break them down into features. These features are then broken down into stories. The teams have the freedom to decide if they want to break these stories further down into tasks or not. Most teams choose not to use tasks as they do not provide much value to them.

At the story level, as described in the previous #NoEstimates post, limiting our WIP and right-sizing our stories made us more predictable - more predictable than we had been after years of trying to figure out estimation. The transition was easy for developers, as they had always found the estimating conversations time-consuming and wasteful. It was also easy for the folks on the product side of the house; the predictability gained counteracted the desire for story-level estimates. The other reason the business did not have an issue with us going to #NoEstimates was our level of delivery. We, as an organization, delivered features, not stories. Eliminating story points stopped teams from spending time guessing the exact "size" or "complexity" of a story. It did not stop the business from asking teams to spend time guessing the exact "size" or "complexity" of a feature.

Teams across the organization were no longer estimating stories, but were providing story-count estimates for each feature - for example, Feature A at 12 stories, Feature B at 55 and Feature C at 7.

The product team would then take these estimates and the team's forecasted capacity for the release into consideration for release planning. They would play 'Tetris' to fit as many features into the release as possible based on these story estimates. For example, if the team working on the features listed above has a projected capacity of 80 stories, the release plan would say that they can do features A, B and C (12 + 55 + 7 = 77). These features would then be advertised as the ones we are committing to for the upcoming release.

On the face of it, this seems like a completely logical and sane way to go about release planning. Unfortunately, this approach suffers from the same problems that story-level estimates have -
  • It is impossible to know the exact count of stories in a feature beforehand, regardless of how much analysis we do on the feature.
  • Features, more often than not, "grow", with more stories added while they are in flight. Planning to full capacity puts the plan at risk if even one feature grows.
  • To ensure completion on time, the largest feature has to be started on day 1 of the release, even if it is the lowest-priority feature. This almost always puts higher-priority features at risk.
  • The estimates are based on initial guesses which, even when later invalidated, did not always invalidate the commitment.
  • Large features are turned into multi-release efforts instead of being sized appropriately.

After struggling with these issues for a number of releases, we started to attack these problems. We realized that we already had a blueprint in how #NoEstimates had worked for us at the story level, and decided to apply the same approach to features. To start, we collected some data on our features and discovered that 85% of them consisted of 25 stories or fewer. This was our stake in the ground. We asked teams to start using it as a yardstick for whether a feature is too large. Teams had already been doing this for stories; each team was very adept at "right-sizing" stories based on its data. The other step necessary to get #NoEstimates to work at the feature level was to limit WIP at the feature level. We asked teams to establish feature-level boards and limit WIP at each stage of the feature lifecycle (Selected, Analysis, Development, Test and Done, to start with). Teams had, again, already been doing this at the story level, so this would be a natural transition for them. We, as a coaching group, set these guidelines and left the implementation in the hands of the teams.
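Finding that stake in the ground is a simple percentile calculation over historical data. The sketch below uses a nearest-rank 85th percentile; the story counts are made up for illustration (our actual threshold of 25 came from our own feature data).

```python
# Sketch: derive a feature "right-size" threshold from historical story
# counts per feature. The sample data here is made up for illustration.
def right_size_threshold(story_counts, percentile=0.85):
    counts = sorted(story_counts)
    # Index of the percentile-th value (simple nearest-rank method).
    idx = min(len(counts) - 1, int(percentile * len(counts)))
    return counts[idx]

historical = [3, 5, 8, 8, 10, 12, 14, 15, 18, 20, 22, 24, 25, 25, 30, 60]
print(right_size_threshold(historical))  # prints 25 for this sample
```

Any incoming feature whose estimated story count exceeds the threshold becomes a candidate to split rather than to estimate more precisely.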

The results were mixed. Some teams, due to various reasons, including external pressures, implemented a loose feature WIP and didn't quite go down the path of sizing features. Other teams took the advice to heart, implemented strict WIP limits on the number of features they would work on, and refused to accept any features that were above 25 stories. When a feature seemed like it would be much larger, they broke it up into smaller features that could flow through their process smoothly. There were some things missing, though. Even the teams that took the approach head-on did not do everything they had been doing to make themselves predictable with stories. They were not breaking up features into the smallest deliverable features as they gained more information about them. They were violating their feature WIPs when emergency requests came in. In other words, our roll-out was only semi-working, which meant we were becoming only semi-predictable.

Of late, we have changed our approach a bit. We have picked one team to work closely with and try out our approach. Apart from ongoing coaching, we are actively encouraging them to not provide feature estimates and to break up features as much and as often as possible, especially as they approach the 25-story mark. We are trying to turn our #NoEstimates game up a notch. We are taking all the principles that we know work with stories and applying them to Epics/Features: Limit WIP, Control Batch Size and Manage For Flow at the feature level, the same way as we do at the story level. Once we have these concepts proven out with this one team, we will roll them out to others in the department as well. Hopefully, all estimation conversations turn into a simple question: "Does this work item look too large for this level? If yes, let us break it up; if no, let us start work on it."

We expect that this approach will make us more predictable with our features. The predictability gained should allow us to answer the question of "When will it be done?" without having lengthy estimation conversations. The expectation is that this approach will also help us get to Just In Time commitment, where a feature is committed to only after it has been started. Before the feature is started, the business can easily de-prioritize it and replace it with a different right-sized feature. This should allow us to be more agile and respond to market pressures and feedback more often. The hypothesis is that limiting WIP and controlling batch size allows these things to happen naturally. Watch this space for results in the future.

Tuesday, January 3, 2017

Why Scrum Sometimes Works And What Can You Do About It?

Scrum sometimes works great, and at other times is a constant source of dissatisfaction for both developers and management. If Scrum is working well for you, most likely you have already started modifying the "rules" of Scrum to fit your context. On the other hand, in a failing Scrum implementation, you are either looking to give up on it or are about to bring in a "Scrum Expert" who can help you course correct. The first reaction from your new expert friend is most likely going to be "You are doing Scrum wrong". Scrum has some pretty straightforward rules which, admittedly, are easy to get wrong. Probably only a very low percentage of teams adhere to all the rules in the Scrum Guide. In fact, in my personal experience, there is little correlation between adherence to the Scrum framework and the degree of success of teams.

Note: The intent of this post is not to say that Scrum is a bad methodology. It is to say that doing Scrum "by the book" is very hard. Scrum itself has brought many great things to software development, as acknowledged here. This post tries to point out that those great things are available even without a full-on adoption of Scrum. 

That does not change the fact that many teams do see improvements when Scrum is introduced. What are the reasons this improvement happens? Also, if success and improvement have little correlation with the degree to which the Scrum framework is implemented, can the same improvement be seen without ever implementing Scrum? Below is my take on some of the reasons teams see improvements with Scrum, and how you can get there with or without Scrum.

Limiting Work In Progress

Scrum forces your teams to concentrate on fewer work items. There are only a certain number of stories that can fit into the sprint. This forces a team-wide work in progress limit. The team is able to concentrate on the few items in the iteration. This is usually a huge shift from each developer having 20 work items active with them. Any developer would tell you how costly this is. There is constant context switching, and since you are caught up in 20 items, none of them get done. If any stories do get done, they are not done with the quality that a decent developer would be proud of. Context switching kills both productivity and quality. Limiting the number of work items you are working on helps with both these aspects. Reducing "work in progress" also has the direct result of every item in progress getting done faster. That, in turn, results in more things getting done on a regular basis. There is a mathematical theory behind this. If you are interested in further details on the math, please read more about Little's Law here.
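The math is a one-line relationship. Little's Law says that, for a stable system, average cycle time equals average WIP divided by average throughput. A tiny sketch with made-up numbers (not any team's actual data) shows why cutting WIP shortens cycle time even when throughput stays the same:

```python
# Little's Law for a stable system:
#   avg_cycle_time = avg_wip / avg_throughput
# The numbers below are illustrative assumptions only.
def avg_cycle_time(avg_wip, throughput_per_day):
    """Average days an item spends in progress."""
    return avg_wip / throughput_per_day

# Same team, same throughput (2 finished stories per day):
high_wip = avg_cycle_time(avg_wip=20, throughput_per_day=2)  # 10 days per item
low_wip = avg_cycle_time(avg_wip=4, throughput_per_day=2)    # 2 days per item
```

With 20 items in flight, each item lingers for 10 days on average; with 4 in flight, the same team turns each item around in 2 days.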

Here is the fun part: the benefits of limiting work in progress do not have to come through Scrum. Simply having a rule that no developer ever works on more than one work item at a time (a team could even average fewer than one active item per developer) will produce very similar, if not better, results. This is not an easy change for most organizations. Scrum does not prescribe it explicitly either. This could be the reason why many Scrum implementations don't achieve the success they are expected to. If you are considering going Agile, this is the one single change I would start with. Explicitly limit the number of things your developers are working on.

Small Batches

The entire idea of having time-boxed iterations in Scrum forces the team to think of the work they need to accomplish in incremental small units. The Scrum Guide encourages these to be units that are close to a day in length. This can often mean that the limited amount of work the team is taking on is further broken down into smaller batches. Small batches have great benefits. They help find mistakes early, whether in the requirements, the code or the tests. They avoid big up-front design and heavy architecture work that often ends up as waste. Instead, design and architecture emerge as new needs are discovered. The team continually makes measurable progress towards its goals rather than having little idea of where they stand in the overall picture. Small batches help gain a lot of predictability. Most successful Scrum teams usually reinforce this by saying that they will not work on anything that is higher than 5 or 8 story points. This is not a prescribed rule of Scrum, but one that seems to be commonly used by successful Scrum implementations.

Interestingly enough, small batches are completely achievable without adopting Scrum. Developers can break down work items into batches that take 2-3 days to get done. Every time they pick up a work item, they can ask the question: Can this be done in less than 3 days? If the answer is yes, they start work; otherwise, they break it down into pieces that are achievable in 3 days or less. Of course, they are allowed to be wrong, and some items will take longer. This approach will give you the same benefits of small batches with or without the adoption of full Scrum. If your current Agile implementation lacks the emphasis on small batches, that is another easy win.
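The rule above is simple enough to write down as code. A hedged sketch, where the 3-day limit and the even-split breakdown are illustrative assumptions rather than a prescribed algorithm:

```python
# Sketch of the "right-size before you start" rule described above.
# MAX_DAYS and the naive splitting strategy are assumptions for
# illustration only.
MAX_DAYS = 3

def ready_to_start(estimated_days):
    """A work item is ready only if it looks small enough."""
    return estimated_days <= MAX_DAYS

def right_size(item_days):
    """Split an oversized item into pieces of at most MAX_DAYS each."""
    pieces = []
    remaining = item_days
    while remaining > 0:
        pieces.append(min(MAX_DAYS, remaining))
        remaining -= pieces[-1]
    return pieces
```

In practice the "split" is a conversation about slicing scope, not arithmetic, but the decision rule itself stays this simple: small enough, start; too big, break it up first.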

Collaborative Improvement

Scrum uses its ceremonies, Planning, Daily Standup, Sprint Review and Retrospective, as tools to establish collective ownership of the process and the product within the team. The most powerful of these, for long-term improvement, is the retrospective. This is the activity where the team gets together at the end of an iteration and figures out things that are going well and things that can be improved. Scrum did a great service by taking the job of "figuring out efficiencies" away from managers and handing it to the teams. There are numerous retrospective techniques out there. As long as teams are looking to improve, you can employ most of these techniques and get successful results.

For some reason, teams adopting Scrum are forced into using the same cadence (sprint length) for Planning, Retrospectives, Stakeholder Reviews and delivery. There is no reason for these cadences to be intertwined. Scrum puts retrospectives at the end of iterations or sprints. It does not have to be this way. If you are not doing Scrum and don't have sprints established (and there might be no good reason to), you can hold a retrospective as and when an issue that needs the team to get together pops up. This goes hand in hand with working in small batches.

Small batches work effectively whether we are talking about developing software or making improvements on a team. Instead of building up a batch of issues to talk about, let us take care of things as soon as they come up. This might mean that there is no established cadence for a retrospective and that, in my opinion, is perfectly fine. It might work better than having a backlog of painful items build up for two to four weeks. A large retrospective backlog can lead to ineffective retrospectives, as not all the important topics can be talked about before people burn out. Retrospectives, themselves, are not unique to Scrum. You do not have to be doing Scrum in order for your team to improve collaboratively. Have retrospectives as and when you need them, and give the people doing the work the power to improve collaboratively.

User Feedback

Central to the entire idea of Agile is getting feedback from users and stakeholders. Scrum does this by having a sprint review/showcase at the end of every sprint. This is what really puts the Agile in Scrum, especially if you are smart enough to make decisions based on the user feedback. The problem is that Scrum regiments user feedback into the end-of-sprint review step. It is the same cadence-matching problem that retrospectives suffer from. The earlier we get this feedback, the better, so why wait until the end of the sprint? Get the feedback as soon as you have something ready.

If you are working in small batches and limiting your work in progress, you are likely to have something ready for review every day. These changes should not have to wait until the end of a sprint in order to be shown to users and stakeholders. For a developer, 2 weeks is an eternity when it comes to remembering what he/she did. There is a great deal of efficiency in faster feedback. We can tweak something a developer just worked on before the dev has moved on to working on a different part of the system. Get feedback, course correct and deliver value as early and as often as possible.

The (Re)Starter Kit

Scrum is often touted and used as the starter kit for Agile. The problem is that Scrum, while appearing simple on the outside, is very hard to "do right". The reason people are attracted to Scrum as a starting point usually is that Scrum is a documented recipe: do these things and you will be Agile. Every Agile coach will tell you that just doing the steps does not make you Agile. Unfortunately, that same recipe is the reason why Scrum adoptions fail and need reboots. Developers are analytical beings. If they believe that the recipe produces the Agile cake, they will follow it in every detail. The focus shifts from the intent of the law to the letter of the law. A system designed to help developers quickly changes into a way of micromanaging developers.

Inflexible rules lead to inflexible processes. Inflexible processes, by their nature, are not great at adapting to your context. In order to be successful with Scrum, most of the time, you have to be flexible and tweak the rules to fit the context. Is it necessary, though, to have the rules in the first place? Why not have principles and let them define the rules in our context? The basic premise of Agile, in my opinion, is this: deliver early and often, and use feedback to determine the future course you need to take. Let us take that premise and work with it in order to make things better.

Agile does not mean Scrum, although Scrum can at times be Agile

If Scrum is failing you, or if you haven't tried it yet, there might be a simpler way to dip your toe in the Agile pool. Start with the four principles here (or a subset of them), and measure the gains they get you. The rest of the Scrum framework is barely required if you want to be Agile. 
  • Limit Work In Progress
  • Work In Small Batches
  • Improve Collaboratively
  • Get Rapid User Feedback
I would argue you can be more Agile with these four changes than you would be if you adopted full-on Scrum. Agile does not mean Scrum, although Scrum can at times be Agile. This is not to say that Scrum is a bad way to go. The point is that there are some background "intents of the law" that make Scrum work. It might be a simpler approach to adopt these intents in order to create an Agile mindset, as opposed to adopting an Agile framework. Teams see improvement with Scrum not because they strictly adhere to Scrum rules, but because of the intentional or unintentional adoption of practices that make them Agile.

In the interest of small batches and of limiting your WIP, pick one of these four principles and start there. This might be more effective than taking on an entire framework. Each of these principles in themselves is not easy. Wouldn't it make sense to try one smaller difficult thing, rather than a set of multiple difficult things at the same time?

Tuesday, December 27, 2016

Handoffs Create Heartbreaks - A Christmas Story

Our story begins and ends at one of Florida's largest tourist attractions - The Sawgrass Mills Mall. Just like any good procrastinating parent, I waited until the last week before Christmas to get the earrings that my daughter had (strongly) hinted at. Over lunch on the 21st of December, I headed to the Swarovski store at Sawgrass Mills mall. This was also an economical decision in terms of time invested, as I was able to pick up a gift for my wife at the mall, as well. 

My daughter had shown great interest in two separate pairs of earrings. As is the most efficient method available at these stores, I walked straight to the case and located the two earrings. I asked the shopping assistant working the floor to help me get the earrings, as they were behind a locked case. The assistant, a very nice and personable gentleman, let us call him Joe, was happy to help. The fact that this was the week before Christmas meant that the store was pretty busy. Joe, did his best to help me while still helping two other customers. Since I knew exactly what I wanted, my transaction was simple and Joe led me straight to the payment counter after finding the appropriate boxes.

There were still the other customers on the floor that Joe had been helping. As Joe started ringing me up, his colleague (let us call her Mary), who had just finished ringing up another customer, suggested that he return to the floor and she would finish my transaction. Mary's proposal was a logical one, as this arrangement would make sure that all transactions proceeded unimpeded: I would be able to pay for the earrings I was buying, and other customers would get help making the best selections possible in their context. Joe took the offer, said a courteous "Happy Holidays" to me and went over to help other customers. Mary helped with the bagging of the boxes and the finalisation of the bill. I was happy that I was able to get two pairs of earrings for my daughter in the space of 10 minutes, as this left me with time to pick up a gift for my wife as well.

After getting home, I hid the earrings and again, like a good procrastinating parent waited until the afternoon of the 24th to actually wrap the gifts. I wrapped the two sets of earrings in one gift pack and placed it under the tree for my daughter to open on Christmas morning. For any parents with a teenage daughter, the excitement is easy to imagine. You are so sure that jewellery is going to be a success. Even if every other gift is rejected, jewellery, which is backed by strong hints, is absolutely going to work.

The night passes, and I am sure Santa has done well. My wife, my daughter, our German Shepherd and our Maltese are all gathered around the tree. We start opening gifts or sitting in boxes, based on our preference (you have no idea how small a box a 90 lb German Shepherd thinks she can fit inside). It is my daughter's turn and she is unwrapping the box with the earrings. I have done well; she has no hint of what is inside. She finally sees the Swarovski boxes and immediately knows what she got for Christmas. She opens the first box and there are the crystal hoop earrings that she wanted. The thank yous, kisses and hugs are being distributed. She opens the other box and... nothing! The box is empty. We shake the box, turn it in every direction, close it and reopen it, but the earrings do not appear. This just turned into an anti-climactic heartbreak. The starfish-shaped earrings that I bought for my daughter are nowhere to be found. I show my daughter the picture on the box, to assure her that I had bought the right earrings, and tell her that I will visit the store soon after Christmas to get the earrings that she wanted and the ones I paid for. That box was definitely not worth the money I paid for it.

Not the day after Christmas (because I love my daughter, but I hate crowds), but the day after that, I headed over to the mall. Luckily, the "returning of the gifts" crowds had died down enough for me to find decent parking and not hyperventilate in an over-crowded mall. I made a beeline straight to the Swarovski store and was met there by two shopping assistants (not Joe or Mary) who were working at the time. One of the assistants, let us call her Kate, approached and asked what she could help me with. I explained the problem and asked if I could get the earrings that I had purchased. Kate seemed very surprised by the request. She explained that it was store policy to show the customer the box before finally closing it and bagging it. Only after the customer has verified the contents are they supposed to bill the customer. She saw the receipt and remarked that Joe "is very good" and that she was surprised there was a slip-up in the processing of my purchase. Kate asked me if I could wait to talk to the manager so that she could take care of it. I didn't mind waiting, and when the manager became available, she promptly took care of the matter by getting me a pair of the same earrings (after doing her due diligence of checking the video tape, of course).

If Joe is so well respected that people are surprised when things go wrong, how did things go wrong? Putting on my Agile Coach/Process Junkie hat, I can see exactly where the issue occurred. When Joe handed over my transaction to Mary, there was some loss of information. Mary assumed that Joe had already shown me the box, and Joe assumed that Mary would show me the box with the earrings inside. Neither of them did. This is why handoffs are dangerous. They might seem efficient at first, but there is always some loss of information when the handoff happens.

Think of all the handoffs that a single work item in software goes through. Customer request - Product Owner - Business Analyst - Software Engineer - Quality Assurance - Build Team - Operations - Customer. It is a long game of telephone where any one of these handoffs can result in a heartbreak. In the case of the earrings, it took just one handoff to cause the issue. Closing the handoff loops and eliminating handoffs is one of the reasons Agile first took flight. DevOps is the latest iteration of this. Fewer handoffs, result in fewer communication issues.

This does not mean that in every organisation, all handoffs will be eradicated. Handoffs will exist, but wherever there is a handoff, there need to be explicit rules. There have to be explicit exit criteria before the work item can exit a stage. In order for Joe to hand something off to Mary, he should let her know some basic information, including whether he has shown the contents of the box to the customer and received an acknowledgment. Ideally, there is no handoff, but if there is one, the policies for the handoff are explicit and understood; otherwise, there will be many heartbreaks on Christmas mornings.