
Happiness and other technical requirements – Clean slate

This post is the third part of a software development story. If you want to catch-up, here are the first and the second parts. Please make sure you read the disclaimer first.

After presenting a dysfunctional team, and then moving on to its recovery and improvement, we are now facing a defining challenge: a new project where the team could put its new skills to the test.

Even if the stakes were not that high from a business perspective, they certainly were for the team. It was a big bet on ourselves and on our newly found competences and attitudes.

The risk of getting something wrong was incredibly high, as we were moving in uncharted territory: the developers were mainly PHP devs who had decided to write the backend in Ruby; the automation testers were mainly Java experts transitioning to Cucumber and BDD; and none of us had any experience developing for mobile, working with maps, storing data in a NoSQL database or hosting on a cloud infrastructure.

So… dear reader, buckle up, because it’s going to be a real roller-coaster!

Re-shaping the problem

A couple of business owners and a project manager approached us with a detailed, documented plan to extend an existing business application. The idea was to provide our door-to-door sales agents with a tablet, and to build them a digital, live version of the printed map and form which they were usually assigned.

The first thing we did was to heavily transform the project structure. Piece by piece we reshaped the waterfall plan into a more agile, iterative one. We convinced them to start with a prototype and evaluate the results. Mindful of our inherited mistakes, we put user satisfaction at the top. We challenged their assumption that a top-down plan was the way to go.

Before even starting a proper discussion, we did something that shook them a bit: a couple of developers and our business analyst actually spent a day working with these door-to-door sales people, knocking on people’s doors, trying to sell our products.

It was a brilliant move. No-one likes door-to-door sales people. But that single act allowed us to empathise with them and their hard job. It gave us a totally new and fresh perspective on their real problems, and a concrete meaning behind what we were about to develop.

We used this new information to challenge some work items, introduce new ideas, and re-prioritise the backlog.

We made the project our own. We developed a sense of ownership.

Mistakes

Everything was quite exciting and challenging, but only one part of the team was involved in these early explorations. When we started to make decisions that would affect the course of the project I saw the risk of not involving the rest of the team, but my colleagues were more eager to make progress quickly.

As mentioned in the previous post, as a Tech Lead I tried to have a very light touch and to rarely interfere in technical decisions. As we were figuring out which JavaScript framework to adopt, I felt both the pressure from our non-technical teammates to decide quickly and the danger of laying the wrong foundation for our codebase. I ended up pushing for my preferred option much more than I’d wanted.

Later I tried to explain to the rest of the developers the reasons for my choice, and that I would have much preferred a collective approach. I reassured them that the project manager had promised we would be given the opportunity to change framework and re-write our front-end if the prototype and the user trial proved we could have chosen better.

Nonetheless, over the course of the project that decision proved to be a real source of trouble. From a purely technical perspective the framework was well architected, flexible, and especially good at compensating for some of our own weaknesses (such as visual design). But the learning curve was steep, and working with it was far from exciting. It required much more investment and motivation.

If there is one thing I learned from it, it is that you own the decisions you take; being the only one involved in that decision, I became the only one with a sense of ownership over that piece of code.

I hope you, dear readers, noticed the irony in my mistake: I ended up relying on purely technical specs more than on collaboration and commitment.

I’ve asked myself for a long time what I would do differently and better if I had to face a similar critical situation. Once put in the right context, the answer is simple, and I hope by now you have already figured it out: I just wouldn’t settle for an individual decision.

It really doesn’t matter what framework you adopt, if your whole team is not committed to it. And you can only commit to a decision if you have been involved in it and you have been allowed to voice your opinion.

Innovation

Once the whole team started working on the new project, the joy of having a clean slate and the curiosity for exploring new possibilities took over.

Being part of a big enterprise company, we inherited a slow and verbose release process. We had already worked hard over the previous six months, on our original project, to speed it up by automating the manual steps and by removing technical obstacles to more frequent deployments.

With this new project we decided to go much further, adopting a Continuous Deployment approach and a Continuous Delivery strategy. This essentially meant investing heavily in, and relying on, automated tests and automated deployments.

Our deployment pipeline became a masterpiece of craftsmanship, a source of new experiments and learning, the backbone of our development process, but also a showcase of all our efforts and a huge source of pride.

We supported our growing application with different kinds of test suites: Cucumber/WebDriver feature tests, RSpec for our own REST API, Jasmine unit tests for the JavaScript, and integration and contract tests for external APIs.

All of these test suites were organically born out of real needs, so that we were efficiently testing only what was really required. For instance, we introduced a contract suite once we realised the external API we were using wasn’t reliable. We introduced JavaScript unit testing once the complexity of our asynchronous code became confusing.
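
To give an idea of what one of those needs-driven suites looked like, here is a minimal sketch (not our actual code) of a contract test run with RSpec: the endpoint, field names and environment variable are hypothetical, and the point is simply to pin down only the part of the external API’s response that the application relies on.

```ruby
require 'net/http'
require 'json'
require 'uri'

RSpec.describe 'External address lookup API (contract)' do
  # Hypothetical endpoint; in practice the real URL would come from configuration.
  let(:base_url) { ENV.fetch('ADDRESS_API_URL', 'https://api.example.com') }

  it 'still returns the fields our application relies on' do
    response = Net::HTTP.get_response(URI("#{base_url}/addresses?postcode=AB1+2CD"))
    expect(response.code).to eq('200')

    body = JSON.parse(response.body)
    # Only assert on the shape we actually consume, not the whole payload.
    expect(body).to include('results')
    expect(body['results'].first).to include('street', 'latitude', 'longitude')
  end
end
```

A suite like this runs against the real third-party service, so when it breaks you know the problem is on their side rather than in your own code.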

Above all of them, our Cucumber/BDD suite held a special place and importance. It really was the point of intersection between the needs of developers and testers, between business requirements and technical features.

The test suites were distributed along multiple steps of our deployment pipeline, and executed on different automated environments, in a beautifully designed process that involved careful integration between different systems and technologies.

The Incredible Machine

A bit like one of those contraptions out of The Incredible Machine, everything started from a spark in our Git repo, moving through a set of jobs on our Jenkins CI server, crossing our farm of 20 virtual slaves running on VirtualBox/Vagrant and provisioned using Puppet, deployed to freshly spawned environments using CloudFoundry, smoke tested again via WebDriver, and finally ready for production.
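
For the curious, here is a hedged sketch of what the Vagrantfile for one of those build slaves could have looked like. The box name, memory settings and Puppet manifest paths are illustrative, not the ones we actually used.

```ruby
# Vagrantfile sketch for a single Jenkins build slave:
# VirtualBox as the provider, Puppet for provisioning.
Vagrant.configure("2") do |config|
  config.vm.box      = "ubuntu/trusty64"     # assumed base box
  config.vm.hostname = "jenkins-slave-01"

  config.vm.provider "virtualbox" do |vb|
    vb.memory = 2048
    vb.cpus   = 2
  end

  # The same Puppet manifests are applied to every slave in the farm,
  # so all 20 machines stay identical.
  config.vm.provision "puppet" do |puppet|
    puppet.manifests_path = "puppet/manifests"
    puppet.manifest_file  = "slave.pp"
    puppet.module_path    = "puppet/modules"
  end
end
```

Keeping the slaves defined as code meant a broken machine could simply be destroyed and recreated, instead of being nursed back to health by hand.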

I have a clear picture in my mind, a snapshot of this happy period. I’m leaving the office at the end of the day, and I notice that half of the team is still sitting at their desks. We have no deadlines, and no pressure to deliver yet. Still, we sometimes work more than before just because we like what we are doing, staying late maybe just to finish that cool thing we were playing with. And I realise how far we’ve come in the last 8 months, compared to when we had to work overtime at weekends to meet an arbitrary deadline no-one really cared about.

Doubts

I left the team for one month, as I temporarily joined another team that needed some help with their delivery.

They say: hire great people, and get out of their way. I have always believed that the best thing a leader can do is make themselves redundant.

But when I returned, in time for our regular team retrospective, things weren’t going that well.

Even though no-one was using our app yet, our testers were catching more and more seemingly random bugs, and we couldn’t identify the root cause.

The general mood during the retrospective was quite low, and people felt a lot of uncertainty.

With all this disruptive innovation going on, I found myself in the unexpected position of having to contain it a bit, or at least allow it to redistribute organically across the whole team, trying to make everyone feel comfortable with the fast pace.

Maybe because of the lack of my reassurance, or maybe just as a necessary phase, doubts crept in.

As one of the developers emphatically put it at the end of the retrospective: “essentially, at this stage we have no confidence at all that this thing is going to work”.

What if the project turned out to be a failure?
Had we been too reckless?

You’ll find out in the next episode.

(suspense)


Happiness and other technical requirements – Change

This post is the second part of a software development story. If you want to catch-up, here is the first part. Please make sure you read the disclaimer first.

Six months after the team hit rock bottom, we are all now standing in a meeting room, having a project retrospective.

This kind of meeting is different from the usual, frequent Agile retrospectives, because the feedback collected covers a whole project (or part of one), spanning months of work.

The setting is relaxed, collaborative and a bit fun. The walls are covered in paper, tracing a timeline – from left to right – of the latest months. We scribbled and attached post-it notes where we believed happy, sad or simply meaningful events had happened.

Each of us was also given a number of colourful dots to stick across the same timeline. We could position the dots to mark our morale: the y-axis represents happiness, the x-axis represents time.

We are essentially surrounded by a visual representation of what the team experienced and how we experienced it.

And between the dots, a pattern emerges: on the left side the marks are very low and more scattered, but moving to the right they tend to unify, get closer, and – month after month – consistently rise.

For me, it’s just an amazing proof of how things changed and improved.

Six months earlier

I wish I could tell you that my first speech to the team was the pivotal event that started the change. I wish I could tell you that my carefully prepared bullet points – focused around the idea of working and collaborating together, each of us towards a common goal – were actually a success. But who am I kidding?

There’s nothing more vain than yet more words and more promises for a team that has lost trust.

After we reached our next deadline, our Scrum Master finally managed to negotiate a more relaxed goal, giving us some of the longed-for time for refactoring.

That opportunity was also a first great trial of collaboration, and for me, a first test of my coordination and facilitation skills.

And – I guess you already figured it out – we did it. We released the next milestone, together with the refactoring, in perfect time. And we did it together.

After a month and a half we finally got our first concrete result, proving that things were getting better.

Communication

I made communication and team morale my first priority, and by doing that I had to sacrifice other things. I sacrificed, for instance, a strong technical leadership. Even if at that time I was the developer with the most knowledge of the project, I realised that listening to the other developers’ ideas and opinions, accepting them, and encouraging them was far more important. I only stepped in when I thought really serious mistakes were about to be made, and I let the other ideas and opinions be expressed and applied.

In doing so, a lot of the code changes and secret refactorings that used to happen before came to the surface, and became part of the usual, healthy debate. I had to first give them trust, before they could trust me, and trust each other.

At the same time, we started to get curious about our department (about 100 developers).

And we started to meet with other teams, learning about what they were doing, being inspired and encouraged by some very cool guys.

I know a lot of leaders or managers perceive their role as the gatekeepers of external communications. But, as advised by a good mentor, I took the opposite approach: I tried to give them the freedom and the confidence to go talk to anyone they wanted.

By opening up to other teams, new interesting ideas flourished, both about team dynamics and technical tools.

Testers and developers

As if part of some sort of virtuous cycle, opening up external communications brought great ideas on how to improve our internal communication, the best of which was without doubt BDD (Behaviour-Driven Development).

Under the guise of a new technical tool, BDD dramatically changed the relationship between developers and testers.

We changed the ownership of our acceptance test suite, sharing it between developers and testers, and rewriting it with a BDD framework (JBehave first, then Cucumber – after we fell in love with Ruby).

That wasn’t an easy or harmless step for testers at all: they had to give up some of that ownership, and open up to sharing it with the developers they were used to seeing as “opponents”.

I hope my friendship and reassurance helped them make the transition, but the biggest credit for succeeding goes to their kindness and openness.
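
To make that shared ownership a bit more concrete, here is a minimal, illustrative sketch of how a Cucumber scenario could be split between the Gherkin (mostly drafted by testers and the business analyst) and the Ruby step definitions (mostly written by developers). The scenario, helpers and selectors are hypothetical, Capybara-style examples rather than our real code.

```ruby
# Scenario (Gherkin), written in business language:
#
#   Scenario: Agent sees the addresses assigned for the day
#     Given I am signed in as agent "Ada"
#     When I open my daily route
#     Then I should see 12 addresses on the map

# Step definitions (Ruby), glueing the scenario to the application:

Given(/^I am signed in as agent "([^"]*)"$/) do |name|
  @agent = create_agent(name)   # hypothetical test helper
  sign_in(@agent)               # hypothetical test helper
end

When(/^I open my daily route$/) do
  visit '/route/today'          # Capybara-style navigation
end

Then(/^I should see (\d+) addresses on the map$/) do |count|
  expect(page).to have_css('.map-pin', count: count.to_i)
end
```

Because the Gherkin reads as plain English, testers, developers and the business could all argue about the same artefact, which is exactly what made it the meeting point described above.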

We also moved as much of the testing phase as possible to the beginning, to avoid creating a mini-waterfall process.

This is a diagram of how our development workflow was re-designed:

[Diagram: the redesigned development workflow]
We also learned not to take this process too seriously. So this is actually the latest revised process, based on a real scenario:

[Diagram: the revised, more realistic workflow]

User-centric development

I saw in the BDD process an extra, great opportunity. By starting development with the definition of those user-centric acceptance tests, we shifted our focus from code correctness, to feature correctness – from crafting the right code, to delivering the right user experience.

Which meant fewer bugs for testers to catch and richer, more interesting technical discussions, but also a narrowing of the gap between developers and the business and, unexpectedly, between the business and the end-users of the application.

Ruby Tuesday

Good developers like to learn new things, and learning new things makes good developers.

So I had an idea: organise a recurring learning lunch (a.k.a. brown bag lunch) to learn the Ruby programming language. One of the amazing guys we had met outside of our team was a really experienced developer – I’d say a Ruby evangelist. I invited him over. I also invited the testers, as Ruby is the language in which many of the best automated testing tools are written (e.g. Cucumber).

In a single shot, our weekly “Ruby Tuesday” became an amazing event to inspire developers, let them collaborate closely with testers on ground unfamiliar to both (as developers were mostly coming from PHP and testers from Java), and learn some technical tools that would help in our transition to BDD.

I obtained support from management too, both in terms of encouragement and – even better! – free food for the lunches.

The series of lunches was a huge success. In the end, after 5 months, it died by implosion: more and more people joined the club, to the point where finding a meeting room big enough was a quest. A horde of newly hired graduates happily invaded our meetings, forcing us to start again from the basics we had already learned. Also, our Ruby mentor left the company for new adventures. And – finally – my team didn’t need it anymore, as we now had a real project to test our Ruby skills on… but this is part of the next post, so you’ll have to wait for that… ; )

An awesome team

More than anything, we became a close-knit team. Our brilliant Scrum Master organised (and encouraged us to organise) nights out. We started to know each other better.

Coffee after daily stand-ups became a ritual (mostly stolen from another floor: for some reason they had the best coffee machines in the whole building). Not to mention the lunches at the canteen – especially the Chicken Caesar Salads on Monday – a pillar of our team cohesion.

As icing on the cake, we won first prize for our Hack Day project – a yearly competition, open to the whole business, which saw about 10 teams from all departments hacking, prototyping, and pitching new ideas. And to be perfectly honest, my contributions to that effort were mainly two: signing the team up for the competition, and accidentally messing up our Git history 30 minutes before the jury’s inspection. Kudos must go to that amazing team, and in particular our young and bright Business Analyst.

A new challenge

Now, going back to our main topic, you may wonder how all of these changes affected the technical side of the project.

We totally rewrote our huge acceptance test suite, finally bringing it down from 9 hours to one acceptable hour (in the end we managed to reach a 15-minute build time).

Our obsession with Continuous Delivery practices led us to streamline our deployment pipeline, cutting manual steps and verbose checklists. Our average release cycle went from months to weeks, and then days.

No technical debt was added to the project (well, at least as far as I can remember!), and we kept refactoring and improving our code relentlessly – a bold claim, I know. We never released bugs to production more serious than the ones we had released before.

But from the moment we started the aforementioned refactoring, our codebase started to look more and more like legacy code to us. We realised we now had the capacity to successfully design new patterns and systems, craft great code, and deliver new features fast and safely – but all we had on our hands was a project in maintenance and BAU mode.

With impressive political skills our Scrum Master managed to secure a new project (and new funding) for our team, loosely related to our business area.

And this is where I placed my biggest and riskiest bet.

As a brand new Technical Lead this opportunity was my first chance to really lead the development of a project from start to finish, to prove I was a good fit for the new role.

And I decided to bet it all on the enthusiasm and passion of our team.

So, this is how a team of PHP developers and Java testers, used to dealing with e-commerce web apps (and with no experience in mobile), undertook the development of an iPad app written in JavaScript and HTML5, backed by a Ruby-based server, hosted on an internal cloud infrastructure so new that no-one else had tried it yet, using a Cassandra NoSQL database we had never used before…

I don’t know about you, but if I were technically responsible for a project like that, I’d tear my hair out! ; )

The rest of the story, on the next episode… (suspense)

Happiness and other technical requirements – A dysfunctional team

Intro

This is the first of a series of posts about a software development story. It starts in trouble, but it has a happy ending, and hopefully some interesting insights along the way.

The journey is about my personal discovery of how relationships, team dynamics and morale affect the success of a project and – more surprisingly – how these apparently unrelated aspects affect the technical quality of our products.

Disclaimer

I’m tempted to claim that I know what happened and why it happened. I’m tempted to claim that I can pinpoint the reasons why things were problematic, and why they dramatically improved.

But it wouldn’t be completely honest. The truth is, I’ve learned that in the evolution of a complex system – a software project, and even more so a dynamic, self-organising team – so many different variables are involved and interconnected that it’s impossible to be sure of cause and effect.

We naturally tend to remember and highlight things according to our personal bias. Someone could look at the same project and only be aware of the technical issues. Someone else may stress that even though we were following Agile practices, we were “doing it wrong”.

You can look at it from the point of view of the managers, of the developers, of the business stakeholders or of the users, and you’ll see different things.

What you are about to read is from a particular perspective. Please remember: it is not the only one or necessarily “the one”.

But it’s a bit unique, unusual, and so, hopefully, interesting.

Let’s start

Imagine a software development team, focused on building a low-traffic e-commerce application. The team is following a pretty standard Scrum process, with two-week sprints, regular retrospectives and planning games, a dedicated Scrum Master, a Tech Lead, a Business Analyst and a Product Owner.

Developers are quite keen on following well-known best practices, such as Pair Programming, Continuous Integration, and proper test-first Test-Driven Development (all production code is written after a test).

The architecture of the application is solid: a cluster of load-balanced web servers and app servers. Corporate backend services, hidden behind a uniform façade, handle all the complex business logic and data management requirements.

So, here is the question: how successful do you think this team and its project could be?

Or, in other words, how much can the technical and process-oriented aspects I’ve described tell us about its success?

As a matter of fact, not that much. One year after I joined the team as a developer, and one year after the project started, the situation looked quite grim.

Even though real users were already using the app as part of a trial, and we were gradually ramping up to a full release, we were significantly behind the plan. The project turned out to be more expensive than estimated, and the development pace slower than expected. The team doubled in size to cope with the workload, but that wasn’t enough.

Tech debt was accumulating in every direction: the core of the application was a bundle of over-complicated design patterns, the acceptance test suite had become extremely slow and in need of some TLC, and the conceptual domain was fragmented and inconsistent. And all of this under growing pressure to deliver and meet the now twice-postponed deadline.

I had been doing overtime at weekends for about three months when a couple of contract developers were fired.

…and this is, dear reader, when our story hits rock bottom (yes, don’t worry, it’s not going to get worse than that!).

So, what happened?

Let’s rewind, go back to the beginning, and explore some of the deeper issues.

During the first months of the project, everything was quite fun. The project was brand new, the code base was shiny and clean. We were a small team, gradually ramping up to a group of 3 developers + 1 tech lead, 2 testers, a business analyst and a scrum master.

With the increase in size, communication problems became more evident. Technical meetings were especially painful. As is often the case, we were a bunch of opinionated developers who loved to argue about design patterns, technical decisions and sometimes technical processes. But we were not able to come to clear conclusions, take proper action and move on.

Some started to become disillusioned about our capacity to listen, learn and adapt – as advertised by the Agile-oriented management.

Some meetings were so bad I wished I could be somewhere else. Unsolved and unmanaged conflicts between ideas became personal conflicts.

That’s when I started to realise something quite amazing. Despite having a robust TDD approach, despite following Pair Programming and having static code analysis tools running as part of our CI system, the most troublesome areas of our codebase were not the ones with the highest cyclomatic complexity or the most deeply nested logic; they were not the ones with the lowest unit test coverage, and not even the ones written without Pair Programming.

The biggest technical issues were on those areas of the code involved with some unmanaged personal conflicts.

Relationships between developers and testers were problematic too. Automation testers were writing and maintaining a suite of acceptance tests. It was their responsibility to ensure that the final product matched the Product Owner’s expectations, while developers were just focused on writing great code.

The conflict and the gap were somehow intentional – consequences of the belief that those two concerns (developing and testing) worked better when specialised and set in opposition.

The result was a huge amount of waste. Developers were writing a truckload of micro unit tests to be sure that “this time” their code was rock solid, only to see it rejected by testers with tons of bugs.

Testers, on their side, being completely unaware of how the software was developed, were writing loads of ineffective and repetitive tests.

It became normal for our acceptance test suite, in its fast mode, to take more than 9 hours, and never to pass 100% (yep, you can imagine…).

But the most serious communication problems were between developers and management. As the already-postponed deadlines approached, managers started to worry about the delivery and to ask for more productivity. But developers were already in trouble because of all the accumulated tech debt, and were – on the contrary – asking for more time to refactor.

Absence of mutual trust became the main dysfunction of our team.

Blame

So, at this stage, who would you like to blame? Who would I like to?

Well… no-one really.

As you read the story you may naturally start to connect issues with responsibilities, and eventually start to blame and to imagine culprits.

But I’d like to keep all of that out of this story.

I’ve never found blame useful to understand or to improve.

I prefer looking at the issues on a system-level.

Let’s now go back to our finale.

Climax

When the day of the full launch was on the horizon, the clash came to a climax.

On the surface it appeared as if the whole team (quite big at that time: about 7 developers and 4 testers) was focused on delivering the remaining features. But on the inside, developers were doing their own thing. After months of unkept promises from management about getting some time to improve the code base, they had lost their faith and trust, and started to unionise on their own, sneaking in refactoring tasks wherever possible and dismissing the delivery pressure from the business as a fake concern.

The final straw ended up being a missed deadline, due to an unplanned piece of refactoring that, days before the date, turned into an epic two-week-long task.

This is where we hit rock bottom, and a couple of developers (contractors) were fired on a Friday afternoon.

But this is also where the story becomes a bit more personal, because our Tech Lead then moved to another team and, being the only permanent developer left on the team, I stepped into that role.

And here I was, for the first time with that kind of responsibility, in the middle of a highly dysfunctional and now wounded team.

To know what happened next, you’ll have to wait for the next episode….

(suspense)

; )