Happiness and other technical requirements: Lessons

As software developers we care about the quality of our code, about our test coverage, about design and architectural principles. We study algorithms and design patterns, we experiment with the latest programming paradigms, we fall in love with certain IDEs (or the lack thereof), and we practice keyboard shortcuts as diligently as Zen disciples.

Then we become technical leaders, and we bring our approach with us: we investigate new processes (Scrum, Kanban, Lean…), we happily obsess about collaboration tools (Jira, Trello, FogBugz…), we fight the good fight for the latest test automation fad (TDD, BDD, ATDD…) and we become enlightened after discovering Continuous Delivery.

But what about the human side of software development? What about commitment, about passion and team morale, about working towards a common goal, about developing trust and communication? How much do we invest in those?

I’d like to claim that in our industry we underestimate the impact of the human factor on the success of our projects and on the technical quality of our delivery.

I’m a geek at heart, and I wish I could now cite some research and rattle off significant numbers to back my ideas. But I can’t, and the lack of scientific studies in software engineering is a real pity.[1]

In the last 10 years, following the shift in perspective brought by Positive Psychology, a lot of research and studies (and TED Talks) have been produced about the impact of happiness, wellbeing and motivation at work. Businesses like Zappos have started a “happiness revolution” focused on culture and positive attitude.

But – even if some technical companies such as Buffer are already part of it – this revolution hasn’t yet touched the core of our industry. The missing piece of the puzzle is, I believe, the awareness of how much the human side of our work affects the success of our technical projects and even the technical quality of our codebase.

Lacking scientific proof, the best I can give you is a bit of anecdotal evidence from my personal experience. That’s why I’ve been writing this series of articles: the story of a team in a big enterprisey media company and its redemption from dysfunctional to amazingly brilliant.[2]

I was part of that journey, and along the way I’ve learned some valuable lessons. Without further ado, here are the most important ones, in a “bloggy” bullet-list fashion.

Lessons I’ve learned

People and interactions over processes and tools

I’ve witnessed many times teams with collaboration problems being unsuccessfully prescribed a cure based on pre-packaged processes and tools (Scrum or Kanban? Jira or FogBugz?). Processes can only take you so far.[3] But if you can build trust and honesty, if you can establish open communication patterns and get everyone in the team passionate about a common goal, then you’ll really shine and deliver.[4]

Hiring and keeping great people is the most important thing

It’s a cliché, but it’s true: there is nothing more important than hiring the right people and doing everything you can to keep them happy. I’m not talking mainly about technical skills, but about finding people with the right attitude. Technical skills can be learned, but negative behaviours can be a drag on the whole team.

Culture-fit is not an option

I’ve worked in companies where the main requirements for hiring contractors were years of experience in a specific language, or even in a specific framework, following a specific process. That’s all good, but not as important as cultural fit. Hire people who fit within your company and team culture.

One team, one goal

As developers we are often concerned with keeping technical quality up to standards, while project managers or Scrum Masters are mainly worried about delivery and deadlines. Sometimes testers too perceive their main goal as antagonistic to developers.[5] All of this is really counterproductive, and leads to politics and paralysis. Find a single goal that is shared by everyone, let developers and testers also care about deadlines and delivering on time, and let product owners understand that technical quality is nothing but speed of delivery in the long term.

Values matter

You don’t need to write them down. The best values are hard-coded in our habits and unspoken. But if you can, try to have them clear in your mind. Especially, consider their hierarchy and priority. For instance: do you value customer satisfaction more or less than code craftsmanship? Do you value respecting each other’s ideas more or less than code correctness?[6] Values are the hidden DNA of a team.

Trust is the best productivity booster

Big companies invest a lot of money in safety procedures. I used to work in a company where each deployment required approval from a chain of Change Managers. You had to file a ticket, filling in a long form describing motivations, services impacted, testing strategy, contact details, and so on. Lack of trust is really expensive. On the contrary, teams and organisations where people trust each other work like magic.

Communication mirrors code quality

We all know Conway’s law: “any piece of software reflects the organisational structure that produced it.” In the same way, communication patterns between people affect the quality of the software. After 6 months working on a greenfield project I realised I could link all the areas with major technical debt to some unresolved personal conflicts in the team.[7] On the positive side, a team where everyone feels safe and is free to communicate, where teammates trust and respect each other, will refactor code more freely, come to sound judgments in a constructive way, and be more open to feedback instead of worrying about defending egos.

Happy developers produce great code

It goes without saying that when we feel passionate about what we do, that’s when we perform at our best. Still, we forget about it when we have to make technical and organisational decisions. Learning new skills, earning our teammates’ respect, having an impact on our product, feeling that our ideas are taken into account: these are all things that increase our daily happiness.[8]


I hope you enjoyed reading this series of articles. If you felt inspired and you’d like to make some changes in your way of working, just try to consider the human factor in your daily decisions, as a developer or as a leader. A small shift in perspective could make all the difference in the long term and lead to new discoveries on your own journey.

Thanks for following these posts, and have happy holidays.


1. If you are interested in the topic of scientific studies in software engineering I can recommend this book: Making Software: What Really Works, and Why We Believe It. It’s a collection of research about topics such as TDD, pair programming, high-level languages… Unfortunately the findings are often inconclusive.

2. I’ve written the whole story in 4 blog posts: a dysfunctional team, change, clean slate and finale. The post you are currently reading is somehow the moral of the story.

3. Processes and tools are amazing. In my story I’ve recounted many times when a change in team dynamics was supported by a change in process or the adoption of new tools. But the point is that they are just instruments. The “why” behind them is much more important.

4. I still remember an amazing talk by Dan North at an Agile conference I was attending three years ago. “Individuals and interactions over processes and tools” was one of the core principles in the Agile Manifesto, but 10 years later the Agile community seems to have forgotten it. Some would argue that processes and tools are easier to sell and turn into products, and therefore more interesting for consultancies.

5. I’ve written about it in my own story, how all the internal conflicts reduced productivity and morale, and how we changed.

6. A different hierarchy of values is, for instance, what animates the debate around the software craftsmanship movement.

7. I wish we had better tools for static code analysis. I can imagine the kind of errors and warnings I’d like to see: line 28 of the Product class was written with a high level of boredom; the ShoppingCartSerialiser module was written with a high level of confusion because of lack of communication with stakeholders; the filter_results method on the OrdersCollection class contains too much “I-don’t-care-because-I-am-a-micromanaged-contractor”.

8. A brilliant article I highly recommend about technical choices and their impact on values: What technology should my startup use?.


Happiness and other technical requirements – Finale

And here we are, looking at the grand finale of this (now) four-part software development story.

We left our team in the middle of its defining challenge, facing doubts and concerns.

Now, your first instinctive reaction – if you find yourself in a similar condition – is denial. It’s a natural defence mechanism.

– There are no problems! They are just overreacting! Besides, it all looks easy to me: you just need to do this and that, and that’s it…

Why do we do that? We don’t want to admit there are issues. We’ve invested much of our ego in a project, and when problems arise, we ignore them. The bigger the problems, the more we tend to ignore them. And all together we keep walking our death march.

It takes some skill to actually stop, and be honest with ourselves. Stop and listen.

What helped me to acknowledge it was my focus on listening and respecting my team’s opinion.

What helped us to work it out and solve it was our unwritten (but practiced) belief in respecting and eliciting each other’s feedback.

So we made a plan, we plotted some practical actions.

To shed some light on the root cause of those bugs we introduced an extensive logging solution that tracked everything happening in the HTML5 app, storing it in the device storage and syncing it back to the server for further analysis.
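As a rough illustration, a client-side log buffer of this kind can be sketched in a few lines of JavaScript. This is a hypothetical minimal version, not our actual code; in the browser, `store` would be `window.localStorage`:

```javascript
// Hypothetical sketch of an offline-first log buffer: entries are appended
// to a persistent store and drained in batches when a sync is triggered,
// e.g. by POSTing them back to the server for analysis.
function createLogBuffer(store, key) {
  function read() {
    return JSON.parse(store.getItem(key) || "[]");
  }
  return {
    // Append a timestamped entry to persistent storage.
    log(level, message) {
      const entries = read();
      entries.push({ ts: Date.now(), level, message });
      store.setItem(key, JSON.stringify(entries));
    },
    // Return all buffered entries and clear the buffer.
    drain() {
      const entries = read();
      store.setItem(key, "[]");
      return entries;
    },
  };
}
```

In tests, any object exposing `getItem`/`setItem` can stand in for the real storage, which also makes the buffer easy to unit test.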

Funky bugs

What we found was quite surprising. Some of the most obscure bugs were only happening over a mobile connection and not over WiFi.

We looked at the extensive logs, and we found some weird error messages that were definitely not generated by our code.

It turned out that some network operators, to optimise users’ bandwidth, re-scale the images contained in a web page, and then inject some custom JavaScript at the bottom of the HTML in order to serve the full-quality version only when the user taps on them.

That piece of random JavaScript, written 8 years earlier and totally inadequate for modern single-page applications, was hijacking our event handlers and throwing odd errors.
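One common defence against this kind of interference is to filter global errors by the script that raised them. A hedged sketch (the helper and host names are hypothetical, not from the original project):

```javascript
// Hypothetical helper: decide whether an error came from one of our own
// scripts or from something injected into the page (e.g. by an operator's
// proxy). Cross-origin scripts often report no filename at all.
function isForeignError(filename, ownHosts) {
  if (!filename) return true; // cross-origin errors are reported blank
  return !ownHosts.some((host) => filename.indexOf(host) !== -1);
}

// In the browser this could back a global handler, e.g.:
// window.onerror = (msg, file) => {
//   if (isForeignError(file, ["myapp.example.com"])) return true; // swallow
// };
```

Filtering like this keeps injected noise out of the logs without hiding genuine application errors.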

The elevator test

Our testers did an amazing job in identifying this and other potential bugs.

In contrast with the situation we found at the very beginning of this story, this time the collaboration between developers and testers really paid off.

As developers, instead of rejecting the concerns of our testers that couldn’t be proved, we built them tools that allowed them to automatically record what was happening and easily report it.

As for them, they used their dedication and creativity to come up with clever new ways of testing.

That’s how we invented the SouthWest Train commute test (aka: what happens to your mobile app with an intermittent connection) and the elevator test.

Taking the elevator from our second floor down to the ground floor, facing an internal corner of the compartment, was a very accurate and reliable way to cut off all 3G signal.

These tests raised a lot of questions, and highlighted important assumptions we had missed. Is your app able to properly detect its connectivity status? Does it blow up in the middle of a server request? Is it able to recover from that?
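To make the recovery question concrete, here is a minimal sketch of a retrying request wrapper (all names are hypothetical; pass in any function returning a promise, such as a fetch call):

```javascript
// Hypothetical sketch: retry a request a few times before giving up, so a
// request interrupted by an elevator ride can recover once the connection
// comes back.
async function requestWithRetry(doRequest, retries, delayMs) {
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt += 1) {
    try {
      return await doRequest();
    } catch (err) {
      lastError = err;
      // Wait before retrying, e.g. until the 3G signal returns.
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastError;
}
```

In a browser app this pairs naturally with the `online`/`offline` events to decide when a retry is worth attempting at all.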

So, you could sometimes spot a developer and a tester going up and down in the elevator until some obscure bug was ruled out…


All our technical choices and process decisions were oriented towards flexibility and adaptability: we chose an HTML5 app, so that changing device and operating system would not be an issue; we chose REST protocols and a loosely typed data format such as JSON; we chose a NoSQL database which allowed us to quickly evolve our data; we chose Ruby and JavaScript as dynamic programming languages; we chose a cloud platform which let us defer performance concerns thanks to super-easy scalability; and we wrapped everything in robust but loose test suites, to be able to constantly and safely refactor and reshape our code.

Towards the end of our development, when the user trial was approaching, these choices really paid off. As we became more confident, our velocity increased. Our business sponsors were noticeably satisfied with the pace of our delivery.

We used retrospectives and demos to get fast feedback and steer the direction of our efforts, and – as a lubricant – we encouraged open communication and relied on mutual trust.

I happened to witness a brilliant example of that: after a demo which raised some discussion, our two business owners stopped for a chat with one of the developers. Looking at the screen, they were able to quickly make a better decision, implement it, and commit and deploy the new feature. They were happily shocked. You must know that they were used to having long discussions with business analysts and technical architects before a change could even be scheduled by a project manager, then waiting for weeks, and sometimes getting something completely different from what they were looking for.

Good and open communication was really the magic that kept the engine working so well: both internal communication between developers, testers and our business analyst, and external communication, establishing great relationships with the DevOps teams, our department’s technical directors, and our business sponsors and project manager.

The last obstacle

We finally launched our user trial, with 20 sales agents using our iPad application in their daily job. We built them an easy way to report their feedback, and we put in place analytics to track the outcome.

The results were encouraging. They all loved it.

No bugs ever reached our users, except… except one user, who started to complain about losing all his data.

For better network and battery efficiency, and to keep functioning in areas without 3G connections, our application was storing a lot of data in offline storage and synchronising it on demand.

Losing that data was a huge problem. Luckily no sales were lost, but all the logs about the daily activity – which meant so much for the agents, their managers and for our understanding of the success or failure of the experiment – were gone.

Since it was an isolated report, we blamed it on the agent (he might have accidentally cleared the offline storage, after all).
But when a second agent reported the same issue, our worries became real.

How was that possible? We tested the application for months, both manually and automatically, on different networks and connections, using the same physical device used by the field agents, and we never ever found a similar problem.

Our attempts to reproduce the issue, involving a lot of creativity and hard work, kept failing.

Finally, after more research, hacking devices, and inspecting the internals, we discovered the cause of the issue.

Months before our launch, Apple had introduced a change in the way iOS treated the local storage of HTML5 applications. When the operating system decided to run a “garbage collection” – because of low memory or as part of a routine clean-up – it could effectively wipe out that data. Local storage for HTML5 applications was now treated as temporary.
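Once storage can vanish at any time, the app must at least be able to tell “empty because wiped” apart from “empty because never used”. Storage alone cannot distinguish the two, so one common trick is a sentinel key plus a server-side check when it goes missing. A hypothetical sketch (names are mine; the real fix for us was the native wrapper):

```javascript
// Hypothetical sketch: a sentinel key written at first launch. If it later
// disappears, the OS has wiped local storage and the app must re-sync its
// data from the server instead of trusting the empty state.
function markStorageInitialised(store, sentinelKey) {
  store.setItem(sentinelKey, "1");
}

function storageWasWiped(store, sentinelKey) {
  // A missing sentinel means either a true first run or a wipe; the caller
  // can ask the server which one it is (has this device ever synced?).
  return store.getItem(sentinelKey) == null;
}
```

Detection like this does not prevent the data loss, but it turns a silent failure into one the app can react to.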

The only reliable solution, guess what, involved adding an extra technology to our stack: a native application wrapper for iOS, written in Objective-C. As you may imagine, at this stage learning a new technology could do nothing but galvanise our team even further. Such are the benefits of working with a team like that.

We quickly built a prototype that proved our solution, and with the support of native developers from the Mobile department, we refined it and released it to our agents. The bug was squashed.

Mission accomplished

So, as I promised you four blog posts ago, there is a happy ending. And here it is. : )

At the end of our user trial our business sponsors were hugely satisfied, so much so that they tried to assign us other parts of a bigger project. We delivered much more than we had promised, by listening and constantly adapting to their requirements.

Our agents and final users loved the application, and loved being involved in its development.

We pioneered new technologies, we showcased them to the rest of the department, making our technical directors happy.

We went from being the Cinderella of the department to its rising star.

What happened after that to our team, the hero of our story?

Soon after, as a result of some high level restructuring, the team was moved to another department and bound to merge with another one.

Somehow, we all felt that our experience as a team was over, and in a few months most of us went looking for new adventures.

And so, the story ends, with our heroes walking towards a sunset.

But before closing the curtain, I’d like to spend a final post on this topic.

So, if you had the patience to follow it up until now, there is an extra bonus in preparation…

Soon… : )

Happiness and other technical requirements – Clean slate

This post is the third part of a software development story. If you want to catch-up, here are the first and the second parts. Please make sure you read the disclaimer first.

After presenting a dysfunctional team, and then moving on to its recovery and improvement, we are now facing a defining challenge: a new project where the team could put their skills to the test.

Even if the stakes from a business perspective were not that high, they were for the team. It was a big bet on ourselves and on our newly found competences and attitudes.

The risk of getting something wrong was incredibly high, as we were moving into uncharted territory: the developers were mainly PHP devs who had decided to write the backend in Ruby; the automation testers were mainly Java experts who transitioned to Cucumber and BDD; and none of us had any experience developing for mobile, working with maps, storing data in a NoSQL database or hosting on a cloud infrastructure.

So… dear reader, buckle up, because it’s going to be a real roller-coaster!

Re-shaping the problem

A couple of business owners and a project manager approached us with a detailed, documented plan to extend an existing business application. The idea was to provide our door-to-door sales agents with a tablet, and to build them a digital, live version of the printed map and form which they were usually assigned.

The first thing we did was to heavily transform the project structure. Piece by piece we reshaped the waterfall plan into a more agile, iterative one. We convinced them to start with a prototype and evaluate the results. Mindful of our inherited mistakes, we put user satisfaction at the top. We challenged their business assumptions about a top-down plan.

Before even starting a proper discussion, we did something that shook them a bit: a couple of developers and our business analyst actually spent a day working with these door-to-door sales agents, knocking on people’s doors, trying to sell our products.

It was a brilliant move. No one likes door-to-door salespeople. But that single act allowed us to empathise with them and their hard job. It gave us a totally new and fresh perspective on their real problems, and a concrete meaning behind what we were about to develop.

We used this new information to challenge some work items, introduce new ideas, and re-prioritise the backlog.

We made the project our own. We developed a sense of ownership.


Everything was quite exciting and challenging, but only one part of the team was involved in these early explorations. When we started to make decisions that would affect the course of the project I saw the risk of not involving the rest of the team, but my colleagues were more eager to make progress quickly.

As mentioned in the previous post, as a Tech Lead I tried to have a very light touch and to rarely interfere in technical decisions. As we were figuring out which JavaScript framework to adopt, I felt both the pressure from our non-technical teammates to decide quickly and the danger of laying the wrong foundation for our codebase. I ended up pushing for my preferred option much more than I’d wanted.

I later tried to explain to the rest of the developers the reasons for my choice, and that I would have much preferred a collective approach. I reassured them that the project manager had promised we would be given the opportunity to change frameworks and re-write our front-end if the prototype and the user trial proved we could have chosen better.

Nonetheless, over the course of the project that decision proved to be a real source of trouble. From a purely technical perspective the framework was well architected, flexible, and especially good at compensating for some of our own weaknesses (such as visual design). But the learning curve was steep, and working with it was far from exciting. It required much more investment and motivation.

If there is one thing I learned from it, it’s that you own the decisions you take; and being the only one involved in that decision, I became the only one with a sense of ownership over that piece of code.

I hope you, dear readers, noticed the irony in my mistake: I ended up relying on purely technical specs more than on collaboration and commitment.

I’ve asked myself for a long time what I would do differently and better if I had to face a similar critical situation. Once put in the right context, the answer is simple, and I hope by now you have already figured it out: I just wouldn’t settle for an individual decision.

It really doesn’t matter which framework you adopt if your whole team is not committed to it. And you can only commit to a decision if you have been involved in it and have been allowed to voice your opinion.


Once the whole team started working on the new project, the joy of having a clean slate and the curiosity for exploring new possibilities took over.

Being part of a big enterprise company, we had inherited a slow and verbose release process. We had already worked hard over the previous 6 months, on our original project, to speed it up by automating manual steps and by removing technical obstacles to more frequent deployments.

With this new project we decided to go much further, adopting a Continuous Deployment approach and a Continuous Delivery strategy. Which essentially means we invested hugely in, and relied on, automated tests and automated deployments.

Our deployment pipeline became a masterpiece of craftsmanship, a source of new experiments and learning, the backbone of our development process, but also a showcase of all our efforts and a huge source of pride.

We supported our growing application with different kinds of test suites: Cucumber/WebDriver feature tests, RSpec for our own REST API, Jasmine unit tests for JavaScript, integration tests, and contract tests for external APIs.

All of these test suites were born organically out of real needs, so that we were efficiently testing only what was really required. For instance, we introduced a contract suite once we realised the external API we were using wasn’t reliable. We introduced JavaScript unit testing once the complexity of our asynchronous code became confusing.

Above all of them, our Cucumber/BDD suite had a special place and importance. It really was a point of intersection between the needs of developers and testers, between business requirements and technical features.

The test suites were distributed along multiple steps of our deployment pipeline, and executed on different automated environments, in a beautifully designed process that involved careful integration between different systems and technologies.

The Incredible Machine

A bit like one of those contraptions out of The Incredible Machine, everything started from a spark in our Git repo, moved through a set of jobs on our Jenkins CI server, crossed our farm of 20 virtual slaves running on VirtualBox/Vagrant and provisioned using Puppet, was deployed to freshly spawned environments using CloudFoundry, smoke tested again via WebDriver, and was finally ready for production.

I have a clear picture in my mind, a snapshot of this happy period. I’m leaving the office at the end of the day, and I notice that half of the team is still sitting at their desks. We have no deadlines, and no pressure to deliver yet. Still we sometimes work more than before just because we like what we are doing, staying late maybe just to finish that cool thing we were playing with. And I realise how far we’ve come in the last 8 months, when, on the contrary, we had to work overtime on weekends to meet an arbitrary deadline no one really cared about.


I left the team for one month, temporarily joining another team that needed some help with their delivery.

They say: hire great people and get out of their way. And I’ve always believed that the best thing a leader can do is make oneself redundant.

But when I returned, in time for our regular team retrospective, things weren’t going that well.

Even though no one was using our app yet, more and more random bugs were being caught by our testers, and we couldn’t identify the root cause.

The general mood during the retrospective was quite low, and people felt a lot of uncertainty.

With all this disruptive innovation going on, I found myself in the unexpected position of having to contain it a bit, or at least allow it to redistribute organically across the whole team, trying to make everyone feel comfortable with the fast pace.

Maybe because of the lack of my reassurance, or maybe just as a necessary phase, doubts crept in.

As one of the developers emphatically expressed it at the end of the retrospective: “essentially, at this stage we have no confidence at all that this thing is going to work”.

What if the project turned out to be a failure?
Had we been too reckless?

You’ll find out in the next episode.


Happiness and other technical requirements – Change

This post is the second part of a software development story. If you want to catch-up, here is the first part. Please make sure you read the disclaimer first.

Six months after the team hit rock bottom, we are all now standing in a meeting room, having a project retrospective.

These kinds of meetings are different from the usual, frequent Agile retrospectives, because the feedback collected covers a whole project (or part of a project), spanning months of work.

The setting is relaxed, collaborative and a bit of fun. The walls are covered in paper, tracing a timeline – from left to right – of the latest months. We scribbled and attached post-it notes where we believe happy, sad or simply meaningful events happened.

Each of us was also given a number of colourful dots to stick along the same timeline. We could position the dots to mark our morale: the y-axis represents happiness, the x-axis represents time.

We are essentially surrounded by a visual representation of what the team experienced and how we experienced it.

And between the dots a pattern emerges: on the left side the marks are very low and scattered, but moving to the right they tend to converge, get closer, and – month after month – consistently rise.

For me, it’s just an amazing proof of how things changed and improved.

Six months earlier

I wish I could tell you that my first speech to the team was the pivotal event that started the change. I wish I could tell you that my carefully prepared bullet points – focused around the idea of working and collaborating together, each of us towards a common goal – were actually a success. But who am I kidding?

There’s nothing more vain than yet more words and promises for a team that has lost trust.

After we reached our next deadline, our Scrum Master finally managed to negotiate a more relaxed goal, giving us some of the much longed-for time for refactoring.

That opportunity was also a first great trial of collaboration, and for me a first challenge to my coordination and facilitation skills.

And – I guess you’ve already figured it out – we did it. We released the next milestone, together with the refactoring, right on time. And we did it together.

After a month and a half we finally got our first concrete result, proving that things were getting better.


I made communication and team morale my first priority, and by doing that I had to sacrifice other things. I sacrificed, for instance, strong technical leadership. Even if at that time I was the developer with the most knowledge of the project, I realised that listening to the other developers’ ideas and opinions, accepting them, and encouraging them, was far more important. I only stepped in when I thought really serious mistakes were about to be made, and I let the other ideas and opinions be expressed and applied.

In doing so, a lot of the code changes and secret refactorings that used to happen under the radar came to the surface, and became part of the usual, healthy debate. I had to give them trust first, before they could trust me, and trust each other.

At the same time, we started to get curious about our department (about 100 developers).

And we started to meet with other teams, learning about what they were doing, being inspired and encouraged by some very cool guys.

I know a lot of leaders and managers perceive their role as that of gatekeepers of external communication. But, as advised by a good mentor, I took the opposite approach: I tried to give them the freedom and the confidence to go and talk to anyone they wanted.

By opening up to other teams, new interesting ideas flourished, both about team dynamics and technical tools.

Testers and developers

As if part of some sort of virtuous cycle, opening external communications brought great ideas on how to improve our internal communication, the best of which was without doubt BDD (Behaviour Driven Development).

Under the disguise of a new technical tool, BDD dramatically changed the relationship between developers and testers.

We changed the ownership of our acceptance test suite, sharing it between developers and testers, and rewriting it with a BDD framework (JBehave first, then Cucumber – after we fell in love with Ruby).

That wasn’t an easy or harmless step for the testers at all: they had to give up some of that ownership, and open up to sharing it with those developers they were used to seeing as “opponents”.

I hope my friendship and reassurance helped them through the transition, but the biggest credit for succeeding goes to their kindness and openness.

We also moved as much as possible of the testing phase to the beginning, to avoid creating a mini-waterfall process.

This is a diagram of how our development workflow was re-designed:

We also learned not to take this process too seriously. So this is actually the latest revised process, based on a real scenario:


User-centric development

I saw in the BDD process an extra, great opportunity. By starting development with the definition of those user-centric acceptance tests, we shifted our focus from code correctness to feature correctness – from crafting the right code to delivering the right user experience.

Which meant: fewer bugs for testers to catch, more interesting and rich technical discussions, but also a narrowing of the gap between developers and the business and, unexpectedly, between the business and the end users of the application.

Ruby Tuesday

Good developers like to learn new things, and learning new things makes good developers.

So I had an idea: organise a recurring learning lunch (a.k.a. brown bag lunch) to learn the Ruby programming language. One of the amazing guys we had met outside of our team was a really experienced developer – I’d say a Ruby evangelist. I invited him over. I also invited the testers, as Ruby is the programming language in which a lot of the best automated testing tools are written (e.g. Cucumber).

In a single shot, our weekly “Ruby Tuesday” became an amazing event to inspire developers, let them collaborate closely with testers on ground unfamiliar to both (as developers were mostly coming from PHP and testers from Java), and learn some technical tools that would help in our transition to BDD.

I obtained support from management too, both in terms of encouragement and – even better! – free food for the lunches.

The series of lunches was a huge success. In the end, after 5 months, it died from implosion: more and more people joined the club, to the point where finding a meeting room big enough became a quest. A horde of newly hired graduates happily invaded our meetings, forcing us to start again from the basics we had already covered. Also, our Ruby mentor left the company for new adventures. And – finally – my team didn’t need it anymore, as we now had a real project to test our Ruby skills on… but this is part of the next post, so you’ll have to wait for that… ; )

An awesome team

More than anything, we became a close-knit team. Our brilliant Scrum Master organised (and encouraged us to organise) nights out. We started to know each other better.

Coffee after daily stand-ups became a ritual (mostly stealing it from another floor: for some reason they had the best coffee machines of the whole building). Not to mention the lunches at the canteen – especially the Chicken Caesar Salads on Monday – a pillar of our team cohesion.

As icing on the cake, we won the first prize for our Hack Day project – a yearly competition, open to the whole business, which saw about 10 teams from all departments hacking, prototyping, and pitching new ideas. And to be perfectly honest, my contributions to that effort were mainly two: subscribing the team to the competition, and accidentally messing up our Git history 30 minutes before the jury’s inspection. Kudos must go to that amazing team, and in particular our young and bright Business Analyst.

A new challenge

Now, going back to our main topic, you may wonder how all of these changes affected the technical side of the project.

We totally rewrote our huge acceptance test suite, finally bringing it down from 9 hours to one acceptable hour (in the end we managed to reach a 15-minute build time).

Our obsession with Continuous Delivery practices led us to streamline our deployment pipeline, cutting manual steps and verbose checklists. Our average release cycle went from months to weeks, and then days.

No technical debt was added to the project (well, at least as far as I can remember!), and we kept refactoring and improving our code relentlessly (sounds brave). We never released bugs in production more serious than the ones we had released before.

But from the moment we started the aforementioned refactoring, our codebase started to look more and more like legacy code to us. We realised we now had the capacity to successfully design new patterns and systems, craft great code, and deliver new features fast and safely – but all we had on our hands was a project in maintenance and BAU mode.

With impressive political skills our Scrum Master managed to secure for our team a new project (and new funding), loosely related to our business area.

And this is where I placed my biggest and riskiest bet.

As a brand new Technical Lead this opportunity was my first chance to really lead the development of a project from start to finish, to prove I was a good fit for the new role.

And I decided to bet it all on the enthusiasm and passion of our team.

So, this is how a team of PHP developers and Java testers, used to dealing with e-commerce web apps (and with no mobile experience), undertook the development of an iPad app, written in JavaScript and HTML5, backed by a Ruby-based server, hosted on an internal Cloud infrastructure so new that no-one else had tried it yet, using a Cassandra NoSQL database we had never used before…

I don’t know about you, but if I were technically responsible for a project like that, I’d tear my hair out! ; )

The rest of the story, on the next episode… (suspense)

Happiness and other technical requirements – A dysfunctional team


This is the first of a series of posts about a software development story. It starts in troubles, but it has a happy ending, and hopefully some interesting insights along the way.

The journey is about my personal discovery of how relationships, team dynamics and morale affect the success of a project and – more surprisingly – how these apparently unrelated aspects affect the technical quality of our products.


I’m tempted to claim that I know what happened and why it happened. I’m tempted to claim that I can pinpoint the reasons why things were problematic, and why they dramatically improved.

But it wouldn’t be completely honest. The truth is, I’ve learned that in the evolution of a complex system like a software project – and even more so a dynamic, self-organising team – so many different variables are involved and interconnected that no single explanation can do it justice.

We naturally tend to remember and highlight things according to our personal biases. Someone could look at the same project and only be aware of the technical issues. Someone else may stress that even if we were following Agile practices, we were “doing it wrong”.

You can look at it from the point of view of the managers, the developers, the business stakeholders or the users, and you’ll see different things.

What you are about to read is from a particular perspective. Please remember: it is not the only one or necessarily “the one”.

But it’s a bit unique, unusual, and so, hopefully, interesting.

Let’s start

Imagine a software development team, focused on building a low-traffic e-commerce application. The team is following a pretty standard Scrum process, with two-week sprints, regular retrospectives and planning games, a dedicated Scrum Master, a Tech Lead, a Business Analyst and a Product Owner.

Developers are quite keen on following well-known best practices, such as Pair Programming, Continuous Integration, and a proper Test (First) Driven Development (all production code is written after a test).

The architecture of the application is solid: a cluster of load-balanced web servers and app servers. Corporate backend services, hidden behind a uniform façade, handle all the complex business logic and data management requirements.

So, here is the question: how successful do you think this team and its project could be?

Or in other words, how much can the technical and process-oriented aspects I’ve described tell us about its success?

As a matter of fact, not that much. One year after I joined the team as a developer – and one year after the project started – the situation looked quite grim.

Even if real users were already using the app as part of a trial, and we were gradually ramping up to a full release, we were significantly late on the plan. The project turned out to be more expensive than estimated, and the development pace slower than expected. The team doubled in size to cope with the workload, but that wasn’t enough.

Tech debt was accumulating in every direction: the core of the application was a bundle of over-complicated design patterns, the acceptance test suite had become extremely slow and in need of some TLC, and the conceptual domain was fragmented and inconsistent. And all of this under growing pressure to deliver and meet the now twice-postponed deadline.

I had been doing overtime on the weekends for about three months when a couple of contractor developers were fired.

…and this is, dear reader, when our story hits rock bottom (yes, don’t worry, it’s not going to get worse than that!).

So, what happened?

Let’s rewind, go back to the beginning, and explore some of the deeper issues.

During the first months of the project, everything was quite fun. The project was brand new, the code base was shiny and clean. We were a small team, gradually ramping up to a group of 3 developers + 1 tech lead, 2 testers, a business analyst and a scrum master.

With the increase in size, communication problems became more evident. Technical meetings were especially painful. As is often the case, we were a bunch of opinionated developers who loved to argue about design patterns, technical decisions and sometimes technical processes. But we were not able to come to clear conclusions, take proper actions and move on.

Some started to become disillusioned about our capacity to listen, learn and adapt – as advertised by the Agile-oriented management.

Some meetings were so bad I wished I could be somewhere else. Unsolved and unmanaged conflicts between ideas became personal conflicts.

That’s when I started to realise something quite amazing. Despite having a robust TDD approach, despite following Pair Programming and having static code analysis tools running as part of our CI system, the most troublesome areas in our codebase were not the ones with the highest cyclomatic complexity or the most nested logic; they were not the ones with the lowest unit test coverage, and not even the ones written without Pair Programming.

The biggest technical issues were in those areas of the code entangled with unmanaged personal conflicts.

Relationships between developers and testers were problematic too. Automation testers were writing and maintaining a suite of acceptance tests. It was their responsibility to ensure that the final product matched the product owner’s expectations, while developers were just focused on writing great code.

The conflict and the gap were somehow intentional – consequences of the belief that those two concerns (developing and testing) were better off specialised and antagonised.

The result was a huge amount of waste. Developers were writing a truckload of micro unit tests to be sure that “this time” their code was rock solid, only to watch it being rejected by testers with tons of bugs.

Testers, on the other hand, being completely unaware of how the software was developed, were writing loads of ineffective and repetitive tests.

It became normal for our acceptance test suite, in its fast mode, to take more than 9 hours, and never to pass 100% (yep, you can imagine…).

But the most serious communication problems were between developers and management. As the already postponed deadlines were approaching, managers started to worry about the delivery and to ask for more productivity. But developers were already in trouble because of all the accumulated tech debt, and were – on the contrary – asking for more time to refactor.

Absence of mutual trust became the main dysfunction of our team.


So, at this stage, who would you like to blame? Who would I like to?

Well… no-one really.

As you read the story you may naturally start to connect issues with responsibilities, and eventually start to blame and to imagine culprits.

But I’d like to keep all of that out of this story.

I’ve never found blame useful to understand or to improve.

I prefer looking at the issues on a system-level.

Let’s now go back to our finale.


When the day of the full launch was on the horizon, the clash came to a climax.

On the surface it appeared as if the whole team (quite big at that time: about 7 developers and 4 testers) was focused on delivering the remaining features. But on the inside, developers were totally doing their own thing. After months of unkept promises from management about getting some time to improve the code base, they lost their faith and trust, and started to unionise on their own, sneaking in refactoring tasks whenever possible and dismissing the delivery pressure from the business as a fake concern.

The final straw ended up being a missed deadline, due to an unplanned piece of refactoring that, days before the date, turned into an epic two-week-long task.

This is where we hit rock bottom, and a couple of developers (contractors) were fired on a Friday afternoon.

But this is also where the story becomes a bit more personal, because then our Tech Lead moved to another team, and being the only permanent developer left on the team, I stepped into that role.

And here I was, for the first time with that kind of responsibility, in the middle of a highly dysfunctional and now wounded team.

To know what happened next, you’ll have to wait for the next episode….


; )

Markdown CV: why and how I rewrote my CV using Markdown

I always found updating my CV a painful, but necessary process.

I’ve decided to rewrite it in a new way because I needed to:

  • make the update process faster (going back to freelancing means I have shorter gigs)
  • focus more on the content and leave aesthetic concerns in the background (I used to write my CV with Adobe Illustrator…)
  • manage multiple versions (I needed a version focused on my developer experience, and another more focused on team leadership skills)

There is a common saying in the Agile community: if it hurts, do it more often. That’s the spirit behind, for instance, having more frequent releases. By doing something more often, you are pushed to make it easier.

So I started to look for ways that would allow me to update my CV often, and I came up with a solution based on Markdown.

The solution

Markdown CV is basically just a script and a template I use to generate nicely formatted PDFs out of Markdown syntax.

The process is fairly simple.

  • one file written in Markdown contains all the content
  • the file is converted to HTML using the github-markdown Ruby gem
  • a basic string replacement adds a tiny extension to the language (read further down)
  • the HTML content is injected inside an HTML/CSS template and saved in the build output folder
  • wkhtmltopdf is used to convert the HTML file to a PDF file
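To make the flow concrete, here is the pipeline sketched as a single pure function. The actual conversion steps (the github-markdown gem and wkhtmltopdf) are injected as callables so only the ordering is captured, and the `{{content}}` placeholder name is illustrative – the real template syntax may differ.

```ruby
# Sketch of the Markdown CV build pipeline; converters are passed in
# as lambdas, so this only shows the order of the steps.
def build_cv(markdown, to_html:, expand_blocks:, template:)
  html = to_html.call(markdown)      # 1. Markdown -> HTML (github-markdown gem)
  html = expand_blocks.call(html)    # 2. apply the tiny language extension
  template.sub("{{content}}", html)  # 3. inject into the HTML/CSS template
  # (the resulting HTML file is then handed to wkhtmltopdf for the PDF)
end
```

With identity lambdas plugged in you can verify the wiring without touching any external tool.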

Augmented Markdown

Why Markdown? For two reasons:

  • Markdown helps me focus on the content, clearly separating it from the design
  • being plain text, versioning and branching (using Git) are easy and effective

However, Markdown also has a very limited set of features.
While converting my old CV to Markdown I realised I needed something more.
Using standard Markdown I couldn’t find a way to escape the boring single-column layout.

So while trying to remain true to the original spirit, I’ve added a new syntax that allows me to identify blocks of content.

It’s as simple as this:

## Here is a title

[aside]
## Here is another title


This is going to be rendered as:

<h2>Here is a title</h2>

<div class="aside">
    <h2>Here is another title</h2>
</div>

Using a unique identifier in square brackets at the beginning of a line I can now identify a block of content, which will be rendered as a <div> tag with a class name matching the identifier.

I can now style that block as I need. In the example the block is floated to the right of the page.

Note how ensuring that these identifiers are semantic and not stylistic is now up to the writer.
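As an illustration, the replacement step can be sketched in a few lines of Ruby. This is a minimal re-implementation, not the actual script: it assumes a block opens at a line like `[aside]` and closes at the next identifier or at the end of the input.

```ruby
# Turn lines like "[aside]" into opening <div class="aside"> tags,
# closing any previously opened block div.
def expand_blocks(html)
  out = []
  open_block = false
  html.each_line do |line|
    if (m = line.match(/\A\[(\w+)\]\s*\z/))
      out << "</div>" if open_block          # close the previous block, if any
      out << %(<div class="#{m[1]}">)        # open a div named after the identifier
      open_block = true
    else
      out << line.chomp
    end
  end
  out << "</div>" if open_block              # close a block left open at EOF
  out.join("\n")
end
```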

PDF Pages and Web Pages

One interesting challenge I had to face was ensuring that a free-flow markup language such as Markdown could be nicely rendered in a paged medium.

CSS2 has a number of useful features that allow some tuning of how content should be split between pages.

Unfortunately I couldn’t make the @page CSS directive work with wkhtmltopdf, but I could still rely on the page-break-* CSS properties to limit or enforce page breaking.

After some failed experiments I settled for a semi-automatic solution. This single CSS property is the key:

p, li, .block, .aside {
    page-break-inside: avoid;
}

First of all, this forces wkhtmltopdf to never break a page in the middle of a <p> or <li> tag.

But it also applies the same rule to a custom “block” class. This class is part of a div generated using the extended Markdown syntax.


[block]
# This title...
## ...and this subtitle...
...and this paragraph, are always going to be rendered on the same page, as part of a consistent block of content.


This trick is a compromise. Using a bit of extra semantics, we still ensure that the Markdown doesn’t have to programmatically define page breaks, but at the same time we guide the HTML/CSS engine into rendering blocks of content together.


Feel free to fork the Github repo and experiment with it.
Check the README.md file for more details.

Image similarity in JavaScript

I wrote a little piece of JavaScript to calculate if two images are similar.

The algorithm is fairly simple, but it seems to do a decent job.
It correctly matches images with different scaling, different saturation, some blurring, and different compression rates.

More details about the library can be found on its Github page:

Here you can find an in-depth discussion of the algorithm:

Web App Architecture Checklist

Starting a new project – as a Tech Lead – I wish I had a checklist that reminded me of all the important aspects to consider.

This is that checklist.

I’ve recently started reading Code Complete 2nd Edition, and I found the chapter about Architecture Prerequisites interesting, but out of date with current development trends. So I thought about taking inspiration from it, adapting some of the advice to more Agile practices and narrowing it down to web development.

This list is not necessarily complete, nor mandatory, but it’s intended to be used as a memo.

Some of these issues naturally need to be addressed at the beginning, but most of them could be specified during the development cycle, depending on the delivery strategy.

Re-use VS Build

First and foremost: have you considered re-using pre-existing solutions (e.g. an open-source framework, an existing library, a service already available)? If you decide to build instead, justify why.


Modern architecture tends to favor small, independent web services instead of monolithic applications. Have you defined the components into which your app could be split?

If you have the flexibility to do so, this doesn’t need to be defined at the beginning of the project. I’ve noticed for instance that using a cloud infrastructure allows a more flexible, evolutionary architecture. If your hosting is less dynamic it’s important you figure it out upfront, or the cost of change is going to be too high.

Data organization

What databases are going to be used, and why? Particular care should be put in identifying relationships between data. This could be done by analysing important use cases generated from the core business requirements.


No application is an island nowadays. Clarify the points of integration with all other applications, internal (business core apps) or external (social networks, payment services, web APIs etc.). What protocols should be used? How will the integration be assessed?

I like the architectural principle of building “hard shells and soft cores”. The shell is the interface that the application exposes; the core is the actual code. Focus should be put on specifying these interfaces and how they communicate.

Testing strategy

I believe thinking about the testing and stubbing strategy is really important at the beginning of a project. In the same way as TDD/BDD practices inform the design of the code, defining how the whole system will be tested generates useful insights into the architecture itself.

So, try to identify what kind of automated test suites will be necessary, and how and where the points of integration will be stubbed.

For instance: test the JavaScript front-end in isolation using Mocha, test the Ruby back-end with RSpec, and write functional BDD acceptance tests using Selenium to exercise the whole app – except for the unstable dependencies. Etc.

Environments and path-to-production

How many environments are you going to need? How many can you budget for? Think about: dev and QA environments, an integration environment, a production-like environment for load testing, offline/online environment separation, disaster recovery and data-center replication.

Error handling

I’ve never done this upfront, but after reading about it in Code Complete 2nd Edition I realised how important it is, and how much headache it can save. Basically, try to define how errors should be handled. See if you can define a common pattern for generic error scenarios. How will these errors manifest on the user interface? How will they propagate through your code? Do your back-end and front-end frameworks already have an error-catching strategy?

On a similar topic: define a logging strategy. How much will your application need to log? Where will the logs be stored? I personally agree with what Hugh Williams (eBay) said during a panel about scalability at the 2012 PHP UK Conference: log everything you can.

Content management

How will the client manage the content? Do they need the capability to update content without doing a deployment? If so, do they need a user-friendly interface and a proper CMS solution? Also, what about internationalisation? Have you thought about how this will affect your testing strategy?


Have the security requirements been specified? Has a solution been proposed? How will these requirements be tested? What about data protection and PCI requirements?

Availability and performance

Have you discussed the availability requirements with the business? Are they just random numbers or do they have a business meaning? How are you planning to do load testing and performance testing? Also, how are you going to monitor availability? Start defining hooks for your monitoring tools and availability metrics.

Robustness and overengineering

What level of robustness does your application need? Is it going to be medical surgery software where a minimal error may be fatal, or a content-based website? Is this a core business application or a restricted prototype? I found defining the robustness upfront really useful, because a lot of great developers tend to build in the best robustness possible out of pride and professionalism, but sometimes it may not be cost-effective.


Can you identify what areas are more likely to change? If so, can you define a strategy to make change easier? For instance, if a specific data section is likely to change, using an unstructured, document-based solution like MongoDB could make it easier.

Can you identify other “strategies to delay commitment”? What about using configuration files and features flags?

Debugging Node.js

I’ve never found the time to investigate better ways to debug Node.js than using console.log. So I’ve decided to do some googling and testing, and write down the results here as a memo.


Just one little step better than using console.log, the util.inspect function generates a human-readable representation of the parameter specified. It’s like Ruby’s Object.inspect, but with some extra features.

I like using it like this:

// inspect the Boolean function constructor
// options: show hidden fields, 1 depth of recursion, use ANSI colors
var util = require('util');
util.puts(util.inspect(Boolean, true, 1, true));

In my terminal, this is the output:

A proper debugger

OK, let’s move on to something more serious. Node.js has an easy-to-use built-in debugger which hooks into the V8 JavaScript engine debugger. The best deal is using it via a package called node-inspector. It’s like a chain:

node-inspector (visual UI) -> Node.js debugger -> V8 debugger

This combination is really powerful. node-inspector uses the WebKit debugger as a UI. Here is a screenshot of me, inspecting an Express.js response object:

It’s also very easy to use. Just run node with the --debug param, start node-inspector, and open a browser.

node-inspector can also be hooked to an existing process and can be easily used to do remote-debugging.
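In practice, the basic workflow looks something like this (assuming your entry point is app.js; the URL is node-inspector’s default, built from its own port 8080 and the debugger’s port 5858):

```
# 1. start the app with the V8 debugger enabled
node --debug app.js

# 2. in another terminal, start the node-inspector UI
node-inspector

# 3. open the URL that node-inspector prints – by default
#    http://127.0.0.1:8080/debug?port=5858 – in a WebKit-based browser
```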


Hello world!

Hi everybody. So this is my first post. After more than 3 years I’m finally back writing a blog. My previous blog is still online. If you are curious, here it is: http://www.maverick.it/en/

Why WordPress.com? I admit it was a tough decision. My first plan was to build a custom blog engine using Node.JS, hosted on Heroku. Then I realised I would have to duplicate a lot of things I could get for free. I also like the fact that using a ready-made solution forces me to focus on the content and less on the technical details.

My plan is to use this blog to publish my technical discoveries and my thoughts about web development and technical team leadership.

So here we are. Let’s begin.