Case Study Retrospective

Published July 30th, 2010

As I sit in the back of a car driving up the gorgeous coast of California, I am consumed by the case study I recently posted about. What a roller coaster this project has been. Even though the project was planned well, thoroughly thought out, and executed exactly as the designs detailed (and on time, no less), it wasn't enough to avoid a large backlash from the user base. Between the choice of colors, the new functionality we added, and even the way the navigation text was capitalized, pleasing everyone just wasn't possible.

To find the middle ground between our vision of the website and the users' vision, we engaged with the community to collect their concerns and attempt to address as many as we could. We constantly checked newsletters, forums, emails, and our blog for anything pertaining to the design, and our goal was to answer candidly and transparently so we could effectively communicate the intentions behind the changes.

During this ongoing discussion with our users, we were on a mission to dig through the data and quantify the engagement metrics for the legacy version of the website compared to our new design. These metrics were crucial for determining whether our design changes were successful.

Our back-end team had implemented a logging system that tracked every meaningful action a user made. When crunching the numbers for people who experienced the new design, we concentrated on new users rather than all users. Our thought process was that existing users were already accustomed to the previous functionality and were less likely to try the new features, so they would skew the new-feature data. Since we flipped a coin every time a new visitor came to the website to decide which design they received, we knew the comparison would be fair. The results were overwhelmingly in the new design's favor, which was a relief to all of us.
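
If it helps to picture the split, here's a minimal sketch of that kind of coin-flip assignment. The names here (assign_variant, the in-memory assignments dict, the visitor ID) are made up for illustration; our real assignment happened server-side inside the logging system.

```python
import random

def assign_variant(visitor_id, assignments):
    """Flip a coin once per new visitor and remember the result."""
    if visitor_id not in assignments:
        assignments[visitor_id] = random.choice(["legacy", "new_design"])
    return assignments[visitor_id]

# In reality this mapping would be persisted (a cookie or a database row),
# not held in a plain dict.
assignments = {}
variant = assign_variant("visitor-123", assignments)
print(variant)  # "legacy" or "new_design", each with probability 0.5
```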

Of the many things I learned throughout this case study, the most important was how to pull the data for the analysis in a way that made sense to everyone. For every metric I used, I supplied the methodology behind it and had to be able to explain every data point that methodology relied on. I also found that clearly defining the terminology (ex: cohort conversion rate = grouped by sign-up month, the number of people who complete at least one conversion since signing up) reduced confusion significantly.
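
To make that cohort definition concrete, here's a rough sketch of how such a rate could be computed. The `users` and `converted` structures are invented for the example and are not our actual schema.

```python
from collections import defaultdict
from datetime import date

def cohort_conversion_rate(users, converted_user_ids):
    """users: {user_id: signup_date}; converted_user_ids: users with >= 1 conversion."""
    cohorts = defaultdict(lambda: {"signed_up": 0, "converted": 0})
    for user_id, signup_date in users.items():
        cohort = signup_date.strftime("%Y-%m")  # group by sign-up month
        cohorts[cohort]["signed_up"] += 1
        if user_id in converted_user_ids:
            cohorts[cohort]["converted"] += 1
    return {
        cohort: counts["converted"] / counts["signed_up"]
        for cohort, counts in cohorts.items()
    }

users = {1: date(2010, 5, 3), 2: date(2010, 5, 17), 3: date(2010, 6, 2)}
converted = {1, 3}
print(cohort_conversion_rate(users, converted))
# {'2010-05': 0.5, '2010-06': 1.0}
```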

From my previous experience, I've learned that if there are holes in your methodology, the resulting metrics are dirty, and making decisions on dirty data is a scary notion. Also, if one of your metrics falls under scrutiny or faces a question you cannot answer, the accountability of the analysis is compromised and, as a result, its impact on the stakeholders is reduced.

For me, I made sure that everything in the activity log was what I thought it was by clicking through the website and checking the log afterward to ensure the click stream was recorded as I expected. I ran sanity queries against our database to make sure I understood the different types of data that could exist in each column. I had to know for certain that my logic and data were sound. Some things can be tricky to get right in your head (ex: cohort analysis), but I discovered that when I explained the logic out loud to someone else, it somehow clarified whatever was confusing me. At the end of the day, I was just trying to build as much confidence in my findings as possible. If you don't believe in your findings, no one else will either.
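
For anyone curious what I mean by sanity queries, this is the flavor of them, sketched against a hypothetical `activity_log` table in SQLite; our real database and schema were different.

```python
import sqlite3

# Build a tiny stand-in table so the queries below have something to run against.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE activity_log (user_id INTEGER, action TEXT, variant TEXT)")
conn.executemany(
    "INSERT INTO activity_log VALUES (?, ?, ?)",
    [(1, "page_view", "new_design"), (1, "signup", "new_design"), (2, "page_view", "legacy")],
)

# Sanity check: what distinct values does each column actually hold, and how often?
for column in ("action", "variant"):
    rows = conn.execute(
        f"SELECT {column}, COUNT(*) FROM activity_log GROUP BY {column}"
    ).fetchall()
    print(column, rows)
```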