Thursday, September 8, 2016

User centricity: the biggest challenge in digital analytics and marketing.


There is only one boss. The customer. And he can fire everybody in the company from the chairman on down, simply by spending his money somewhere else. Sam Walton.

The sentence could not be more true. It is simple and appeals to common sense above everything else. Coined by Sam Walton, founder of Walmart, it captures by itself a shift of paradigm in performance and operations optimization: from store-centric (or website-centric) to customer-centric or, in a more general perspective, user-centric, where a user is a customer or a potential one. The folks at the Wharton School of the University of Pennsylvania are leading research on customer centricity and customer analytics. Many applications come out of this kind of analytics: customer lifetime value, in-store analytics, etc.

One thing, however, is also clear: a customer, before becoming a customer, is just a user, flowing through the different stages of their lifecycle (attention, awareness, etc.). This distinction between customer and not-yet-a-customer is especially important when it comes to a website or a native app. In other words, how does this paradigm shift apply in a web-based environment? We have been observing the transformation from session-based to user-centric tracking and optimization.

Over the last months we have witnessed how digital analytics tools have been shifting their main reports to focus on the user. Tools like Google Analytics now report users before sessions, while some months ago it was the other way around. For instance, look at the order in which Google Analytics reports: first users, then sessions (for the App views).




Here we already face the first challenge: how to define a user. We will come back to this in future posts. Before that, I want to recall a situation we faced some time ago while doing consultancy for an e-commerce site in the European market. We were asked to understand the sessions' multi-category browsing behavior. Concretely, the website had a header looking like this:


The problem was to understand the share of sessions that browsed through only one category, through two categories, through three categories, etc., over different periods of time. We had a split looking like this:


Moreover, we also showed what the carts looked like in terms of cross-category items:


Management reacted with great worry, and immediately initiated actions to increase the share of sessions browsing through more than one category and to increase the number of distinct categories in each basket. If you agree with this course of action, we must say you are probably doing it wrong. Indeed, all attempts to increase cross-category sessions and carts were unsuccessful.

What could have been a better approach? A good piece of advice: put the user in the center and understand the intention of each of their sessions. You can't expect every user to step through all your content in every session. Instead, you can get the most out of each session by understanding its intention. If a user is visiting category-A-related content in a given session, then make sure they complete a purchase in that category. The important thing is to prevent the user from spending their money somewhere else.

Is this everything we can get out of the user? At a session level, probably. But what if we widen the time window? Instead of looking at which category the user browses in a single session, we can check all the sessions that user performed over a week, month, quarter, etc. In the case we have been considering, the charts looked significantly different:


As we can see, the share of users that only browse one category dropped from 60% to 40%, while the share of users that browse 2 categories increased from 20% to 30%. And here is where we can induce some change. By incentivizing users to reach other categories (again, across different sessions) we can improve their awareness of categories not browsed before. If the content is appealing enough, we stand a chance that the user will actually buy from them.
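For the curious, here is a minimal sketch (in Python, on invented data) of the computation behind these charts: the share of sessions, or of users over a wider window, by number of distinct categories browsed. Field names and values are hypothetical.

    from collections import Counter

    # (user_id, session_id, category) rows, e.g. exported from your tracking tool
    hits = [
        ("u1", "s1", "A"), ("u1", "s1", "A"), ("u1", "s2", "B"),
        ("u2", "s3", "A"), ("u2", "s3", "C"), ("u3", "s4", "B"),
    ]

    def share_by_distinct_categories(rows, key_index):
        """Share of sessions (key_index=1) or users (key_index=0) grouped by
        the number of distinct categories browsed."""
        categories = {}
        for row in rows:
            categories.setdefault(row[key_index], set()).add(row[2])
        counts = Counter(len(cats) for cats in categories.values())
        total = sum(counts.values())
        return {n: count / total for n, count in sorted(counts.items())}

    print("per session:", share_by_distinct_categories(hits, 1))
    # {1: 0.75, 2: 0.25} -> most sessions stay within a single category
    print("per user (whole window):", share_by_distinct_categories(hits, 0))
    # {1: 0.33..., 2: 0.66...} -> users do cross categories across sessions

The same function, run over a week, a month, or a quarter of data, produces the widened-window view discussed above.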

Another analysis that was interesting for this case showed the distinct categories bought by each customer over a period of time, considering all the orders placed. We had a situation looking like this:


which transformed into this:


after applying direct marketing actions whose goal was precisely that: to increase the user's share of wallet across different sessions and, probably, across different orders.

That is, be patient, and keep the user in your focus. Don't overwhelm them, and take advantage of each session... one at a time.

There is, however, a new variable that enters the equation of user-centricity, and it applies especially to digital environments: the device used to reach the site. Identifying the user as they browse across different devices is a big challenge. Universal Analytics enables us to identify at least an important subset of such users, and it can be applied to sites that somehow identify them via login, newsletters, etc. The reports look very promising:




This information is extremely useful for understanding the intention of a given user (or a set of users) when reaching the site with the different available devices. The intention is surely not the same when a user reaches the site with a smartphone in the morning as with a tablet in the afternoon. Placing the user in the center is, in the end, about understanding their intention in each session, at each seasonal moment (time of day, day of week, etc.), with each of the different devices. To make it a bit more complicated, some of these users might also be visiting your traditional store (in case you have one, of course). Many sites (mainly in the fashion industry) allow you to buy on the web or App and pick up in-store. Lots of research is currently ongoing to track in-store behavior. For instance, Estimote is making efforts towards this goal.
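As a hedged illustration of what becomes possible once such an identifier exists, here is a small Python sketch that flags cross-device users and summarizes device usage by daypart. The data and field names are invented.

    from collections import defaultdict

    sessions = [  # (user_id, device, hour_of_day)
        ("u1", "smartphone", 8), ("u1", "desktop", 13), ("u1", "tablet", 21),
        ("u2", "smartphone", 9), ("u2", "smartphone", 20),
    ]

    devices_per_user = defaultdict(set)
    for user, device, hour in sessions:
        devices_per_user[user].add(device)

    cross_device_users = [u for u, d in devices_per_user.items() if len(d) > 1]
    print("cross-device users:", cross_device_users)  # ['u1']

    # Device usage by daypart hints at intention (commute vs. couch browsing)
    by_daypart = defaultdict(int)
    for user, device, hour in sessions:
        daypart = "morning" if hour < 12 else "afternoon/evening"
        by_daypart[(device, daypart)] += 1
    print(dict(by_daypart))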

In upcoming posts we will talk about the paradigm shift in reporting strategy (not necessarily about tools, but about techniques and contents) centered on Customer Lifetime Value (CLV).

In any case, if you have more questions or issues with your omnichannel approach, don't hesitate to contact us here.

Wednesday, August 31, 2016

Google Analytics course for beginners in Barcelona


Never regard study as an obligation, but as an opportunity to enter the beautiful and wonderful world of knowledge. Albert Einstein.

One of the virtues of knowledge is that we like others to have it too. For us there is nothing more important than knowledge: either the kind obtained from data, or the kind that comes from passing on what we do best, which is handling data and tools.

With this purpose in mind, we are opening our first Google Analytics course for beginners (or Google Analytics 101). In reality, Google Analytics, being a tool, is an excuse to dive into the exciting world of digital analytics. Digital analytics is the science that measures the interactions of users with our site or App. The ultimate purpose of digital analytics is to provide a solid, data-based foundation for optimizing the user experience, so that the user performs the intended action (purchase, registration, download, etc.) as efficiently and effectively as possible. Digital analytics takes data from diverse sources, some of them qualitative and others quantitative, and takes users from any channel (a multi- or omnichannel approach), some online and others offline. As we can see, in the end we are talking about a 360-degree approach to users' behavior and interaction with our digital product.

As businesses, markets, and technology evolve, so do the requirements of the digital analyst. It is not hard to see that this profile is increasingly in demand (and, little by little, better paid). So, whether you already work in digital analytics or are landing in it for the first time, training in this field is a step we cannot skip.

Of course, when we talk about digital analytics we cannot help but think of Google Analytics. The undisputed leader among digital analytics tools for years, Google Analytics offers, through various degrees of implementation (always scalable), a great variety of reports and data from which we can start optimizing our digital products.


It is hard to imagine digital analytics without the dazzling presence of Google Analytics. This course is about mastering Google Analytics from its very foundations. From there, we can become ninjas not only of Google Analytics but of digital analytics, until we find the best tool for our needs. At this moment, Google Analytics offers three big advantages:

It has a fully functional free version (it also offers paid versions). No investment in the tool itself is needed to start enjoying an endless number of reports.


The basic implementation is tremendously simple (and from there, to infinity). Just by copying and pasting a snippet of JavaScript into all your templates, you can start collecting data immediately.


It carries the Google seal: market penetration, resources, documentation, training, etc. In the Google Analytics user network we can find countless resources, forums, add-ons, etc.


Simply put, we cannot imagine a better landing in digital analytics than one made through Google Analytics.

What will you find in this course?

The course is organized in four big blocks: tracking principles, user acquisition, user behavior, and goals.

  1. Getting started with Google Analytics:
    1. How does Google Analytics work?
    2. Metrics and dimensions.
    3. Which metrics matter to me?
    4. Introduction to segmentation.
  2. Acquisition: how do my visits arrive?
    1. Where do my visits and visitors come from?
    2. Direct traffic, referrals, SEO, and SEM.
    3. Tagging Email and Affiliate campaigns.
    4. Channel grouping.
  3. Behavior: what do they do on my site?
    1. Which pages and contents did they view?
    2. Where did they enter the site? Where did they leave it?
    3. What did they search for internally?
    4. Content grouping.
  4. Goals: did they do what we intended?
    1. Defining goals: micro and macro.
    2. Goal reports.
    3. E-commerce reports.

This first look at digital analytics through Google Analytics consists of two consecutive sessions of three hours each. The content is 100% hands-on, and we will use a very complete implementation with a large amount of real data. The price of the course is €120, and it will be held at our offices in Sant Cugat del Vallès.

Are you going to miss it? Are you going to pass up this great opportunity to enter the exciting world of digital analytics? Click here for more information. Places are limited.

The opportunity to enter digital analytics through the front door is right in front of you!

Friday, August 26, 2016

New course: Google Analytics for beginners.



The essence of training is to allow error without consequence. Orson Scott Card.

At Ducks|in|a|row we are excited. We are about to offer our very first course: Google Analytics 101. This course is intended to cover the first contact with the tool: how the interface works, which reports are the most common, how to read them, and, most importantly, why we measure.

In any case, we should not forget that Google Analytics is just a tool. It is not an end in itself, but an instrument that allows us to reach that end. The end itself is Digital Analytics. The definition I like the most is the following:

"Digital Analytics is the analysis of qualitative and quantitative data from your business and the competition to drive a continual improvement of the online experience that your customers and potential customers have which translates to your desired outcomes, bot online and offline."

Google Analytics mainly helps drive insights from quantitative data. It measures users' behavior as a relation between dimensions and metrics in a temporal context. Data for qualitative analysis must be gathered from other sources: panel groups, surveys, chat, customer care data, etc.
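To make the dimensions-and-metrics idea concrete, here is a minimal sketch that pulls such a report (users and sessions by device category, over a date range) through the Google Analytics Reporting API v4. The key file path and view ID are placeholders, not real values.

    from googleapiclient.discovery import build
    from oauth2client.service_account import ServiceAccountCredentials

    SCOPES = ['https://www.googleapis.com/auth/analytics.readonly']
    credentials = ServiceAccountCredentials.from_json_keyfile_name('key.json', SCOPES)
    analytics = build('analyticsreporting', 'v4', credentials=credentials)

    response = analytics.reports().batchGet(body={
        'reportRequests': [{
            'viewId': '12345678',  # placeholder: your GA view ID
            'dateRanges': [{'startDate': '30daysAgo', 'endDate': 'yesterday'}],  # temporal context
            'metrics': [{'expression': 'ga:users'}, {'expression': 'ga:sessions'}],
            'dimensions': [{'name': 'ga:deviceCategory'}],  # the dimension side of the relation
        }]
    }).execute()

    for row in response['reports'][0]['data']['rows']:
        print(row['dimensions'], row['metrics'][0]['values'])  # users, sessions per device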

The ultimate goal of both qualitative and quantitative analysis is to learn and to derive action: understand what works, what does not, and what is worth a try. However, too much data (and too little analysis) leads to data paralysis. Digital Analytics is about analyzing and organizing data so that actions can actually be derived and properly measured. We already talked about this in an older post.
When implementing the actions, we should not be afraid of failing. Be afraid only if you don't learn from those mistakes, and make sure you react fast. Failure is tolerated. Idleness is not.

One last thought before explaining the contents of the course: data can tell you whatever you want if you torture it enough. This means that any analysis coming out of Google Analytics data must still go through a QA process, as must the conclusions drawn from it. Whatever interpretation you extract can just as well be biased or even wrong. Again: implement, measure, and react fast. Very fast.

The course itself is an introduction to Digital Analytics; Google Analytics is a way of touching that data and those techniques. We will cover basic Digital Analytics terminology, such as KPI, metric, dimension, filter, and segment, and how they are obtained in Google Analytics. We will learn about traffic sources, content consumed, and goal measurement. However, the most important topic acts as an umbrella for all of these: why do we measure? What do we want to accomplish by tagging a site or an App?

Register quickly, as places are limited. You will find more information here.


Good measuring!

Monday, June 27, 2016

New partnership with Visual Website Optimizer: taking A/B testing and personalization to the next level.

Front-end process optimization must include A/B and multivariate testing. You should be doing A/B testing. Ducks|in|a|row makes it much easier for you now!


We are very proud to announce that we have closed a partnership with Visual Website Optimizer for Spanish-speaking regions (Spain and LATAM). This will allow Visual Website Optimizer to consolidate its presence in those regions by having experienced consultants offering services and support in the local language. At the same time, this will allow Ducks|in|a|row to consolidate its Conversion Rate Optimization, UX, and Digital Analytics services by offering them through one of the most widely used A/B testing tools in the world. Actually, it is one of the best tools for Conversion Rate Optimization.



What is Visual Website Optimizer?

It's an easy-to-use A/B testing tool that allows marketing professionals to create different versions of their websites and landing pages using a point-and-click editor (no HTML knowledge needed) and then see which version produces the maximum conversion rate or sales. Integrating the split-testing software is dead simple: copy-paste a code snippet into your website once and you are ready to go live.
Visual Website Optimizer is also a flexible multivariate testing tool (full factorial methodology) and has a number of additional features like behavioral targeting, heatmaps, usability testing, etc. With 100+ features in Visual Website Optimizer, you can be sure that all your conversion rate optimization activities are covered.

Very quick: what is an A/B test?

Imagine you have detected some issues on your website. For instance, a low product-to-cart rate: lots of users actually see your product descriptions, but very few add them to the cart. You can then start listing your hypotheses on why that is happening. A/B testing allows you to prove or disprove those hypotheses by showing several versions of the same content to different users. This way we can dismiss any seasonal effect that could distort the comparison if the changes were not tested like this. Concretely:



By applying statistical methods we can determine which version behaves better with respect to the established goals (purchase, add-to-cart, time on page, etc.).
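As an illustration of those statistical methods, here is a minimal sketch of one common approach, a two-proportion z-test comparing the conversion rates of variants A and B. Testing tools run equivalent math for you; the figures below are invented.

    from math import sqrt
    from statistics import NormalDist

    def ab_test(conv_a, visitors_a, conv_b, visitors_b):
        p_a = conv_a / visitors_a                 # conversion rate of variant A
        p_b = conv_b / visitors_b                 # conversion rate of variant B
        pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
        se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided test
        return p_a, p_b, z, p_value

    p_a, p_b, z, p_value = ab_test(180, 10000, 230, 10000)
    print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}  p-value = {p_value:.4f}")
    # A p-value below your significance threshold (commonly 0.05) suggests the
    # difference between variants is unlikely to be due to chance alone.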

The A/B testing cycle goes as follows:



Let's go step by step:

1. Opportunities: what to test? It could be a section of your website that is not working properly, or another section you want to leverage. It could be a single element of the page (a button), the entire page, or an entire process. This information can come from different sources: web tracking tools, heatmaps, surveys, etc., and it is fully linked to knowledge of the business.

2. Expected turnover: what do you expect to have in return?

3. Prioritization: with respect to the technical needs and the expected turnover, you must score each possible test.

4. Objectives and segments: who is going to take part in the test? All users? Only new users? Only users who spent more than 1 minute on your website? And what should be considered a success? Is it a purchase? Is it viewing more than 3 pages of the website?

5. Implementation: actually inserting the tags in the website and making sure each user sees only one version of the experiment (this is a basic requirement for the math to be correctly applied; see the sketch after this list).

6. Follow-up: was the test successful? Do we have a clear winner? Should we restate the terms of the test?
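One note on step 5: the requirement that each user sees only one version is usually met with deterministic assignment. Here is a hedged sketch of the idea, hashing a stable user identifier so that repeat visits land in the same bucket (your testing tool does this for you; the names below are ours):

    import hashlib

    VARIANTS = ["A", "B"]

    def assign_variant(user_id: str, experiment: str) -> str:
        """Stable assignment: the same user always gets the same variant."""
        digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
        return VARIANTS[int(digest, 16) % len(VARIANTS)]

    for uid in ("user-1", "user-2", "user-3"):
        print(uid, assign_variant(uid, "checkout-button-test"))
    # Repeated calls for the same user_id return the same variant, so the
    # statistics are not polluted by users who would otherwise see both versions.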

We will be giving more detailed information and tips for A/B testing in coming posts!

If you want to know more about the services of Ducks|in|a|row with respect to A/B testing and Conversion Rate Optimization, click here.

If you want to know more about Visual Website Optimizer, click here.

We hope this partnership will bring Visual Website Optimizer and Ducks|in|a|row great value: a great tool served by great consultants with very happy customers!

Monday, June 20, 2016

A truth behind the lead business: a comprehensive approach (or the onion approach).


We are obsessed with improving our conversion rates while keeping our traffic levels. But is this the right approach? I would say yes and no, but mostly no. Let's see why.

The lead business is a very complex one. And, as we said in our initial post, complexity matters!

Before jumping into a discussion about leads, let me recall a situation we faced some time ago when doing consultancy for an e-retailer (a very big one, by the way). They were in a wonderful situation: around 20 million monthly sessions, a session-to-order conversion rate of around 3%, a very high average order value, and pretty good margins. We managed to arrange a meeting and offered them the full artillery: UX enhancements, fine tuning of the tracking tool, continuous A/B testing, heatmaps, and so on. They told us: no. Surprised by this answer, we asked: why not? We were offering them a very good deal: a very low fixed fee and a high success-based variable (we did that because the site was horrible: we had already identified a couple of opportunities that could improve the conversion rate). They replied: we don't have enough logistics resources to handle the extra orders that would come! OK. Lesson learned.

Some years later we came across a similar case. The context: a traditional insurance company (life, car, and home) starting to develop a digital presence. They had a horrible website that was generating around 500 leads per week. As soon as we saw it we wanted to offer the same service we had offered years before: conversion rate optimization, UX enhancement, full deployment of Google Analytics, etc. That is, we used to see the situation as:



However, a side comment from one of our analysts changed the course. "Wait!" he said. "This might not be the right approach. Let's recall the e-retailer situation we faced years ago!" Indeed, let's take a picture of the current situation. Concretely, let's take the first step towards a comprehensive picture:

- 500 leads per week generated through the website.
- 2% lead-to-policy conversion rate (which takes place offline, via a phone call). This means 10 policies per week (simple math).
- To make 500 phone calls a week, they needed 2 full-time resources.
- The cost for a full-time resource was equivalent to the revenue generated by 4.5 policies. In other words, the "actual margin" is around one policy.

Now our landscape is broader, much broader!



With some basic enhancements to the website, we could increase leads by, let's say, 50% (the site was really ugly). With more simple math we see that we would then have 750 leads per week generated through the website. But now the key point is that the lead-to-policy ratio would not necessarily change! This means they would only increase the total number of policies to 15! We always say that we need to ask the right question. At this point the right question is: how many full-time resources do we need to handle 750 phone calls per week? A simple cross-multiplication shows we would need 3. This extra resource would cost us another 4.5 policies. In other words, the total cost of the 3 full-time resources would be 13.5 policies, so the actual margin would be equivalent to 1.5 policies! Wow! We did a lot of work to improve the website just to raise our margin by half a policy! In the end this means that, with the current set-up, the business does not look scalable. Or, in other words, more does not necessarily mean better. Even worse, what would happen if a sudden spike occurred (a seasonal spike, a spike due to a sudden reduction of our prices, a sudden increase in our competitors' prices, or a sudden increase in traffic due, for instance, to a heavy investment in marketing)? It's simple: the 3 full-time resources would not be able to handle all the phone calls on time. In the insurance universe, if you don't handle a call soon, you have a high chance of losing it.
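For the record, here is the arithmetic above worked out in a few lines of Python (all figures are the ones given in the post; the function name is ours):

    POLICIES_PER_FTE_COST = 4.5  # one full-time resource costs the revenue of 4.5 policies
    CALLS_PER_FTE = 250          # 2 FTEs handle 500 calls per week -> 250 calls each
    LEAD_TO_POLICY = 0.02        # 2% of called leads become policies

    def weekly_margin_in_policies(leads_per_week):
        policies = leads_per_week * LEAD_TO_POLICY
        ftes_needed = -(-leads_per_week // CALLS_PER_FTE)  # ceiling division
        cost = ftes_needed * POLICIES_PER_FTE_COST
        return policies, ftes_needed, policies - cost

    for leads in (500, 750):
        policies, ftes, margin = weekly_margin_in_policies(leads)
        print(f"{leads} leads -> {policies:.0f} policies, {ftes} FTEs, margin = {margin} policies")
    # 500 leads -> 10 policies, 2 FTEs, margin = 1.0 policies
    # 750 leads -> 15 policies, 3 FTEs, margin = 1.5 policies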

How do we solve this? Here we apply the "onion strategy". Imagine your process as an onion, where each sub-process is a layer of the onion. The basic idea behind this approach is that you should start optimizing processes from the end of the journey back to its beginning. This way, as the user flows, they always step into a process that we have already tried to optimize. In the case of the insurance company, the first thing we offered was to see whether we could improve the lead-to-policy rate. Here a new world appeared in front of our eyes: we taught them not to follow up all leads (we implemented very cool models to prioritize the incoming leads), we taught them to capture the data needed to understand why a lead did or did not convert into a policy, etc. In the end we were able to improve the lead-to-policy conversion rate up to 15%! Only then did we start improving the website to bring in more leads. And after that we optimized the traffic sources and the prices, and studied the competition, in order to keep under control the total amount of sessions reaching the website. We then got a much broader landscape for the situation:



Although this is a very simple representation of the full process, from intention to actual policy, the exercise behind it is very insightful. And it is not only about data but about business. The full exercise of depicting the concrete steps through which the user flows on the way to becoming a client can only come from deep knowledge of the business. Then, and only then, do data science and tools come in. As we always say, the key point is to formulate the right questions. Then you find the data to answer them.

As a summary, always keep in mind:
- More does not necessarily mean better.
- Broaden your landscape until you see all the steps that can be improved or optimized.
- Think about the consequences the optimization of one step has on the whole process.
- Follow the onion approach: improve the last steps of the whole process first, then improve backwards.


Good optimization!

Monday, June 13, 2016

Chicken-or-egg dilemma: tools or analysis, which comes first?

We need to install tool X, says the CEO. How much does it cost, asks the CFO. Why do we need it, asks the analyst. There is no best-tool-for-everything. Your own unique context should determine the right tools for your company.



This post can be thought of as a continuation of our previous post, "The broken pyramid, or the hungry hungry hippos game". On that occasion we discussed what happens when a company decides to stop its data-related processes at the reporting step and forgets about further analysis and predictive analytics. In this post we are going to discuss what happens when a tool is chosen without contemplating the scenarios for its usage. This is another major disaster. Let me recall four different stories I've faced over the last years.

The current tool is the worst you could ever have.
We were doing consultancy for an e-retailer in the European market. The company decided to open a new channel: wines. To that end, they hired an experienced wine buyer, also an expert in its logistics (pretty fascinating, by the way). The guy, as you can imagine, had no experience in the digital world and hardly used data for his strategy and daily operations (according to Avinash, such people should be immediately fired). It was a promising start! Right after he joined, I was explaining to him how to use our BI tool (QlikView). He did not seem impressed. When I asked him why, he answered: "SAP is the best tool for this kind of thing." Of course, we did not implement SAP. For the business complexity the company faced at that moment, QlikView was more than enough. Actually, it still is. He never used QlikView; he never used data. He was fired three months later.

The best-tool-according-to-the-CEO.
Another situation I can recall is when the CEO of another e-company came to me and said: I've been at a conference, and now I'm convinced this is the tool we need for A/B testing. I don't recall the name, but it was not one of the most widely used tools in the market. The tool was very complex to implement, the setup of every test was a nightmare, and we hardly managed to implement a single complex test successfully. And, on top of that, it was very expensive. Result: no more A/B testing because "it's a framework that brings no added value" (sic). Pretty disappointing.

We have a great tool but let's try another one.
Another situation that needs to be avoided is changing tools for no apparent reason. I remember a situation in which we were using Google Analytics for web tracking. The CIO came to us and said he had managed to get a free trial of another tool (Mixpanel). We were reluctant, because the tool we were already using (fully deployed and operational) was enough for our present and short- and mid-term needs. We were happy with it, users were empowered, lots of decisions were driven by it, and we did not understand why we should try another one. We implemented Mixpanel while working in parallel with Google Analytics. Of course, we could not use all of Mixpanel's features, and the final verdict, according to the CIO, was: "Mixpanel is a bad tool", which is totally inaccurate. The implementation of a tool requires time and focus. If you don't have them, don't start a new implementation.

I need this-and-that but I will use none.
Knowing and understanding the needs is also a very complex task: we were doing that job for another company and I recall a CMO saying he wanted a BI tool with real-time reporting capabilities. When I asked him why, he was not able to answer. He just wanted it, despite having no resources and no process defined for reacting to information arising from real-time data collection. Result: we implemented something (very expensive) that could pass as "real-time". It was hardly used. After tons of money spent and an extremely complex implementation, we ended up with a tool that had lots of features but was, by far, underexploited.

Based on these four (horrible) experiences, we can tell that the tool is the result of your needs, and it deeply depends on your own unique context. The opposite will hardly ever work. This, however, is a very complex process, and you need to proceed thoroughly in order to get the best possible tool for your case. Communication with the stakeholders is key: know their needs, know the current status, know the roadmap for the short and mid-term, etc. If you can't do this on your own, I strongly recommend hiring a consultant to do the job with an independent view.

To summarize, always take your context into account when deciding which tool to use:

- Talk to every potential user of the tool.
- Establish realistic needs.
- Stick to the tool unless a new set of realistic needs appears and your current tool can't fulfil them.

And, last but not least, remember the 90/10 rule: 10% of your budget for tools, 90% for people exploiting them.

Good hunting! There is always a right tool if you understand your context.

Monday, June 6, 2016

Beyond selling products: listings and GA's enhanced e-commerce

Sell more. More bookings. Leverage the best-sellers. Everywhere. Is this the right tactic? Probably there is something smarter you can do to sell better.



I'm one of those people still subscribed to many newsletters. Today I opened one of them, from an online retailer, entitled "Check our best-sellers". I clicked through and found, with not-so-big surprise, that the same list of products (five, and hardly more than five) is displayed everywhere. I can imagine the situation: some manager thought the best way to proceed was to list the best-selling products everywhere. What this achieves is not a best-sellers strategy but an only-seller strategy (or a few-sellers strategy).

Here we reach the key point of today's post: product lists are the new kid on the block to optimize. So far we used to optimize our pricing, communication, and merchandising strategy based on the sales performance of a given product. Maybe, in a second iteration, we would include traffic (sessions, users, or even pageviews or unique pageviews) and conversion rates. A traditional visualization for this is to relate the bookings and the pageviews generated by a product and mark four different areas (a quick classification sketch follows the list):

- Cash-cows (low pageviews, high bookings)
- Stars (high pageviews, high bookings)
- Normal (average pageviews, average bookings)
- Problematic (high pageviews, low bookings)
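A minimal sketch of this classification, with invented thresholds (real cut-offs would come from your own pageview and booking distributions, e.g. percentiles):

    HIGH_PAGEVIEWS, HIGH_BOOKINGS = 5000, 200  # hypothetical thresholds

    def classify(pageviews, bookings):
        high_pv = pageviews >= HIGH_PAGEVIEWS
        high_bk = bookings >= HIGH_BOOKINGS
        if high_bk and not high_pv:
            return "Cash-cow"     # low pageviews, high bookings
        if high_bk and high_pv:
            return "Star"         # high pageviews, high bookings
        if high_pv:
            return "Problematic"  # high pageviews, low bookings
        return "Normal"           # average pageviews, average bookings

    for product, pv, bk in [("A", 1200, 450), ("B", 9800, 35)]:
        print(product, classify(pv, bk))  # A -> Cash-cow, B -> Problematic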


Several variations can also be considered: revenue instead of bookings, pageview-to-unit conversion rate instead of pageviews, etc. But still, we tend to treat the product as a silo, without considering where it was seen or listed on the website, or even where it was added to the cart from. So let's take, for example, two products: one falling into the cash-cow category (few pageviews, high bookings) and another falling into the problematic category (high pageviews, low bookings). At this point we can formulate some questions about the pageviews generated:

- From which devices?
- From which traffic sources?

And, the new one:

- Where on our website or app was this product seen? Was it on the homepage? Was it in a search result? Are there differences in performance depending on where on our website we show the product? Does it make sense to promote those products in the same places throughout our website or app?

It might be the case that product A (the cash-cow) is listed on the homepage and, for example, in a category overview, while product B (the problematic one) is listed only on the homepage. In this case we should analyze the ratio of sales (or cart additions) to impressions, segmented by the location on the website or app where the product impression took place.
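Here is a hedged sketch of that ratio: cart additions per product impression, segmented by the list where the impression happened. The event log is invented for illustration.

    from collections import Counter

    events = [  # (product, list_name, event_type)
        ("A", "homepage", "impression"), ("A", "homepage", "add_to_cart"),
        ("A", "category_overview", "impression"),
        ("B", "homepage", "impression"), ("B", "homepage", "impression"),
        ("B", "homepage", "add_to_cart"),
    ]

    impressions = Counter((p, l) for p, l, e in events if e == "impression")
    carts = Counter((p, l) for p, l, e in events if e == "add_to_cart")

    for (product, list_name), n_impr in sorted(impressions.items()):
        rate = carts[(product, list_name)] / n_impr
        print(f"product {product} in {list_name}: {rate:.0%} cart-to-impression rate")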

This topic gains importance as the concept of a list rapidly spreads across businesses. The traditional concept of a category is being replaced by meta-categories, by personalization, by searches and refinements, etc. The concept of category pages and static product display is gone: lists are replacing them. Furthermore, lists are dynamic, the result of our own browsing experience and of personalization efforts.


Fortunately, tools are well aware of this, and this time we want to go through the basics of Google Analytics' Enhanced Ecommerce. You will find the full information about the reporting capabilities and technical instructions here. To give you a glance at the kind of reports you can find:

  • Goal funnel analysis: how users flow through your goal funnel, where they abandoned it, and how many re-entered it.
  • Checkout behavior analysis: how users flow through the different steps of your checkout process, how many abandon it, and at which steps. The interesting part is that you can later create segments of users based on their behavior in the checkout. For instance, those who reached step 1 but not step 2 (see the sketch below)!
  • Product performance: these are the reports that relate sales performance (bookings, transactions, quantity, etc.) with shopping behavior (product list views, product detail views, product adds to cart, product removals, and their corresponding rates).
This feature of Google Analytics allows us to better understand how a product performs in terms of purchase intention, by relating some concepts of its merchandising (such as lists and internal promotions) with its sales performance. This will help us find the best placement(s) for a product, or a set of products, on our website or app.
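To illustrate the checkout behavior idea above, here is a small sketch, on invented data, that counts users per checkout step and isolates the segment that reached step 1 but not step 2:

    checkout_steps = {  # user_id -> highest checkout step reached
        "u1": 3, "u2": 1, "u3": 2, "u4": 1, "u5": 3,
    }

    def reached(step):
        return {u for u, s in checkout_steps.items() if s >= step}

    for step in (1, 2, 3):
        print(f"step {step}: {len(reached(step))} users")

    abandoned_at_1 = reached(1) - reached(2)
    print("reached step 1 but not step 2:", sorted(abandoned_at_1))
    # This is exactly the kind of segment you could re-target or analyze further.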

As always, the power of this tool also relies on its segmentation capabilities. Understanding list behavior across devices, user types, traffic sources, etc. allows us to serve a better user experience. Placing the right product in the right place is a must-have in your merchandising strategy, and it's something traditional offline retailers have been doing for a long time.

Just to finish, a plea to e-retailers: stop pushing products all over your website. Stop sending me every single product that can be purchased. More does not necessarily mean better. Every product has its right placement(s). You just need to find it (them). Enhanced Ecommerce will allow you to do so.

Monday, May 30, 2016

I want to measure everything. I want it all. And I want it now.


The most-heard sentence from managers: we must measure everything. However, this is far from true. Not everything must be measured.



Let's imagine you own a business. An e-business. Maybe successful, maybe not (yet). You are at the point of considering the implementation of a tracking tool (Google Analytics, Omniture, Mixpanel, etc.). Then the fateful sentence comes: we must measure everything. Please track every single click, every single action, every field entered in a form, etc. EVERYTHING! Usually, when a manager is asked why everything should be measured, the answer is: because everything can be optimized with data.

At this point two sentences cross the manager's mind:
"If you can't measure it, you can't improve it."
"If you didn't measure it, it didn't happen."

The first sentence is absolutely true in almost all situations. The second one needs some elaboration. Indeed, it is true: if you didn't measure it, it didn't happen. My question, as an analyst, is: "Yes, we did not measure it. Yes, indeed, it did not happen. So?" I will state it very clearly here and now: it's not mandatory to track everything. Why? Simply because you can't optimize everything at the same time, or because the benefit of optimizing some features is insignificant.

When we measure everything we overcomplicate the implementation. It becomes a pain; it becomes never-ending. The analyst is always thinking about what to measure instead of acting on the data that arises from the current implementation. The tool's interface turns into a nightmare: sampling, unclean data, impractical volumes of data to process, etc. The answer to this is: keep it simple.

How do we keep a tracking tool implementation simple? The key is to design Measurement Plans. Avinash Kaushik explains this majestically. A Measurement Plan consists of five steps (a toy sketch in code follows the list):

- Goals: identify the business objectives (sell more, get more leads, increase CLV, decrease returns, improve margins, etc.). According to Kaushik's framework, the goals must be Doable, Understandable, Manageable, and Beneficial.
- Strategies: for each goal, identify crisp strategies. They must be specific, in the sense that they will be used to accomplish the goals (increase the repurchase ratio, increase new users, decrease the budget for some marketing campaigns, etc.).
- KPIs: no need to say what a Key Performance Indicator is. These are the metrics that will tell you how we are doing with respect to the established strategies.
- Targets: not mandatory but very useful. They are used to establish an end-point for our KPIs.
- Segments: the most important part of the plan. We take segments of users or behaviors that we'll analyze in order to understand where the failure or the success lies (new users, paid-campaign users, mobile users, users who land on the homepage, etc.). This is the hardest point of the Measurement Plan, and it is really where actionability arises.
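A toy sketch of a Measurement Plan captured as a data structure, so it can be versioned and reviewed like any other specification. All content below is invented for illustration.

    measurement_plan = {
        "goal": "Sell more",
        "strategies": [
            {
                "strategy": "Increase repurchase ratio",
                "kpis": [{"name": "repeat-purchase rate", "target": 0.25}],
                "segments": ["returning users", "newsletter subscribers"],
            },
            {
                "strategy": "Increase new users",
                "kpis": [{"name": "new-user sessions", "target": 50000}],
                "segments": ["paid-campaign users", "mobile users"],
            },
        ],
    }

    # Anything you are asked to track should map back to one of these entries;
    # if it doesn't, per the advice below: don't track it.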

The Measurement Plan is devoted to actionability. Whatever brings no action is not a valid goal or strategy. A KPI or a segment that is not devoted to fulfilling a strategy is not useful. In the end it is very simple. If your Measurement Plan does not say that a given button should be tracked, don't track it! If your Measurement Plan does not say that the fields of a form should be tracked, don't track them! If your Measurement Plan does not say that scrolling should be tracked, don't track it! This way you will have a clean tracking tool interface, full of data that can be transformed into actionable information.

Ideally, Measurement Plans are built by the team of analysts. They (should) know the business and they (should) know the technology. They are able to talk to the business stakeholders and to the developers. They can gather business requirements across the company, think about business opportunities, and transform them into technical specifications. They are also able to gather all the data, build the KPIs, compare them to the targets, and recommend actions. In other words, they must take ownership of the Measurement Plans, from conceptualization to implementation.

At this point, I want to recall Kaushik's "Three Layers of So What" test. It's a very simple test that will help you decide whether a KPI (or a metric) is useful or not. Against every metric you want to report, ask the question "So what?" three times. Each answer should raise another question. If by the third time you don't reach a clear action that must be taken, then you are facing a nice-to-have metric, not an actionable one. Non-actionable metrics keep the focus off what is really important. Non-actionable metrics and, hence, non-actionable tracking are like having Diogenes syndrome for data: you collect, collect, and collect without extracting any useful information.

Last but not least, don't feel guilty for leaving features of your site untracked. Feel proud of the recommendations and the actions taken from the tracked items. And, again, keep it simple.

Tuesday, April 14, 2015

Bed & Breakfast Analytics: The 10 Motivations and the 10 Foundations

We are surrounded by data. What can we do with it? For what? How? What can we expect and what not? What are the common errors? Does size matter? Open-source or commercial tools? Here you should find some tips to chart your own journey through the data realm. Bon voyage!

I still remember a nearly hilarious situation I faced in my first job as a data analyst. The CEO came to me and asked for tons of data, very important for a strategic decision. Wow! Panic! Just graduated from college. Just landed in the job. No clue about the business. No clue about the data structure. No clue about KPI names. No clue about anything. After some minutes of panic, I breathed deeply and tried to deliver what had been requested. I promise I did the best I could: I gathered data from different departments (no data warehouse, no single source of truth) and different people, in very different formats, used some advanced and fancy stuff in Excel, and, after 10 hours of intense work, delivered a kind-of-report. I really had no idea what I was doing. I had no idea what data I was delivering. Some days later I went back to my CEO and asked him how useful my data had been. His answer was: "Which data? Ah, that report. Well, we did not use it. We took decision XXX based on a market research the CMO found in a blog." I suppose I should say thanks.

Sorry, but your data set-up is probably not correct. You should consider starting to read this.

After years of experience, you have probably heard these stories many times. The Marketing Manager (a random manager example) requires some data. Let's depict some standard scenarios. The requested data corresponds to...

1. ... clicks, sessions, bounces, etc. This one should be easy. The Web Analytics Manager easily performs this task (it's part of their basic skill set), probably by applying some complex advanced segments to the data (easy does not necessarily mean simple). Nowadays, implementations of web analytics tools tend to be very complex, mainly because they need to cover a lot of business cases. Simple, right? Well, now imagine that, for some unfortunate reason, the Web Analytics Manager is on vacation. Panic! The request is given to, let's say, a Campaign Manager. Of course, he/she has access to the web analytics tool, and hence tries to retrieve the requested data. A bit of panic appears, as the data seems incoherent (of course it does: he/she is not applying those complex advanced segments that should be applied). He/she then tries to search for some documentation on the topic and... surprise! there is no such documentation. Finally some numbers are delivered, but everyone knows those numbers might not be totally reliable. In the end, as the requests get more complex, the process to retrieve the data gets more complex as well. If the process is not clear enough for all stakeholders, the result is a lack of trust in the delivered data, leading to a lack of trust in the data strategy (if such a thing exists in the company).

Here I already find my first three motivations:

  • Motivation 1: there is a lack of proper documentation. Knowledge transfer is virtually nonexistent in many e-commerce companies, especially for data-related topics.
  • Motivation 2: business complexity translates directly into data complexity. Not every stakeholder understands this implication.
  • Motivation 3: a wrong data strategy leads to a lack of trust and, even worse, to wrong decisions.

2. ... revenues, sales, etc. This one gets a bit trickier. The Marketing Manager pings somebody in BI, or in Finance. Traditionally the request is incomplete or poorly written: time frames missing, before/after refunds unspecified, etc. Normally, even such simple requests require 2-3 iterations, leading, again, to a lack of trust in the provided data. In some cases, a variation of this scenario takes place: reports and data are built by manually joining data retrieved from different data sources, as the full data map is not clear to everyone.

Again, three more motivations appear:

  • Motivation 4: it's very hard to write clear requests.
  • Motivation 5: outside our comfort area, finding data can be a challenge. Even when we have a data warehouse, or a nice-and-expensive-but-totally-useless BI tool.
  • Motivation 6: it's easier to request than to retrieve, and it's easier to retrieve than to process.

3. ... data that has already been requested at some time (many times?) before. This one is quite disappointing. There is nothing more frustrating, in both directions, than having to repeat a request, or being asked for the same thing time after time. Assume for a second that, indeed, such data is available. Why is recurrently needed data not easily available? Even worse, what if we have (as I mentioned in the previous paragraph) a very nice BI tool? Why are some users reluctant to use self-service data platforms? Now, assume the data isn't available. Tough times are about to come: it's time to reach out to IT to start gathering this data. Normally, from a BI/Data department it is very hard to write clear specifications for IT to start gathering data, for several reasons: lack of knowledge of the platform, lack of database architecture knowledge, etc.

With this, two further motivations appear:
  • Motivation 7: having a BI tool does not ensure self-service. Having a self-service platform does not ensure data availability.
  • Motivation 8: communication between BI and IT could be a struggle.

4. ... data or analysis that we don't know whether it can be accomplished, or data for which it is not clear how it will be used upon delivery. The first challenge when receiving a request, or when fulfilling it, is to determine whether it can be done or not (assuming it makes sense at all). Many analysts work directly with data, without designing a plan for the analysis or request. That is, both requesters and analysts work without an analysis framework, even when it is clear that the analysis will take some time to finish, probably due to its complexity. A different case appears when the request comes from the CEO. We have to admit that it is very hard to say no to our CEO. However, the CEO does not know everything, and he is not always right. Even the CEO's requests need to be challenged, understood, and accepted.

With this, we find my two final motivations:
  • Motivation 9: working with an analysis framework is a must-have.
  • Motivation 10: determine whether a request (for data or for an analysis) makes sense. Find a way to challenge every single request.

The Decalogue: the 10 Foundations of Bed & Breakfast Analytics

With my thoughts on the table, and the motivations I found from them, I'm ready to state my Decalogue.

1. Burn the silos! Managing data requires transversality and depth in each vertical. Skill silos are not suitable any more.

2. Complexity matters! Understand how business complexity affects data complexity.

3. Better alone than... No data is better than wrong data.

4. Write, write, and write. Documentation is a must-have. Learn how to document and learn how to request.

5. Go beyond your comfort area. You should consider expanding your comfort area. Even more, you should consider not having any comfort area at all.

6. Bring order to chaos. Narrow your analysis: understand the need, design a framework, and only then retrieve data.

7. Communication is the key. Your CEO does not care about regression models, decision trees, or how fast your database engine is. He wants a way to keep a sustainable and profitable business.

8. Choose wisely. The right tool for the right set-up. Self-service is not always the best solution.

9. Don't rush! Data is a path with some mandatory steps. Cheating leads to frustration and lack of trust.

10. So, how are you doing? Integrate data. Move away from data silos. Design KPIs, reports, and dashboards based on integrated data.


So, what's next?

With all this, I want to share with you how I measure, why I measure, and how I analyze, in the hope that you will join me in walking the learning curve of the data-related world. In a bed-and-breakfast you share your experiences with many others, and you get a clean and cheap base for sightseeing. This is exactly what I intend to do here: every two or three weeks I will share my thoughts, tips, tools, and techniques. Everything I know will be shared. I'm willing to do so!

Hope you find this interesting, and welcome onboard!