One of the things I loved as a product manager was understanding my users better - both qualitatively, through user interviews, user testing, and contextual inquiry, and quantitatively, by building and instrumenting my products so I could dig deeper into my users' behavior. I've found there is no substitute for talking with and watching users up close, but since that doesn't scale, having great tools that let me infer intent from the actions and behaviors inside the product was critical to a fast product improvement cycle. I tried lots of different tools to get the quantitative information I was hoping for, but I found that once a product had found at least one audience (often called Product-Market fit) and the explosive growth phase had begun, all of my instrumentation and quantitative tooling began to fail me.
In fact, they did worse than fail me - they started to betray me.
And it really wasn't their fault - those analytics tools couldn't help it - they were built on a foundation that just didn't work anymore, now that I was dealing with multiple audiences doing multiple things with my products.
The problem, it turns out, was built into the architecture and infrastructure of the reporting and analytics tools themselves. Each of these tools made certain important assumptions - and while those assumptions were often useful when I was just trying to get some basic information about my users, they were fundamentally skewing my ability to really understand them. They skewed my understanding of who a user even was, of how we defined basic metrics - what a session is, when funnels start and end (or reset) - of what the multiple audiences for my product truly were, and of how people don't fit into pre-established definitions that are out of my control.
In fact, as I probed deeper, I found that these products were marketing their greatest weakness as their single biggest strength!
Here's the problem - a predefined set of actions means the tool is making some pretty big assumptions about what's important - and what's not. And I'm wrapping myself in the warm blanket of what the tool is promising me - a fixed way of looking at the world.
My problem was with framing - the tools were presenting me with a view of the world from their own particular perspective.
The problem, as I looked deeper, was everywhere. It was in how we even defined what a user was. Or what a session was. Or what an "important" event was. All of the tools I was using were forcing me into a particular view of the world. At best, they were defining for me who my user was (a particular cookie, for example) or what a session was (a set of events within a particular time interval). And no matter how I tried, I couldn't change those definitions. Or, in some cases, I couldn't even get a clear explanation of how those definitions were calculated from my instrumented data.
And worse than that - I was doing my best to improve aggregates of these metrics. For example, I was working on how we could increase the number of sessions per user per day. Or I was working on how we could increase conversion rates or engagement rates. But all of these were based on what my tool thought was important - not on what was really going on with my users!
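To make the complaint concrete, here is a minimal sketch of the kind of fixed session definition I'm describing - sessionizing one user's events with a hard-coded inactivity timeout. The 30-minute value and the function name are my illustrative assumptions, not any specific tool's implementation:

```python
from datetime import datetime, timedelta

# The timeout is baked in -- as a user of the tool, you can't change it.
SESSION_TIMEOUT = timedelta(minutes=30)

def count_sessions(event_times):
    """Count sessions for one user: a gap longer than the fixed
    timeout between consecutive events starts a new session."""
    sessions = 0
    last = None
    for t in sorted(event_times):
        if last is None or t - last > SESSION_TIMEOUT:
            sessions += 1
        last = t
    return sessions

events = [
    datetime(2015, 1, 1, 9, 0),
    datetime(2015, 1, 1, 9, 10),   # 10-minute gap: same session
    datetime(2015, 1, 1, 11, 0),   # >30-minute gap: new session
]
print(count_sessions(events))  # 2
```

Whether a 30-minute gap is the right boundary depends entirely on the product - which is exactly the assumption these tools never let me question.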
Enter raw event streams. And Interana.
When I met the team at Interana, I felt like this:
There's something wrong with the world, you don't know what it is, but it is there... -- Morpheus
So, what's so great about raw event streams? Aren't regular summarizations good enough? That blue pill looks pretty good...
What raw event data does for me as a product manager is let me start playing with my assumptions. If I want to change my idea of what a user is, or what a session is, or what a cohort is, I can do it - and immediately have the results recalculated for me, going back to the beginning of when I started recording data. I can look at my new definition of a user and see how it compares with all of my other derived metrics, like engagement and activity. I can also start to look at the different audiences of people who may be using my product in different ways - and I can start to see through the fog of my own assumptions.
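As a sketch of what that flexibility means, the snippet below recomputes sessions per user from raw events with the session gap as a parameter, so two competing definitions can be compared over the same data. This is my own illustration of the idea, not Interana's query language or implementation:

```python
from datetime import datetime, timedelta

def sessions_per_user(events, gap):
    """Recompute session counts per user from raw (user, timestamp)
    events, with the session gap as an explicit parameter instead of
    a fixed, tool-defined value."""
    by_user = {}
    for user, t in events:
        by_user.setdefault(user, []).append(t)
    counts = {}
    for user, times in by_user.items():
        times.sort()
        n, last = 0, None
        for t in times:
            if last is None or t - last > gap:
                n += 1
            last = t
        counts[user] = n
    return counts

raw = [
    ("alice", datetime(2015, 1, 1, 9, 0)),
    ("alice", datetime(2015, 1, 1, 9, 20)),
    ("alice", datetime(2015, 1, 1, 10, 0)),
]
# The same raw events under two different definitions of a session:
print(sessions_per_user(raw, timedelta(minutes=30)))  # {'alice': 2}
print(sessions_per_user(raw, timedelta(minutes=15)))  # {'alice': 3}
```

Because the raw events are kept, changing the definition is just re-running the computation - no re-instrumentation, no waiting on a pre-aggregated report.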
In the past, I'd have to do a lot of this data exploration with a small data set, or I'd have to work with a large data engineering team to go back to our huge archive of raw logs, do ETL, weed out bad or unreliable data, clean it, and then process it using tools like Hadoop.
And it would take a week - minimum - to get some simple analysis.
What the team at Interana showed me was that I could start to do this analysis immediately. On all my data. And I didn't need a data engineer or data scientist to pull any new data for me - it was already there, stored in the raw event data that Interana's cluster had already ingested.
And for the first time, I felt like I was waking up to the real world of what my users were really doing.