Apple said today that it will be using (anonymized) data from the app to show podcasters how many people are listening and where in the app people are stopping or skipping. This has the potential to dramatically change our perception of how many people really listen to a show, and how many people skip ads, as well as how long a podcast can run before people just give up.
A nice overview of where analytics fit in investing:
Armed with these advanced techniques, digitally forward asset managers can gain a significant information advantage over peers who rely mainly on traditional data sources and analytical practices. They can crunch through vast quantities of data; scour video and satellite imagery to gauge a retailer’s Black Friday prospects; extract insights from social media, texts, and e-mail to divine market sentiment; and parse a CEO’s comments during an earnings call to estimate the potential impact on the next quarter’s results. They can discern how unexpected weather disruptions might affect their portfolio, and even disprove long-held beliefs about how markets work. Smart, dynamic investment technology also helps managers assess their own performance to see whether they may be making the right decisions at the wrong times, buying too late, or listening to “influencers” who push them in the wrong direction.
There’s also a good overview of how to introduce new approaches like the above into the organization without their becoming Big Bang projects, likely to fail:
In experimenting with new technologies, firms should prioritize a small number of targeted initiatives that can deliver immediate tangible benefits in a focused, resource-constrained way. In doing so, they should resist two temptations: the “esoteric science experiment,” whose focus is so narrow that the initiative can’t yield scalable results; and the “big bang rollout,” whose scope is so ambitious that it has a daunting price tag and takes too long to demonstrate value.
I’ve developed a little obsession with Twitter Analytics. It’s fascinating to see all the stuff people do with my nonsense, and much more helpful than things like bit.ly and SumAll.
The addition of the “impressions” metric is the new thing – I’m not sure there was a way of actually counting how many views each tweet got previously. It’s also interesting to see things like detail expands, emailing, etc.
The click-through rate seems pretty low for most of my tweets. The CoreOS one, above, is predictably high because it’s offering a free report.
I haven’t done a deeper analysis of what all the data means. For one thing, I’m not really sure what my goals are. However:
Images work – Just for the “get more attention” metrics I have learned one thing: put images in your Tweet. People love images.
The Tweet [is|can be] the post – Now that I know the “impressions” being tracked, I’m not so worried about people clicking through to my blog. I’m trying to think of how Twitter can be used as a “primary channel.” That is, the “end of the line” or final thing in a long trail of clicks. If I look at “firehose” tweeters like James Governor, I think they treat Twitter like this.
Engagement? – Building on this, the “engagement” rate is a curious metric. It somehow summarizes “conversions” of tweets to clicks, replies, favorites, and follows. That is, how many people “did something” with this tweet other than viewing it? The screenshots above don’t list that, but the CoreOS one has an engagement rate of 7.7%, while the KACE one has a rate of 0.8%. It’s probably a pretty good heuristic for sorting tweet popularity.
Twitter is most likely my “front door” on the web – Long ago, my blog(s) were my primary source of (pardon the word) “engagement” with people. My RedMonk blog had over 1,7000 RSS subscribers at its apex, for example. The blogs I now have are pretty piss-poor. Interaction on Tumblr is low as well (though the occasional like from Robert Brook is always the highlight of the day). In comparison, Twitter is much more of a front door. The consequence is to focus more on #2 above.
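The “engagement rate” above can be sketched as simple arithmetic. Assuming (this is my reading, not Twitter’s documented formula) it’s total engagements divided by impressions, with the sample numbers below invented for illustration:

```python
def engagement_rate(engagements, impressions):
    """Fraction of viewers who 'did something': clicked, replied,
    favorited, or followed, rather than just scrolling past."""
    return engagements / impressions

# Hypothetical tweet: 1,000 impressions, 77 total engagements.
rate = engagement_rate(77, 1000)
print(f"{rate:.1%}")  # 7.7%, in the ballpark of the CoreOS tweet above
```

That also shows why the metric sorts tweets well: it normalizes by impressions, so a small-audience tweet that people act on can outrank a big one they ignore.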
Anyhow, I check Twitter Analytics all the time. It’s much easier to understand than the mess that is Google Analytics (where I also spend a lot of time but am never sure what’s happening). I’m interested to hear y’all’s feedback on how to use it and what it “means.” For starters, I have no idea how my numbers compare to others’.
How the big guns are doing.
GitHub Traffic Analytics service gives developers insight into interest in their projects
As the blog post says, it does look like fun, though pretty minor in the grand scheme of things. GitHub has been a major driver of getting the development community to care more about social interactions and collaborations, here, tracking who’s looking at your code and where they’re coming from – standard web analytics stuff. Before GitHub, most of the community around code was pretty faceless: it was just forum posts, really passive users and lurkers around the code. With things like this, and GitHub as a whole, developers can get a better sense for who’s interested in their work. Developers have been learning to use this kind of meta-data in their applications to do A/B testing (is this feature better implemented one way or the other) and it’s interesting to think that they’d do some meta-data navel-gazing on their own code.
Another class of user – marketers – would find this extremely valuable. I like to throw out the idea of “code as marketing” to illustrate the idea that code can be a good source for driving a vendor’s marketing needs. As an example, you can see Rackspace putting out command line tools and other developer SDK-ish things to market to developers. More than just “tools” to use on Rackspace’s cloud, this code is a marketing artifact. Since code is, essentially, the major currency of developers, if you want to do more marketing to them, you need to spray more code their way – hopefully code that’s useful. In that case, marketers will want to intimately track who looks at what on sites like GitHub, and this will give them an even more complete picture.
The InsightsOne group offers predictive analytics for consumer companies, finding patterns across multiple sources of information. For example, Hasan explained how its analytics might surface patterns in data from a fitness monitor combined with health-claim information. With that encompassing profile, a company can offer deeper insights.
One use-case area:
For example, there are the increasing amounts of data that people and machines create. As that data scales, there is growing demand for new types of analytics capabilities. Graph databases are becoming more popular for the varied kinds of data they aggregate and analyze. A graph database organizes nodes, which might be things like a street light or a person. Properties describe the nodes. A graph database also has “edges” that connect the nodes and properties, defining the relationships between them. The value comes from analyzing the patterns among the nodes and the properties.
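The nodes/properties/edges model described above can be sketched in a few lines. This is a toy illustration of the data structure, not any particular graph-database product; the labels (Person, StreetLight, “NEAR”) are made up:

```python
class Node:
    """A graph node: a label (what kind of thing) plus describing properties."""
    def __init__(self, label, **properties):
        self.label = label
        self.properties = properties
        self.edges = []  # outgoing edges from this node

class Edge:
    """A relationship connecting two nodes, e.g. a person NEAR a street light."""
    def __init__(self, relation, source, target):
        self.relation = relation
        self.source = source
        self.target = target
        source.edges.append(self)

# Build a tiny graph: one person, one street light, one relationship.
alice = Node("Person", name="Alice")
light = Node("StreetLight", status="working")
Edge("NEAR", alice, light)

# "Analyzing patterns" is traversing relationships and filtering on them:
nearby = [e.target.label for e in alice.edges if e.relation == "NEAR"]
print(nearby)  # ['StreetLight']
```

The point of the model is that last step: queries follow relationships directly rather than joining tables, which is why the value shows up when you analyze patterns between nodes.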
Investment is coming from exploiting analytics to make B2C processes more efficient and improve customer marketing efforts…. The focus is on enhancing the customer experience throughout the presales, sales and post sales processes.
The deal is one of the best and most lucrative examples so far of applying a Google-style data-science mind-set to an existing industry — in this case, the world’s oldest and most popular: Farming.
Cyrus Farivar has a long piece in Ars on the rise and fall(?) of Zynga. Lots of delightful little bits on maximizing virality and zombies:
“I got a turbo education on how to do the viral marketing,” he said. “It’s where you design features to be more social: go accomplish this with your friends. How can I make this fun, especially asynchronously, and how can I get people to invite more people? What was good and transformative about FarmVille [was that] it brought in tens of millions of adults who had never played [games] ever. It opened up casual light entertainment, and not time sensitive gaming, to 100 million people.”
As the title here suggests, it reminded me of “gamification”: how can you make boring things in your software fun so that users (read: people) use it more effectively. I’m never sure if it panned out for white-collar work. I’m not sure filling out quarterly performance reviews or weekly sales data could ever be “fun.” It’s kind of fun in Foursquare and other places that outsource (mostly meatspace) data collection.
Also, awesome quote on being too data driven:
I had a PM tell me—many times—that they “couldn’t get data on fun.”