Airbnb and The Myth of the Black Box Technology Company

Every subsection of society has its myths. Social groups need unifying beliefs, often designed to shield their members from harsh realities. On the political stage we are currently experiencing what can happen when myths (the Tea Party’s beliefs about how government should work) run wild.

The big myth of the tech industry is that there is such a thing as a black box technology company, one that doesn’t have to deal with customers — people — directly. The technology will handle that, says the myth. Everything is self-service. No need to handle stupid questions and annoying phone calls. No need to sell, because people will find you on Google and then just buy.

Tech startup founders (who are often introverts and don’t like to deal with people that much) and VCs love this idea: the perfectly scalable company, designed around algorithms and websites and network effects, not depending on icky personal interaction and unpredictable human behavior. No need to hire a lot of service agents and salespeople; a few super smart engineers will do. And customers are just data points swimming through the sales funnel.

The poster child for this myth is of course Google. Some people actually still believe that Google makes all its money from a pure self-service model. Funnily enough, Google seems to believe this myth itself and treats customer service — at least for the unwashed masses — as an unnecessary distraction, to be avoided if possible. Even though, according to some sources, at least half of Google’s employees work in sales and service (hidden away in the lesser buildings on campus with strongly reduced perks), Google still projects an image of a pure technology company.

There’s of course nothing wrong with trying to achieve as much self-service as possible. Customer service is expensive. The terrible quality of the service that most companies provide is not due to their evil nature, but to the fact that customers want to pay less and less, which leaves less money for good service.

But the tech industry has an aversion to customer service that goes beyond this economic aspect. The latest case in point is Airbnb’s “Ransackgate”. The apartment of a customer was completely destroyed by a renter who was referred by Airbnb. According to the customer’s description, Airbnb was somewhat slow to react to this terrible situation, and even went into adversarial mode when the customer blogged about her experience. A co-founder explained to the customer that Airbnb was currently raising money and could suffer from this negative publicity, so could she please blog something positive? Now even the respected Paul Graham, one of Airbnb’s investors, suggests online that the customer is probably lying.

This just shows how far removed the tech elite is from the reality of normal people. When your apartment gets destroyed, you’re not happy being treated with the normal standardized customer service process (“Please include your ticket number…”). It’s a destroyed apartment, a traumatic experience, not a browser compatibility issue.

It’s pretty telling that Airbnb’s young founders and even its senior investors (who really should act as the adult supervision in such cases) don’t recognize why this could be a problem that resonates emotionally with many people and therefore needs the unconditional attention of the company’s leadership. Oh, a customer has a problem? Yuck, it’s emotionally charged. Let’s just wait and it might go away.

Customer service is more than just necessary, it can be a fundamental differentiator in a crowded market. But interestingly, it’s kind of a taboo topic for most investors. A while ago a VC told me that one of his very successful portfolio companies has a fully automated self-service sales process that needs almost no direct customer interaction. I was very surprised because I know the company fairly well, and I know that probably more than half of its employees spend most of their day on the phone with customers, selling to them and then helping them get up and running. You could even say (and another VC confirmed this later) that their personalized sales process and customer service is the secret of their success. But this VC wanted to believe that it’s all about the technology. He really, really wanted to believe in the black box.

As long as tech companies and their investors can’t go beyond their belief in black boxes, they will struggle to reach the mass market and build profitable businesses. You could even argue that the awful performance of the VC industry over the past decade was partially caused by this naive view of reality. Everybody is chasing the magic black box, but nobody wants to build businesses that work in the real world.

(Picture: Martin Deutsch)

FCP X: Apple’s strategic focus and the consumerization of video editing

The “consumerization of IT” is a trend that started about 5 years ago and that has been reshaping the world of information technology quite radically. Consumer technology such as smartphones, lightweight web-based applications and now tablets has invaded more and more enterprises, to the shock of IT managers everywhere.

The biggest winner of this trend is of course Apple, which now has a market cap close to that of the old Wintel (Microsoft/Intel) monopoly. Apple basically owns the high-end laptop market, even though MacBooks are still not considered “enterprise technology” by most IT departments. It totally dominates tablets, and it makes more than 50% of all profits of the mobile phone industry, even though its market share is still small.

Consumerization is what took Apple from a barely surviving also-ran to the dominant technology company of our time. Is it surprising that Steve Jobs and his lieutenants are focusing all their resources on this successful strategy? For instance, Apple recently killed its pro-level server business (Xserve), effectively exiting the data center market.

The latest victim of this strategy is the Final Cut Pro (FCP) line of video editing applications. FCP Studio is probably the most popular suite of video production software on the market. It started small a decade ago as a cheap alternative to Avid, but it is now the choice of many high-end editors in broadcast TV and even Hollywood. Nowadays even the legendary editor Walter Murch uses FCP, which was once ridiculed as a toy by the movie tech intelligentsia.

The new version of Final Cut, FCP X, caused a major sh*tstorm in the editor community when it was released two weeks ago. It gets only a 3-star rating in the App Store, attracting comments such as “FCP X = Windows Vista” (which probably is not meant as a compliment). Countless articles complain about all the missing features that professional editors can’t do without, not least the baffling fact that FCP X can’t import projects from older versions of FCP.

So what’s going on here?

First of all, FCP X is a great product, if still a bit 1.0. I’ve been playing with it for a couple of weeks now, and I certainly won’t go back to the old FCP or any of its clones (such as Adobe Premiere Pro). FCP X reinvented quite a few things in how editing is done, and most of the changes are really great, speeding up the editing process considerably.

But FCP X also asks you to relearn a lot of things. It can do practically everything FCP 7 could do and a lot more, but many tasks are just done very differently. There are a lot of “WTF?” moments when you switch to FCP X, but once you discover what the new way of doing things is, it all makes a lot of sense. I’ve only encountered one or two things that I still find more elegant in the old FCP.

To use a metaphor from my other field of work: it’s like learning a new programming language. When you switch from something like C++ or Java to Python or Ruby, a lot of things look strange or even ridiculously simplistic. But after a while, you don’t miss the overhead that the old tool required you to deal with. You recognize that the irritating, seemingly amateurish simplicity is actually productivity-enhancing elegance.
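A toy illustration of that simplicity (the task and numbers are my own, not from any specific project): counting word frequencies takes a handful of lines in Python, where the standard library does the bookkeeping that a C++ or Java version would typically spell out by hand.

```python
# Word frequencies in a few lines of Python. In C++ or Java the same
# task usually needs explicit types and manual map-update logic;
# here collections.Counter does the bookkeeping for us.
from collections import Counter

text = "the quick brown fox jumps over the lazy dog the fox"
counts = Counter(text.split())
print(counts.most_common(2))  # -> [('the', 3), ('fox', 2)]
```

Simplistic? Sure. But exactly this kind of brevity feels amateurish at first and turns out to be productivity-enhancing elegance.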

That’s great for prosumers and lone-wolf freelancers, but it’s no consolation for the high-end editing pros who depend on sophisticated, highly specialized workflows. Relearning everything and reorganizing your corporate workflow is not a great proposition for somebody who constantly works under tight deadlines.

So is Apple trying to consciously scare off the high-end pro market? In some ways, yes. Every successful business has to decide what its focus is, who its customers are. Even for a giant company like Apple it’s incredibly difficult to serve entirely different target markets.

High-end video production houses and broadcast stations often run their video production infrastructure like traditional enterprise IT: A central department decides which platform to use. Then internal technical people and consultants implement the system, endlessly tweaking every detail, and the maintenance of the whole system takes considerable resources. Individual workers don’t get to choose what tools they want to work with, but have to adapt to the rules of the organization (Don’t like our Avid system? Go look for another job).

Apple is great at selling stuff to people who make their own purchasing decisions, be it consumers, freelancers or even employees of larger corporations who have enough authority to choose their own tools. Apple is not very good at dealing with IT departments and at adapting its products to the myriad specialized requirements that larger organizations have.

The old FCP clearly suffered from feature creep that was dictated by larger customers, and that made the product difficult to use for the broader prosumer market. It looks like Apple made a clear decision with FCP X: It’s going after the big mass market, and if that means it’s going to lose the high-end segment, so be it. There’s really no other good explanation for the fact that Apple released FCP X without some crucial pro-level features.

Always remember that software is a tiny piece of Apple’s business, and the pro segment is even tinier. But pros are a tough crowd to please, and Apple probably just decided that this can’t be a priority anymore. It looks like it will deliver some of the missing features, but probably not on the scale the pros hoped for. Tough for the professionals who invested a lot in FCP, but this kind of gut-wrenching change is the reality of technology markets. Remember IBM selling off its PC business? Didn’t please a lot of people either.

Without a doubt Apple will lose a lot of fans in the video editing community. But it now has an editing product that is years ahead of everything else, perfect for the big and growing market of serious hobbyists, freelance editors (particularly in online media), independent filmmakers and corporate marketing users. It’s a big bet, but it could pay off.

Google+: Finally an end to the social media duopoly?

I can’t help thinking that the past few years of social media market development have been a bit of a disappointment.

What once was a bustling ecosystem of blogs, forum sites, multiple social networks, video and photo sharing sites, social bookmarking services and so on has more or less turned into a boring duopoly of Facebook and Twitter.

MySpace? Delicious? Digg? Dead, or almost dead. Blogs? The frequently updated ones are mostly run by professionals. YouTube? Very successful, but only for passive viewing and not social interaction. The many, many forum sites that run on vBulletin or phpBB? Only relevant for tiny niches. Location-oriented services like Foursquare? Feel increasingly like a short-lived fad, easily copied by the big players.

Thanks in part to the ever-oversimplifying mainstream media, most consumers really use only one social media channel: Facebook. At best, they have heard of Twitter and use it passively.

It’s a pretty sad state. How can the complexity of human interaction only take place in two venues? It’s like having only a choice of two restaurants, maybe McDonald’s and Olive Garden (which, come to think of it, might even be the reality in some small towns). And don’t get me started on the walled-garden nature and constant privacy issues of Facebook and the lack of innovation at Twitter.

The launch of Google+ finally brings back some hope for more interesting times in the social space. Google has botched all its previous attempts at going social, but G+ feels surprisingly right. It’s not only powerful and flexible, it also offers a great, mainstream-compatible user experience, definitely on the level of Facebook. Oh, and Google finally figured out that it should leverage the heck out of its search dominance. Putting G+ front and center in Webmaster Tools and Google Analytics seems like a no-brainer in retrospect, but Google somehow missed out on this aspect with earlier social projects.

After just one week, it looks like much of the Internet elite has moved on from Facebook to G+. Even Twitter seems to see a noticeably reduced post volume from the usual early adopters. It’s very understandable. Twitter’s 140 character restriction has its charm, but it’s extremely limiting when you want to share deeper content. Facebook’s overcrowded feed that mixes relevant content with puppy pictures and Farmville invites is just too distracting for serious content sharers. People who liked FriendFeed are probably already on G+ now.

So, is G+ a Facebook/Twitter killer? No, and it doesn’t need to be. But Facebook probably has already lost the elite, and if Google makes some pretty straightforward improvements (API, anyone?) it could easily take away much of Twitter’s fanbase.

G+ is a huge step towards a more diversified, use-case oriented social media environment. McDonald’s doesn’t go broke just because there’s a hip new restaurant in town. Different social environments attract different people. There’s no reason why social media should be any different.

Google’s Attempt at OS Disruption: Doing It Wrong?

Yesterday, Google finally showed off some of the details of its new Chrome operating system. The new OS should be available by Q4 2010. Google most likely didn’t show everything that will be in the final product, but it’s safe to assume that the basic concepts will stay the same.
Some things turned out as previously expected: Google’s OS fully revolves around its Chrome browser, is extremely web-centric and will be based on Linux and other open source packages. But there were also some surprises: Chrome OS will only be available on special hardware that is compliant with Google’s specifications. It will not support traditional hard disks and not run any locally installed applications outside of the browser. Chrome OS devices will not do everything that a PC does, but they will be cheap and easy to use.

This sounds like a fairly typical disruptive strategy (see Clayton Christensen’s books). A new entrant (Google) tries to disrupt the incumbents’ (Microsoft, Apple) business by offering a significantly cheaper and simpler product that will only appeal to the very low end of the market. Over time, as the new product category gets better, the incumbents’ products retreat more and more into the very high-end of the market, increasingly losing relevance.

The big question is of course if Google’s approach has a serious chance to disrupt the OS market. There’s more than enough reason for skepticism.

First of all, the OS is not a major cost point anymore at the low end of the PC spectrum. According to some sources, a Windows license only adds $15 to $20 to the price of a netbook. It’s unlikely that people will go with a very limited OS just to save a few bucks on a $300-$400 purchase. Disruptive price points have to make a 5x-10x difference to really move a market. Witness the fate of Linux-based netbooks: after a few months, the whole netbook market moved to Windows XP, because most buyers were willing to pay the difference for a more familiar OS.
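A quick back-of-the-envelope check of that argument, using midpoints of the rough figures quoted above (estimates, not hard data):

```python
# Share of a netbook's price that a Windows license represents,
# using midpoints of the ranges quoted above (rough estimates).
netbook_price = 350.0     # midpoint of the $300-$400 range
windows_license = 17.5    # midpoint of the $15-$20 estimate
savings = windows_license / netbook_price
print(f"Dropping the OS saves about {savings:.0%} of the purchase price")  # about 5%
```

A 5% saving is nowhere near the 5x-10x difference that disruptive price points need.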

Secondly, Google’s vision of a purely cloud-based computer (everything in Chrome OS is stored in the cloud, the local storage just serves as a cache) could turn out to be too cutting-edge for the low end of the market. In order for this to work, you need a pretty fast broadband connection and you have to understand and trust the concept of storing your digital stuff on somebody else’s servers. I’m not sure that most consumers are really comfortable with that just yet.

Finally, there’s little reason to believe that the incumbents couldn’t offer a stripped-down version of their OSes for low-end machines in order to defend their market. Microsoft has already been toying with the idea of a limited Windows 7 version for netbooks, but did not release it after complaints from its OEM partners. Apple is rumored to work on a tablet device that probably would run a stripped-down version of Mac OS X and could compete with web-centric netbooks.

It seems fair to say that Chrome OS will likely not succeed as a traditional, straightforward disruptive product in the PC OS space. But Google probably hopes for a much bigger, much more fundamental shift. Most people today have a primary computer that they spend most of their computing time on. The massive shift from desktops to laptops in the consumer market over the last few years shows that people want to take their primary machine everywhere, and that makes a lot of sense in the traditional model of personal computing. However, the increasing availability of cheap web-capable devices (like netbooks, smartphones, tablets, even game consoles) could potentially break this 1:1 relationship between user and PC. The more people get used to accessing the Internet from a variety of devices, the more they will want to seamlessly access their data from any of these channels. The consequence is that people will move more of their data into the cloud, and local storage and applications will lose much of their importance.

Chrome OS is probably a bet that prices for web-enabled devices will drop far beyond today’s $300-$400 netbook price point and that people will have not one, but several of these devices that they can use interchangeably for most (though not all) of their computing needs. Google is not trying to win Microsoft’s game. There will be no new PC OS war. Google is trying to start an entirely new game, where it could easily turn out to be the dominant player from the outset. Or to put it another way: Google is probably not interested in a short-term disruption of Microsoft’s dominance, but in winning the next game — which it hopes to be a fundamental shift in how people use computers.

The only problem is that nobody knows yet if and when this game will take place. Dominant designs in technology, like today’s PC, can be pretty hard to displace. Remember the Segway? Looked like a great idea, a fundamentally new way to provide transportation, much more efficient than the tired old car. But it didn’t go anywhere because people tend to be happy with a “good enough” solution that they already know, even if it’s more expensive and complicated. And that’s why Chrome OS could turn out to be the Segway of computing in the end. Maybe today’s PCs are just not flawed enough to open an opportunity for an entirely new approach. Time will tell, but Google is certainly not fighting an easy battle here.

Scarcity and the world of digital media

The favorite discussion topic of the media and Internet elite is currently how the economics of content will develop in our digital age. Several big media conglomerates recently announced that they would start charging for online content. This was mostly greeted with ridicule from the digerati, who are still high on the radical ideology outlined in Chris Anderson’s book “Free”.

Brad Burnham pointed out in a recent blog post that both sides probably lack a deep understanding of the fundamental economic shift that is going on here. He mentioned the work of pioneers like Herbert Simon and Michael Goldhaber on the economics of attention as a framework for better insights.

I think it’s correct to say that we are currently experiencing the rise of something like a parallel economy, driven not (like our currently suffering traditional economy) by money and scarce physical goods, but by information and scarce attention. However, probably nobody really understands yet what this economy will look like as it matures and how its interaction with the “real world” will work. Obviously, the two will have to coexist, because last time I checked, my local supermarket didn’t accept hyperlinks as payment for groceries.

We are all so deeply rooted in the principles of the traditional physical economy that it is easy to forget the basics. The good old economy as we know it revolves around scarce physical goods and (more recently) around scarce services. The goods are scarce because considerable work has to be invested into their production, starting from the (often scarce) raw materials that we find in nature. Services are scarce because most of them require some kind of skill, and acquiring these qualifications needs time, which is limited and therefore valuable. Humans trade these scarce goods and services amongst each other because of course not everybody can produce every type of good or service. And then there’s of course money, which provides a more efficient way to trade stuff by separating value from a lumpy physical item or perishable service. Money is basically condensed value, which stems from physical scarcity.

So far so good. But how is the digital economy different?

Most importantly, value in the Internet economy is detached from the physical world and its limitations. For instance, a digital text can be valuable without having a physical manifestation. Sure, all these bits have to be stored somewhere, but the storage medium is a reusable commodity, not bound to this particular piece of information. Digital information can of course be copied without loss of quality (this doesn’t exist in the physical world) and distributed over a network, instantly reaching every corner of the world. And all of this is remarkably cheap nowadays.

The result is a huge abundance of information. And this changes what is scarce: Not the actual product (information), but the capacity for consumption — attention. Every day more free information is made available to the world than a human being could consume in a lifetime. Obviously, human attention is finite, and therefore it’s the scarce factor in the digital world.

That’s why many Internet users can’t understand that media companies want to charge for their online content. Aren’t they already getting the most valuable thing that an Internet user has to offer, his or her attention? And obviously, attention can be converted into real money through advertising, so what’s the problem?

At this point the discussion typically breaks down, because media companies, and newspapers in particular, have a very hard time financing their costly content production just from online advertising. There are probably two main reasons for this problem:

First, on the cost side, newspapers still apply the old principles from the physical world to their content production process. In the old media world, news had to be distributed physically (or through scarce airwaves), and therefore it was most efficient to produce local newspapers that covered all the important news in one single information product. This results in probably dozens or even hundreds of editors slightly rewriting the very same news agency report, adding almost no value. In a digital world of ubiquitous information, that’s completely unnecessary.

Furthermore, a key value proposition of newspapers is the context that they create by selecting the most newsworthy content. Again, this process is duplicated for every single newspaper. In the online world, there are far more efficient and sophisticated ways to provide this value, even though most can’t exist without at least some human intervention. But frankly, semi-automated aggregators like Memeorandum often provide a better view of what is going on in the world than most newspaper homepages.

About 50% of the total cost of a newspaper goes into physical production and distribution, the rest into actual content production. But if you subtract the obsolete and redundant editorial work that most newspapers are still doing, what would be left? Maybe 5%, maybe 10% of the cost base? Most probably, it’s even less. The percentage of truly original reporting in most traditional media is surprisingly low. But in the digital world, there’s no mechanism to finance the unneeded redundant production of information, because there’s already so much information out there.
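To make the arithmetic in this argument explicit, here is a small sketch; the 15% share of original reporting is my own illustrative assumption, chosen to be consistent with the rough percentages above:

```python
# Back-of-the-envelope newspaper cost model (illustrative numbers only).
total_cost = 100.0                 # index the total cost base to 100
physical = 0.50 * total_cost       # printing and distribution (~50%)
editorial = total_cost - physical  # everything spent on content
original_share = 0.15              # assumed share of truly original reporting
original = editorial * original_share
print(f"Truly original reporting: {original:.1f}% of the total cost base")  # 7.5%
```

Even with a generous assumption, only a single-digit share of the cost base goes into reporting that the digital world actually needs.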

Secondly, on the revenue side, advertising is a very unsophisticated and inefficient way to convert attention into money. Today’s advertising models are still built on the scarcity models of the past — the type where ad space is scarce, not attention. If you wanted to reach people in a certain local market with your commercial message, your only option was to advertise in the local newspaper or on local TV and radio stations. Even in big markets, there were only a handful of channels available, with very limited and therefore expensive ad space. Targeting was only very basic, because reaching many people with a lot of wastage through mass media was still more efficient than other methods.

That’s of course fundamentally different in the online world, but ad agencies and publishers are still stuck in old thinking. Most online publishers complain about the low rates that they’re getting for display ads. But that’s not surprising, since online ad inventory is almost unlimited. The trick is to reach the right audience at the right time. But ad agencies still think in big demographic clusters, not in the situation-specific micro-segmentation that the Internet could provide. It’s therefore not surprising that paid search is by far the most lucrative form of online advertising, since Google and its competitors can convert very specific attention (somebody searching for a certain topic right now) into commercially relevant results.

So what needs to happen to make the attention-driven online media economy viable?

  • Clearly, media companies have to become leaner and more focused. They need to concentrate on original content that really adds value and therefore is worthy of people’s attention. That’s frankly only a fraction of the current media production. All the other fluff, as well as the fat corporate structures on top of the actual content production, will simply not be viable online.
  • Also, media companies need to recognize that unique context, filtering and editorial selection are more essential than ever in dealing with people’s limited attention. This will be a great way for media brands to get competitive differentiation. But today’s typical news homepage is still built on an old, generic one-size-fits-all model that is neither cost-effective nor customer-friendly.
  • Publishers and advertisers have to experiment with new ways of converting attention into commercial value, i.e. building the bridge from the attention economy to the monetary economy. I think we currently stand at the very beginning of this process. Traditional advertising is becoming increasingly ineffective. But new ways to channel people’s attention in order to sell them something are still embryonic. Almost certainly, there will be many ways to do this, but no silver bullets.
  • Media companies have to recognize and deeply understand that attention is the scarce commodity in the digital economy and therefore is a currency in itself. The current conflicts between Google and newspaper publishers show that old media executives are starting to get this, although most of their actions go in completely the wrong direction.

But at the end of the day, attention has to be convertible into money somehow, since people still live in the physical world, where scarcity is a reality and money is needed. Companies and their executives will continue to be measured by their financial success, not by the attention they accumulate. Determining the monetary value of intangibles like attention, intellectual property, brand assets and customer loyalty is a thorny problem, and it’s unlikely that there will be a commonly accepted solution anytime soon.

However, people working outside of the traditional corporate framework might be willing to forgo at least some of this conversion. Under some circumstances, people tend to value attention more than money. Let’s be honest: Most people in the Western world already have more material things than they need. Particularly the richest European countries are increasingly turning into post-material societies that don’t necessarily try to optimize their GDP, but instead the general well-being of their population. And getting attention is something of fundamental importance for humans.

So it wouldn’t be surprising if more and more people would add value to the digital economy without getting paid for it in monetary terms. The open source ecosystem is of course a great example of how this can work. And most bloggers blog (and Twitterers tweet) because they like the attention and the good things that can result from it, not because they get paid.

Another example: Craigslist provides fundamentally the same value as the many classified sections in newspapers that it basically killed, but it captures only a fraction of the monetary value. Does that make any sense at all? Yes, because Craigslist created a top 20 website that commands a lot of attention with minimal resources. It provides a valuable service and gets paid with huge amounts of attention and loyalty, as well as with quite a bit of money. It is wildly profitable in monetary terms, but obscenely profitable in attention terms. That’s bad news for the people who used to make a living selling classified ads, but good news for Craig Newmark.

Welcome to creative destruction, attention economy edition.

(Picture: Josh Sommers, CC license)

Software pricing: When does freemium really work?

Freemium — the combination of a free basic version with paid “premium” versions of a software product — is an increasingly popular business model for software. Many Internet startups and even giants like Microsoft and Oracle are using this model for at least some of their products. A lot of iPhone apps for instance are available in a basic “light” version that needs to be upgraded to the paid version for more functionality or content — a typical freemium strategy.
The advantages of freemium seem obvious: It’s a good way to get free marketing. It can reduce sales costs dramatically, because users can self-educate. It lets the product speak for itself, thereby leveling the playing field, which is a particular advantage for smaller companies with small marketing budgets.

But does freemium really work that well in practice? There are early signs of a backlash against this model. Google recently strongly de-emphasized (i.e. practically killed) the previously offered free version of its Google Apps suite, which used to be free for companies up to 25 users. Potential customers are now encouraged to try the product for free for 30 days and then start to pay $50 per user per year. A similar change can be seen at 37signals, once the poster child for freemium, which now hides the permanently free versions of its products pretty well. Startups like photo sharing site Phanfare and the now-defunct LucidEra (a vendor of SaaS business intelligence) tried freemium models, but weren’t successful. And many other startups that use freemium are still far away from profitability.

So the question is: Under what conditions does freemium really work for software vendors? Obviously, customers like freemium, but how can software companies use this model to build a really sustainable business?

I think there are probably six conditions:

1. Your marginal costs have to be extremely close to zero.
Freemium only works if the distribution and support of an additional copy of your product (i.e. the marginal cost) costs you almost nothing. Thanks to the Internet, the digital distribution of software is now nearly free, both for downloadable applications and online apps.

That sounds obvious enough, but I think many companies underestimate how close to zero the costs really need to be. Web-based applications for instance need to provide enough server capacity and storage for all these freebie users. This can quickly add up to substantial amounts, even if the capacity for one single user is cheap.

2. The target market has to be big enough.
There doesn’t seem to be any reliable data about how many users of a free product end up buying a paid edition. Obviously, this will strongly differ from product to product. But from anecdotal evidence, it’s probably safe to assume that typically less than 10% of users convert to the paid version.

If freemium is your main sales channel, this obviously means that your target market needs to be large enough so that you can still build a sizable business just from getting paid by a few percent of the total potential user base. Furthermore, the free marketing benefits (and maybe even positive network effects) of freemium can only kick in if there are enough people to spread the word about your product, and for that they first have to be interested in what you have to sell. In other words, niche products that only appeal to relatively few users are probably not ideal for freemium. Some more targeted form of sales might be the way to go there.
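To make this concrete, here's a back-of-the-envelope sketch in Python. The conversion rate, price, and revenue target below are purely hypothetical numbers, not data from any real vendor:

```python
# Back-of-the-envelope freemium market sizing.
# All parameters are hypothetical assumptions for illustration.

conversion_rate = 0.03      # assume 3% of free users ever upgrade
annual_price = 50.0         # assumed price of the paid edition per year
revenue_target = 5_000_000  # assumed revenue needed for a sizable business

# Revenue = users * conversion_rate * annual_price, solved for users:
required_users = revenue_target / (conversion_rate * annual_price)
print(f"Free users needed: {required_users:,.0f}")  # → 3,333,333
```

Even with these fairly optimistic assumptions, you need well over three million free users. That's why a niche product with a total addressable audience of a few hundred thousand people simply can't carry a freemium business.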

3. Your product has to be very simple.
It’s great that users can self-educate about your product while using it for free. Hopefully, they will soon reach the limits of the free version and feel the desire to upgrade to the paid edition.

But for this to work, your product has to be very, very simple. People don’t read manuals and rarely follow online tutorials. The product just has to be easy to use and has to provide obvious value almost instantly, so that users will have enough motivation to dig deeper. Some of the better iPhone games are a good example.

4. If your product is not simple, you need competent customers.
Not all software products can be and should be simple. This has nothing to do with a lack of usability, but with the scope of features that a product offers. Photoshop is not simple. Database systems and application servers are not simple. Content management systems (the decent ones) are not simple. They’re powerful tools for skilled professionals. Products that satisfy the needs of professional users are almost certainly not as easy to use as consumer software, because you need a certain skill set to make sense of what the product does. That’s a problem for freemium.

LucidEra’s founder tells the story of his company’s failed attempts at selling through the freemium model. Many trial customers were simply not able to figure out what the product was good for. They never really used the more advanced features and therefore never really saw a lot of value in the product.

So if you want to sell a complex, powerful product using freemium pricing, make sure that you address a well-defined, skilled audience. Your users need to already understand what they want to do with your product, and they have to be motivated and skilled enough to invest considerable time into working with it. Only then will they discover enough value to upgrade. If you offer a complex product to a user base that has not previously used this type of software (like in LucidEra’s case, selling BI to mid-market customers), freemium will be tough.

5. There has to be a minimum useful feature set, but plenty of additional functionality.
We’ve probably all used freemium software for which we didn’t see a need to upgrade. This usually happens in one of two cases: either the free version of a product doesn’t offer enough functionality, so you never recognize that this is a useful product worth paying for; or the free version offers so much functionality that only relatively few users ever have a reason to upgrade to the paid version.

Google Apps prior to the recent strategy change was an example of the latter case: Small companies got almost all the functionality for free. If you had fewer than 25 users (and most small companies are way smaller than that), there was simply no good reason to upgrade.

Most successful freemium vendors now use a carefully designed combination of feature restrictions and a limit on the amount of data you can store or the number of users you can sign up. It’s clearly not easy to strike the right balance. Setting the right restrictions is probably the single most difficult thing in the freemium model.

6. You need to really understand the demand curve.

A demand curve in economics describes how many people are willing to buy a product at a certain price. If you can have only one price for your product, you lose a lot of potential customers (who don’t want to pay that price) and you lose a lot of potential profit (by undercharging the customers that would have been willing to pay more).

The solution for this dilemma is price differentiation. Why does Microsoft offer eight different versions of Windows Vista at different price points? Because it wants to ride the demand curve and extract as much money as possible from different customer groups. If Microsoft offered only one version at a low price, it would leave a lot of money on the table (but maybe have happier customers).

Freemium is of course a form of price differentiation. The assumption is that there are many people who have a very low willingness to pay, but still find a certain product useful enough to spend some time with it and maybe tell their friends about it, some of whom will be willing to pay something. Most freemium products offer several different paid product levels with different feature sets — price differentiation at work.

The problem is that it’s really difficult to find out the shape of the demand curve and match it with your cost curve. One question is what the price point for your cheapest paid edition should be. Will many people pay $9.99 a month? Or is it better to start at $99/month and hope that you get so much free marketing out of your free version that many people will sign up who are willing to pay that price?

A key consideration is your cost curve and the usage pattern that users at different levels have. In some businesses, the most active users who use the most resources are also the most profitable ones, because they have a high willingness to pay. In that case, it makes sense to have a relatively high minimum price. But there are also cases where the low-intensity users are the most profitable. Then it makes sense to extract money from as many people as possible, even if it’s a far lower amount per user.
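A small sketch can illustrate how much the answer depends on assumptions about conversion rates and the cost of serving free users. All the parameters below (user counts, conversion rates, the $2/year cost per free user) are made up for illustration:

```python
# Compare two hypothetical freemium price points, including the cost
# of serving free users. All numbers are assumptions, not real data.

def annual_profit(free_users, conversion, monthly_price, cost_per_free_user=2.0):
    paying = free_users * conversion
    revenue = paying * monthly_price * 12
    serving_cost = free_users * cost_per_free_user  # yearly cost of freebie users
    return revenue - serving_cost

# Cheap entry price: broad appeal, so a higher conversion rate is assumed
mass = annual_profit(free_users=500_000, conversion=0.04, monthly_price=9.99)

# Premium price: the free version still attracts the same crowd,
# but far fewer users are assumed willing to pay $99/month
premium = annual_profit(free_users=500_000, conversion=0.005, monthly_price=99.0)

print(f"$9.99 plan: ${mass:,.0f}")
print(f"$99 plan:  ${premium:,.0f}")
```

Under these particular assumptions the $99 plan actually comes out ahead, but shift the conversion rates slightly and the result flips, which is exactly why understanding the shape of the demand curve matters so much.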

Conclusion: More an art than a science. And watch out for pitfalls.
There are obviously many variables that go into effective software pricing. Freemium can be a great model, particularly for smaller companies. But it is hard to get it right, and it can also be dangerous on several levels. If you get the demand structure wrong, you might leave a lot of money on the table. If you underestimate the costs caused by your free users, it will reduce your profits dramatically (and it’s not easy to get rid of these users without risking a hit to your reputation). Oh, and how about liability? If you lose a freebie user’s data, can he sue you? Better make sure that your terms of service are watertight.

Freemium is not a panacea for the software industry, it’s just another tool for the hard task of getting software pricing right. It’s great for certain market segments, but software companies should avoid going freemium just because it’s convenient and reduces sales costs. Sure, a freemium model can get you more users relatively quickly, but in the long term, it might hurt your bottom line and growth prospects dramatically.

Finally, what would Google do? There’s probably a good reason why Google basically got rid of the free edition of Apps and now does pretty conventional software marketing with billboards and the like. They now even do competitive upgrades, as well as channel sales through resellers. Sounds more like traditional software industry tactics than the wonderful world of free Internet-based software.

(Picture: Timothy Lloyd, CC license)

Facebook’s and Twitter’s business model problem: The very long tail of user activity levels

Facebook and Twitter seem to be the big winners of the current social media wave. Both services are growing like crazy, adding hundreds of thousands of users per week. But both companies are still struggling to find a profitable business model, currently prioritizing growth over revenue. Once you dominate the world you can always figure out how to make money from it, right?

Maybe. But the seemingly huge user numbers of these services could turn out to be far less impressive (and commercially relevant) in practice. Sure, both platforms have tons of registered users, but are these users really active? That’s probably something advertisers would like to know before they spend money on these channels.

Several new reports (e.g. here and here) indicate that Twitter’s user activity patterns follow a typical “long tail” distribution.

(Figure: long-tail distribution of user activity)

There are a few heavy users that are extremely active, tweet a lot, have thousands of followers etc. But beyond this small group, usage drops rapidly. 50% of Twitter’s registered users are basically inactive. Since Facebook is far less transparent than Twitter, there are no similarly precise studies about usage of the leading social network. But a quick survey of the activity level of my 358 Facebook friends showed similar patterns. Only 71 (=19.8%) of my friends showed any activity in the past 72 hours, with only a handful clearly dominating. Admittedly, that’s anecdotal, but I’ve not seen any studies that show a different pattern.

Now, it’s not surprising that this kind of service has a long tail usage pattern. You can find very similar patterns in other types of communication networks like the phone system, e-mail, instant messaging, etc. The problem lies in how to best monetize these services. Neither Facebook nor Twitter charges users; they want to monetize their services indirectly. Facebook mainly sells ads and virtual goods; Twitter still has not come up with a business model, but will probably go a similar route.

The problem is: these monetization approaches depend heavily on actual usage. Nobody pays much for ads that not many people see or click on. Virtual goods are profitable, but you can only reach a decent revenue size if many people buy them. So if it’s true that only 20% of users on both Facebook and Twitter are really active, that’s a big problem for both services, since their opportunity to sell ads and premium services is much smaller than their raw user numbers suggest. Granted, both platforms are very big even then, but maybe it’s not quite enough for total world domination…
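To see how extreme this can get, here's a stylized model in Python using Zipf-style weights, where the k-th most active user generates activity proportional to 1/k. This is a toy illustration, not measured Facebook or Twitter data:

```python
# Stylized long-tail activity model: activity of the k-th most active
# user is proportional to 1/k (Zipf-style). Purely illustrative.

users = 100_000
activity = [1 / k for k in range(1, users + 1)]
total = sum(activity)

top_10pct = sum(activity[: users // 10]) / total
bottom_half = sum(activity[users // 2:]) / total

print(f"Share of activity from top 10% of users:   {top_10pct:.0%}")
print(f"Share of activity from bottom 50% of users: {bottom_half:.0%}")
```

In this toy model, the top 10% of users generate roughly 80% of all activity, while the bottom half contributes only a few percent — if reality looks anything like this, ad inventory is concentrated on a small fraction of the registered user base.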

So what would be a better way? Think about it: How does your phone company charge you? Your cable company? They charge flat fees because they want to extract a lot of money from low-volume users. Sure, they have different price levels for different user types, and they offer premium services, but the lowest levels are not cheap at all.

For instance, I’m a very low volume user of phone services. AT&T charges me for a 550 minutes per month package for my iPhone plan, of which I typically use 100 minutes or less. There’s no smaller package — tough luck for me. Do you want high-speed Internet at home, but use it only a couple of hours per week? You’ll still pay the full price. You need Microsoft Office for your business (because people send you fancy PPTX and DOCX files), but only use it every couple of weeks? That’ll be $399 for the “Standard Edition”…

These flat-price plans are simply a very profitable way to make money from services that have strong network effects plus a long-tail usage pattern. Charging based on usage (and online advertising is economically speaking an indirect way to do that) is fine as long as you sell to a heavy user, early adopter customer base, but as soon as you reach the mainstream, flat-fee models are way more profitable.

So the problem for Facebook (and, supposedly, Twitter at some point) is that the current business model will not really scale well with further growth in the mainstream, low-usage market. Online ads are measurable, and advertisers will only pay for audiences that are really active, i.e. generate page views or click on ads. I think Facebook and Twitter can only scale financially if they find a way to charge people even if they don’t use the service frequently.

By the way: Google recently strongly de-emphasized (i.e. practically killed) the free version of their “Google Apps” suite of messaging and productivity apps. Their model until recently was that small companies with fewer than 50 users could get the product for free, financed through advertising. Looks like that didn’t work out so well. The costs for accommodating a lot of mainstream users probably grew more quickly than the revenue from heavy users and ads.

And I’m convinced that Facebook and Twitter will face similar challenges in the near future.

What will the news media of the future cost?

(This post is a translated version of an article that I wrote for netzwertig.com, the leading German tech blog)

The media industry is in the middle of a deep transformation, and nobody knows what the successful business models of the future will look like. Sometimes it’s useful to look at this kind of situation from the perspective of good old-fashioned economic laws, because they apply even in the digital age.

The current discussion about the future of media is shaped to a large part by ideology and wishful thinking instead of rational analysis. There seems to be little common ground between Internet prophets (“Information wants to be free”) and traditionalists (“Good journalism deserves good money”). Some online pundits seem to think that media companies should never, ever charge for their products again, while many managers of traditional media businesses would love to see the Internet shut down altogether.

Unfortunately, this whole discussion typically doesn’t address the heart of the matter. The future of media business models will not be decided by ideology and idealistic visions, but by simple market laws. And to understand these, a bit of economic analysis typically works much better than idealism.

Obviously, most traditional media companies, newspapers in particular, currently have a revenue problem. Print circulations (and ad sales) are falling rapidly, and users seem to be unwilling to pay for online media.

Broadly speaking, this trend reflects a change in the supply situation. To use a bit of Economics 101: The price that can be achieved for a certain product is a result of the relationship between supply and demand. Economists like to illustrate this with supply and demand curves:

(Figure: supply and demand curves)

The supply curve S shows that suppliers will produce more units of a product (x) if prices (p) are higher. On the opposite side, customers will buy more of a product if prices are lower. The demand curve D therefore is sloped in the opposite direction. Since we’re talking about media here, the demand curve is pretty steep. This reflects the fact that people can’t consume much more media even if prices fall. There are only so many hours in a day, and attention is limited. The point where the supply and demand curves intersect is the market equilibrium that determines the market price and the volume of product sold.

The increasing popularity of the Internet as a news medium has pretty severe consequences for this equilibrium. Since Internet-based media have a much lower cost of distribution, suppliers can provide much more of their product at the same price. The supply curve shifts down (from S to S2). As a result, the volume of media products sold increases a bit, but prices fall pretty dramatically. That’s exactly what we’re currently experiencing in reality, although reality is of course more complex than this very simple model.

(Figure: the supply curve shifts down from S to S2)

Now for the important question: Is this new position of the supply curve (and therefore the much lower price level) sustainable, i.e. can it remain at this point in the long run? That’s not obvious, because supply behavior often overshoots a sustainable point in technological revolutions — just think back to the dot com era. Business models need some time to find a stable new point.

But the way to a new long-term equilibrium is not linear and depends on a lot of factors, not least of all the interaction with the traditional market segments. In news media, it is currently rational behavior for newspaper companies to offer their content on the web for free. But that’s only true as long as the traditional print business is healthy. Let’s play a bit with some numbers to show the idea behind this claim.

Each supplier obviously has to consider at which price he wants to offer a certain amount of his product. Without going into theoretical details: roughly speaking, the supplier has to consider his marginal costs, i.e. the costs that are caused by producing another unit of a product, and the average costs that in the end have to be covered by revenue.

And now we see the fundamental difference between traditional media that are bound to physical distribution, and the digital world: Each information product initially creates costs for the production of the actual content (for instance, salaries for journalists). But when it comes to the costs per unit — the marginal costs — analog and digital media are fundamentally different. The costs for the production and distribution of a printed newspaper are substantial. But serving an additional reader on a newspaper website costs almost nothing.

Now let’s look at a simplified model case for a hypothetical newspaper company. Let’s assume that the production of content costs $250,000 every day (not unrealistic for a major newspaper). Let’s further assume that the printing and distribution of a newspaper costs 50 cents per copy, but serving another reader on the newspaper’s website costs only 5 cents (caused by the need for additional bandwidth and server capacity).

If we look at average costs for different audience sizes, we’ll see the following picture:

(Figure: average costs per reader at different audience sizes)

The first unit of a newspaper is incredibly expensive, but the more copies you produce, the better initial content production costs are distributed over all readers. The more circulation grows, the more the average costs per print copy (red curve) approaches marginal costs.

But what happens if the company publishes the same content both in print and on the web? Since the additional costs for the web channel are low, the total average costs for both media taken together (green curve) rise only slightly. If we assume that revenue from the print business already covers the costs of the newspaper, the Internet can potentially be very profitable for the media company. Even if online ad sales are relatively low, they can easily cover the small additional costs, and the rest is pure profit.

But: If the print business starts to fail — because readers migrate to the online edition — and the web channel has to carry initial costs alone (blue curve), we get a different picture:

(Figure: average costs for a web-only offering)

Of course, costs for a pure web offering are much lower, but they are way higher than zero, particularly if the number of readers is relatively small. As soon as the cross-subsidy from the print side falls, the Internet channel has to provide much higher revenue to finance the initial costs of content production. And in many cases, it could be difficult to achieve this from ad sales alone.
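The effect is easy to reproduce. This Python sketch uses the hypothetical numbers from above ($250,000 per day for content, 50 cents per print copy, 5 cents per additional web reader):

```python
# Average cost per reader for the hypothetical newspaper from the text:
# fixed daily content cost plus per-reader marginal costs.

CONTENT_COST = 250_000.0  # daily cost of producing the content
PRINT_MARGINAL = 0.50     # printing + distribution per copy
WEB_MARGINAL = 0.05       # bandwidth + servers per web reader

def avg_cost(readers, marginal):
    """Average cost per reader = (fixed content cost + marginal costs) / readers."""
    return (CONTENT_COST + readers * marginal) / readers

for readers in (50_000, 200_000, 1_000_000):
    print(f"{readers:>9,} readers: "
          f"print ${avg_cost(readers, PRINT_MARGINAL):.2f}, "
          f"web-only ${avg_cost(readers, WEB_MARGINAL):.2f}")
```

At a million readers the web-only average cost drops to 30 cents per reader per day, but at 50,000 readers it is still over $5, an amount that ad revenue alone is unlikely to cover.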

Some Internet prophets seem to think that media companies should never again charge for content. That’s a pretty naive view. This approach only works as long as there is a healthy traditional (print) business. And obviously, this traditional business is eroding.

So what can be expected if the print business fails altogether? The harsh reality is that many media companies will not survive it. That’s why overall supply of news will start to decrease, and prices for news will rise.

(Figure: shrinking supply shifts the equilibrium back toward higher prices)

More specifically: The phase in the development of the news business that we are currently experiencing is not sustainable. I’m convinced that the era of a huge oversupply of free news will come to an end sooner or later.

Media companies with a well differentiated product will start to charge for their content again, and users will pay if they can’t get the same quality content elsewhere — and that will increasingly be the case with shrinking supply. Online news subscriptions will be much cheaper than their paper equivalent, but they will cost something, probably around 10-40% of a newspaper subscription.

The economic characteristics of online media have another important consequence: The big players will have a huge advantage, since it’s economically crucial to distribute initial costs over as many readers as possible. Print media experience scalability problems at some point due to their physical production. But the Internet is largely free of these concerns.

The world of digital news will almost certainly be bi-polar. On one side there will be a few giant media conglomerates that will produce news for global markets. Almost certainly, they will offer their products in different versions for different price points. The freemium model, which is already popular in the software market, could very well be the model of the future in media. And these media conglomerates will make sure that the oversupply of news will be kept in check.

Next to this, there will be plenty of space for small, often semi-professional providers that can publish their content for free, thanks to low production costs. Advertising will be the typical business model, driven by an increased selectiveness of advertisers who want to reach certain interesting target groups in a focused way. There will be a few segments where small media companies can establish paid-content franchises, but this will be a niche phenomenon.

The situation will be tough for the group in between: The medium-sized publishers that have to carry a substantial cost base from the print era, but don’t have the scale to reach a huge audience. Realistically, the model of the medium-sized regional publishing house will very likely vanish. The economic structure of digital media doesn’t leave much room for this type of company.

Again: The current situation of the media industry is an anomaly that is not sustainable in the long run. This conclusion is not about ideology, but about market forces. We all will have to get used to paying for high quality news, probably pretty soon. It will be cheaper than the good old newspaper, but not free forever.

Wave could be Google’s Microsoft Office

The blogosphere is buzzing about Google’s major announcement yesterday: Google Wave will be a new communications platform that integrates elements of e-mail, instant messaging, wikis, photo sharing, collaborative document editing and more.

Aside from all the technical niceties of this new platform (Open APIs! Instant content updates! Smart spell checking!), Wave could turn out to be one of Google’s smartest strategic moves in a long time.

Many critics say that Google is still a one-trick pony: The company makes huge amounts of money in its search advertising business, but all the many other Google products barely generate any revenue at all.

But if you apply the same standards, even Microsoft is just a two-trick pony: The Redmond empire generates most of its profits from Windows and Microsoft Office. Windows of course was the basis for the success of Office, but Microsoft leveraged this platform dominance in a particularly smart way: By bundling several formerly disparate productivity apps into an attractive suite that provided, thanks to Windows, a much nicer user interface than the competition (remember WordPerfect? Exactly my point).

In some ways, Google Wave is doing the same. It leverages many of Google’s particular platform advantages from its search business that no competitor can match: its huge server farms that guarantee instant responsiveness, the rich data from its billions of spidered web pages (how do you think they do their spell checking?), its rich cloud-oriented programming frameworks, its experience in browser-based GUIs (finally something to do for that fast JavaScript engine in Chrome) and so on.

And more than that, Wave pulls together elements of previously separate web-based applications. Even though some people already fear that the result could be bloatware, I think that this is a particularly smart move. The current ecosystem of web-based collaboration tools is way too complicated for the average user. Sure, theoretically all these independent applications could be pulled together with open APIs and some RSS magic, but in practice, that’s too complex for normal people. Most users would probably prefer a single, consistently structured place where they could do all this stuff.

Wave could be to web-based collaboration what Microsoft Office was to PC-based productivity apps: The product that unifies all this emerging functionality for the average user, under a trusted brand and leveraging an established platform. And in the process, Google could potentially find a second cash cow.

What Media Companies Could Learn From Microsoft: Smart Bundling

The media industry is desperately trying to find new business models for the online age. A lot of the current discussion revolves around micro payments: Is it possible to get users to pay small amounts for each newspaper article? The metaphor “iTunes for news” seems to be becoming a favorite model of many media people, and major players such as News Corp. are already planning to roll out micro payments.

I think they couldn’t be more wrong about this. It’s actually amazing that traditional media companies seem to be largely blind to the factors that made their traditional business models successful.

One factor is the control of distribution channels (I blogged about this earlier). This is difficult to replicate in the digital world, because digital content is so easy to replicate and distribute.

But the second element is actually much easier to implement for digital content: Bundling.

When you buy a CD (if you are still old-fashioned enough to do that), you pay $15 or so for a collection of around 10 songs. Chances are that you are only interested in one or two of these songs. So why don’t you just buy the single? Mainly because the music industry since the 50s consciously pushed the album format, suggesting more value. Look, you only pay $1.50 per song on the album, but singles often cost $5 or more for just one song.

How about your newspaper (if you still read one)? How much would you be willing to pay for today’s front page story in the New York Times? How much for the top article in the business section or sports section? A dollar? A few cents? Nothing at all? This probably depends strongly on your interests. On any given day, there are probably a handful of articles in a newspaper that you would be willing to pay for specifically. Most of the rest are worth almost nothing to you. But you are willing to pay a couple of dollars for the whole thing.

This is bundling at work. It’s extremely difficult to set the right price for a piece of content, since different people will see very different value in it. Therefore, it’s often the most profitable solution to sell bundles of content items at a relatively low total price to extract the maximum value from customers.

A great example for this from another industry is Microsoft Office. This suite of productivity programs today completely dominates the market. Most people would probably agree that that’s not because Microsoft had the best programs — some people still have nostalgic feelings for WordPerfect and Lotus 1-2-3. It’s because Microsoft sold the most attractive bundle of adequate programs at a very nice total price.

Here’s a simplified example that explains why this is smart: Let’s assume that User A wants to do a lot of word processing. He’s willing to pay $250 for a good word processor. He also wants a spreadsheet program, but is only willing to pay $50 for it.

User B is a finance guy and needs a good spreadsheet, for which he is willing to pay $350. He has no use for a word processor, but will pay $50 for a presentation program. And User C, a consultant, is willing to pay $200 for a presentation program, $100 for a word processor and $50 for a spreadsheet.

So, if you’re a spreadsheet vendor, what’s your ideal price? You could charge $350 and only sell to User B. You could charge $50 and sell to all three users, but that would leave money on the table. It’s really difficult to set the right price.

The best solution for this is to sell a bundle of a word processor, spreadsheet and presentation program and charge $300 for it. At this price point, all our fictional users will buy the whole package and will be very happy, because they get a solution at a price they are willing to pay, but with much more overall functionality. The vendor could only make more money if he were able to charge each user an individual price (what economists call perfect price discrimination), but in most markets, that’s impossible.
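The arithmetic of this example is easy to check in a few lines of Python (the willingness-to-pay figures are the hypothetical ones from the three fictional users above):

```python
# Willingness to pay of the three hypothetical users from the example.
wtp = {
    "A": {"word": 250, "sheet": 50,  "slides": 0},
    "B": {"word": 0,   "sheet": 350, "slides": 50},
    "C": {"word": 100, "sheet": 50,  "slides": 200},
}

def standalone_revenue(product, price):
    """Revenue from one product at one price: only users whose
    willingness to pay meets the price will buy."""
    return sum(price for user in wtp.values() if user[product] >= price)

def bundle_revenue(price):
    """A user buys the bundle if their total willingness to pay meets the price."""
    return sum(price for user in wtp.values() if sum(user.values()) >= price)

print(standalone_revenue("sheet", 350))  # only User B buys: 350
print(standalone_revenue("sheet", 50))   # all three buy: 150
print(bundle_revenue(300))               # all three buy the bundle: 900
```

Even the best standalone price for the spreadsheet ($350, bought only by User B) yields $350, while the $300 bundle yields $900 from the same three users.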

Microsoft is great at coming up with bundled editions of its software. There are five different versions of Microsoft Office, all with different elements and at different price points, but of course all based on the same code base. Of course, it’s dangerous to overdo this. The seven different versions of Windows Vista were just confusing.

Obviously, bundling works for software. It also works for most forms of content, and it can work particularly well for digital content, because it’s so easy to build bundles of digital content at almost no additional cost.

Unfortunately, the music business largely missed the boat on this. By allowing Apple to sell individual songs through iTunes, the music industry broke the album model, and there’s probably no way to get it back. The new subscription models that some record labels are experimenting with are of course nothing but another form of bundling, although at a much lower average price point.

Newspaper publishers don’t seem to get bundling at all. That’s probably because in the world of the physical paper, they can only sell a very limited number of different bundles (maybe a local and a national edition). Even the only two newspapers that successfully charge for online editions, the Wall Street Journal and the Financial Times, only sell one or two different online bundles. That’s simply stupid. Why isn’t there an expensive Pro version of the FT with full access to all market data, maybe even bundled with additional data sources? A cheap student version? A standard version just with the news and opinion columns? A version for people who want to read the FT primarily on their mobile device and just want the most important headlines? This kind of creative price differentiation would certainly expand the number of subscribers dramatically.

And the same applies to other parts of the media industry: Why doesn’t Hulu (or iTunes) sell an attractively priced subscription for its most popular shows, for instance a bundle that gives you The Office, Family Guy and Saturday Night Live, but also throws in a number of less well-known shows? If that’s the easiest way to get these shows, many people will sign up. If the TV industry believes that many people are going to pay $2 or more per episode on iTunes, it is almost certainly wrong. Nobody does that in traditional media. People pay for a satellite or cable subscription, which is a classic bundle. Deciding for each show individually if it is worth $2 is simply too much work. Pay-per-view only works for big-ticket items, and there’s no reason why this should be different in online media.

It’s really remarkable how little media companies seem to get the basic rules of bundling: Sell a bundle of products that have different value to different people at a price that seems really, really attractive when compared to the prices of the individual items. Make sure that you offer different editions that appeal to different target markets. That’s all. Just ask Bill Gates.

(Picture: My aim is true, CC license)