Sufficiently on demand

One argument against network neutrality—in favor of use(r)-specific discrimination by networks—is that neutrality is a misnomer, bias is inevitable, and we should not “artificially” bias the system against latency-sensitive applications like video-on-demand. Now I think this argument and its variants have significant holes. I discussed them elsewhere and have an article coming out soon with Barbara van Schewick that goes into plenty of detail debunking many of the economic arguments against network neutrality.

Still, I acknowledge that one cost of network neutrality regulation is the bias against latency-sensitive applications like video-on-demand. But is this such a serious cost? Won’t competing providers of “on-demand” services innovate, compete, and find a way to meet consumer demands even under a network neutrality regime? Haven’t they already? I can get plenty of video content “on-demand” through a variety of sources. iTunes and Netflix are two services I use that are sufficiently on-demand for my tastes, though I am willing to wait a little while a video downloads from iTunes, and even longer (a day) for a movie to arrive in the mail. Tim Wu and I sat in his office the other day watching an episode of Lost he had downloaded from iTunes; the quality was fine and we didn’t have to wait long for our demands to be met. iTunes was sufficiently on-demand. I suppose some (many?) people are willing to pay (much?) more for instant video gratification, but I am not so sure. So while a bias against latency-sensitive applications like video-on-demand might entail a cost, I am not sure the magnitude of that cost is as significant as opponents of network neutrality make it out to be; perhaps the issue is not really the magnitude of the cost to society but rather who gets the surplus generated by emergent on-demand services.

(Note: Along similar lines, we might consider the evolution of IP telephony over the past decade.)

5 thoughts on “Sufficiently on demand”

  1. One may mischievously suggest that anything that gets people reading books rather than watching video is a good thing.

    On a more serious note, I just don’t see why it’s so bad to get this money from the subscribers rather than the people putting the content on the web. A big company or wealthy person who really wants video on-demand can pay for a bigger slice of bandwidth, right? The rest of us can just use Netflix or iTunes.

  2. Well, the argument made against network neutrality is that merely buying more bandwidth will not solve the latency problem; e.g., high quality on-demand video may require prioritization of packets, not just more bandwidth for subscribers. I am wondering—in this post—whether the existence of sufficient quality and sufficiently on-demand services (sufficient to meet the demands of many consumers) significantly discounts the alleged costs of network neutrality on latency-sensitive applications.

  3. Video over the Internet is not particularly sensitive to latency; it’s almost entirely sensitive to bandwidth. Bad network latency on the Internet is measured in seconds. Good network latency on the Internet is measured in hundredths of a second. No one needs their copy of Lost five seconds earlier. If you have a bad experience downloading a TV show from the iTunes Store or watching a video on YouTube, the problem is one of bandwidth, and that’s a network capacity issue, not a packet prioritization issue. You could solve it in the short-term by prioritizing iTunes packets so that the user gets a larger share of the total bandwidth, but then all the other services will suffer, making it a bad long-term solution. Killing the World Wide Web in order to make the iTunes Store work better is not a tenable solution.

    For all the talk of providing multimedia services, the real fight in network neutrality is IP-telephony. It’s low bandwidth but very latency sensitive. Even a fraction of a second in delay is noticeable to the user.

    IP-telephony is not an application where clever software engineering can solve the latency problem. In Internet gaming, one of the solutions to latency issues was to have the game servers make limited predictions about how players would move over the next fraction of a second. It’s not practical to do this with a voice conversation, so we’re stuck with having to get the voice packets from person A to person B as fast as possible, and that’s entirely in the hands of the network provider. It’s not clear what the solution is going to be, moving forward.

    Letting the users be responsible for bandwidth upgrades is reasonable. The problem of moving packets quickly, though, is not the same problem, and it isn’t one that can be solved simply by buying more of something (cables, routers, servers, etc.). I dislike the idea of losing network neutrality, but there are real services for which network neutrality is a problem. I’m not sure what the solution is.
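    The bandwidth-versus-latency distinction in this comment can be sketched with rough numbers. This is a toy calculation, not a measurement; the file size, link speed, and round-trip times below are illustrative assumptions.

    ```python
    def transfer_seconds(size_mb, bandwidth_mbps, rtt_ms):
        """Rough one-shot transfer time: serialization delay plus one round trip.

        Deliberately ignores TCP slow start, loss, and protocol overhead;
        the point is only the relative weight of bandwidth versus latency.
        """
        serialization = (size_mb * 8) / bandwidth_mbps  # seconds to push the bits
        latency = rtt_ms / 1000.0                       # one round trip, in seconds
        return serialization + latency

    # A ~500 MB TV episode over a 5 Mbit/s link: latency is noise.
    print(transfer_seconds(500, 5, 20))    # ~800.02 s with good latency
    print(transfer_seconds(500, 5, 500))   # ~800.5 s even with terrible latency

    # A single 20 ms voice packet (~160 bytes): latency is everything.
    print(transfer_seconds(0.00016, 5, 20))    # ~0.02 s
    print(transfer_seconds(0.00016, 5, 500))   # ~0.5 s
    ```

    For the TV episode, a twenty-five-fold worsening of latency changes the download time by less than a tenth of a percent; for the voice packet, it changes the delivery time twenty-five-fold. That is the whole asymmetry the comment describes.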

  4. For the most part, I agree with you, Bryan. There is a distinction, though, between video over the Internet and the high quality video-on-demand (also over-the-Internet) services that networks would like to be able to provide. I believe the latter are sensitive to latency (as well as bandwidth and jitter). In the end, I think you make my basic point—as far as video services are concerned, currently available video over the Internet services are sufficient. Sure they might be improved with prioritization and enhanced service offerings, but the marginal benefits don’t seem to justify prioritization.

  5. Even that sort of video-on-demand isn’t particularly sensitive to latency so long as the bandwidth is there. Current generation TV-top boxes for video-on-demand, which use the internal cable system, not the Internet, require several seconds to begin playing. That is much longer than even poor latency over the Internet would delay the beginning of playing a video if adequate bandwidth were available.

    High latency is only a significant problem in applications which require a back-and-forth between the two parties on each end of the connection: voice conferencing, video conferencing, online gaming, and so forth. Applications where one of the parties is a mostly-passive receiver of bulk data can have high latency and periodic Internet congestion masked by buffering and other software techniques.

    Even normal web browsing is more latency sensitive than on-demand video; Google is very interested in having their homepage and search results load in a fraction of a second on your computer, and in that time scale, latency matters.

    If there’s going to be a corporate fight between competing on-demand video services trying to use the Internet, bandwidth is going to be the problem. The solution of giving some vendors prioritized packets isn’t a real solution, as the total bandwidth isn’t there. As a practical matter, that total bandwidth isn’t going to be increased to the level required. The practical solution is to physically locate the storage of data closer to the users, and companies have already been doing this for years. Akamai has been providing this as a service for a decade, and next-generation peer-to-peer file sharing applications are getting smarter about the location of the peers. We’re going to be seeing more and more third-party boxes sitting in ISPs’ points of presence.
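    The claim that buffering masks latency and congestion for one-way video can be seen in a toy model. All numbers here are hypothetical: downloads arrive one chunk per second with a single 3-second congestion spike, and a stall occurs only when a chunk arrives after its scheduled play time.

    ```python
    def playback_stalls(chunk_arrival_s, chunk_len_s, prebuffer_chunks):
        """Count stalls in a toy streaming model.

        chunk_arrival_s: time (s) at which each chunk finishes downloading.
        chunk_len_s: seconds of video each chunk contains.
        prebuffer_chunks: chunks buffered before playback starts.
        Playback of chunk i is scheduled at start + i * chunk_len_s; a
        stall is any chunk that arrives after its scheduled play time.
        """
        start = chunk_arrival_s[prebuffer_chunks - 1]  # wait for the prebuffer
        stalls = 0
        for i, arrival in enumerate(chunk_arrival_s):
            if arrival > start + i * chunk_len_s:
                stalls += 1
        return stalls

    # Steady 1 s/chunk downloads with one 3 s congestion spike after chunk 4.
    arrivals = [1, 2, 3, 4, 5, 9, 10, 11, 12, 13]

    # Adequate bandwidth (2 s of video per 1 s of download): the spike is absorbed.
    print(playback_stalls(arrivals, chunk_len_s=2, prebuffer_chunks=2))  # → 0

    # Bandwidth merely equal to the playback rate: the spike causes stalls.
    print(playback_stalls(arrivals, chunk_len_s=1, prebuffer_chunks=2))  # → 5
    ```

    With surplus bandwidth the buffer grows faster than playback drains it, so even a multi-second hiccup never reaches the viewer; interactive traffic like voice has no equivalent trick, which is the asymmetry this thread keeps returning to.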
