TIL Netflix Packets Never Leave Town!

I got a message from my ISP's (Shaw's) Bandwidth Team today. I wasn't able to return the call, but I suspect they were calling to scold me about my bandwidth usage.

Some History

Bandwidth cap policies were a knee-jerk reaction from ISPs ill-prepared for the era of file-sharing on Napster and, later, voracious BitTorrent usage. An era when someone using hundreds of gigabytes of bandwidth every month was likely a digital media hoarder, pirating more MP3s and MKVs than they could ever consume in a lifetime. An era of poor network management technologies, when a heavy movie pirate could legitimately have a massively negative impact on the other customers sharing their node.

I would never condone the hostile vilification of customers that these sorts of policies brought on. However, I am a reasonable person, and I can understand where the ISPs were coming from. On the one hand they had the MPAA to deal with; on the other hand they had technology and networks that were still maturing and not totally up to the task.

Times Change(d)

In 2016, it is a completely different landscape.

ISPs, hardware vendors, and standards bodies have come a long way in managing network congestion. The reason you don't see YouTube videos buffering is not that your ISP and copyright lawyers have convinced your neighbour to stop pirating with BitTorrent. It's that the network has improved in general.

Even if your neighbour has stopped torrenting movies, though, they're probably consuming more media online than ever. If they're anything like me, they've been using perfectly legit streaming services, and the bandwidth used by these streaming services is just as intensive as BitTorrent. Netflix and friends are not doing anything magical to compress the video any more than the high-quality rips you can find on The Pirate Bay.

Free-for-all (well, except the customer)

However, they have done something magical that makes those bits free for your ISP.

In 2014, Netflix revealed that they provide an “Open Connect Appliance” to ISPs. Free of charge.

Netflix's OCA is a $20,000 server that sits inside your ISP's datacenter and stores a good chunk of Netflix's library. They give it away for free because it is a key factor in loading Netflix movies without having to wait for buffering. It stands to reason that YouTube, Crackle, Akamai, or any company looking to serve content fast has a similar setup (but they haven't said as much).

Before today, I assumed that Netflix probably only had a few of these boxes in each ISP’s network. I assumed Shaw’s would be located in Calgary or wherever their HQ is.

Nerd Stuff

So I dug into it using some very rudimentary investigation tools. Every resource on the internet has a unique URL, and Netflix's URLs seem to have logical names, so it wasn't really too hard to figure out.

Here’s the breakdown:

When I load a video from Netflix, it's served from a URL that starts with: https://ipv4_1-lagg0-c005.1.ywg002.shaw.isp.nflxvideo.net.

This isn't a website you can actually visit; it's just the URL where Netflix videos are hosted. For me.

You see, before I even load the video, Netflix has figured out the closest physical location of the video file I’ve requested. When every second of load time counts, every kilometre of fibre is important. Hosting a file in Winnipeg instead of Calgary makes a difference.
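If you're curious, you can check this yourself by resolving the hostname from your own Netflix session. Here's a minimal Python sketch; the hostname below is the one from my session, so treat it as a placeholder for whatever yours turns out to be:

```python
import socket

# Placeholder: the OCA hostname my Netflix session was served from.
# Grab the equivalent hostname from your own session's video requests.
host = "ipv4_1-lagg0-c005.1.ywg002.shaw.isp.nflxvideo.net"

# Resolve it to an IP address; for me, this lands inside Shaw's network.
ip = socket.gethostbyname(host)
print(host, "->", ip)
```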

I think it should be clear to most what's going on in the URL, but if not, I'll break it down further (there's also a small parsing sketch after the list). URLs are read from right to left.

.net = network
.nflxvideo = their stock symbol + the word ‘video’
.isp = Internet Service Provider, indicating that the subdomains to the left of this label are specific to an individual ISP
.shaw = my ISP
.ywg002 = airport code for Winnipeg + 002, probably the #2 OCA in Winnipeg
ipv4_1-lagg0-c005.1. = harder to guess exactly. "ipv4" likely refers to the address family of my connection, and "lagg0" looks like the name of a link-aggregation network interface on the appliance itself.
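Here's a toy sketch that splits the hostname into those parts. The meanings are just my guesses from the list above, not anything Netflix documents:

```python
# Split the OCA hostname into labels and pair each with my guess at
# its meaning. Nothing here is official; it's read off the list above.
host = "ipv4_1-lagg0-c005.1.ywg002.shaw.isp.nflxvideo.net"

guesses = [
    "interface/cluster identifier (guess)",  # ipv4_1-lagg0-c005
    "unknown index",                         # 1
    "site: airport code + number",           # ywg002
    "ISP name",                              # shaw
    "'isp' marker",                          # isp
    "Netflix video domain",                  # nflxvideo
    "top-level domain",                      # net
]

for label, guess in zip(host.split("."), guesses):
    print(f"{label:>20}  {guess}")
```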

In other words, when you request a video from Netflix, your request does not get routed through expensive backbone connections to some faraway server in Dallas or San Francisco. It doesn't even leave the city! It might only go a few metres down the street.

To further confirm this, I ran a traceroute, a command that reveals each network hop a packet passes through on the way to its destination.
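If you want to reproduce the check, here's a quick sketch that shells out to the standard traceroute utility (assuming it's installed, and again using my session's hostname as a placeholder):

```python
import subprocess

# Placeholder hostname from my session; substitute your own.
host = "ipv4_1-lagg0-c005.1.ywg002.shaw.isp.nflxvideo.net"

# Run the system traceroute and print each hop. The question to eyeball:
# does any hop ever leave the ISP's network?
hops = subprocess.run(["traceroute", host], capture_output=True, text=True)
print(hops.stdout)
```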

[Screenshot: traceroute output to the Netflix hostname, June 30, 2016]

I'm uncertain where those IP addresses are physically; they don't have convenient hostnames that give it away. Maybe a Shaw employee could leave the details in the comments. But it's clear that the packets absolutely stay inside Shaw's network.

Stop Harassing Customers

In conclusion, when packets do not leave your ISP's network, your ISP does not have to pay a third party to carry those packets to and from their destination. Whether you watch 1 hour of Netflix per week or 100 hours, it doesn't cost your ISP any more money.

If this is true for other content providers and content distribution systems (and it probably is), then we actually have a network architecture where the heaviest data is the least expensive to deliver, if not completely free.

Harassing customers about bandwidth usage is nonsensical.


Update for clarity: The IP addresses that the Shaw related hostnames resolve to are owned by Shaw themselves (as verified by ARIN).
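For anyone who wants to repeat that verification, here's one way to sketch it, using the system whois client (which consults ARIN for North American addresses):

```python
import socket
import subprocess

# Placeholder hostname from my session; substitute your own.
ip = socket.gethostbyname("ipv4_1-lagg0-c005.1.ywg002.shaw.isp.nflxvideo.net")

# Ask whois who owns the address block; ARIN records include OrgName
# and NetName fields identifying the owner (Shaw, in my case).
record = subprocess.run(["whois", ip], capture_output=True, text=True).stdout
for line in record.splitlines():
    if line.startswith(("OrgName", "NetName")):
        print(line)
```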

The Story of Alkaline Trio’s Goddamnit

Chicago's Alkaline Trio is one of the most influential bands from the turn-of-the-century "emo" era.

I came across Original Sin, a 2008 documentary about their first full-length release, Goddamnit. It's a great watch! Check it out.

Part 1

Part 2

https://www.youtube.com/watch?v=pm1LhObR3F4

Part 3

https://www.youtube.com/watch?v=AzjLoPEeMyU

Part 4

Is missing 🙁

Are Humans Bad Drivers?

Whenever self-driving cars are discussed, almost everyone — the media, tech podcasts, Twitter, your uncle — universally asserts that "humans are bad drivers," usually adding something to the effect of "we need machines to save us from ourselves." As a contrarian through and through, foregone conclusions like that immediately trigger my gag reflex.

I don't think humans are bad drivers, and, more importantly, I don't agree with the tone of the conversation.

It is true that Google's autonomous vehicle program has an incredible safety record, having driven 1.6 million kilometres while being involved in only 12 minor accidents. However, those statistics alone don't paint the entire picture.

To compare the raw driving abilities of robots vs. humans, you have to level the playing field. Comparing the entirety of human driving statistics to the entirety of Google's testing is not at all level. Google's cars drive under ideal road conditions (they can't handle rain or snow), with precise GPS-based instructions, on nice roads, in well-maintained cars.

In order to compare the safety records of humans and robots, you would have to, at the very least, rule out all collisions that occurred under less-than-ideal driving conditions, in improperly maintained cars, on bad roads, or with drivers not following GPS directions. You should probably also rule out all collisions caused by inexperienced drivers, since robots have perfect knowledge of driving rules and car physics. Given a level playing field against the current state of robotics, I suspect a random sampling of American drivers covering the equivalent of Google's fleet mileage would produce similar safety statistics. I could be wrong, but I've never heard of a study comparing these factors.
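To see how slippery the naive comparison is, here's the back-of-envelope arithmetic using the figures above. The human baseline is deliberately left empty, because the properly filtered statistic is exactly the study that seems to be missing:

```python
# Naive, unlevel comparison -- illustrative only, figures from above.
google_km = 1_600_000    # distance driven by Google's fleet
google_accidents = 12    # minor accidents over that distance

rate = google_accidents / (google_km / 1_000_000)
print(f"Google: {rate:.1f} minor accidents per million km")

# PLACEHOLDER: a fair human baseline would count only collisions under
# ideal conditions, in well-maintained cars, by experienced drivers.
# No such filtered statistic is plugged in here.
human_rate_per_million_km = None
```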

It is the soft skills surrounding driving that humans are terrible at. A short list of things that we are bad at:

  • staying awake for long periods of time
  • staying focused
  • maintaining quick reaction times
  • stopping ourselves from driving drunk
  • driving for the road conditions
  • following traffic at appropriate distances
  • etc, etc.

Remove the human element from the equation and a whole slew of potentially dangerous behaviours disappears from the roadways. Case closed.

Except this changes the fundamental question from "are robots better drivers than humans" to a question of human fallibility versus software fallibility, i.e. does software make a fatal mistake more or less often than a human?

While I believe this is at least in part what the manufacturers are trying to determine with real-world road tests, the scale of these tests is much, much too small. According to the US Federal Highway Administration, Americans collectively drive roughly 341 million miles every hour! Meaning, every ten seconds or so, Americans drive as many miles as the entirety of miles Google has logged to date. The level of software reliability required for the scale of the US driving machine is orders of magnitude greater than anything tested so far. Is software in other mass install bases this reliable? How many blue screens of death per hour does the world receive in aggregate?
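A quick sanity check of that scale claim, using the figures above (the hourly rate is an approximation derived from FHWA annual vehicle-miles):

```python
# Back-of-envelope scale check.
us_miles_per_hour = 341e6            # approx. aggregate US driving per hour
google_total_miles = 1.6e6 / 1.609   # Google's ~1.6 million km, in miles

seconds = google_total_miles / us_miles_per_hour * 3600
print(f"US drivers match Google's total mileage every {seconds:.0f} seconds")
```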

A world where robots are better at driving than humans, under a variety of conditions, en masse, seems a long way off. I think everyone from Joe Q. Public to knowledgeable tech pundits is drastically underestimating the difficulty of this problem.

More importantly, though, statements like "humans are bad drivers that need to be replaced by robots" express a human inferiority complex that is completely unfounded in 2016. If we're going to go around saying shit like this, the robot overlords have already won.