I was reading an article recently ((where “recently” is relative to when I started writing this, not when I actually published it…)) about the challenges in getting IPv6 up and running – before we finally run out of IPv4 addresses, and can’t plug anything else into the internet. One big change would be the end of address sharing – NAT – since there’ll be enough IPv6 addresses for every computer in your house to have a globally unique address. NAT is annoying, and in general we’ll be better off without it, but if every device is visible to the whole internet, there are some interesting implications which will only be advantages if we work out how to harness them. So here is my optimist’s guide to next year’s internet…

Good Riddance

The internet was originally designed to have an address for everything that was connected to it – if you ask a remote server for something, it sends the response back to your address; if you want to make something available, you advertise your address, and so on. But that didn’t scale very well, so a whole host of protocols and programs have sprung up for sharing one public address between maybe dozens of computers, and handling the connections going in and out. This is fiddly, and it could get even worse – the pessimists predict that instead of IPv6, we’ll end up with ISPs sharing IP addresses between multiple broadband connections, using yet more NAT (so-called “carrier-grade” NAT). Not only will your Skype client be asking your router for a “pinhole” for incoming data, your router will have to ask your ISP for a pinhole through to the “real” internet.
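Here’s a toy sketch of what that double layer means (plain Python, with made-up addresses – a real NAT also tracks protocols, timeouts and much more besides). Your outbound traffic gets rewritten twice, once at each layer:

```python
# Toy model of "carrier-grade" double NAT. Addresses are made up
# (though 100.64.0.0/10 is the real shared address space ISPs use for this).

class Nat:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.mappings = {}        # (private_ip, private_port) -> public_port
        self.next_port = 40000

    def outbound(self, src_ip, src_port):
        """Rewrite an outgoing packet's source address, remembering the mapping."""
        key = (src_ip, src_port)
        if key not in self.mappings:
            self.mappings[key] = self.next_port
            self.next_port += 1
        return self.public_ip, self.mappings[key]

home_router = Nat("100.64.1.23")   # its "public" side is really the ISP's private range
isp_nat     = Nat("203.0.113.7")   # the address the wider internet actually sees

# Your laptop (192.168.0.10, port 5060) opens a connection: rewritten twice.
hop1 = home_router.outbound("192.168.0.10", 5060)
hop2 = isp_nat.outbound(*hop1)
print(hop1)   # ('100.64.1.23', 40000)
print(hop2)   # ('203.0.113.7', 40000)
```

An incoming connection is the hard part: it only works if an entry already exists in both tables – hence the pinhole through a pinhole.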

Firewall on a Chip

So, giving everything an IPv6 address will make everything easier, right?

Well… it turns out, the very things that make NAT a nuisance have a rather handy side-effect: ne’er-do-wells can’t send random data to your laptop trying to hack in and steal that text file you store all your passwords in. Your router has to look at packets to work out which computer they’re for anyway, and an inbound packet that doesn’t match a connection someone inside opened has nowhere to go, so it gets dropped – the router ends up acting like a simple firewall without really trying. And when we connect to a trusted network, we expect it to provide this basic insulation from the wider internet.
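That accidental firewall is nothing more than the NAT lookup failing. Something like this (simplified Python again – a real router tracks full connection state):

```python
# The accidental firewall: an inbound packet only gets forwarded if some
# outbound connection already created a mapping for its destination port.

mappings = {40000: ("192.168.0.10", 5060)}   # created earlier by outbound traffic

def route_inbound(dst_port):
    if dst_port in mappings:
        private_ip, private_port = mappings[dst_port]
        return f"forward to {private_ip}:{private_port}"
    return "drop"   # nobody inside asked for this, so it has nowhere to go

print(route_inbound(40000))   # forward to 192.168.0.10:5060
print(route_inbound(22))      # drop – the firewall-for-free side-effect
```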

Wi-fi has already made this assumption out of date – devices like smartphones, netbooks, and tablets are more or less designed to connect to untrusted networks; even if you never use public hotspots, you don’t know how well the office network you’ve just borrowed the WPA key for is secured. So this is probably a good time to stop trusting routers to protect us.

The alternative is that every computer runs a “personal firewall” – a program that checks all traffic on the machine, both inbound and outbound. At the moment, this means an application you install, which clogs up your CPU and memory, slowing down your machine. So maybe the next generation of devices won’t just have a simple network controller, they’ll have a fully flexible Network Processing Unit (NPU) – a firewall on a chip, with its own memory, perhaps a bit of solid-state storage for accumulated rules, and so on. The CPU won’t be taken up with traffic analysis; the OS authors won’t have to implement an entire firewall, just drivers for communicating with the NPU; and yet you’ll be carrying your own firewall whatever network you connect to.
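To be clear, nothing like this exists today, but I imagine the OS’s half of the job looking roughly like this sketch – the NpuDriver class and the rule format are entirely invented:

```python
# Hypothetical NPU driver interface – nothing like this exists yet;
# the class and rule format are invented purely for illustration.

from dataclasses import dataclass

@dataclass
class Rule:
    direction: str   # "in" or "out"
    protocol: str    # "tcp", "udp", ...
    port: int
    action: str      # "allow" or "deny"

class NpuDriver:
    """The OS's whole job: serialise rules and hand them to the chip."""

    def __init__(self):
        self.loaded = []   # stand-in for the NPU's own rule storage

    def push(self, rule: Rule):
        # In a real driver this would be a register write or DMA transfer;
        # after that, the CPU never sees the packets the rule applies to.
        self.loaded.append(rule)

npu = NpuDriver()
npu.push(Rule("in", "tcp", 22, "deny"))     # no inbound SSH on hostile networks
npu.push(Rule("out", "udp", 53, "allow"))   # DNS lookups are fine
```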

Splinternet

Now if every device (eventually) has an NPU, what other assumptions can we challenge? First up is that other hallmark of the “trusted” network, the notion of “local” resources. In future, rather than relying on topology (“it’s on the same router, therefore it’s local”), trust decisions will have to be made explicitly – the VPN will be more important than the LAN. If you bring your laptop to a meeting, you’ll only be granted access to the internet, and maybe to send jobs to a nearby printer; but you’ll still have access to your shared server back at HQ (if it’s not all “in the cloud” anyway). Each device will carry out its own authentication, deciding whether to let you in based on who you are rather than where you’re connecting from.
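In code terms, the question changes from “which network did this come from?” to “can you prove who you are?”. A minimal sketch, using an HMAC-signed token as a stand-in for whatever real credential scheme would actually get used:

```python
# Trust by credential, not by topology.

import hashlib
import hmac

SHARED_SECRET = b"provisioned-when-the-device-joined-the-org"

def sign(device_id: str) -> str:
    return hmac.new(SHARED_SECRET, device_id.encode(), hashlib.sha256).hexdigest()

def allow_access(device_id: str, token: str) -> bool:
    # Note what we *don't* check: subnet, router, physical location.
    return hmac.compare_digest(sign(device_id), token)

print(allow_access("alices-laptop", sign("alices-laptop")))    # True, from anywhere
print(allow_access("strangers-laptop", "not-a-valid-token"))   # False, even on the LAN
```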

The key thing about this is that it is decentralized – devices would be able to talk to each other in all sorts of ways, using central servers only as brokers for authentication. Devices could share – or even trade – spare resources such as bandwidth or CPU time. A laptop in a neighbouring room might be able to access a wi-fi hotspot that’s out of your range, or be physically closer to a file server than you. (Just make sure you’re encrypting all your traffic – another job for the NPU perhaps?)
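The brokering might work something like this sketch – the server signs a short “introduction” once, and after that the two devices can verify each other with no further round-trips. The voucher format is entirely made up, and a real design would use public-key signatures so that clients couldn’t forge vouchers:

```python
# Brokered introductions: the central server's only job is to sign a
# voucher pairing two devices. Everything here is invented for the sketch.

import hashlib
import hmac
import json

BROKER_KEY = b"toy-key-shared-with-all-clients"   # the weak point of this toy version

def broker_issue(device_a, device_b, resource):
    voucher = {"from": device_a, "to": device_b, "resource": resource}
    payload = json.dumps(voucher, sort_keys=True).encode()
    voucher["sig"] = hmac.new(BROKER_KEY, payload, hashlib.sha256).hexdigest()
    return voucher

def peer_verify(voucher):
    """Run locally by the device being asked to share – no server round-trip."""
    sig = voucher.pop("sig")
    payload = json.dumps(voucher, sort_keys=True).encode()
    voucher["sig"] = sig
    expected = hmac.new(BROKER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

v = broker_issue("my-phone", "neighbours-laptop", "wifi-uplink")
print(peer_verify(v))   # True – the laptop lends the phone its hotspot
```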

HTTP over BitTorrent

Now at this point anyone who actually knows how all this works will no doubt be champing at the bit to tell me why everything I’ve written is outrageous nonsense (and everyone else will have got bored and stopped reading), so I’ll just mention one last crazy idea.

Connecting your smartphone to your next-door neighbour’s laptop is all very well, but it’s hardly going to be a fast connection, is it? On the other hand, what if you’re both trying to access the same content – wouldn’t it be great if you could somehow access each other’s cached data?

Obviously you’ve got to know that the data is kosher, but imagine an HTTP/2.0 response of “333 Use This P2P Hash”, or handing your browser a .torrent file. This wouldn’t have to be limited to huge files, either – the common resources of a website (JS, CSS, “chrome” images, etc) could be bundled up into one torrent, like the episodes of a series, for you to grab and extract what you need.
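For what it’s worth, here’s roughly how a client might handle such a response – bearing in mind that the 333 status and the X-P2P-SHA256 header are pure invention on my part, and fetch_from_peers() stands in for a real BitTorrent or DHT lookup. The crucial step is verifying the hash, so it doesn’t matter that the bytes came from a stranger:

```python
# Sketch of a client handling the imaginary "333 Use This P2P Hash"
# response. http_get and fetch_from_peers stand in for a real HTTP
# client and a real BitTorrent/DHT lookup respectively.

import hashlib

def fetch(url, http_get, fetch_from_peers):
    status, headers, body = http_get(url)
    if status != 333:
        return body                        # ordinary response, nothing special to do

    expected = headers["X-P2P-SHA256"]     # the origin server vouches for the content...
    data = fetch_from_peers(expected)      # ...but any nearby peer can supply the bytes
    if hashlib.sha256(data).hexdigest() != expected:
        raise ValueError("peer sent data that doesn't match the server's hash")
    return data   # kosher: bytes from a stranger, trust from the origin
```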

Maybe all this is a bit far-fetched, but unless people start working on this next generation of “super-distributed internet” soon, we’re going to be stuck in the “IPv6 is more pain than gain” loop for some time to come…