pfraze · 2013-06-29 · Original thread
Well, in the spirit of the topic, here are my counter-predictions.

I'm betting that between now and quantum computers, memristors will play a significant role, and (as I understand them) they'll push us much further toward parallel computing than multi-core processors and device networking will. A friend of mine believes they'll behave as a network of small computing units, so he's betting on an actor model. We'll see!

The book "Trillions" [1] talks a lot about computing future, and focuses very heavily on the idea of "device fungibility" with "data liquidity" - basically the idea that the computing device is insignificant and replaceable, as the computing work&data can move freely between them. When you consider how prevalent general-computing devices are-- microwaves, toasters, cars, phones, dish-washers, toys, etc-- this is pretty compelling. I highly recommend that book.

Now, I personally think localized connectivity and sync between devices, strong P2P Web infrastructure, and more powerful client participation in the network will reduce the importance of vertically-scaled central services and give much more interesting experiences to boot (as things in your proximate range will figure much more heavily in your computing network). "Cloud computing" as we have it now is really just renting instead of buying. Yes, you can easily spin up a new server instance, but it's much more interesting to imagine distributing a script which causes interconnected browser peers to align under your software. Easy server spin-up? Try no server! That means users can drive the composition of the network's application software, which should create a much richer system.
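
To make that concrete, here's a rough sketch of what "interconnected browser peers" could look like, assuming WebRTC data channels as the transport; the exchangeSignals helper below is hypothetical and stands in for however the first introduction gets bootstrapped (a tracker, a QR code, local discovery, whatever).

    // Sketch only: two browser peers forming a direct data channel --
    // the kind of primitive a "no server" application would build on.
    declare function exchangeSignals(
      offer: RTCSessionDescription | null
    ): Promise<RTCSessionDescriptionInit>; // hypothetical signaling step

    const peer = new RTCPeerConnection({
      iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
    });
    const channel = peer.createDataChannel("app");

    channel.onopen = () => channel.send(JSON.stringify({ hello: "peer" }));
    channel.onmessage = (ev) => console.log("from peer:", ev.data);

    // The offer/answer exchange still has to travel somehow; that's the
    // hypothetical exchangeSignals step above.
    const offer = await peer.createOffer();
    await peer.setLocalDescription(offer);
    const answer = await exchangeSignals(peer.localDescription);
    await peer.setRemoteDescription(answer);

Once channels like that exist between visitors, the peers themselves carry the application, and the "server" collapses to little more than whatever hands out the initial script.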

Considering privacy issues, I think this is an important change. Not only is it inefficient to route everything through centralized services and public networks, it's also unsafe. P2P and localized network topologies improve that situation. Similar points can be made about network resiliency and single points of failure -- how efficient is it to require full uptime from central points vs. minimal uptime from a mesh? I imagine the answer depends on how complex the decentralized systems end up being, but I'm optimistic about it.
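
As a rough back-of-envelope on that uptime question (the numbers here are made up, purely to illustrate the shape of the trade-off): a record replicated across a bunch of flaky peers can be as reachable as one sitting on a highly-available central server.

    // Illustrative numbers only: a single server at 99.9% uptime vs. a
    // record replicated across 20 peers that are each online just 30%
    // of the time. The record is unreachable only if every peer is off.
    const serverUptime = 0.999;
    const peers = 20;
    const peerOnline = 0.3;

    const meshAvailability = 1 - Math.pow(1 - peerOnline, peers);

    console.log(serverUptime.toFixed(4));     // 0.9990
    console.log(meshAvailability.toFixed(4)); // 0.9992 -- comparable, with no central point

The mesh only asks minimal uptime of each participant, which is exactly the trade I'm gesturing at.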

Along with the network infrastructure and computing device changes, I think the new VR/AR technology is going to flip computing on its head. Not only do we gain much more "information surface area" -- meaning we can represent a lot more knowledge about the system -- but we gain a ton of UX metaphors that 2D can't do. One thing I get excited about is the "full spectrum view" of a system, where you're able to watch every message passed and every mutation made, because in the background you can see them moving between endpoints and notice things like, hey, that process shouldn't be reading that, or, ha, so that's where that file is saving to.

So TL;DR: I say the future of computing is VR/AR, peer-connective, user-driven, and massively parallel.

[1] http://www.amazon.com/Trillions-Thriving-Emerging-Informatio...
