E3 has come and gone for another year, and as ever it brought promises of untold future glory on the back of marvellous technical wizardry. A core theme of the conference was clear: we stand at the precipice of consumer-grade cloud computing.
Clarke’s third law states that any sufficiently advanced technology is indistinguishable from magic, and currently we find ourselves in the audience for a magician’s illusion, and we’re not talking about The Witcher 3 coming to the Switch. Cloud computing has the potential to revolutionise the entire world, and tangentially video games, to a degree we cannot yet measure.
Cloud gaming may sound particularly pleasant to the average consumer: no more paying for expensive boxes that sit under the TV gathering dust and whirring away like a Boeing 747. We can happily hand off processing power to the cloud, letting providers deal with the increase in power consumption and the seemingly endless queue of updates required by consoles. We will all happily pay a fee, have access to our games, and trust the internet to provide for us ad infinitum.
And yet, what might we lose?
In the immediate term, the likely answer is ‘nothing at all’. Microsoft has announced that it remains committed to consoles, providing technical specifications for the next Xbox that represent a true generational leap across the board. By developing and manufacturing a new console, Microsoft is committing to the conventional console model for the foreseeable future. The experience on this beast of a machine will doubtless be a considerable improvement on existing cloud services, but what happens when that commitment ends? After all, how realistic is it to expect consumers to purchase expensive hardware for next to no reason once the cloud experience catches up with its physical ancestor?
Eventually, all consumer electronics will be replaced by this model. It is inevitable, and it makes perfect sense for those who own the infrastructure and offer the associated services. It is an extension of what is known in the technology industry as ‘lock-in’, and it is increasingly prevalent. Big technology corporations provide incentives for staying within their ecosystems, while actively working to exclude developers and providers that don’t play the game how they might like.
Microsoft and Google specifically are the only corporations with the sheer infrastructure to make mass cloud computing a physical reality; it requires a truly global private network and thousands of data centres to pull off. Consumers may find their choices limited to these two entities and their respective services if they are not careful.
The concern for me is that processing power is a democratic tool. Wielding computing power gives the individual a kind of leverage in the digital age. The internet allows an individual to spread information around the world, and access to it is hugely important, but it pales in comparison to an individual being able to harness the power of their own computer for their own ends. It is tempting to believe that high-powered computers are chiefly the realm of the gaming nerd, obsessed with achieving an arbitrarily high processor clock speed or frame rate.
The reality is quite different: high-powered PCs are used by filmmakers to tell stories, and sometimes not everyone wants those stories told. Think of it this way: do you think Google would allow its own servers to be used to render video for a documentary exposing it as part of an antitrust scandal?
Consider a different example: academia’s ongoing migration to the cloud for its IT resources. Research at universities is incredibly expensive, and saving money by moving IT infrastructure from on-premises to the cloud is very appealing for these institutions. One of the most resource-intensive areas of research, in terms of computing power, is Artificial General Intelligence and Machine Learning, which also happens to be a particular area of interest for Google and Microsoft.
It is not difficult to imagine a conflict of interest, or the compromising of research and data, where academic institutions rely on computing power hosted by their direct rivals. Would Google allow its competitors unfettered access to its own infrastructure to beat it to a breakthrough potentially worth billions in future revenue? These cloud-infrastructure arrangements could also open the door to corporate espionage. Given Google’s spotty track record with privacy and data retention, it is not unreasonable to assume that research data, as well as data pertaining to individuals, could be at risk.
Instead, we look sure to hand the keys to the kingdom over to a handful of massive corporations and keep nothing in reserve for ourselves. It begins in the consumer electronics space, where we grow comfortable handing resources back to the corporations that originally empowered us. Eventually, it could lead somewhere far darker.
There exists a conceptual contract between us and the corporations from whom we purchase goods and services. We are responsible for making the best of any new service and for mitigating the worst of it. While it may seem a long way off, a future without meaningful consumer hardware could be upon us sooner than we think. I would ask all those reading this to at least consider the road we are on, because for all the wonderful magic tricks we will soon see from cloud computing, we should be under no illusion: we may end up being the stooge after all.