In recent years, Intel has moved towards integrating some pretty nifty remote administration features into its CPUs. While this may be a good idea for certain enterprises, it may quickly turn into a nightmare as soon as exploits and vulnerabilities are found. And guess what?
Software has bugs. Hey, it happens, everybody makes mistakes. But in this case, the mistakes can’t be corrected in time (before an attacker exploits them). That’s because, in typical monopolist corporation fashion, Intel is obscuring the process by not allowing the security community to analyze whatever code the company decides to shove into our machines. The same argument holds true for any proprietary code, especially Microsoft’s Windows, which after 20 years of fixes is still the most vulnerable mainstream operating system.
The following article describes the problem pretty well:
It’s probably only a matter of time until a clever attacker compromises the company’s buggy code. Of course, Intel will eventually patch its security holes, but given that the company’s CPUs are used across the world in some pretty sensitive contexts, there’s no telling how much damage such attacks could cause.
As for us mortals, we are at risk of having our privacy compromised even by petty criminals. This is because there’s a large window of opportunity between the time a security hole is found and the time Intel gets around to fixing it for lower-priority customers.
And don’t even get me started on how governments across the world can (and probably will) force Intel’s hand into handing over political dissidents on a silver platter. Privacy? What privacy?
If you want to learn more, here’s another article on the same topic:
I wrote this hot on the heels of a Dissected News piece about Cyber-Warfare. There’s additional interesting information to be found there.