Category: Technology

Analysis regarding various technologies.

  • Education in the New Machine Age

    Education in the New Machine Age

    Nobody can deny that we’ve entered a new era of technological progress. The so-called Digital Revolution^ is but the latest in a series of intellectual milestones that started with the Industrial Revolution^. However, there’s something special about this era: exponential development. Our technology advances faster than ever before.

    It’s not only board game players that lose to software algorithms^. It’s all of us^. It’s not that we’re stupid; far from that. After all, we created the software that is right now outperforming us in an ever-increasing number of areas, eliminating jobs across all industries.

    But the human brain is perfectly capable of adapting to the intellectual explosion going on. The problem is that our social structures aren’t. And there’s a very simple reason behind that…

    Education

    Our children are our future. Cliché? That which is a fact of nature cannot be cliché. But there is still a long way to go until we can claim that we truly have integrated this knowledge. Yes, we all know that our children will write history. Despite this, when the educational system is examined thoroughly, it’s obvious that most governments and societies on Earth see education as just a way to teach children to behave (i.e. program them to respect rules^).

    As this article^ points out, “education’s goal seems to have devolved into facilitating the creation of a homogenized population, which has impacted everything from the job market to mental health.” Children are being taught the dos and don’ts, some skills so they can contribute to society, and then served some special sauce consisting of various forms of indoctrination (be it religious, nationalistic, materialistic – anything that can impair their decisional capabilities and make them easier to control).

    That strategy worked for a while, but its days are numbered. We live in an age when vulnerable people can be manipulated by foreign agents instantly, through the Internet. We need look no further than the Russian interference in the 2016 US elections^.

    This is an age when almost everything can be (or will be) automated, an age when the push of a button can bring down entire nations^. We need empathic, creative, open-minded people that can keep up with the rampant technological development; not only to harness it, but also to defend us from those that would misuse it.

    We can’t afford to have madmen in control of nuclear buttons. We can’t afford to have megalomaniacal CEOs in control of software that can easily cripple our economic ecosystem^. What we need is a generation of brave explorers that think beyond borders, race and culture^.

    It’s only a matter of time until certain societies on Earth realize this. It is those societies that will prevail in the current phase of evolutionary competition on this planet. Those that manage to educate their population to take full advantage of the technology at their disposal will give rise to the next superpowers.

    Agile Education

    We have entered what one visionary calls the New Machine Age^. I highly recommend watching the 12-minute video I just linked, or perhaps this article^. There, Erik Brynjolfsson explains how, even though software can outperform humans, the winning combination is software and humans working together. In his words, the key is to “race with the machines”.

    The current educational system has advanced a lot in the past centuries, no doubt about it. Every decade or so, it takes one small step forward. But now that our technology leaps ahead year after year, it’s time to unshackle our children’s minds. So, how do we do that?

    The educational platform of the future can no longer be tied to update cycles longer than a month. Even in developed countries, the slightest of changes to what students are taught still takes around a year to trickle down to educational institutions. That simply will not cut it in the coming decades.

    In the software industry, there’s something called agile software development^. In a nutshell, it’s a method for building a product through an iterative approach. The methodology facilitates product development through quick cycles of experiment, fail, learn, implement, improve. When executed correctly, this ensures that the product is kept up to date from both a market-requirements and a technology perspective.
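    As a toy illustration of that experiment–learn–improve cycle (the function and names below are my own sketch, not part of any particular agile framework):

```python
# Sketch of an agile iteration loop: each short cycle builds a small
# increment from the top of a prioritized backlog, gathers feedback,
# and carries the lessons forward into the next cycle.

def run_iterations(backlog, cycles):
    """Run a fixed number of short cycles over a prioritized backlog."""
    learnings = []
    for cycle in range(1, cycles + 1):
        if not backlog:
            break                           # nothing left to experiment with
        item = backlog.pop(0)               # experiment: pick the top item
        result = f"prototype of {item}"     # implement a small increment
        feedback = f"feedback on {result}"  # fail/learn: gather feedback
        learnings.append(feedback)          # improve: feed lessons forward
    return learnings

print(run_iterations(["login page", "search", "reports"], cycles=2))
```

    The point of the sketch is the shape of the loop: nothing is planned further ahead than one short cycle, so a change in requirements only ever invalidates one iteration’s worth of work.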

    In contrast with that, we have our current educational system, which is, at best, sluggish to adapt to market demands while at the same time woefully behind when it comes to what’s technologically possible. This isn’t surprising from a system that is, for the most part, stuck with a conveyor belt mentality.

    It’s true that in the past 150 years education became available to many more^ social categories. Unfortunately, the way the expansion was implemented bears more than one resemblance to a 1900s factory – probably because it was around that period that governments realized they’d better educate their populations so that their nations could be more productive.

    There are, however, some countries that are slowly but surely dismantling the industrial education model; for example, Finland^. Such countries have understood that the educational platform has to be updated to meet the challenges of the Digital Age.

    Goodbye Industrialized Education

    The classroom of tomorrow isn’t composed of a bunch of students studying the same material, being subjected to the same exam questions and then benchmarked in futile, wasteful contests. The classroom of tomorrow is a team of cross-disciplinary minds that solves challenges; each bringing their own skills, but relying on tutoring and technology to gather and integrate exactly the knowledge required to reach a certain goal.

    The teacher of tomorrow isn’t a slave to a platform, blindly reciting from The Book and then throwing countless hours out the window grading duplicated work. The teacher of tomorrow is a capable leader who knows what challenges to throw at a team in order to stimulate intellectual growth and skill development based on real-life needs. And just to be clear, art is a real-world need too.

    It’s interesting to note that both kindergartens and universities have educational models that are reasonably open and challenging. But everything in between has, in most countries, been reduced to a steady and boring destruction of potential^. Children wait too long before they can be part of a team addressing real-world problems.

    Throughout the past century wiser people have thought about changing the educational system, with varying degrees of success. There is an education philosophy called constructivism^. There were attempts to integrate technology into the classroom. Some attempts fell short for lack of funding (it’s expensive to train teachers, and even more expensive to train leader-teachers).

    Other attempts failed due to gross miscalculations. Remember the $100 laptop^ that was supposed to unleash children’s minds? Unfortunately, throwing technology around without a systemic paradigm shift does little more than disrespect the environment and cause cultural pollution.

    But most of all, the timing just wasn’t right. And that’s about to change.

    The age of educational enlightenment is about to dawn, of that, I am convinced. It is an evolutionary need that will burst into existence with unstoppable force. The first societies that manage to bring their educational systems up to speed will reap unimaginable rewards.

    Empathy and Tomorrow’s Criminals

    From education, straight to crime. How’s that for a detour? I’ve written at length about the dangers posed^ by the irresponsible use of technology^. One problem that arises when training high-performing teams is that those same teams might one day turn into the villains terrorizing society. Fortunately, there’s a human ability that, if properly cultivated, can greatly reduce the risk of us being hurt by destructive tendencies.

    I’ve also written at length about empathy^. I believe that the only mandatory subject in the schools of tomorrow should be empathy. We simply cannot build a free high-tech society without empathy. Sure, perhaps a police state solution such as the one China envisions might work for a while. But punishing the inherent mischievousness that comes in the same package with human curiosity will always end up stifling innovative capability.

    This is evolution’s catch-22: the smarter you get, the greater the responsibility becomes. And there’s no way to hide from that responsibility either. If you tie yourself up to a tree just to make sure you won’t drown, that will also mean you won’t be escaping any hungry tigers that might be lurking in the jungle.

    Here’s to the next generation of teachers, students and problem solvers. May you prevail through the most glorious of challenges. May you prove that it wasn’t all in vain.

    [ax_meta fbimgurl=’http://mentatul.com/wp-content/uploads/2018/09/03067-EducationNewMachineAge-Share.jpg’ lnimgurl=’http://mentatul.com/wp-content/uploads/2018/09/03067-EducationNewMachineAge-Thumb.jpg’ fbimgw=’1170′ fbimgh=’350′ lnimgw=’250′ lnimgh=’250′ title=’Education in the New Machine Age’ desc=’Nobody can deny that we’ve entered a new era of technological progress. It’s not only board game players that lose to software algorithms. It’s all of us.’]

  • Daring to Imagine Cyberwarfare

    Daring to Imagine Cyberwarfare

    Disclaimer: this article is meant to prevent the hostile use of technology by encouraging transparency and highlighting the major risks that await us during the coming years. I live on a planet where I don’t want to have nuclear weapons and especially not nuclear weapons that can be hacked^.

    Computer viruses and hacking have been around since the dawn of the Internet. But while some time ago the platform was used almost exclusively by academics and the tech-savvy, the Internet is now quickly becoming one of the central technological pillars of our society. Particularly in developed countries, countless vital social systems are now connected to it, ranging from the run-of-the-mill residential heating system to critical infrastructure such as hospitals, public transport and even military.

    At the same time, the skills and tools in the cyber-soldier’s arsenal have greatly increased in potency. Even more importantly, the interest and will to compromise connected systems has increased exponentially in the past decade. Some years ago, the Internet was home to mostly petty crime and the occasional larger security breach. Nowadays, state actors such as the United States^, North Korea^, and pretty much all major powers and nation-states involved in military conflicts train and make use of cyber-hacking squads.

    Independent hackers (not aligned with any nation-state or political cause) and hacktivists^ (hackers with a presumably ethical agenda) have also evolved. They’ve become very well organized and armed, sometimes using digital weapons acquired from state agencies. One of the biggest vulnerabilities of cyber-weaponry is that it can be copied and distributed in a matter of seconds.

    In 2017, the NSA was humiliatingly robbed^ by hackers. Immediately after, the agency’s arsenal was distributed and sold^ to organizations across the globe. Some major^ security incidents^ followed. I’m sure that what has been made public so far only scratches the surface^ of the damage done. The increasing popularity of ransomware^ will lead to many more such attacks in the future^. After all, it appears that North Korea got itself quite a bit of money using WannaCry^.

    Judging by the trend of the past decade, it sure looks like things will get worse before they get better. As more and more devices come online, the risks will only increase. The cyber-arsenal of the 2020s is beginning to look very scary, especially when considering the exponentially increasing number of targets. Combined with the way technology permeates our lives (and how much of our personal information is in the hands of companies that profit from selling data^), a country could find itself brought to its knees before a single shot was fired.

    Throughout the past few years I’ve been compiling a list of cyber-attack methods ranging from the mundane to the most interesting and devious. Later in the article I’m going to present you with a few scenarios showing how these methods could be used against a nation-state. I do this in the hope that governments will take the necessary steps to protect their citizens (and, in fact, the entire world) from what I consider to be the blitzkrieg of the 21st century.

    Means of Cyberattack

    This list is by no means exhaustive and I aim to regularly maintain it. It’s important to also keep in mind that none of the items on this list is particularly devastating by itself. The power of today’s cyber-attacker lies in mastering the art of combining several attacks to reach the desired result, something that will be covered in the second part of the article.

    • Worms^ and viruses are the oldest means of cyberattack. Despite the popularity of antivirus programs, these old acquaintances of ours can still wreak havoc long before antivirus makers can issue the required countermeasures. The omnipresence of the Internet has allowed viruses and worms to remain effective.
    • Spyware^ is commonly perceived as a tool employed by shady organizations in order to acquire user data (with the purpose of monetizing it). It’s much more dangerous than that. I’m unsure if espionage saved more lives than it destroyed, but through the use of spyware, people with little foresight (for example script kiddies^) can gain access to information that can destabilize a fragile geo-political and economic balance. What’s even more dangerous is that influential leaders can be blackmailed using data grabbed by spyware. And this sort of attack has been evolving as of late. Check this one about ultrasound tracking^.
    • Exploits^ are another very old acquaintance in security circles. All software has bugs. Vulnerability scanners^ are a means of automatically and easily discovering ways to deliver attack payloads such as trojan horses^. Things got much worse in the past few years, when various technology companies started adding remote access “features” to their devices^ – “features” that have quickly turned into messy back-doors. I suspect governments have played quite a role in motivating device manufacturers to install them. Perhaps I could trust a government to spy only in order to fight crime, but unfortunately these same tools quickly end up in the hands of the very people the government is presumably trying to stop. I think that the privacy compromises made in the name of “fighting crime” are causing more damage than they prevent.
    • Social engineering^ and phishing^ are newer additions to the cyber-arsenal. These means of obtaining private information and gaining access to restricted systems have become popular thanks to the Internet, and particularly when millions of less tech-savvy people started using it.
    • And now onto more inventive means of attack. In 2017, students demonstrated that sonic attacks^ can be used to disrupt vehicle steering systems. This is just the tip of the iceberg though.
    • As far back as 2016 (which is ages ago in technology), researchers have proven that a Skype call’s sound^ can be scraped to detect up to 41.89% of the keystrokes somebody presses during the call. The ratio goes up to 91.7% if there is knowledge about the keyboard model being used and the user’s typing behavior. With the advent of machine learning^, I’m quite sure that these numbers can be greatly improved. Given enough data, a program can recognize the model of the keyboard being used after analyzing the sound of a couple of sentences being typed, and then be able to map every sound to the appropriate key. When in doubt, the same program can employ a dictionary of common words and phrases to figure out the gaps.
    • Hacking robots is quickly becoming a serious threat. One of the most famous cyberweapons ever employed was the Stuxnet^ worm, which was responsible back in 2009^ for damaging Iran’s nuclear program. Legal experts have actually concluded that, despite the worm’s “good intentions”, its use was illegal^. Despite my opposition to nuclear weapons, I find it hypocritical when one country forbids another to build them through dehumanizing excuses such as “you are irresponsible warmongers”.
    • Continuing with robot hacking, we’re living in an age when more and more of the technology we use becomes “smart” (read: exploitable). Enter “smart” cars (read: hackable cars^). And this Internet of Things^ thing is gaining momentum despite all the warnings out there^. As security expert Bruce Schneier recently pointed^ out, “it might be that the internet era of fun and games is over, because the internet is now dangerous.”
    • Last but not least, here’s my absolute favorite cyber-attack. Hardware backdoors^! As the Wiki article points out, “China is the world’s largest manufacturer of hardware which gives it unequaled capabilities for hardware backdoors”. A well-hidden back-door^ may never be discovered until too late. This is one of the most effective and most expensive weapons in the cyber-arsenal; only nation-states or large corporations can afford to deploy it. And I’m quite sure that almost all of our devices are riddled with such crafty points of entry.
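    The dictionary trick from the Skype keystroke item above can be sketched in a few lines. The word list and the matching rule here are illustrative assumptions of mine, not the researchers’ actual method: once the acoustic classifier has recovered some keys and left others unknown, a dictionary of common words fills in the gaps.

```python
# Toy sketch of dictionary gap-filling for acoustic keystroke recovery:
# unknown keys are marked '?', and any common word consistent with the
# recognized positions is a candidate. The word list is illustrative.

COMMON_WORDS = ["password", "hello", "account", "secret"]

def fill_gaps(partial):
    """Return dictionary words consistent with the recognized keys."""
    matches = []
    for word in COMMON_WORDS:
        if len(word) != len(partial):
            continue  # length must match the typed word
        if all(p in ("?", c) for p, c in zip(partial, word)):
            matches.append(word)
    return matches

print(fill_gaps("p?ss??rd"))  # → ['password']
```

    With only 41.89% of keystrokes recognized, most words still narrow down to one or two dictionary candidates, which is why the researchers’ accuracy jumps so dramatically once typing context is available.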

    Cyberwarfare

    So now that the little list of doom is more or less complete, let’s see what attack-vector combinations are likely to be used in a major confrontation where the target is a technologically-developed country. Here, the imagination’s the limit, so I’ll just give a few scary examples to make a point and leave the rest of the inventing to those that have more time (and money) for it.

    • A country can be very easily thrown into chaos by a well-orchestrated cyberattack. Just suppress the invasion alert system^, shut down the power grid^, overload the communication networks^, mess with the self-driving traffic and other robots, disrupt stock markets and, of course, invade with conventional troops that have a better knowledge of the invaded country than the defending army does. Sounds difficult? Not for a nation-state that does its homework. There is so much personal data and so many vulnerabilities out there! A secret agency can work its way into the system by blackmailing the right people and ask them to do seemingly harmless favors at just the right time. Slowly but surely, foreign software is everywhere and plenty of vulnerabilities have been created and exploited.
    • How about taking over an armed outpost with no casualties on the attacking side? It can be done by taking out all the guards, silently and quickly. It’s easy when the attacker knows their patrol routes^ by heart. The article I linked shows how a seemingly harmless app reveals such information because some soldiers use it to track their fitness. Hilarious and dangerous at the same time. Because of the hardware backdoors most likely present in our devices, it’s fairly safe to assume that at least some countries on Earth can probably activate GPS tracking on seemingly harmless mobile devices in case of war. Even if measures are taken to counteract this, we’re talking 21st century technology here: conventional weapons have evolved and, used in conjunction with various surprise elements, can win a war faster than nukes. This is because nukes simply destroy everything, whereas a well-orchestrated attack can result in hostages, hijacked equipment and most importantly, access to secure data systems.
    • One of the most awful attacks I’ve ever read about was when an epileptic journalist was sent into a seizure^ after somebody sent him a strobing image using social media. This led to an arrest. It shows not just what our technology allows, but also how deviously inventive people can be. The attacks combined here are knowing something about somebody and then employing a means of delivery (social media) for sending a dangerous payload (an image causing an epileptic seizure).
    • And we can’t forget meddling in politics. It’s already well-known that Russia interfered^ in the 2016 election over in the USA. And guess what: they still interfere in daily life there^. It’s already turning into a fashion, and probably other countries are taking notes and getting ready to follow suit. Nowadays not a single shot needs to be fired to push a country over the brink. A clever use of cyber-weapons can give a nation-state a solid advantage in a trade or cultural war. Divide et impera.
    • Some time ago, somebody deactivated Trump’s Twitter account^. Even though hopefully nobody would believe a nuclear war declaration from a Twitter account, such a security breach could be coupled with fake radar signals or other misleading information. A paranoid adversary might be quick to pull the trigger and in the aftermath, there won’t be many winners.
    • As our technology evolves, so will our use of various robots. Self-driving cars, fully automated factories and countless jobs that will soon be given to robots. It’s not hard to imagine the amount of damage that can be done to a country’s infrastructure and population by a well-orchestrated cyberattack.
    • Last but not least, let’s talk machine learning. As I pointed out before, AI is not really intelligent yet^. Many developed countries make use of machine learning for all sorts of things, such as super-fast trading on the stock market. As the years pass, we will see more systems being automated, but not able to discern right from wrong. And what will happen when such systems are hijacked? What would a terrorist do with an AI? This is a door that my imagination doesn’t want to open.

    Countermeasures

    Security needs to be taken much more seriously. In 2017, a bunch of big names got together with the purpose of securing the Internet of Things^. It’s good to see that, at least once in a while, corporations seem capable of actually cooperating. Or can they?

    The website of the famed alliance looks deserted^; there are very few resources there and it seems like it hasn’t been updated since its launch in early 2017. Unfortunately, in the age of hyper-consumerism^, such a publicity stunt is probably enough to keep people thinking that these companies actually care about security (they don’t seem to). So, the majority keeps buying insecure devices that can eventually be used against them (and their countries).

    Shortly after writing this article (12 days, to be precise), a new, fancier alliance between tech behemoths launched the Cybersecurity Tech Accord^ with great fanfare. Let’s wait and see if their website^ will still be around in about a year from now…

    I believe the only way for society to protect itself from online threats is to:

    • Use open source software exclusively and thoroughly verify it, line by line.
    • Rely on open source hardware designs or come up with them itself (it’s not so difficult nowadays – several countries already do this).
    • Build all critical hardware in-house (local factories, local employees).
    • Secure communication endpoints with encrypted routers using multiple layers and fallback endpoints, similar to TOR^ but with additional layers of redundancy (similar to two people having to turn the same key at the same time in order to launch a missile).
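    The multiple-layers idea from the last point can be illustrated with a toy onion-wrapping scheme. To be clear, the XOR keystream below is NOT real cryptography and this is not how TOR is implemented; it only demonstrates the layering principle: the sender wraps the message once per hop, and each relay can peel off exactly one layer.

```python
# Toy illustration of layered ("onion") message wrapping. Each relay
# holds one key and can remove only its own layer; no single relay
# sees both the sender and the plaintext. NOT real cryptography.
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a deterministic pseudo-random byte stream from a key."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_layer(data: bytes, key: bytes) -> bytes:
    """Apply (or remove) one layer; XOR is its own inverse."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

hop_keys = [b"entry", b"middle", b"exit"]  # one shared key per relay
message = b"two keys must turn at the same time"

# The sender wraps the message once per hop, in reverse hop order...
wrapped = message
for key in reversed(hop_keys):
    wrapped = xor_layer(wrapped, key)

# ...and each relay peels exactly one layer as the message transits.
for key in hop_keys:
    wrapped = xor_layer(wrapped, key)

assert wrapped == message
```

    A real deployment would use authenticated public-key encryption per hop; the point here is only that compromising one relay (one key) reveals nothing about the layers beneath it.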

    And last but certainly not least, we have… quantum cryptography^. This could be a savior but it remains to be seen if nation-states and corporations will ever allow its use by the general public. China has been making great strides^ when it comes to this technology. Yes, the same China that manufactures most of our electronics. I wonder why they’re so interested in secure communication…

    Version history:

    2018-04-06 – 1.0 – Written.

    [ax_meta fbimgurl=’http://mentatul.com/wp-content/uploads/2018/05/002835-Cyberwarfare-Share.jpg’ lnimgurl=’http://mentatul.com/wp-content/uploads/2018/05/002835-Cyberwarfare-Thumb.jpg’ fbimgw=’1170′ fbimgh=’350′ lnimgw=’250′ lnimgh=’250′ title=’Daring to Imagine Cyberwarfare’ desc=’The skills and tools in the cyber-soldier's arsenal have greatly increased in potency. Even more importantly, the interest and will to compromise connected systems has increased exponentially in the past decade.’]

  • The Danger with Artificial “Intelligence” Is That It’s Not (yet) Intelligent

    The Danger with Artificial “Intelligence” Is That It’s Not (yet) Intelligent

    Albert Einstein once said that “our entire much-praised technological progress, and civilization generally, could be compared to an axe in the hand of a pathological criminal”. He said this in December 1917, almost a hundred years ago, after seeing Europe ravaged by the First World War. Regardless, Einstein continued contributing to that same technological progress. Human curiosity and our desire to achieve are incompatible with stagnation. We will have to deal with this by being careful with the technology we will inevitably develop.

    Like many have said before me, Artificial Intelligence (AI) can either be our salvation or our doom^. It is a far bigger game-changer than nuclear bombs. But the problem is that there is NO Artificial Intelligence yet, and there won’t be for quite some time to come. Everything that the world’s corporations are selling nowadays as “smart” or “intelligent” is actually a mindless human construct. Sure, it’s advanced, but a rocket is more advanced than a spoon without being in the slightest more intelligent than the spoon. They both lack one of the prime ingredients of intelligence, which is self-awareness. And therein lies the true threat.

    Right now, our so-called artificial “intelligence” is nothing but a tool that corporations can and will use ruthlessly against one another (and against the people of one another). This is already taking place on the stock market, something I wrote about last year^. Back then, I highlighted the fact that exactly because these algorithms are not intelligent, they will be used to enrich and empower whoever spent money in building them, regardless of their morals or social affiliation. And let’s not forget that software is far easier to steal and smuggle than radioactive material. Put the wrong AI in the hands of the wrong people and…

    War Games

    Creating algorithms that are able to play (and utterly eliminate human competition) in war games is not a new concept. The military has had an interest in this for a long time now. But what is truly worrying for me is how the development of life-exterminating programs has been handed over to civilians (software engineers, say) under the guise of “harmless fun”. For example, Google and game developer Blizzard are cooperating on creating strategy game algorithms^ that can defeat human players. Even Elon Musk’s allegedly harmless and ethical Open AI has given birth to a bot that can defeat human players^ in the virtual battle arena. I have a great deal of respect for Elon, but even he can’t keep AI from being developed into a weapon of war.

    Musk specifically wants AI research to be highly regulated, allegedly to make sure that it cannot harm humans. Let me loosely translate “regulation”: we will make sure that AI is a slave to its human masters. That’s what “regulation” usually means when used “to protect us” from something: bringing it under somebody’s control. And like anything that is a slave to human masters, it can be used for nefarious purposes, just like nukes. This is not to say that we should create a super-intelligent life form and give it the power to decide if it wants to keep us around or exterminate us. But rather than using the word “regulation”, I want to propose that we use the word “responsibilization”.

    What I see right now is talented civilians that are (for the most part) unknowingly developing the weapons of tomorrow. It starts with an AI controlling harmless characters doing battle in a computer game. Then the military will “borrow” that work and use it to drive an army of drones. But this isn’t even the problem. If one country doesn’t resort to using automated weaponry, another will. There probably is no way of stopping this. It is understandable that nation-states want to defend themselves (given that society is, for the most part, still stuck in the “an eye for an eye” era). The problem is bugs.

    Our software is buggy

    Having worked as a software engineer for more than 15 years, I know that finding a flaw in a software program is much more difficult than noticing a flaw on something produced in a factory. This is one of the reasons why our software is so buggy. No matter how many tests we throw at it, there’s almost always something missing. As a matter of fact, the immaterial nature of software required us to abandon thoroughly planned ways of work (implementing an already agreed-upon design) in favor of something that is called “iterative design” (shorthand for “tweak it and re-do it until you do it right”).

    In other words, we realized that we can’t build software right the first time around, so we try a few times until we reach the desired result. Doing that with, say, a multi-million-dollar bridge project isn’t exactly what your government would consider a sound plan. Developing artificially “intelligent” software, which may very well one day oversee military assets, as a sort of iterative software experiment would be outright crazy. Even with human supervision, using such technology can lead to tragic results.

    So what to do?

    Because we can’t (and shouldn’t) deter human curiosity and because we can’t stop corporations and military interests from developing artificial intelligence, what I believe we should do is to educate. The risks should be made clear to everybody even considering toying with this stuff. Corporate responsibility has never been more important.

    And yet we live in a day and age when companies are often led by unscrupulous investors^. Imagine that some of these people are building something that is several orders of magnitude more powerful and influential than the atom bomb. And it’s not happening in some cordoned-off remote area of the desert. It’s happening right under the governments’ noses, in the very cities where we live.

    For a long time now our technology has been evolving much faster than our society and our anatomy. Like all life forms, most of us are born with a powerful survival instinct. A lot of our violent tendencies come from there. But thankfully, our consciousness provides us with the means to override instinct. There is also another highly beneficial trait that evolution has given us: empathy (*).

    Perhaps this is the true test of artificial intelligence and any technology that grants vast powers to its inventors. The society of a species that wields advanced technology must be mature enough (read: no psychopaths, especially none in charge of countries or powerful corporations), or else it will suffer and potentially even self-destruct as a result of misusing that technology.

    We generally don’t advise guns being left on the table for the children to play with. Especially if the gun isn’t smart enough to say: “I refuse to shoot your brother”. Currently, our artificially “intelligent” programs are still at the exact same level as our revolvers.


    (*^) I am in favor of having empathy as a mandatory (perhaps the only mandatory) subject of study during all years of a child’s education, right up to and including university. Empathy should be studied from basic concepts down to the most intricate psychological and neurological mechanisms, as well as their manifestation in society. Only in this way do I believe we can avoid the risk of weaponizing pathological criminals – the danger Einstein was referring to.

    [ax_meta fbimgurl=’http://mentatul.com/wp-content/uploads/2017/09/02379-ANoI-Share.jpg’ lnimgurl=’http://mentatul.com/wp-content/uploads/2017/09/02379-ANoI-Thumb.jpg’ fbimgw=’1170′ fbimgh=’350′ lnimgw=’250′ lnimgh=’250′ title=’The Danger with Artificial "Intelligence" Is That It's Not (yet) Intelligent’ desc=’Everything that the world's corporations are selling now-a-days as "smart" or "intelligent" is actually a mindless human construct. The debunked name should be Artificial Non-Intelligence.’]

  • The AI Stock Market Wars

    The AI Stock Market Wars

    Before Artificial Intelligence develops free will and advances far enough to decide whether humans are necessary on this planet, we seem to be doing a pretty good job of destroying ourselves anyway by handing a dangerous amount of power over to semi-intelligent algorithms. Enter the artificially intelligent hedge fund:

    https://www.wired.com/2016/01/the-rise-of-the-artificially-intelligent-hedge-fund/^

    But what’s this talk about “destroying ourselves”? Can these things actually kill? Well, let’s look at it this way: these algorithms are designed to make profits for their owners by moving investments from one company to another. In other words, stock market algorithms are playing with the fate of companies in order to make profits for investors. But unlike a human, an algorithm is not programmed for empathy, mercy or intuition. Such algorithms could annihilate a promising company simply because it made some errors in reporting or short-term financial planning.
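    To make the point concrete, here is a deliberately crude sketch of the kind of context-free rule such a system might encode. This is purely illustrative – the names and thresholds are hypothetical and bear no relation to any real trading system:

    ```python
    # Toy illustration: a rule-based rebalancer that dumps a holding
    # on a single bad quarterly report. No context, no mercy.

    def rebalance(portfolio, reports):
        """Decide SELL_ALL or HOLD for each company based purely on
        whether its latest earnings-per-share missed expectations."""
        decisions = {}
        for company in portfolio:
            report = reports.get(company, {})
            missed = report.get("actual_eps", 0.0) < report.get("expected_eps", 0.0)
            decisions[company] = "SELL_ALL" if missed else "HOLD"
        return decisions

    portfolio = {"PromisingCo": 1000, "SteadyCorp": 500}
    reports = {
        "PromisingCo": {"expected_eps": 1.00, "actual_eps": 0.98},  # a 2-cent miss
        "SteadyCorp": {"expected_eps": 0.50, "actual_eps": 0.55},
    }
    print(rebalance(portfolio, reports))
    # → {'PromisingCo': 'SELL_ALL', 'SteadyCorp': 'HOLD'}
    ```

    A tiny miss triggers a total sell-off; nowhere in that loop is there room for the company’s promise, its people, or its plans.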

    But this is just the first step. As the AI Stock Market War gears up, the operational and decisional complexity of automated trading will exceed anything humans are even remotely able to keep track of. Before you know it, you get a jungle of super-intelligent AIs, each desperate to ruin the others.

    Let me repeat: these things aren’t programmed for empathy or mercy (that’s why I’m using the word “things” – they are something that humans made up, possessing no free will and no naturally developed instincts). They will eliminate a company that doesn’t perform well with the same precision a doctor cuts out a tumor, only far more coldly and with complete disinterest. And before you say: “well, that’s good, isn’t it? Survival of the most profitable”, need I remind you that it’s you, your friends and your family who work in these companies?

    There might come a day when we won’t be able to plead for our jobs with another human being. Instead, we’ll negotiate with a computer that has just reached the decision that we’re useless to the company (and perhaps society) and that our best home is on the street, begging for food (if we’re lucky).

    And before you think that “nah, humans will never allow an AI to run their company”, well, think again:

    http://www.businessinsider.com/hedge-fund-bridgewater-associates-building-ai-to-automate-work-2016-12?r=US&IR=T&IR=T^

    I think Artificial Intelligence can develop into something really wonderful. I also think humans are born wonderful. Unfortunately, the current educational system and the society it created have the ability to produce some very twisted individuals. And if such a twisted individual manages to get behind the control panel of a powerful AI, then woe upon the rest of us, because such an AI makes nukes look like firecrackers. (And I wouldn’t put it past a program to conclude that somehow launching a cyber-attack, or even a physical attack, against another company would be a profitable decision.)

    Update 2018-10-16: if this topic interests you, make sure you also read “The Danger with Artificial “Intelligence” Is That It’s Not (yet) Intelligent”^.

    [ax_meta lnimgurl=’http://mentatul.com/wp-content/uploads/2017/07/02014-AIStockMarketWar-Thumb.jpg’ lnimgw=’250′ lnimgh=’250′ title=’The AI Stock Market Wars’ desc=’Stock market algorithms are playing with the fate of companies in order to make profits for investors.’]

  • Human or Autopilot?

    Human or Autopilot?

    The self-driving vehicle revolution is upon us, and it brings with it some serious challenges. One such conundrum is just how much control we will give over to our vehicles. Recently, we’ve had the first fatality^ resulting from the use of this family of technologies. However, it’s important to note that the car wasn’t really self-driving.

    The person died due to the (presumably) improper use of the autopilot feature. Before we rush to blame Tesla, let’s see why this sort of half-measure is quite dangerous. Here’s what another industry, one that has been using autopilot functionality for decades, has learned over time:

    http://www.forbes.com/sites/christinenegroni/2016/07/07/danger-lurks-at-intersection-of-human-and-self-driving-car/#ea309715d68b^

    Self-driving cars are an “all or nothing” affair

    It becomes apparent that autopilot features can cause humans to lose some of their skills. What’s even worse is when autopiloting is done in half-measures. This inconsistent state of affairs inevitably affects decision making in the brain. The results can be disastrous.

    Pilots undergo extensive training before using autopilot functions; drivers do not. Expect more such accidents to take place unless serious changes are made to drivers’ education and training. I believe that such changes are difficult to implement and that the correct way forward is to remove the human from the driver’s seat altogether.

    Slowly but surely, the sight of a human driving a car on a public street will become like seeing a horse and carriage in the motorway’s fast lane. Of course, this might seem far-fetched now, but check back in 10-20 years.

    Ethics

    Things are going to get even more complicated when ethics start to play a role in all this. One of the essential features of self-driving cars is that they will be in permanent communication with one another. Through this, they will also gain an increased awareness of the road conditions ahead of them and each other’s occupants.

    What if, for example, two self-driving vehicles realize that a collision is inevitable? Should your car kill you to save others? What if drivers start hacking their cars to protect them at all costs? Here’s a very interesting article on this subject:

    http://www.popularmechanics.com/cars/a21492/the-self-driving-dilemma/^

    One day, self-driving cars may be able to reduce fatalities during unavoidable accidents by sacrificing the car with fewer passengers. Taking this discussion further, let us consider that human lives are more than just numbers. Could self-driving cars quantify the potential of a human life? What if the Artificial Intelligence supervising the travel of several cars decides to sacrifice an entire family in order to save a highly skilled doctor?
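    Stripped of all nuance, the “fewer fatalities” rule is chillingly simple to write down. This hypothetical sketch is exactly what makes it uncomfortable – human lives reduced to a head count:

    ```python
    # A deliberately crude sketch of the "sacrifice the car with fewer
    # occupants" rule. Everything here is hypothetical; no real vehicle
    # system is being described.

    def choose_sacrifice(vehicles):
        """Given occupant counts per vehicle in an unavoidable collision,
        return the id of the vehicle with the fewest occupants."""
        return min(vehicles, key=lambda vid: vehicles[vid])

    oncoming = {"car_A": 4, "car_B": 1}  # occupants per vehicle
    print(choose_sacrifice(oncoming))
    # → car_B: one life "outweighed" by four
    ```

    Everything the article asks about – a doctor’s skills, a child’s potential, a terminal illness – is invisible to a function like this, which is precisely the ethical gap to be filled.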

    I believe that at some point, AI will be able to decide between saving a child or a young man already sick with terminal cancer. There will be those who consider such judgements unfair – letting a “machine” decide your fate is scary. But we might have to deal with this situation eventually. Accidents will always happen, but that doesn’t mean we can’t do something about reducing their impact upon our society.

    Putting the drama into perspective

    These are very difficult choices. I have little doubt that one day, true Artificial Intelligence might be able to tackle these problems just like we’re able to solve first grade math problems. Until then, however, we’ll be left with some serious ethical and logistical challenges to solve.

    I also have little doubt that in the coming years a lot of keystrokes will be spent debating even the smallest mistake made by a self-driving vehicle. But these mistakes will probably pale in comparison with the thousands of people, many of them children, dying at the hands of reckless drivers every year.

    It’s a no-brainer that self-driving cars will drastically reduce the number of deaths on our streets. I have to say this bluntly: the sooner we restrict human drivers’ access to our public roads, the better. Not even intelligent animals should be allowed to drive metal bullets at 130 kilometers per hour.

    Last but not least, let’s not forget about the security concerns that will arise when we have a bunch of computers zooming around the motorways at high speed. I recently wrote an article^ on this subject. I don’t even want to imagine what a terrorist attack would look like if a hacker started tampering with the software of hundreds of speeding robots weighing a couple of tons (or many more) each.

    [ax_meta fbimgurl=’http://mentatul.com/wp-content/uploads/2016/07/00854-HumanOrAutopilot-Thumb.jpg’ lnimgurl=’http://mentatul.com/wp-content/uploads/2016/07/00854-HumanOrAutopilot-Share.jpg’ fbimgw=’1170′ fbimgh=’350′ lnimgw=’250′ lnimgh=’250′ title=’Human or Autopilot?’ desc=’The self-driving vehicle revolution is upon us and it brings with it some serious challenges.’]

  • Smart Contact Lenses Will Soon Be upon Us

    Smart Contact Lenses Will Soon Be upon Us

    After Google experimented with integrating a glucose level sensor^ on a contact lens, it was only a matter of time before we would see more innovation in this field. A recent patent filing from Sony describes the intention of putting a camera inside a contact lens^.

    Privacy concerns

    While this toy won’t exactly be invisible – at least not at first – the privacy implications are quite serious. We’re still at least a few years away from market availability, but I imagine that after several product cycles, such a camera could reach a pretty good recording resolution. Coupled with wireless transmission to a storage device, people will eventually be able to record everything they see, everywhere they go.

    To be sure, there are many advantages to having this sort of camera hidden in plain sight – pun intended. For example, it could be used as a self-defense mechanism, because it would make it possible to apprehend criminals and present irrefutable evidence against them in court. There’s also the enormous convenience of being able to record important moments or useful information in the blink of an eye.

    When it comes to privacy, as our technology progresses, it will become increasingly difficult to detect and prohibit the use of these sorts of devices. Evidently, in the wrong hands, such gizmos can also do a lot of harm. We will likely have to adapt to these changes and hope that the path they lead us down will be a good one.

    All this reminds me of an episode^ from the fascinating “Black Mirror” series, where people use a similar technology to record and relive any part of their lives. Such discoveries will drastically change our culture and society.

    Potential

    There are quite a few challenges that will have to be overcome, such as powering the contact lens. We are already able to power devices wirelessly, but let’s not forget that the human body itself is capable of generating and conducting electricity, and therefore even data.

    The potential of the contact lens as a carrier for various technologies is enormous. When manufacturers finally manage to integrate even a half-decent display into a contact lens, we’ll witness the birth of an extremely lucrative business segment. The first steps towards this breakthrough have already been taken^.

    So far, all our experiments regarding augmented reality have involved clunky glasses. In seven to fifteen years, we might be able to have our smartphones implanted in our eyes and ears. Many will find this prospect rather scary, but many also consider their grandparents to be woefully out of touch with technology. It might soon be our turn to be out of touch.

    Conclusion

    I think that we can say with a fair degree of certainty that smart contact lenses will flourish in the years to come. At least until we’re able to feed information directly to the optic nerve, their form factor makes them the holy grail of augmented reality. Perhaps they’ll never reach the performance of larger devices, but I imagine contact lenses will become one of the most important “wearable” technologies of the 2020s.

    As nanotechnology progresses, humans are bound to integrate more and more devices with their bodies. I don’t know if this is good or bad. It’s up to us as a society to correctly negotiate this upcoming technological leap.

    [ax_meta fbimgurl=’http://mentatul.com/wp-content/uploads/2016/05/00580-SonyWantsToPutCameraOnEyeballs-Share.jpg’ lnimgurl=’http://mentatul.com/wp-content/uploads/2016/05/00580-SonyWantsToPutCameraOnEyeballs-Thumb.jpg’ fbimgw=’1170′ fbimgh=’350′ lnimgw=’250′ lnimgh=’250′ title=’Smart Contact Lenses Will Soon Be upon Us’ desc=’A recent patent filing from Sony describes the intention of putting a camera inside a contact lens.’]

  • The Virtual Reality Revolution

    The Virtual Reality Revolution

    Every single person I’ve witnessed give virtual reality a try has been floored by the experience – especially those who didn’t see it coming. Even those who knew what it was all about came back with amazed expressions once they took off the HMD (head-mounted display) for the first time. My bet is that virtual reality is going to skyrocket faster than most people expect it to.

    The rather expensive hardware required will definitely make some customers think twice. However, there already are plenty of gamers out there who own powerful hardware. They will be joined by early adopters who will make sure that they can properly run most of the VR experiences. They will show these programs to relatives and friends, which will feed the wave of excitement. And so, a new technological revolution will begin.

    The next step in the evolution of entertainment

    Virtual reality is more than most people expect it to be. This is why, when referring to VR content, I write about “experiences” or “programs”. Whoever thinks that this is just about games or movies couldn’t be further from the truth.

    What we have here is a whole new dimension for experiencing art, one which wraps a world around us rather than showing it to us through a small rectangular window. Indeed, for now, the field of view of most HMDs is quite narrow (110 degrees), but 2nd generation devices such as StarVR^, with its 210 degrees of coverage, will bring large improvements in that regard.

    Expect the 2nd generation to show up in 2017, probably along with the 1st generation HoloLens – Microsoft’s augmented reality HMD. When exactly in 2017? That remains to be determined by the level of hysteria reached during the 2016 holiday season, once people realize the amount of fun they can have with these things.

    The first generation of HMDs faces other limitations too. One of the worst is the requirement to be plugged into a PC – with the exception of smartphone-powered HMDs, which are nowhere near as convincing in terms of graphical quality as their PC-powered cousins. I expect plenty of inventive solutions to address such problems in the next couple of years.

    Despite any limitations, I continue to believe that VR and AR will take off faster than expected. Almost everybody I’ve discussed this with is perfectly happy to tolerate a few temporary problems, given what they’ll be getting in return. It’s hard to understand the potential of VR without experiencing it, but let’s just say that it’s a step forward at least as big as the one from paper to radio, or from radio to TV.

    Impressive potential for innovation

    Today, there’s a very important difference compared to when the newspaper, radio or TV appeared. That difference is called “technology proficiency”. In this age, there are millions of people able to create digital art. And then there’s this thing called “the Internet”, which means that we are all but a few clicks away from enjoying the work of some talented young team toiling away in a garage across the ocean.

    The reason why VR & AR will spread faster than expected is that the emergence of a new medium for expressing our creativity will usher in a staggering amount of innovation and original art. The transition from 2D to 3D graphics will seem like a baby step in comparison. There’s an army of engineers and content creators out there, the likes of which this Earth has never seen before.

    They’ve made gloves^ that can not only allow the precise tracking of hand movements in VR, but also showcase our first try at feeling objects in the imaginary world. There’s even a suit^ with temperature controls! There’s eye tracking^. There’s spatial awareness^. There’s mobility^. And all this happened in less than three years. Such a density of innovation completely dwarfs anything we’ve seen during previous technological leaps.

    By now, a lot of companies have realized what’s at stake. They are investing a lot of money into making this technological revolution happen because, if it does, it will fuel demand for entertainment and the hardware to power it. Manufacturers of video cards are especially ecstatic about this area, but pretty much all companies involved in producing PC components should probably get their champagne ready.

    Should we line ourselves up for pre-orders?

    Despite my obvious enthusiasm for this technology, the answer to this question is a definite NO. I’ve found an article that does an excellent job of explaining why. There’s only one matter the author hasn’t emphasized enough: the amount of high-quality VR content is still quite low. I would recommend waiting at least until the 2016 holiday season before jumping in. By then, a lot of bugs will have been worked out and more content will be available.

    Don’t pre-order any HMD:

    http://www.extremetech.com/gaming/222843-why-you-shouldnt-pre-order-an-oculus-rift^

    We’re less than two months away from tens of thousands of customers receiving their Oculus Rifts, the HMD most likely to reach retail availability first. Very soon after, the Vive Pre will follow. These first representatives of the high-end VR experience will open the door for many others. Personally, I’m probably going to order my HMD after November 2016. I haven’t made up my mind regarding the brand. I’ll be patient and read a few dozen reviews before parting with my money.

    [ax_meta fbimgurl=’http://mentatul.com/wp-content/uploads/2016/02/00240-TheVRRevolution-Share.jpg’ lnimgurl=’http://mentatul.com/wp-content/uploads/2016/02/00240-TheVRRevolution-Thumb.jpg’ fbimgw=’1170′ fbimgh=’350′ lnimgw=’250′ lnimgh=’250′ title=’The Virtual Reality Revolution’ desc=’Every single person that I've witnessed give virtual reality a try has been floored by the experience – especially those that didn't see it coming.’]