Tuesday, September 26, 2017

AI on land, sea, air (space) & cyberspace – it’s truly terrifying

Vladimir Putin announced, to an online audience of a million, that "Artificial intelligence is the future, not only for Russia, but for all humankind… It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world… If we become leaders in this area, we will share this know-how with the entire world, the same way we share our nuclear technologies today."
Elon Musk tweeted a reply, "China, Russia, soon all countries w strong computer science. Competition for AI superiority at national level most likely cause of WW3 imo", then, "May be initiated not by the country leaders, but one of the AI's, if it decides that a pre-emptive strike is most probable path to victory."
That pretty much sums up the problem. Large nations, small nations, even terrorist groups may soon be able to use 'smart', autonomous, AI-driven tech in warfare. To be honest, it doesn't have to be that smart: a mobile device, a drone and explosives are all one needs to deliver a lethal device from a distance. You may even have left the country by the time it takes off and delivers its deadly payload. Here's the rub – sharing may be the last thing we want to do. The problem with sharing is that anyone can benefit.
In truth, AI has long been part of the war game. Turing, the father of AI, used it to crack German codes, thankfully contributing to the end of the Second World War – and let's not imagine that it has been dormant for the last half-century. The landmine, essentially a dormant robot that acts autonomously, has been in use since the 17th century. One way to imagine the future is to extend the concept of the landmine: what we now face are small, autonomous 'landmines', armed with deadly force, on land, at sea, in the air and even in space.
AI is already a major force in intelligence, security and in the theatre of war. AI exists in all war zones, on all four fronts – land, sea, air (space) and cyberspace.
AI on land
Robot soldiers are with us. You can watch Boston Dynamics videos on YouTube and see machines that match humans in some, though not all, aspects of carrying, shooting and fighting. The era of the AI-driven robot soldier is here, although we should be careful: the cognitive side of soldiering is far from being achieved.
Nevertheless, in the DMZ between South and North Korea, armed robot guards stand sentry. Known as Lethal Autonomous Weapons Systems (LAWS), they will shoot on sight – and by 'sight' we mean infrared detection plus laser identification and tracking of a target. Each has an AI-driven voice recognition system, asks for identification, and can shoot autonomously. This is a seriously scary development, as such systems are already mounted on tanks. You can see why these sentry or rapid-response systems have become autonomous: humans are far too slow at detecting incoming attacks or targeting with enough accuracy. Many guns are now targeted automatically, with sensors and systems way beyond the capabilities of any human.
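The crux of the LAWS debate is whether a human remains the final gate before any engagement. Here is a purely conceptual sketch of that distinction – all names are hypothetical and no real system is implied:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Mode(Enum):
    SUPERVISED = auto()   # a human must approve any engagement
    AUTONOMOUS = auto()   # the software's own thresholds decide

@dataclass
class Detection:
    failed_challenge: bool   # e.g. no valid response to the voice challenge
    confidence: float        # sensor-fusion confidence, 0.0 to 1.0

def may_engage(d: Detection, mode: Mode, human_approves) -> bool:
    """Return True only if engagement is permitted under the current mode."""
    if not d.failed_challenge or d.confidence < 0.99:
        return False                # target not confirmed: never engage
    if mode is Mode.SUPERVISED:
        return human_approves()     # human in the loop
    return True                     # the LAWS case: no human in the loop
```

The whole policy argument is about that last line: the speed advantage described above comes at the cost of removing human judgement from the decision.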
AI at sea
Lethal Autonomous Weapons can already operate on or beneath the sea. Naval mines (let's call them autonomous robots) have been in operation for centuries. Unmanned submarines have been around for decades and have been used for purposes good and bad – for example, the delivery of drugs using autonomous GPS navigation, as well as finding aircraft that have gone down in mid-ocean. In military terms, large, sensor-rich submarines capable of travelling thousands of miles with payloads are already in play. Russian drone submarines, code-named Kanyon by the Pentagon, have already been detected; they are thought to have a range of up to 6,200 miles and speeds of up to 56 knots, and can deliver nuclear payloads.
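The 'autonomous GPS navigation' in that drug-smuggling example is not exotic: at its simplest it is a loop that measures distance and bearing to the next waypoint, steers toward it, and moves on when within an arrival radius. A toy sketch, with the radius purely illustrative:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6_371_000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing from fix 1 to fix 2."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    x = math.sin(dl) * math.cos(p2)
    y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return (math.degrees(math.atan2(x, y)) + 360) % 360

def next_heading(position, waypoints, arrival_radius_m=50):
    """Drop waypoints already reached; steer toward the next one."""
    while waypoints and haversine_m(*position, *waypoints[0]) < arrival_radius_m:
        waypoints.pop(0)
    if not waypoints:
        return None  # route complete
    return bearing_deg(*position, *waypoints[0])
```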
AI in the air
I flew to Oslo to give a talk on AI at the National Gallery. The pilot of the Norwegian Air 737 had switched to autopilot at 1,000 feet, so we were technically flying in a robot for the rest of the flight, albeit supervised by the pilots, who monitored fuel consumption, weather and so on. They could have landed using autoland, but most pilots still prefer to land the aircraft themselves. The bottom line is that software already does most flying better than humans and will soon outclass them at every task. Flying is safe precisely because it is highly regulated and smart software is used to ensure safety.
Drones are the most obvious example. Largely controlled from the ground, often at huge distances, they are now AI-driven, operate from aircraft carriers, can defend themselves against other aircraft and, worryingly, deliver deadly missiles to selected targets. The days of the fighter plane may be numbered, as drones, free from the problems of seating and sustaining a human pilot, are cheaper and can be produced in larger numbers. Even ISIS use drones to spy and drop bombs.
Nanoweapons, mosquito-like robots and mini-nukes have entered the vocabulary. Nanoweapons: A Growing Threat to Humanity by Louis A. Del Monte is a terrifying account of how nanoweapons may change the whole nature of warfare, making other forms almost redundant. It is the miniaturisation of weaponry that makes the threat all the more lethal.
AI in cyberspace
War used to be fought on land, sea and air, with the services – army, navy and air force – representing those three theatres. It is thought that a brand-new front has opened up on the internet, but this is not entirely true: the information and communications war has always been the fourth front. The Persians waged it, the Romans were masters of it, and it has featured in every modern conflict. Whenever a new communications technology is invented – from clay tablets to paper, printing, broadcast media and the internet – it has been used as a weapon of war.
However, the internet offers a much wider, deeper and more difficult arena, as it is global and encrypted. Russia, China and the US are the major players, with billions invested. China also wages a war against freedom of expression within its own borders through its infamous Great Firewall. Russia has banned LinkedIn, and Putin has been explicit in seeing this as the new battlefield. The US is no different, having explicitly lied about the surveillance of its own citizens. But it is the smaller actors that have had real wins – ISIS, North Korea and others. With limited resources, they see this theatre as somewhere they can compete and outwit the big boys.
It is here that AI comes into play. AI has a habit of being demoted: no sooner has an algorithm been invented than it is reduced to mere software, part of the landscape. So it has been with encryption – one of the great successes of AI, it keeps the financial system afloat and secure, as well as preserving privacy in our communications. However, it also allows private and secure communication for criminals and terrorists.
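That double edge is easy to see in practice. A minimal sketch using the Python cryptography library's Fernet recipe – the same symmetric primitive protects a bank transfer and a terrorist's message alike:

```python
# Minimal symmetric encryption sketch using the 'cryptography' package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # secret shared by sender and receiver
cipher = Fernet(key)

token = cipher.encrypt(b"meet at dawn")
print(token)                       # ciphertext: useless to any interceptor
print(cipher.decrypt(token))       # b'meet at dawn' - readable only with the key
```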
AI as weapon of peace
When I landed at Oslo airport, I walked through a gate that scanned my passport. A chip in my passport stores an image of my face, and face recognition software, along with other checks, identified me as eligible to enter the country. I never spoke to a human on my entire trip, from home to Oslo. You will soon be able to cross borders using only a mobile phone. Restricting the movement of criminals and terrorists is being achieved through many types of AI. The war on terror is being fought with AI. It is AI that is identifying and taking down ISIS propaganda. What is required is a determined effort to use AI to police AI. All robots may have to carry black boxes, like aircraft, so that rogue behaviour can be forensically examined. AI may be our best defence against offensive (in both senses of the word) AI.
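At its core, such a gate reduces the chip's stored photo and the live camera image to numeric embeddings and measures their similarity. A toy sketch – the embedding function below is a crude stand-in for the deep models real gates use, and the threshold is illustrative:

```python
import numpy as np

def embed(face_image: np.ndarray) -> np.ndarray:
    """Crude stand-in for a trained face-recognition model: flatten the
    image and normalise it to a unit vector. Real systems use deep CNNs."""
    v = face_image.astype(float).ravel()
    return v / (np.linalg.norm(v) + 1e-9)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b))   # both vectors are already unit length

def gate_opens(chip_photo: np.ndarray, camera_frame: np.ndarray,
               threshold: float = 0.8) -> bool:
    """The threshold trades false accepts against false rejects."""
    return cosine_similarity(embed(chip_photo), embed(camera_frame)) >= threshold

# Illustrative use: the same image trivially matches itself.
face = np.random.rand(64, 64)
assert gate_opens(face, face)
```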
Conclusion

What is worrying is that, while most of the above is known, you can bet it is merely the tip of a chilling iceberg, as most of these weapons and systems are being developed in deep secrecy. Musk and many others, especially the AI research and development community, are screaming out for regulation at an international level on this front. Our politicians seem ill-equipped to deal with these developments, so it is up to the AI community and those in the know to press this home. This is an arms race far more dangerous than the nuclear race, where only large nations, with humans in control, were in play; it calls for a declaration of war on AI weaponry. We are facing a future where even small nations, rogue states and actors within states could get hold of this technology. That is a terrifying prospect.


Sunday, September 10, 2017

ResearchED – 1,000 teachers turn up on a Saturday for grassroots event…

Way back I wrote a piece on awful INSET days and how inadequate they are as CPD, often promulgating half-baked myths and fads. Organisations don't, these days, throw their customers out of the door for an entire day of training. The cost and load on parents in terms of childcare is significant, and kids lose about a week of schooling a year. There is no convincing research evidence that INSET days have any beneficial effect. Many are hotchpotches of non-empirical training; many (not all) are ill-planned, dull and irrelevant. So here's an alternative.
ResearchED is a welcome antidote. A thousand teachers rock up at a school in the East End of London to spend their Saturday with 100 speakers (none of whom are paid), sharing their knowledge and experience. What's not to like? This is as grassroots as it gets. No gun to the head by the head – just folk who want to be there, most as keen as mustard. They get detailed talks and discussions on a massive range of topics, but above all the event tries to build an evidence-based approach to teaching and learning.
Judging from some on Twitter, conspiracy theories abound that Tom Bennett, its founder, is a bounder in the pocket of… well, someone or other. The truth is that the event is run on a shoestring, and there are no strings attached to the minimal sponsorship that hosts it. It's refreshingly free of the forced feel of quango-led events, large conferences and festivals of education. Set in a school, with pupils as volunteers and even a band playing soul numbers, it felt real. And Tom walks the floor – I'm sure, in the end, he talked to every single person there that day.
Tom invited me to speak about AI and technology, hardly a 'trad' topic. I did, to a full house with standing room only. Why? Education may be a slow learner, but young teachers are keen to learn about research, examples and what's new. Pedro De Bruyckere was there from Belgium to give an opposing view, with some solid research on the use of technology in education. It was all good. Nobody got precious.
But most of the sessions were on nuts-and-bolts issues, such as behaviour, teaching practice and assessment. For example, Daisy Christodoulou gave a brilliant and detailed talk on assessment, first demolishing four distorting factors, then giving teachers practical advice on alternatives. I can't believe any teacher would walk out of that talk without reflecting deeply on their own attitudes towards assessment and practice.
What was interesting for me was the absence of the usual 'teachers always know best' attitude – you know, that defensive pose that it's all about practice and that theory and evidence don't matter, which simply begs the question: what practice? People were there to learn, to see what's new, not to be defensive.
Even more important was Tom’s exhortation at the end to share – I have already done two podcasts on the experience, got several emails and Twitter was twittering away like fury. He asked that people go back to school – talk, write, blog… whatever… so that’s what I’ve done here. Give it a go – you will rarely learn more in a single day – isn’t that what this is all about?
