
Tuesday 12 December 2017

Robotic AI: key to utopia or instrument of Armageddon?

Recent surveys around the world suggest the public feel they don't receive enough science and non-consumer technology news in a format they can readily understand. Despite this, one area of STEM that captures the public imagination is an ever-growing concern with the development of self-aware robots. Perhaps Hollywood is to blame. Although there is a range of well-known cute robot characters, from WALL-E to BB-8 (both surely designed with a firm eye on the toy market), Ex Machina's Ava and the synthetic humans of the Blade Runner sequel appear to be shaping our suspicious attitudes towards androids far more than real-life projects are.

Then again, the idea of thinking mechanisms and the fears they bring out in us organic machines has been around far longer than Hollywood. In 1863 the English novelist Samuel Butler wrote an article entitled Darwin among the Machines, wherein he recommended the destruction of all mechanical devices since they would one day surpass and likely enslave mankind. So perhaps the anxiety runs deeper than our modern technocratic society. It would be interesting to see - if such concepts could be explained to them - whether an Amazonian tribe would rate intelligent, autonomous devices as dangerous. Could it be that it is the humanoid shape that we fear rather than the new technology, since R2-D2 and co. are much-loved, whereas the non-mechanical Golem of Prague and Frankenstein's monster are pioneering examples of anthropoid-shaped violence?

Looking in more detail, this apprehension appears to be split into two separate concerns:

  1. How will humans fare in a world where we are not the only species at our level of consciousness - or possibly even the most intelligent?
  2. Will our artificial offspring deserve or receive the same rights as humans - or even some animals (i.e. appropriate to their level of consciousness)?

1) Utopia, dystopia, or somewhere in the middle?

The development of artificial intelligence has had a long and tortuous history, with the top-down and bottom-up approaches (plus everything in between) still falling short of the hype. Robots as mobile mechanisms, however, have recently begun to catch up with fiction, gaining complete autonomy in both two- and four-legged varieties. Humanoid robots and their three principal behavioural laws have been popularised since 1950 via Isaac Asimov's I, Robot collection of short stories. In addition, fiction has presented many instances of self-aware computers with non-mobile extensions into the physical world. In both types of entity, unexpected programming loopholes prove detrimental to their human collaborators. Prominent examples include HAL 9000 in 2001: A Space Odyssey and VIKI in the Asimov-inspired film I, Robot. That these decidedly non-anthropomorphic machines have been promoted in dystopian fiction runs counter to the idea above concerning humanoid shapes - could it be instead that it is a human-like personality that is the deciding fear factor?

Although similar attitudes might be expected of a public with limited knowledge of the latest science and technology (except where given the gee-whiz or Luddite treatment by the right-of-centre tabloid press), some famous scientists and technology entrepreneurs have also expressed doubts and concerns. Stephen Hawking, who appears to be getting negative about a lot of things in his old age, has called for comprehensive controls around sentient robots and artificial intelligence in general. His fear is that we may miss something when coding safeguards, leading to our unintentional destruction. This is reminiscent of HAL 9000, who became stuck in a Moebius loop after being given instructions counter to his primary programming.

Politics and economics are also a cause for concern in this area. A few months ago, SpaceX and Tesla's Elon Musk stated that global conflict is the almost inevitable outcome of nations attempting to gain primacy in the development of AI and intelligent robots. Both Mark Zuckerberg and Bill Gates promote the opposite opinion, with the latter claiming such machines will free up more of humanity - and finances - for work that requires empathy and other complex emotional responses, such as education and care for the elderly.

All in all, there appears to be a very mixed bag of responses from sci-tech royalty. However, Musk's case may not be completely wrong: Vladimir Putin recently stated that the nation who leads AI will rule the world. Although China, the USA and India may be leading the race to develop the technology, Russia is prominent amongst the countries engaged in sophisticated industrial espionage. It may sound too much like James Bond, but clearly the dark side of international competition should not be underestimated.

There is a chance that attitudes are beginning to change in some nations, at least for those who work in the most IT-savvy professions. An online survey across the Asia-Pacific region in October and November this year compiled some interesting statistics. In New Zealand and Australia only 8% of office professionals expressed serious concern about the potential impact of AI. However, this was in stark contrast to China, where 41% of interviewees claimed they were extremely concerned. India lay between these two groups at 18%. One factor these four countries had in common was the very high interest in the use of artificial intelligence to free humans from mundane tasks, with the figures here varying from 87% to 98%.

Talking of which, if robots do take on more and more jobs, what will everyone do? Most people just aren't temperamentally suited to the teaching or caring professions, so could it be that those who previously did repetitive, low-initiative tasks will be relegated to a life of enforced leisure? This appears reminiscent of the far-future, human-descended Eloi encountered by the Time Traveller in H.G. Wells' The Time Machine; some wags might say that you only have to look at a small sample of celebrity culture and social media to see that this has already happened...

Robots were once restricted to either the factory or the cinema screen, but now they are becoming integrated into other areas of society. In June this year Dubai introduced a wheeled robot policeman onto its streets, with the intention of making one quarter of the police force equally mechanical by 2030. It seems to be the case that wherever there's the potential to replace a human with a machine, at some point soon a robot will be trialling that role.

2) Robot rights or heartless humans?

Hanson Robotics' Sophia gained international fame when Saudi Arabia made her the world's first silicon citizen. A person in her own right, Sophia is usually referred to as 'she' rather than 'it' - or at least as a 'female robot' - and one who has professed the desire to have children. But would switching her off constitute murder? So far, her general level of intelligence (as opposed to specific skills) varies widely, so she's unlikely to pass the Turing test in most subjects. One thing is for certain: for an audience used to the androids of the Westworld TV series or Blade Runner 2049, Sophia is more akin to a clunky toy.

However, what's interesting here is not so much Sophia's level of sophistication as the human response to her and other contemporary human-like machines. The British tabloid press have perhaps somewhat predictably decided that the notion of robots as individuals is 'bonkers', following appeals to give rights to sexbots - who are presumably well down the intellectual chain from the cutting edge of Sophia. However, researchers at the Massachusetts Institute of Technology and officers in the US military have shown aversion to causing damage to their robots, which in the case of the latter was termed 'inhumane'. This is thought-provoking since the army's tracked robot in question bore far greater resemblance to WALL-E than to a human being.

A few months ago I attended a talk given by New Zealand company Soul Machines, which featured a real-time chat with Rachel, one of their 'emotionally intelligent digital humans'. Admittedly Rachel is entirely virtual, but her ability to respond to words (their tone as well as their meaning) and to physical and facial gestures presented an uncanny facsimile of human behaviour. Rachel is a later version of the AI software first showcased in BabyX, who easily generated feelings of sympathy when she became distraught. BabyX is perhaps the first proof that we are well on the way to creating a real-life version of David, the child android in Spielberg's A.I. Artificial Intelligence; robots may soon be able to generate powerful, positive emotions in us.

Whilst Soul Machines' work is entirely virtual, the mechanical shell of Sophia and other less intelligent bipedal robots shows that the physical problem of subtle, independent movement has been almost solved. This begs the question, when Soul Machines' 'computational model of consciousness' is fully realised, will we have any choice but to extend human rights to them, regardless of whether these entities have mechanical bodies or only exist on a computer screen?

To some extent, Philip K. Dick's intention in Do Androids Dream of Electric Sheep? to show that robots will always be inferior to humans due to their facsimile emotions was reversed by Blade Runner and its sequel. Despite their actions, we felt sorry for the replicants since, although they were capable of both rational thought and human-like feelings, they were treated as slaves. The Blade Runner films, along with the Cylons of the Battlestar Galactica reboot, suggest that it is in our best interest to discuss robot rights sooner rather than later, both to prevent the return of slavery (albeit of an inorganic variety) and to limit a prospective AI revolution. It might sound glib, but any overly-rational self-aware machine might consider itself the second-hand product of natural selection and therefore the successor of humanity. If that is the case, then what does one do with an inferior predecessor that is holding it back from its true potential?

One thing is certain: AI robot research is unlikely to slow down any time soon. China is thought to be on the verge of catching up with the USA, whilst an Accenture report last year suggested that within the next two decades the implementation of such research could add hundreds of billions of dollars to the economies of participating nations. Perhaps for peace of mind AI manufacturers should follow the suggestion of a European Union draft report from May 2016, which recommended an opt-out mechanism - a euphemistic name for a kill switch - to be installed in all self-aware entities. What with human fallibility and all, isn't there a slight chance that a loophole could be found in Asimov's Three Laws of Robotics, after which we find out whether we have created partners or successors..?

Wednesday 24 February 2016

Drowning by numbers: how to survive the information age

2002 was a big year. According to some statistics, it was the year that digital storage capacity overtook analogue: books gave way to online information; binary became king. Or hyperbole to that effect. Between email, social media, websites and the interminable selfie, we are all guilty to greater or lesser extent of creating data archived in digital format. The human race now generates zettabytes of data every year (a zettabyte being a trillion gigabytes, in case you're still dealing in such minute amounts of data).
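For those who like to check such claims, the unit arithmetic is easy to verify. A minimal Python sketch, assuming the decimal (SI) prefixes rather than the binary 1024-based variants:

```python
# Sanity-checking the zettabyte claim, using decimal (SI) prefixes.
GIGABYTE = 10**9    # bytes
TERABYTE = 10**12   # bytes
ZETTABYTE = 10**21  # bytes

# A zettabyte really is a trillion gigabytes...
print(ZETTABYTE // GIGABYTE)   # 1000000000000

# ...so storing a single zettabyte on 1 TB portable hard drives
# would require a billion of them.
print(ZETTABYTE // TERABYTE)   # 1000000000
```

Which rather puts the family photo collection into perspective.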

So what's so bad about that? More and more we rely on syntheses of information in order to keep up with the exponential growth of knowledge revealed to our species by the scientific and other methods. Counter to Plato's 2,400-year-old dialogue Phaedrus, we can no longer work out everything important for ourselves; instead, we must rely on analysis and results created by other, often long-dead, humans. Even those with superb memories cannot retain more than a minuscule fraction of the information known about even one discipline. In addition, we can now create data-rich types of content undreamed of in Plato's time. Some, MRSI medical scans being an ad-hoc example, may require long-term storage. If quantum computing becomes mainstream, then that will presumably generate an exponential growth in data.

What then, are the primary concerns of living in a society that has such high demands for the creation and safe storage of data? I've been thinking about this for a while now and the following is my analysis of the situation.

1. Storage. In recent years it has become widely known that CDs and, to a lesser extent, DVDs are subject to several forms of disk rot. I've heard horror stories of people putting their entire photo and/or video collection onto portable hard drives, only for these to fail within a year or two, the data being irrevocably lost. The advent of cloud storage lessens the issue, but not completely. Servers are still subject to all sorts of problems, with even enterprise-level solutions suffering from insufficient disaster recovery and resilience (to use the terms we web developers favour). I'm not saying audio tapes, vinyl records and VHS were any better - far from it - but there is a lot less data stored in those formats. There are times when good old-fashioned paper still rules, as it does in the legal and compliance sectors I've had contact with.

2. Security and privacy. As for safety, the arms race against hackers and their ilk is well and truly under way. Incompetence also plays its part: when living in the UK I once received a letter stating that my children's social services records, including their contact details, had become publicly available. This turned out to be due to the loss of a memory stick containing database passwords. As for identity theft, well, let's just say that Facebook is a rude word. I managed to track down an old friend after nearly twenty years incommunicado, finding details such as his address, wife's name, occupation and so on - mostly via Facebook - in less than half an hour. Lucky I'm not a stalker, really!

Even those who avoid social media may find themselves with some form of internet presence. I had a friend who signed a political petition on paper and then, several years later, found his name on a petition website. Let's hope it was the sort of campaign that didn't work against his career - these things can happen.

And then there's the fact that being a consumer means numerous manufacturers and retail outlets will have your personal details on file. I've heard that in some countries if you - and more particularly your smartphone - enter a shopping mall, you may get a message saying that as a loyal customer of a particular store there is a special sale on just for you, the crunch being that you only have a limited time, possibly minutes, to get to the outlet and make a purchase. Okay, that doesn't sound so bad, but the more storage locations that contain your personal details, the greater the chance they will be used against you. Paranoid? No, just careful. Considering how easy it was for me to become a victim of financial fraud about fifteen years ago, I have experience of these things.

As any Amazon customer knows, you are bombarded with offers tailored via your purchase record. How long will it be before smart advertising billboards recognise your presence, as per Steven Spielberg's Minority Report? Yes, it's the merchandiser's dream of ultimate granularity in customer targeting, but it's also a fundamental infringement of the customer's anonymity. Perhaps everyone will end up getting five seconds of public fame on a daily basis, thanks to such devices. Big Brother is truly watching you, even if most of the time it's for the purpose of flogging you consumer junk.

3. Efficiency. There are several million blog posts each day, several hundred billion emails and half a billion daily tweets. How can we possibly extract the wheat from the chaff (showing my age with that idiom) if we spend so much time ploughing through social media? I, for one, am not convinced there's much worth in a lot of this new-fangled stuff anyway (insert smiley here). I really don't want to know what friends, relatives or celebrities had for breakfast or which humorous cat videos they've just watched. Of course it's subjective, but I think there's a good case for claiming the vast majority of digital content is a complete load of rubbish. So how can we live useful, worthwhile or even fulfilled lives when surrounded by it? In other words, how do we find the little gems of genuine worth among the flood of noise? It seems highly probable that prominent nonsense theories such as the Moon-landing hoax wouldn't be anywhere near as popular were it not for the World Wide Web disseminating them.

4. Fatigue and overload. Research has shown that our contemporary news culture (short snippets repeated ad nauseam over the course of a day or so) leads to a weary attitude. Far from empowering us, bombarding everyone with the same information, frequently lacking context, can rapidly lead to antipathy. Besides which, if information is inaccurate in the first place it can quickly achieve canonical status as it gets spread across the digital world. As for the effect all this audio-visual over-stimulation is having on children's attention spans...now where was I?

5. The future. So are there any solutions to these issues? I assume that as we speak there are research projects aiming to develop heuristic programs that act as the electronic equivalent of a personal assistant. If a user carefully builds their personality profile, the program would be expected to extract nuggets of digital gold from all the sludge. Yet even personally-tailored smart filters that provide daily doses of information, entertainment, commerce and all points in between have their own issues. For example, unless the software is exceptional (i.e. rather more advanced than anything commercially available today) you would probably miss out on laterally- or tangentially-associated content. Even for scientists, this sort of serendipity is a great boon to creativity, but it is rarely found in any form of machine intelligence. There's also the risk that corporate or governmental forces could bias the programming...or is that just the paranoia returning? All I can say is: knowledge is power.
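To see why such filters are double-edged, consider how crude the underlying idea can be. The toy Python sketch below scores items against a user-built interest profile by simple keyword matching; the profile weights and feed items are entirely invented for illustration, and real systems are far more sophisticated, but the failure mode is the same: anything outside the profile's vocabulary - however serendipitously valuable - is silently discarded.

```python
# A toy 'personal assistant' filter: keep only feed items whose
# keyword-weighted relevance score clears a threshold.
# Profile and feed contents are hypothetical examples.

def score(text, profile):
    """Sum the weights of profile keywords appearing in the text."""
    words = set(text.lower().split())
    return sum(weight for keyword, weight in profile.items() if keyword in words)

def filter_feed(items, profile, threshold=1.0):
    """Return only the items relevant enough to show the user."""
    return [item for item in items if score(item, profile) >= threshold]

profile = {"fusion": 2.0, "robotics": 1.5, "ai": 1.0, "kittens": -3.0}
feed = [
    "Breakthrough in nuclear fusion confinement",
    "Ten adorable kittens you must see",
    "AI robotics lab opens in Auckland",
]
print(filter_feed(feed, profile))  # keeps the fusion and robotics items only
```

Note that a genuinely novel story using none of the profile's keywords would score zero and vanish - the lost-serendipity problem in miniature.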

All in all, this sounds a touch pessimistic. I think Arthur C. Clarke once raised his concern about the inevitable decay within societies that overproduce information. The digital age is centred on the dissemination of content that is both current and popular, but not necessarily optimal. We are assailed by numerous sources of data, often created for purely commercial purposes; rarely for anything of worth. Let's hope we don't end up drowning in videos of pesky kittens. Aw, aren't they cute, though?

Tuesday 26 January 2016

Spreading the word: 10 reasons why science communication is so important

Although there have been science-promoting societies since the Renaissance, most of the dissemination of scientific ideas was played out at royal courts, religious foundations or for similarly elite audiences. Only since the Royal Institution lectures of the early 19th century and such leading lights as Michael Faraday and Sir Humphry Davy has there been any organised communication of the discipline to the general public.

Today, it would appear that there is a plethora - possibly even a glut - on the market. Amazon.com carries over 192,000 popular science books and over 4,000 science documentary DVD titles, so there's certainly plenty of choice! Things have dramatically improved since the middle of the last century when, according to the late evolutionary biologist Stephen Jay Gould, there was essentially no publicly-available material about dinosaurs.

From the ubiquity of the latter (especially since the appearance of Steven Spielberg's original 1993 Jurassic Park) it might appear that most science communication is aimed at children - and, dishearteningly, primarily at boys - but this really shouldn't be so. Just as anyone can take evening courses in everything from pottery to a foreign language, why shouldn't the public be encouraged to understand some of the most important current issues in the fields of science, technology, engineering and mathematics (STEM), at the same time hopefully picking up key methods of the discipline?

As Carl Sagan once said, the public are all too eager to accept the products of science, so why not the methods? It may not be important if most people don't know how to throw a clay pot on a wheel or understand why a Cubist painting looks as it does, but it certainly matters as to how massive amounts of public money are invested in a project and whether that research has far-reaching consequences.
Here then are the points I consider the most important as to why science should be popularised in the most accessible way - although without oversimplifying the material to the point of distortion:

1. Politicians and the associated bureaucracy need basic understanding of some STEM research, often at the cutting edge, in order to generate new policies. Yet as I have previously examined, few current politicians have a scientific background. If our elected leaders are to make informed decisions, they need to understand the science involved. It's obvious, but then if the summary material they are supplied with is incorrect or deliberately biased, the outcome may not be the most appropriate one. STEM isn't just small fry: in 2010 the nations with the ten highest research and development budgets had a combined spend of over US$1.2 trillion.

2. If public money is being used for certain projects, then taxpayers are only able to make valid disagreements as to how their money is spent if they understand the research (military R&D excepted of course, since this is usually too hush-hush for the rest of us poor folk to know about). In 1993 the US Government cancelled the Superconducting Super Collider particle accelerator as it was deemed good science but not affordable science. Much as I love the results coming out of the Large Hadron Collider, I do worry that the immense amount of funding (over US$13 billion spent by 2012) might be better used elsewhere on other high-technology projects with more immediate benefits. I've previously discussed both the highs and lows of nuclear fusion research, which surely has to be one of the most important areas in mega-budget research and development today?

3. Criminal law serves to protect the populace from the unscrupulous, but since the speed of scientific advances and technological change runs way ahead of legislation, public knowledge of the issues could help prevent miscarriages of justice, or at least prevent money being wasted. The US public has spent over US$3 billion on homeopathy, despite a 1997 report by the President of the National Council Against Health Fraud that stated "Homeopathy is a fraud perpetrated on the public." Even a basic level of critical thinking might help in the good fight against baloney.

4. Understanding of current developments might lead to reliance as much on the head as the heart. For example, what are the practical versus moral implications of embryonic stem cell research (especially topical given President Obama's State of the Union pledge to cure cancer)? Or what about the pioneering work in xenotransplantation: could the next few decades see the use of genetically-altered pig hearts to save humans, and if so, would patients with strong religious convictions agree to such transplants?

5. The realisation that much popular journalism is sensationalist and has little connection to reality. The British tabloid press labelling of genetically-modified crops as 'Frankenstein foods' is typical of the nonsense that clouds complex and serious issues for the sake of high sales. Again, critical thinking might more easily differentiate biased rhetoric from 'neutral' facts.

6. Sometimes scientists can be paid to lie. Remember campaigns with scientific support from the last century that stated smoking tobacco is good for you or that lead in petrol is harmless? How about the DuPont Corporation refusing to stop CFC production, with the excuse that capitalist profit should outweigh environmental degradation and the resulting increase in skin cancer? Whistle-blowers have often been marginalised by industry-funded scientists (think of the initial reaction to Rachel Carson concerning DDT) so it's doubtful anything other than knowledge of the issues would penetrate the slick corporate smokescreen.

7. Knowing the boundaries of the scientific method - what science can and cannot tell us and what should be left to other areas of human activity - is key to understanding where the discipline should fit into society. I've already mentioned the moral implications and whether research can be justified due to the potential outcome, but conversely, are there habits and rituals, or just societal conditioning, that blinds us to what could be achieved with public lobbying to governments?

8. Nations may be enriched as a whole by cutting out nonsense and focusing on solutions for critical issues, for example by not having to waste time and money explaining that global warming and evolution by natural selection are successful working theories due to the mass of evidence. Notice how uncontroversial most astronomical and dinosaur-related popularisations are. Now compare to the evolution of our own species. Enough said!

9. Improving the public perspective of scientists themselves. The prevailing image still seems to be one of lone geniuses, emotionally removed from the rest of society and frequently promoting their own goals above the general good. Apart from the obvious ways in which this conflicts with other points already stated, much research is undertaken by large, frequently multi-national teams; think of the Large Hadron Collider, of course. Such knowledge may aid the removal of the juvenile Hollywood science hero (rarely a heroine) and increase support for the sustained efforts that require substantial public funding (nuclear fusion being a perfect example).

10. Reducing the parochialism, sectarianism and their associated conflict that if anything appears to be on the increase. It's a difficult issue and unlikely that it could be a key player but let's face it, any help here must be worth trying. Neil deGrasse Tyson's attitude is worth mentioning: our ideological differences seem untenable against a cosmic perspective. Naïve perhaps, but surely worth the effort?

Last year Bill Gates said: "In science, we're all kids. A good scientist is somebody who has redeveloped from scratch many times the chain of reasoning of how we know what we know, just to see where there are holes." The more the rest of us understand this, isn't there a chance we would notice the holes in other spheres of thought we currently consider unbending? This can only be a good thing, if we wish to survive our turbulent technological adolescence.