Buzzword Books - unusual, intriguing, intelligent, perceptive

Here, you'll find musings from our authors and staff. We don't promise daily updates. Just posts worth your time.

Wednesday 25 April 2018

CAN AI (ARTIFICIAL INTELLIGENCE) BE CONSCIOUS?



Technological positivists confidently predict it can. For an answer, we have to fast-forward to the year 3010, when a lecture from a Futurist puts an end to such speculation, as described in this extract from a remarkable, soon-to-be-published novel.

 

Picture a small group of 300-year-old elders served by a eugenically adapted underclass called Funcs and by sophisticated humanoid robots called Hubots, all controlled by 'brain in a box' cyborgs called Neuros, whose central nervous systems are mounted in mechanical bodies.


Now read on:



'Without going into formulas,' Katz, the Futurist, said, 'complexity is a function of time or duration. Note that time creates complexity. Therefore, a timeless state is necessary to avoid it. Consider this thought experiment...'
The screen changed to:
AI - TIME = INERTIA.
ORGANIC INTELLIGENCE + TIME = FAILURE.

'Artificial intelligence minus duration equals inertia. The machine has power but does nothing. Obvious enough. However, to understand the second proposition, we need to examine the diff between organics and intelligent machines. Can someone mention a diff? And, by the way, it has nothing to do with conduct or ordinary knowledge.'
   A white-haired fem elder in the front row said, 'Relative consciousness.'
   The Futurist nodded to her. 'Thank you, Erva. Despite our efforts, it's still impossible to program consciousness. It's easier to genetically adapt organics than to adapt machines: to build pain into bots, sensory cognition, responsiveness to unexpected situations, emotions, risk, social interactions... You can't turn responses like surprise or anguish into numbers. How do you code an algorithm for dopamine? Computers have no experience; they just run software. But let's assume, for now, that machine learning can eventually emulate a form of consciousness.' He waved a hand at Jason and Pitho. 'In fact, we have two augmented H10s in the audience that go some way toward this goal. So, accepting this as a future possibility, we'll provisionally ignore this factor. What else?'
   Pitho said, 'Entropy.'
   'Exactly. For those unfamiliar with the term, in isolated systems heat flows spontaneously from hot to cold. Things are caught in a process of natural disorganisation. For instance, broken glass doesn't revert to an unbroken state. The end point of that drift is scientifically termed thermodynamic equilibrium. Machines, likewise, are subject to entropy. Only living matter can be said to have "negative entropy". In other words, only organisms can persist, for a while at least, in the face of dissolution, by absorbing nutrients, combating disease and reproducing themselves. Note also that organics are the origin of machines. So, if there are not enough organics, machines will regress.'
   A Func from Combat in the fourth row spoke up. 'Not if we let bots pass the Singularity.'
   'Ah! And are we all sure what that is?' He looked directly at Mark5.
   Mark5 said, 'Something you can't undo?'
   'For the benefit of our young friend here, Singularity spans two aspects—the Palladium Council's embargo against machine reproduction, and the possibility that AI can outstrip OI, or Organic Intelligence.'
   The greasy-haired Hosie2 raised an arm. 'Hey man. You say, man, that, even if we let them build themselves, they'll tank...?'
   'Good question. Let's examine it. Firstly, the Precepts forbid machine autonomy. It doesn't Conform. But let's say, for the sake of argument, that we permit bots to reproduce themselves. It still goes nowhere. Because they'll soon have maint issues.'
   'Why?' the Combat Func asked.
   'Firstly, for a machine to self-replicate, it must extract the necessary materials and process them, estab a supply chain, set up and maintain assembly plants, secure a 100 per cent reliable energy source, estab a fail-safe, temp-controlled environment and manage many other factors. Then there'll be hardware failures. And software inevitably becomes corrupted, because complex inert systems eventually degrade. Even if bots self-reproduced, they couldn't survive. Because logical inorganic intelligence inevitably develops defects.'
   'Not if the coding's perfect...'
   'It can't be.'
   'Why?'
   'Because organics wrote the code and human intelligence is flawed. Therefore, anything produced by human intelligence is flawed. Therefore, all coding is flawed. And attempts to patch those flaws add complexity and increase the system's entropy. Which brings us to the prob of organic intelligence. Do you see where this is going?'