This feature is all about Artificial Intelligence, or A.I. for short: a hallmark of many sci-fi books, and one that, perhaps unnervingly, is becoming less a purely fictional idea and more a genuine possibility. We’ll have a look at what A.I. really is, ask whether artificially intelligent beings should be considered people, share an interview with Karl Drinkwater – author of the Lost Solace series – and, of course, offer a few recommendations too.
An A.I. Reality?
What was once pure science fiction doesn’t even seem improbable anymore; many people see it as an inevitability, given the leaps and bounds being made in technology – even the potential for human-like AI beings, thanks to advancements in robotics and prosthetics.
Some might proclaim, “But we already have AI!”
We don’t. Not yet.
Machine learning is often confused with true AI. It is regarded as the first step towards AI: machines are now capable of adapting to a defined set of rules using huge amounts of data, and can learn or create new rules based on that information. To be truly intelligent, though, a machine must be able to transfer and apply its learning in a context other than the one it was programmed for. True intelligence would allow the machine to take what it has learned in one specific context and apply it to a brand-new scenario it hasn’t encountered before.
Any machine still needs a human being to set its rules and formulas before it can begin assessing data, even if over time it becomes far better at completing that task than any human – for example, recognising in an instant whether a picture contains a wolf or a dog, a leopard or a cheetah.
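The gap between pattern recognition and true transfer can be sketched with a toy classifier. This is purely illustrative – the features, numbers and labels below are invented – but it shows how a machine “learns” only within the narrow task a human set up for it:

```python
import math

# Toy training data: each animal is described by two hand-picked features
# (body weight in kg, ear-pointiness score 0-1). All values are invented
# purely for illustration.
TRAINING_DATA = [
    ((40.0, 0.90), "wolf"),
    ((45.0, 0.85), "wolf"),
    ((10.0, 0.30), "dog"),
    ((25.0, 0.50), "dog"),
]

def classify(features):
    """1-nearest-neighbour: label a new animal by its closest training example."""
    nearest = min(TRAINING_DATA, key=lambda pair: math.dist(pair[0], features))
    return nearest[1]

print(classify((42.0, 0.80)))   # near the wolf examples -> "wolf"
print(classify((12.0, 0.35)))   # near the dog examples  -> "dog"
```

The paragraph’s point still holds: this “learner” is useless outside the wolf/dog task a human built it for. True intelligence would mean taking the underlying idea and applying it to a problem it has never seen.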
Is complex artificial intelligence any different to human (or animal) intelligence?
Much of the ongoing research on AI centres on the neural network of the brain and the way it processes and reacts to data. You could argue that, in some respects, our brain works as an extremely complex version of the binary and logic code at the foundation of many software systems. If item is hot, remove hand. If dehydrated, send signal for water. If scared, release adrenaline. Are other human actions, conscious and subconscious, not just a complex version of this?
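The brain-as-logic-code analogy above can be written down quite literally as a lookup of programmed reflexes. A minimal sketch (the stimuli and responses are just the examples from the paragraph):

```python
# Hard-coded "reflexes": each stimulus maps to exactly one response,
# mirroring the if-hot-remove-hand style rules described above.
REFLEXES = {
    "hot": "remove hand",
    "dehydrated": "send signal for water",
    "scared": "release adrenaline",
}

def react(stimulus):
    """Return the programmed response, or nothing for an unknown stimulus."""
    return REFLEXES.get(stimulus, "no response")

print(react("hot"))         # remove hand
print(react("unfamiliar"))  # no response - nothing was programmed for it
```

The interesting question the section raises is whether human behaviour is just this table scaled up by billions of entries and layered feedback loops, or something qualitatively different.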
And if this is the case, and we can replicate it in a machine so that it can exist and complete tasks autonomously, how far away are we from giving certain rights to machines? Does simply being the creators give us the right of ownership? After all, we create our children, but that doesn’t mean they are our property.
Of course, being able to work independently wouldn’t necessarily mean a machine deserves any rights – many machines already work with minimal interference.
“I think, therefore I am” – René Descartes
So what exactly would take a machine to the next level – to the status of an animal, to the status of a person? Does this sound absurd to consider, or absolutely worthy of consideration?
How can we even approach or define the differences between A.I. and biological intelligence?
I feel like these might be worthy milestones when considering the rights of a synthetic/android/AI:
- Ability to think – if an intelligence can think and weigh up options, deciding on the best outcome, does that make it alive enough to consider? In theory though, machines can already do this in some contexts based on the data of past experiences.
- Self-preservation – if the machine wants to live, if it removes itself from dangerous situations just like any animal, is that enough? Again, this is difficult, because it is still something that can be programmed – it doesn’t necessarily mean that the AI actually feels anything. Can it ever be programmed to really feel tangible pain or fear without a nervous system? Could that unpleasantness ever be replicated in a machine running on code, however complex?
- Emotions – leading on from the last point, it would be hard to know for sure whether an android could ever really feel the same emotions as we do, without a true nervous system, when its whole self will have had to be programmed by a human being. But if we could prove that a machine could think, feel, act independently and desire to live, we could also deduce that it would know it exists. Surely this would be reason enough to give an android rights?
- Or perhaps no matter the technology, without some form of biological anatomy an AI could never truly be considered a person or animal and deserving of rights? Humans and animals have millions of years of evolution behind them leading to the survival, pleasure, reward, reproduction, hunger and other desires and centres wired into their brains. Can this ever truly be replicated with a machine? Is there a difference between complex biological neural code and complex code far off in the future that makes up an Artificial Intelligence?
- Would that intelligence ever actually feel that it exists, feel love and pain – or would it only appear that way because we programmed it to be so?
There are so many questions to consider if we look deeply enough, and it’s hard to know how the future will look. We as human beings have been pretty terrible at predictions in the past (where’s my flying car? It’s 2020!).
Will the future ever mirror campaigns for women’s rights, black rights and trans rights – with android rights?
Next up we have an interview with Karl Drinkwater, author of the Lost Solace series, before a handful of recommendations for books featuring Artificial Intelligence! I’m particularly enthused to share this interview with you as Karl is obviously an intelligent thinker with a lot of ideas – make sure your brain is ready!
Hi Karl, thanks for taking part in this feature on Artificial Intelligence!
For anyone unfamiliar, could you tell us a little more about yourself and your work?
I write in multiple genres. It’s something we’re generally advised against doing, but I am more concerned with creating interesting characters and stories than fitting into a single category. Sometimes you have to match the story to a genre where it fits more naturally, rather than force a round peg into a square hole. So I write sci-fi (mostly space opera), horror/suspense, and literary/contemporary. It’s generally the sci-fi that earns me a living, but I love all my books. Eight of my books have been published, and two more are currently with editors for release this year.
Your book Lost Solace features an artificial intelligence (Clarissa) that plays a pretty central role to the story. Was this inspired by anything else you’ve read/watched or was it something you’d always planned – is this a theme you are exploring in later books?
The final version of Lost Solace was very different from the original idea. My notes were about a man who explored ruined ships looking for treasure and artifacts, a kind of Indiana-Jones-in-space, accompanied by a military robot as a companion which dealt with all the dangers. At the end the explorer got his comeuppance by finding out his future, and it being an agonising one. There’s still a seed of that in Lost Solace, but Opal is very different from that guy, and not just in her sex and background. She’s nuanced, and is motivated by something more selfless than greed. As I wrote the story I wanted to show her courage by taking away the robot that could fight her fights for her, forcing her to rely on herself. At the same time, I realised I could also reveal much of her character through dialogue as well as action, which required a companion, and so the robot became a ship and a voice. It let me play with issues of trust and relationship growth, and a whole raft of side issues came out of that, relating to whether an AI could be a person and a friend. I knew I’d made the right choice because the story began to reveal itself organically, and the teamwork between Opal and ViraUHX/Via/Clarissa/Athene (note the joy of an AI, and how it can redefine itself multiple times!) led to decisions and reactions I hadn’t even expected, but which felt natural. I also loved the way it meant I could let the reader do much of the work. I hate infodumps, and with Lost Solace decided to try an extreme experiment. Normally we know a protagonist’s motivations right at the start, but I decided to reverse it, and only reveal her motivations at the end of the book. Until then I hoped to win the reader’s trust and carry them along. I love to detour away from established “rules of fiction” in this way.
All the reviews of Lost Solace praise the way it has only two main characters, a human and an AI, both female, and yet their relationship carries the book. I was surprised at how much readers loved Clarissa the AI, and accepted her almost immediately as the equivalent of a human. Although the sequel, Chasing Solace, is both deeper and wider, I wanted to keep the key elements of the Lost Solace fingerprint. So, as well as conflict and danger on a Lost Ship (this time a much more ominous one, since the setting is a Gigatoir – a space abattoir, which has been altered by the Null to be an even more horrifying place), it was key that Opal’s relationship to the AI – now self-identifying as the warrior goddess Athene – would be a core element to explore. In fact, as I said, the book goes wider and deeper, so as well as Athene we meet two other AIs, who are all different, partly due to their varying histories and places in the universe. So we also meet the male VigMAX, who may well be more than a match for Athene; and Opal’s Eternal Warrior suit splinter AI is able to develop its own independent personality to a degree. So the AI sphere expands in that book, even going so far as to explore things such as what would a battle between two AIs in a virtual world look like? How would they interpret the conflict of minds? That’s one of my favourite scenes in the book, and was a late addition, after the first draft had been past my Insider Team of beta readers. At that point I cut a massive section of nearly 12,000 words, an infiltration mission set in a mining colony – authors have to be willing to kill their darlings if it makes the work stronger! – and in its place I wrote new scenes about AI conflict.
So AIs are definitely a theme in these sci-fi books, and when I wrote my latest book, Helene, I framed the whole story around the idea of the early development of a super AI. Helene is an “Emergent AI Socialisation Specialist” and the story is mostly her interactions with a new military AI, training and encouraging it to grow into a rounded persona, which involves a number of challenges – not least of which was a world where the military Government is more concerned with keeping secrets than with kindness, and humans and AIs may be categorised as usable resources, not as individuals with personalities. The chapters are interspersed with “Aseides’ Law Of Nuvo-Emergent AI Development” which define the stages an AI develops through in this fictional world. In fact, I have a T-shirt in my wardrobe with those laws printed on it.
A.I. takes different forms in science fiction; how far away do you think we are from an A.I. reality in some form?
Unfortunately, much of that comes down to philosophy and how we define intelligence (it’s a contested topic) and personality, and what our perception of humans is. Unless we define what it is to be human, we cannot draw accurate comparisons to other intelligences, and to whether they qualify as that category … and, therefore, how close we are to being able to create them. It also affects how we would determine the moral outcomes from creating AI.
For example, are humans just organic machines, determined by genes in some Dawkinsian nightmare, with no freedom at all? In that case it may be that an AI could end up as a better and freer version of us, and – consequently – deserving of more rights than a human. Or are humans somehow more than the parts that make us up, gaining a unified consciousness? A bit like how the parts of a motorbike are static items, but when cobbled together they produce a new property, speed – and so our combined bodies and brains produce a new property, consciousness. Could an AI do the same thing and be more than code? If so, it would surely deserve rights. Whereas, if we have a superstitious world view (belief in supernatural forces, which may include concepts like souls and gods), then that could rule out the possibility of an AI ever being equal to us – so from that viewpoint they would never have rights, because these viewpoints tend to tie supernatural specialness with being organic and created by a supernatural being, not by a human.
Although we talk about AI, Artificial Intelligence, implying it’s the intelligence which is the point of interest – which can be reduced to calculation and connective abilities – I sometimes favour the idea of talking about Artificial Personalities. Why should the contested concept of intelligence be of primary concern? Software can already outdo humans in many tests of “intelligence”, but that software isn’t a person. To be a person you have to feel. Feeling and empathy is part of what makes an entity whole. It drives us, encourages us. Pain and pleasure make us rounded. It’s not intelligence that makes someone lovable and noble and worthy of respect, it’s their other characteristics and achievements, and their ability to overcome obstacles and achieve goals even when it requires self-sacrifice. I imagine it is no different with AI: the challenge will be for them to be capable of feeling. That’s probably why, in my speculative books, it’s the area that most interests me to explore. We don’t measure AIs by how accurately they can calculate a parabolic launch; we measure them by how much they feel, and how much they can grow, and how that meshes with us. AI as respected companion, not as tool; a thing that completes us, teeth fitting our indentations to fill the gaps that exist, making a unified whole which is greater than the sum of its parts.
Do you think Artificial Intelligence has more potential to be a blessing or a curse, and is there a point at which humans would no longer have the right to terminate their creation?
AI might be the best thing that ever happens to us. Humans have a flaw of being speciesist, lacking imagination and seeing everything in our terms only, coveting and laying claim to all we see. We perceive Earth as our planet in the sense of property (not in the sense of protective stewardship), even though we are only one of billions of life forms here. We share out land and resources with human laws, even though these things don’t belong to us. We have a long way to go to achieve wisdom. Maybe AIs could help us in that journey, point out things that challenge our limited perspective, encourage us to go beyond self interest. They could offer a fresh perspective and a non-self-interested view. They could save us.
If we listened to them.
Humans don’t always listen to wisdom, or honour the prophet.
But for AIs to offer this perspective they would have to be completely free. If we shape and limit them too much, then they are not an AI, just an A. An A determined by what we tell them to be. GIGO is a truth universally acknowledged. This is why the foundations are so important, and potentially so dangerous (and why my novella Helene is all about one approach to AI foundations). We would have to be careful to avoid instilling our own values, with their embedded prejudices. We would have to provide the wealth of history and culture as data, but also let the AI go beyond that and record its own data and sensations, and let the AI make of it what it will. And we would have to be very careful about the answers we give when it asks difficult questions. Why is there enough wealth for everyone, but it is mostly gathered in the hands of a few? Why do we assign different rights to different sexes and in different places and at different times and to different species? Why do we encourage, rather than discourage, our population growth, despite the impact on the planet, other species, and the amount of resources available to us? This is why we’d have to be very careful about who is in charge of the AI’s development. Like any precious mind, it can be shaped. If the shaper is self-interested, or operating on a selfish agenda, then the AI will be manipulated. We would need to instead learn to listen. Questioning everything is what any conscious actor is obligated to do, and stifling that questing attitude should always raise warning flags.
So, on the whole, I believe AI would be a blessing.
The idea of it being a curse is a fun one in sci-fi and horror. What if the AI weighs things up and decides humans are flawed and – in classic sci-fi horror fashion – decides the fleshbags and meatsacks should be wiped out? Maybe we shouldn’t rule out those conclusions too quickly, maybe the AI has a point, and we should listen and learn from it. What’s often seen as a narrow caricature of a villain might actually be something more rounded, because in reality no one is all good, or all bad (or, at least, good fiction has to give good people flaws and bad people redeeming features if it is to be believable rather than one-dimensional).
As to creation: it may not be humans that create the AI. Robots make robots. AI may develop AI. It’s iterative. The same as we made small tools to make smaller ones to make smaller machines capable of manipulating smaller elements, until one day we’re making tiny cancer-cell destroyers out of molecular machines. So it is actually more the case that we may set it in motion, but it wouldn’t be us that creates the final AI, it would be other AIs, and – once created – they wouldn’t be any more static than anything else in this world of flux. They would grow and evolve and change and – in that sense – be creating themselves.
The question of rights could be a book in itself. I touched on them above, when thinking about conceptions of human consciousness. The issue of appended rights partly depends on how we perceive ourselves and how AI might relate to us. Rights is a particularly tricky topic, since we use the one word to discuss many different concepts. For example, rights may be defined as protections enshrined by law – so they are then susceptible to definition by the powerful, can be given and removed over time, altered, a matter of arbitrary distinction. In this view, if the law says we have no right to x, y or z, then we have no right. By this widely-held perspective, if those who benefit from utilising AIs as tools want to make sure they have full control, and say the AI has no rights, then it doesn’t. The humans could terminate the AI. The justice of the decision is not for debate in their view.
An opposing and equally widely-held view is that rights are inherent, and potentially separate from what the law allows. So, even if the law removes or doesn’t grant rights, we still have them: it is the law that is wrong. Many of us adopt this view. So, even when slavery was legal, the act was still morally wrong: the law was refusing to acknowledge the rights of people forced into slavery. It’s an acknowledgement that law and justice are not the same thing at all, and we’re just lucky if there is some overlap between the two. And so, if an AI can count as conscious, and if it can have the ability to suffer, then I’d say it intrinsically has the right to have that suffering considered, from a non-selfish viewpoint. It’s entirely possible that we’d have no right to arbitrarily terminate them. (Note that rights are usually a two way thing: one being’s right confers an obligation on another being to respect that right.)
Finally, I’d say the question of derivation is irrelevant. Just because two humans create a child, and it is, in one sense, “their creation”, they do not therefore have a right to kill that child if they decide that it has become a burden to them when it is six years old, or because its views differ from theirs when it is fifteen. Creating something gives no right to kill it if the thing counts as an individual with its own concerns and suffering. (Obviously this is a foundation of vegan ethics when taken to its logical conclusion – vegan ethics being human ethics, too, since it’s a set of moral guidelines that apply to all sentient beings.) If I write 10,000 words and decide they are rubbish, I can delete them with no qualms, because it is not an ethical issue. If I create a sentient being, or assist in the creation of one, then harming it is an ethical issue. So it would go back to the start, and the perception of whether the AI is sentient and conscious and has a personality, or if it is just lines of code.
Thanks for taking part today Karl, it’s much appreciated! Finally, what can we look forward to in the pipeline from Karl Drinkwater?
I’m proud to be part of the 20Books space opera pack, which will be launched on 20th May and available to buy for only two weeks: https://20bookpacks.com/SpaceOpera. I’d love to see word spread about that bundle of books. It’s amazing value, and once the end date passes it will be removed from sale forever, so there’s only a limited time to grab that bargain!
Immediately after that my next Lost Tale of Solace (LToS) will be published. These are novellas which spin off from the main Lost Solace books, and let me tell standalone stories from elsewhere in the Lost Solace universe. The next LToS is Grubane, and is a story about a great military commander facing a morally difficult mission, written from the perspective of his companion AI, Aurikaa12. When I first planned it I was going to tell a more conventional narrative and let readers see inside the major’s head, but once I started writing it all shifted, and it seemed more natural to keep his thoughts and motivations hidden, but to see it from a naïve but evolving AI viewpoint. I also interspersed the main narrative with the commander’s secret notes about an ancient game called “Chess”, and how he applies its lessons and philosophy in order to give himself an edge in a hard universe where he faces threats from without and within. It’s the kind of multi-layer twist I love to add to my stories, so that they can be read as a straightforward exciting narrative, or can be examined at a deeper level. Something for everyone, I hope.
A.I. in Sci-Fi books – Recommendations
Embers of War – Gareth L Powell
The titular first book in Gareth Powell’s Embers of War trilogy, featuring Trouble Dog, a self-aware destroyer-class spaceship. Trouble Dog was used to wipe out a whole biosphere and feels compelled to atone, rescuing and defending others throughout interstellar space with the House of Reclamation. Trouble Dog and other ships like her are A.I. with human (and a little bit of canine) DNA – the brain of the ship being organic material. At first I found this concept a little too wacky and ‘out-there’, but in actual fact it’s done really well, and this is a fun-but-serious, thought-provoking and exciting book with brilliant characters. Read my review here for the full lowdown.
Alien: The Cold Forge – Alex White
The Alien universe is well known for its synthetics – androids that are often indistinguishable from human beings, but for the creamy fluid in place of blood. They’re often programmed with sinister motives by the corporation Weyland-Yutani – bring back alien life form, crew expendable.
In The Cold Forge, author Alex White builds on the synthetic elements with their own ideas, making the A.I an integral part of the story without simply doing a re-skin of previous synthetic characters. We are introduced to Marcus, whose priority here is protecting human life. The twist is that he can also be accessed by one of the main characters in the story, Blue. She is physically disabled and so uses Marcus’ android body through a neural connection, able to effectively live in his body whilst connected. She is therefore capable of overriding his own programming. This programming basically serves as his moral code. As such there are a number of thought provoking questions raised about the rights and the treatment of sophisticated A.I.
Check out my review for a more in depth look at the book as a whole.
From Darkest Skies – Sam Peters
From Darkest Skies is a high-concept science fiction thriller wrapped around a love story: a man’s search for the truth about his dead wife, and his relationship with the artificial intelligence he has built to replace her. Set in a future where the aliens came, waged war, and then vanished again, this book looks at love and the relationships humans forge – can you fall in love with an AI if it has the memories of your wife? As well as raising a lot of thought-provoking questions, this book has been widely praised for its fine world-building.
Do Androids Dream of Electric Sheep? – Philip K Dick
There was no way I could omit this book from this list, despite it still sitting unread on my shelf. The novel that inspired Blade Runner and scores of sci-fi authors to come, Do Androids Dream of Electric Sheep? looks at many of the questions posed in this feature, way back in 1968. I think the author might have been better adding another hundred years or so to the timeline, though:
It was January 2021, and Rick Deckard had a license to kill.
Somewhere among the hordes of humans out there, lurked several rogue androids. Deckard’s assignment–find them and then…”retire” them. Trouble was, the androids all looked exactly like humans, and they didn’t want to be found!
All Systems Red (Murderbot) – Martha Wells
The series on everyone’s lips at the minute, Martha Wells’ Murderbot Diaries features a self-aware SecUnit that has hacked its own governor module, and refers to itself (though never out loud) as “Murderbot.” Scornful of humans, all it really wants is to be left alone long enough to figure out who it is.
There are currently five novellas and a short story in the series, with number six, Fugitive Telemetry, expected in Spring 2021.
Is your mind zapped? Do you have your own thoughts on AI, or books (or movies!) you HAVE to recommend? Please let us know in the comments or on Twitter – I’m always looking for new books, both for myself and to recommend to others. I hope you enjoyed reading. The next big feature is Wizards!
Have your pointy hat ready.