Once the two ongoing wars have been covered, other subjects begin to claim pride of place in the media. One of them, Artificial Intelligence, is taking up more and more column inches, most of them reading like a dystopian sci-fi fantasy.
Details vary quite a bit, but the overall thrust less so. Many commentators see AI as a threat to people’s jobs or even to the very survival of mankind. Robots will turn into Frankenstein’s monsters, they say, but with a much greater destructive power. Artificial intelligence will then trump the natural kind to annihilate humans and set up an electronic totalitarian realm from hell.
Unfortunately, my knowledge of computers barely stretches to using them in lieu of typewriters and looking up bits of information on Google. Hence I can’t judge the technological feasibility of such doomsday scenarios.
On general principles, however, I can’t imagine a creature ever being superior to its creator, though I’m sure some atheists feel differently. Come to think of it, computer geeks and chess players have joined forces to create software packages that can wipe the board with any human wood-pusher, but that’s a rather narrow area.
That example isn’t substantially different from, say, cars made by man and yet capable of going much faster than a human can run. Unless, of course, we are talking about traffic in London or Paris, strangulated as it is by bicycle lanes, derisory speed limits and socialist mayors.
However, I am prepared to accept that those computer experts who warn against the awesome power of AI have a point, and that we are in dire danger. Even so, I’d suggest it’s not AI we should fear but ourselves.
AI is nothing but a tool or, if you will, a weapon and, as Soviet drill sergeants used to tell me, the most important part of a weapon is the head of its owner. Tools can be used for various purposes, good or bad. Which it will be depends on whether their wielders are good or bad.
A knife can be sharpened to make a kebab or to cut a baby’s throat. A shotgun can put a brace of pheasants into your oven or you six feet under. A split atom can light up a city or blow it up.
Likewise, I’m sure, AI can be put to good or evil use. The same goes for high-tech in general. Thanks to computers, I no longer have to tote a suitcase full of reference literature every time I go on holiday. Also thanks to computers, when some idiot drove into me a few years ago, the crash was caught on two CCTV cameras. He was banned for a year; I got the insurance money.
However, put a different software package into those same computers, and they tell strangers all sorts of things about me I’d rather keep private: what I eat, cook, read or watch, which holiday destinations and types of music I prefer and so on.
My car’s GPS can guide me to my destination, especially if I know the way anyhow, but it can also inform the police how fast I’m driving. (And if it can’t yet, rest assured it will be able to before long.) My mobile obviates the need to search for a public phone every time I need to tell Penelope I’ll be late, but it can also tell authorities where I am when I go about my lawful business.
A free country can use AI to protect civil liberties, a tyrannical one will use it to quash them. That’s where I begin to worry – not about AI as such but about its possible nefarious uses.
The history of potentially dangerous technologies shows that we won’t be deterred by their dangers: if something can be made, it will be made. A corollary to that is another historical observation: if a technology can be put to wicked use, it will be.
Arguments about the morality of the nuclear bombing of Hiroshima and Nagasaki are still raging almost 80 years later. Personally, I’m on the side of those who believe that the bombing was justified because it saved hundreds of thousands of American lives that would otherwise have been lost in Japanese island hopping. Yet I can see the validity of the opposite argument as well.
Whether or not we agree that those bombings don’t belong under the rubric of wickedness, we can argue that the presence of nuclear weapons has so far managed to deter evil states from starting another world war.
Yet ‘so far’ are the operative words. I’m convinced that sooner or later those nuclear mushrooms will be planted by indisputably evil powers. Neither history nor any reliable reading of human nature offers many arguments against that possibility.
The same goes for AI. It may make us all freer and richer, or else, if it can, it may make us redundant and extinct. Let the boffins argue among themselves about the technical aspects of the problem. The rest of us ought to ponder human nature and, if such is our wont, divine providence.
The latter gives reason to hope, the former to tremble. I for one am afraid, but not unduly so. When all is said and done, God will provide.
Ah, Mr Boot, you have in your closing words provided what I have (perhaps) been looking for throughout my adult lifetime: a reason for thanking God. Thank you!
We are all better off for the computing power that provides and analyzes data for us and allows us to make many decisions every day. Visions of a dystopian future hinge on letting the machines make the decisions, and on who is initially in charge of those machines and their software. For example, the circuitry inside your refrigerator checks the internal temperature and turns on the compressor and heat exchanger when it rises above a certain level. Good. In the future, a fully automated power station may control whether, and how much, energy is delivered to heat your home in the winter. If that software were written with an eye toward global warming, it might decide that restricting energy use, to the detriment of human life, is better for the longevity of the planet. Bad.
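A minimal sketch of the kind of decision rule the commenter describes, assuming a toy thermostat loop; the temperatures and function names are hypothetical, not any real appliance’s firmware. It is only meant to illustrate that the machine applies whatever rule its programmer put there.

```python
# Illustrative only: a toy thermostat rule in the spirit of the refrigerator
# example above. The thresholds and function are hypothetical, not a real
# appliance's firmware.

TARGET_C = 4.0      # desired internal temperature
DEAD_BAND_C = 1.0   # hysteresis band, so the compressor doesn't short-cycle

def control_step(current_temp_c: float, compressor_on: bool) -> bool:
    """Decide the compressor state for one control cycle."""
    if current_temp_c > TARGET_C + DEAD_BAND_C:
        return True       # too warm: switch the compressor on
    if current_temp_c < TARGET_C - DEAD_BAND_C:
        return False      # cold enough: switch it off
    return compressor_on  # inside the dead band: leave it as it is
```

The point stands either way: whether such a rule serves the food in the fridge or some other goal entirely is decided by whoever wrote it, not by the machine.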
The use of AI is already affecting institutions of higher learning, where students simply type in a few words and receive a full term paper.
As we are freed up from more and more mundane tasks, what we decide to do with that free time may determine the fate of the human race. Do we use it to watch inane videos on the internet? Do we ponder and explore the mysteries of life? Do we build and improve our relations with others? Do we watch more footie? Will humans even continue to play football? Will it be played by robots or perhaps just be computer generated images of a possible game? A quick study of human nature will provide the most likely scenarios.
Nothing but the morbid fantasies of geeks who have watched ‘The Matrix’ far too many times.
Collective annihilation is the desperate desire of all those seeking to sublimate their fear of death in a Götterdämmerung. Whether it’s American Evangelicals or European eco-warriors, the underlying animus is the same: an aversion to ‘dying alone’ (whatever that is supposed to mean).
John Clauser, a Nobel Physics laureate, tried to use ChatGPT to do some math and discovered that the AI made mistakes, some of them serious.
An earlier version of ChatGPT couldn’t be released to the public because it was generating pornographic and profane output; it had to be “re-trained” to ignore certain types of input. AI, like any computer, generates results that are only as good as the data that goes into it.