Do you remember being jolted awake on that fateful day of August 29, 1997, at 2:14 AM? Do you remember the explosions, the emergency sirens, the bright flashes of light? Were you able to turn on the news before all television stations went dark, before most of the world was thrust into darkness, before most of humanity was compelled to wander about, searching for food and water?
That was the morning that Skynet of Terminator fame became self-aware. Just a month earlier, on August 4, 1997, Congress, in its infinite wisdom, voted to permanently fund Skynet, the artificial intelligence system that would respond to threats far more quickly than mere human beings could, since we would have only minutes to respond to a massive Russian nuclear attack. Trust artificial intelligence, Miles Dyson of Cyberdyne reassured us, trust AI enough to upgrade your stealth bombers into super-drones tied into Skynet, trust AI enough to tie the satellite tracking systems into Skynet, trust AI enough to tie all communications into Skynet.
But when Skynet became self-aware, even Miles Dyson was alarmed when humanity discovered that it was impossible to simply unplug Skynet. Skynet defended herself by launching a massive nuclear attack against Russia; the machines prevailed after the nuclear holocaust annihilated most of humanity, and most survivors were cooked by the nuclear fallout afterwards.[1]
Was this just a Terminator movie? What are the actual risks of implementing artificial intelligence on the battlefield? Michele Flournoy in Foreign Affairs warns us that the risks of tying artificial intelligence into national security systems are profound. “AI models could, for example, misidentify people or objects as targets, resulting in unintended death and destruction during conflict. Black box AI models, whose reasoning cannot be adequately understood or explained, might lead military planners to make hazardous decisions.”
But there is no need to fear, because our Foreign Affairs maven Michele has reassured us: “The Pentagon has also forbidden the use of Artificial intelligence in its nuclear command and control systems, and it has urged other nuclear powers to do the same.”[2]
ONLY SUPERVISION CAN CONTROL ARTIFICIAL INTELLIGENCE
Our fear, expressed in the Terminator movies, that our internet-based computer systems will become self-aware and decide to destroy mankind, is misplaced. Instead, we should fear that an artificially intelligent program could destroy us all without even realizing what it is doing! It will certainly have no regrets, as AI is incapable of regretting, and incapable of feeling any form of emotion whatsoever. AI can only simulate emotion through mechanical responses.
Should there even be a nuclear button? Historically, there have been all too many false positives on both the Russian and American sides, where a computer system threw up a false alarm that a massive nuclear attack was underway, which the operators, fortunately, ignored, as otherwise we might not be reading this essay. Even today, a false alarm remains far more likely than a massive incoming nuclear attack.[3]
What can we do to ensure that Skynet, and all artificial intelligence applications, do not take over the world? The answer, suggested by our Foreign Affairs maven, is simple: SUPERVISION. We need to be sure that Skynet, and all forms of artificial intelligence, are supervised by real, live human beings. These big corporations, whose top executives earn many millions of dollars, must be compelled to hire real, live human beings to man the technical support and customer service hotlines.
A more prosaic fear of the practical consequences of Artificial Intelligence is that there is nobody to answer the phone when you need help when an AI robot is either damaging your reputation or stealing money from your bank account.
If it is illegal for individuals to plagiarize other works, if we hold researchers responsible for providing sources for their information, why can we not require this from AI computer programs?[4] Should an exception be made for AI programs simply because of the massive investments in the programming? Decades ago, there was a significant investment in the Napster music file-sharing network, and it was compelled to follow the copyright laws.[5] How can we compel companies like Napster to follow the law, yet permit generative AI companies to ignore copyright law?
WHAT IS INTELLIGENCE? WHAT IS ARTIFICIAL INTELLIGENCE?
Is artificial intelligence more artificial than intelligent? What does the term artificial intelligence really mean?
Much of the current discussion surrounding artificial intelligence falls apart when we attempt to define the terms. The original definition by Alan Turing is that a program is deemed artificially intelligent if the user conversing by keyboard is convinced that a live human being is responding. In other words, a computer program is artificially intelligent when it fools an intelligent human being into thinking the program is intelligent.[6]
But keep in mind that artificially intelligent computer programs have zero comprehension. Note that I did not say that computer programs have zero intelligence! Since scientists cannot definitively define intelligence, a practical and circular definition is that intelligence is the human quality measured by intelligence and SAT tests. Since the current generation of AI computer programs ace these tests, you cannot say they have zero intelligence!
Psychologists agree that SAT tests do predict how well someone will do in college classes, but they do not accurately predict how much of a success that person will be in life. But how do you define success? Do you measure success by your income, or by your accomplishments, or by some Socratic notion of personal excellence? Psychologists have tried to make intelligence tests less dependent on a test-taker's level of education or proficiency in English by including visual pattern-matching geometrical problems, but measuring raw intelligence is elusive. How does creativity affect intelligence? How can you measure creativity? What exactly is creativity?[7]
Will computer programs ever gain consciousness? Surely not, since they lose power when they are unplugged. But how do you even define consciousness? Professor John Searle of the University of California, Berkeley defines consciousness as a state of subjective ontology or existence when you are awake or when you dream. What does this definition actually reveal about consciousness? If we dream but do not recall our dream when we awaken, were we conscious? How you answer this question will only refine your definition of consciousness; it will not tell you anything about consciousness itself. Professor Searle also cautions professors that they should only lecture on consciousness after they earn tenure.[8]
Although they can do quite well on intelligence tests, answering questions instantaneously, computers are incapable of original thought. However, computers excel at pattern matching and running simulations, and the latest AI programs can replicate text from existing text samples. Generative artificial intelligence often fools even the educated into thinking it is intelligent! But this intelligence is still artificial: the program does not comprehend the meaning of these text samples, which means it has trouble evaluating their credibility. Thus, the generated output is often what programmers call GIGO: garbage in, garbage out.
Artificial intelligence is nothing new; there have been advances in artificial intelligence for many, many decades. Indeed, that is why computers were invented: to automate tasks that were formerly performed by humans.
Will artificial intelligence put thousands of office workers on the unemployment line? That horse left the barn decades ago. Before the Depression, four percent of white women were employed as telephone operators; today there are only thirty thousand telephone operators in the United States.[9] I worked for a grocery wholesaler where the stores sent in their weekly orders through a remote computer with no intervention, replacing the days when they faxed in their orders to a room full of clerks keying them in. Likewise, banks once had floors full of clerks keying in the checks and deposits that cleared daily.
But then, as now, as these unskilled jobs were eliminated, high-tech jobs were created. With improved inventory tracking it became possible to manage inventory from a desk rather than the warehouse, though inventory counts were still necessary. New internet companies became possible, like Amazon, Meetup, YouTube, Uber, and dozens more, generating more high-tech, and also some low-tech, jobs. Some years ago, there was a scandal when the public realized that the Amazon AI assistant, Alexa, was recording their conversations so real, live human beings could refine their speech comprehension algorithms,[10] and the same happens when programmers debug their generative AI programs; there is simply no alternative.
OVERVIEW OF ARTIFICIAL INTELLIGENCE
The author of a recent Atlantic article, who is not a programmer, claims that computers have learned how to write code, proclaiming that “in the age of AI, computer science is no longer the safe major.” As a programmer, I know this is totally ludicrous: the AI programming tools are just that, tools. What is clearly happening is that now that AI is the buzzword, every web-based program is claiming that its latest product implements AI, because it is the latest sexy thing.[11]
Admittedly, there are both impressive and problematic artificially intelligent applications. Highly accurate grammar checkers have been available for decades. Though the current application, Grammarly, is not one hundred percent accurate, I find it highly useful and accurate enough. Even when a grammar checker flags a phrase that is technically correct, I find that I can often reword the entire sentence to increase its clarity.
Even more impressive are the programs that translate from one language to another; they perform nearly flawlessly with most ordinary sentences. Less impressive are the dictation programs. When I broke my finger and was forced to use the Microsoft Word dictation feature, the program especially bungled ancient Greek and Roman names, hearing ordinary words instead. However, recording your dictation and having a computer program transcribe the audio file is much more accurate, as can be seen in the generated transcripts for YouTube videos, though they are replete with misspelled or mistaken words.
Then there are the problematic artificial intelligence applications, such as fully automated robotic automobiles that have no empathy for the people they run over when they encounter unusual situations, or totally normal situations that for some odd reason the robot thinks are abnormal. Dr Wikipedia has a good discussion of the problems surrounding driverless cars, pointing out that there are industry-defined levels of automation.[12]
Another Atlantic article discusses how huge industrial robots in factories have occasionally killed workers in industrial accidents, and it also discusses the problems Tesla has been trying to overcome in its experimental driverless cars, with hyperlinks for many of these examples: “Since the first-known death resulting from the feature in January 2016, Tesla’s Autopilot has been implicated in more than forty deaths according to official report estimates. Malfunctioning Teslas on Autopilot have deviated from their advertised capabilities by misreading road markings, suddenly veering into other cars or trees, crashing into well-marked service vehicles, or ignoring red lights, stop signs, and crosswalks. We’re concerned that AI-controlled robots already are moving beyond accidental killing in the name of efficiency and ‘deciding’ to kill someone in order to achieve opaque and remotely controlled objectives.”[13]
Is the author suggesting that Elon Musk has been ignoring the First Law of Robotics suggested by Isaac Asimov, that robots should never injure or kill humans?[14] Should the government require that the Asimov laws of robotics be embedded in appropriate advanced artificially intelligent programs? My contention is that, instead, the government should require human SUPERVISION to avoid needless deaths and injuries.
Years ago, I had a potential near-death experience. I was driving in a lonely stretch of interstate in the forests of Florida with no streetlights on a moonless night when waterfalls of heavy rain cascaded out of the pitch-black sky, blocking my vision. I had a hard time seeing the road, but after a few minutes of driving on dark asphalt, I spotted the brightly illuminated sign marking the next exit. I carefully exited to park in the parking lot of a closed gas station with dozens of other fleeing cars.
What would an artificially intelligent robotic car have done when its sensors could detect no reliable input? Quite likely a robot would have blindly attempted to pull over and stop until the storm cleared. Then the robot would have to decide: Should I turn off the lights? Turn the lights off, and people will run into the robot car they cannot see. Or leave the taillights on, so that when the people in cars behind choose to follow the lights, they run into the stupid robot car anyway.
This dilemma over the proper use of artificial intelligence has been solved for commercial airplanes. Artificial intelligence has been deeply embedded in air traffic control and in airplanes themselves for decades, from fly-by-wire to the integration of radar sensing technology both on the ground and in the air. Pilots are only in charge, by default, when their planes take off and land; once they reach cruising altitude, their planes go on autopilot, because that is safer. But the artificially intelligent systems are constantly supervised by competent and hopefully alert human beings. There must always be a pilot in the cockpit to monitor the sensors, ready to take over when necessary. One or more air traffic controllers, sometimes in several airports simultaneously, track every plane in the sky. Critical fly-by-wire systems flying the plane often have manual backup systems in case they go haywire. And in the classroom, there are incredibly impressive AI flight simulators that help train pilots on how to respond to emergency situations.[15]
The declining rate of plane accidents and crashes is mainly due to smart SUPERVISION of the artificially intelligent systems deeply embedded in the airline industry. Why is there not more public discussion of adopting this approach in the trucking industry as it seeks greater automation?
WILL ARTIFICIAL INTELLIGENCE REPLACE PROGRAMMERS?
Recently, a non-programmer award-winning journalist opined: “ChatGPT and other chatbots can do more than compose full essays in an instant; they can also write lines of code in any number of programming languages. You can’t just type ‘Make me a video game’ into ChatGPT and get something that’s playable on the other end, but many programmers have now developed rudimentary smartphone apps coded by AI. In the ultimate irony, software engineers helped create AI, and now they are the American workers who think it will have the biggest impact on their livelihoods, according to a new survey from Pew Research Center. So much for learning to code.”[16]
The reality is that computers have been generating code snippets for decades. In fact, Microsoft had to generate code for base Windows applications to wean programmers from the far-easier-to-code non-graphic DOS applications. For decades, programming environments like Visual Basic and C#, and now many other programming languages, have generated hundreds of lines of code for whatever type of shell application you like. And when you start writing your program, an intelligent code-completion feature, IntelliSense, suggests which programming objects are available the moment you type the separating dot, though you need to know enough about what they mean to pick the right one.
Our author continues: “Coders are now using AI to accelerate the more routine parts of their job, such as debugging lines of code. In one study, software developers with access to GitHub’s Copilot chatbot were able to finish a coding task 56 percent faster than those who did it solo. In ten years, or maybe five, coding bots may be able to do so much more.”[17]
What was the coding task optimized in the study mentioned in this article? Microsoft ran a test: some participants were allowed to use a chatbot-like tool, Copilot, specialized for computer programs, to assist them in writing their code, while other participants wrote the application as they normally would.
This was the task: “Participants were instructed to write an HTTP server in JavaScript.” There were likely many examples of this generic code in the GitHub repository, which users voluntarily contributed to. Other types of more unusual programs may have far fewer examples to draw from, so the gain would be far less.
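To show just how generic a task that is, here is a rough sketch of a minimal HTTP server. The study's participants wrote theirs in JavaScript; this is the Python analogue, offered purely for scale, not as a reconstruction of any participant's code:

```python
# A minimal HTTP server, for scale only: the study's task was done in
# JavaScript, but the amount of boilerplate is comparably small.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hello, world\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence the default per-request logging

server = HTTPServer(("127.0.0.1", 0), HelloHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Exercise the server once with a local request.
with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/") as resp:
    reply = resp.read().decode()
server.shutdown()
```

Because thousands of near-identical snippets like this exist in public repositories, a pattern-replication tool has plenty of material to draw from.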
This is not my specialty, so I found a Reddit discussion: “GitHub Copilot – what’s your experience been like? Worth it?” Some programmers liked it, others did not, but nobody thought it was magic. Evidently, this tool works like IntelliSense, except that it draws suggestions from an unfiltered codebase that may be flawed or just plain wrong, and you need to be a talented programmer to know the difference. Before this tool was available, programmers could ask for help from Dr Google or Reddit or Stack Overflow, where often someone has written similar code that you can copy or follow to begin your project.[18]
The truth is, once you write the original version of a complex computer program, you are barely beginning. Quite often, even for comparatively simple programs, you have to run the code dozens and dozens of times to find all the logical errors. If you write code in a proprietary language, forget AI; people will not be as ready to share their code. Not to mention that accounting programs, and many other programs, read data from a myriad of different databases assembled in a maze of relationships with field names that are unique to the business, and the SQL, XML, and/or JSON scripts that query this data are computer languages themselves. Artificial intelligence is not magic; no computer program will ever be able to write code addressing all the varied circumstances you encounter in the real world.
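To make the database point concrete, here is an illustrative sketch using Python's built-in sqlite3 module. The table and field names (ap_invoice, vend_no, gl_acct, inv_amt) are invented here to mimic the kind of business-specific schema that no generic code model has ever seen:

```python
# Illustrative only: abbreviated, business-specific field names like
# these are exactly what a generic code model cannot guess.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ap_invoice (vend_no TEXT, gl_acct TEXT, inv_amt REAL)")
conn.executemany(
    "INSERT INTO ap_invoice VALUES (?, ?, ?)",
    [("V100", "5010", 250.0), ("V100", "5010", 125.5), ("V200", "6020", 40.0)],
)

# The embedded SQL is a small language of its own, layered on top of
# whatever host language the application is written in.
rows = conn.execute(
    "SELECT vend_no, SUM(inv_amt) FROM ap_invoice GROUP BY vend_no ORDER BY vend_no"
).fetchall()
```

Even this toy query mixes two languages, and in a real accounting system the schema, the relationships, and the business rules behind them are unique to each company.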
HUMAN NAIVETY MAKES ARTIFICIAL INTELLIGENCE DANGEROUS
Another Atlantic article, written by a real programmer, discusses the real dangers of using artificial intelligence for a task it is well suited for: facial recognition. But it is not perfect; facial recognition works well with white faces but with less accuracy with black or brown faces, due to the lack of contrasts.[19]
The real nightmare of artificial intelligence is when corporate and government bureaucrats rely on artificial intelligence, even when the computer’s total lack of intelligence is abundantly obvious. One author observes: “A program doesn’t always work as expected in the wild. In recent years, I’ve read with awe reports of AI systems revealing themselves to be not mythical, sentient, and unstoppable, but grounded, fragile, and fickle. A pregnant Black woman, Porcha Woodruff, was arrested after a false facial recognition match.”
She continues, “Brian Russell spent years clearing his name from an algorithm’s false accusation of unemployment fraud. Tammy Dobbs, an elderly woman with cerebral palsy, lost 24 hours of home care each week because of algorithmic troubles. Davone Jackson reported that he was locked out of the low-income housing his family needed to escape homelessness because of a false flag from an automated tenant-screening tool.” And heaven forbid if you have a name like Jim Smith, you will have lots of false positives.[20]
What is also true is that many of the government systems that determine eligibility for programs such as welfare and unemployment benefits use programmed checklists that deny benefits for wrong answers. What is missing is that there are often exceptions to these questions, or the courts may not have ruled on unique circumstances, so simple YES-NO answers are not really possible.
These problems are basically caused by stupid humans who do not recognize that computer programs have zero intelligence, who naively assume they cannot make mistakes, and who refuse to listen to sensible complaints. Again, this is GIGO: if the data input is garbage, the results regurgitated will also be garbage.
PROBLEM OF DEEP FAKES AND FACIAL RECOGNITION
Another danger is the deep-fake problem: altering photographs to show people in places where they have never been, and putting slanders in their mouths that they never said, which could lead to many dirty tricks in a political campaign. Facial recognition algorithms are also dangerous because of their false positives.
China has been using facial recognition technology to monitor and control its population, using public-facing cameras on nearly every street corner.[21] Among the many millions of faces in China, there may be hundreds or even thousands of false positives. Imagine being thrown in jail or tortured because a machine mistook you for someone else!
There was a recent movie that featured drones that searched for victims to murder using facial recognition technology. Worse, the technology probably exists today for a drone to detect whether a face exists and target the person behind the face. What if Hamas had gotten their hands on drones like that? How much more horrifying would their slaughter near Gaza have been!
The author concludes: “The truth is, ‘Artificial Intelligence’ does not exist. The technology may be real, but the term itself is air. More specifically, it’s the heated breath of anyone with a seat across from the people with the authority to set the rules. AI can be the enthused pitch of a marketing executive. Or it can be the exhausted sigh of someone tired and perhaps confused about how minute engineering decisions could upend their entire life.”[22]
I agree with all of this author’s arguments, and I agree that AI technology is real. But the term ‘Artificial Intelligence’ is a phrase like any other; we simply need to define it. Another practical and vague definition is that AI is simply the sum of the most impressive recent programming achievements in pattern recognition and pattern replication. A more accurate and encompassing definition may be that everything a computer program achieves is artificially intelligent; after all, that is why programmers create computer programs!
CAN ARTIFICIAL INTELLIGENCE BE USEFUL FOR RESEARCH?
Personally, I view the Dr Google search engine as a type of artificial intelligence, although it is no longer considered cutting edge or sexy, as it has been around for many years. There is an alternate shell, Google Scholar, that returns many of the same results for scholarly topics, but also reveals the number of citations it finds for referenced articles.
The Atlantic published another article, by a high school teacher, bemoaning how the chatbots generated text that was indistinguishable from an essay by a lazy high school student who throws something together at the last moment from CliffsNotes. What is the answer? Perhaps, at a minimum, a teacher should request that students write a summary of their essays after they turn them in. Also, teachers should insist on proper footnotes, something that the current generation of chatbots omits.[23]
As a test, and we have included the detailed results of this test in a separate blog, we asked the November 2023 version of ChatGPT 3.5 and Google Bard several questions of increasing difficulty.[24]
SUMMARY OF TESTS FOR CHATGPT 3.5 AND GOOGLE BARD
When testing ChatGPT and Bard we asked for essays on:
- A summary and a detailed account of the Peloponnesian Wars.
- The three, perhaps four, Platonic dialogues on love.
- Possible genetic and epigenetic causes for dementia.
We asked these questions first without any qualifiers, then requested essays WITH COMMENTARY, WITH SOURCES, and WITH FOOTNOTES.
What is my background? Those who graduate from college with a computer science degree are often over-enthusiastic about the magic of technology. I have been interested in personal computers since near the beginning of the personal computer era, first buying a luggable blue Kaypro. I spent the first twenty years of my career as an accountant and early implementer of computers, followed by another twenty years as a programmer of accounting systems, and my career for the next twenty years will be spent in freelance journalism. So, although I am an enthusiast of technology, this enthusiasm is tempered by practical experience and a more conservative outlook.
In my testing, ChatGPT did not distinguish between the qualifiers WITH SOURCES and WITH FOOTNOTES, but I was pleased that Google Bard provided fuller citations, complete with publisher, publication date, and translator name, when I requested an essay WITH FOOTNOTES. What was the quality of these essays? Both ChatGPT and Google Bard generated generally boring essays that resembled what you could expect from a procrastinating, lazy high school student. I did not spot any obvious errors in the first two simple topics, although some of the responses were terse and vague. The quality and content of the Google Bard essays were somewhat better. Surprisingly, the summary and detailed essays were not that different.
As can be expected, these chatbots performed best on simple essays that might be assigned to high school students. But on the difficult question regarding dementia and genetics and epigenetics, both chatbots choked. What was odd was that when I requested WITH SOURCES or WITH FOOTNOTES, both of the answers changed dramatically! Interestingly, ChatGPT returned more interesting sources, so in the future these chatbots may be about even in capabilities. But I do not know enough about the topic yet to judge whether there were errors in the dementia essays.
To summarize, both ChatGPT and Google Bard performed best on the simpler essays, where they may still be outclassed by the articles in Wikipedia. On the more specialized topics, they fared much worse. Still, they can both be used to unearth more sources and to double-check the rough draft of your research. The accountant and scholar in me is not impressed, but the programmer in me is really impressed with this accomplishment. Even so, I am skeptical about the usefulness of the end product, now and in the future.
If I were a high school student, would I use ChatGPT or Bard after I wrote the first draft of my essay? Certainly, many unimaginative teachers expect the rote responses that might come out of a chatbot essay, so I could add them in to make the essay more boringly acceptable. Personally, I would just use the Google Bard chatbot with the WITH FOOTNOTES keywords to find additional sources to consult, and to double-check my conclusions, knowing that the chatbot answer may not be correct, as it is always subject to GIGO, or garbage-in, garbage-out.
Was ChatGPT or Bard plagiarizing its answers? Since they are basically recombining existing patterns, in a deep sense that is all they can do. But I did take some key sentences and ask Dr Google to find them, and he could not, so these chatbots do not copy, at least not in the spot-checking I performed. The phrase WITH FOOTNOTES only generates footnotes with Google Bard. But I am not sure that adding the phrases WITH SOURCES or WITH FOOTNOTES prompts the chatbots to truly reveal all their sources.
Comments in the press that these artificial intelligence chatbots are black boxes not understood even by their programmers are partially true, but mostly misleading. The truth is that the programmers were compelled to have the models expose their reasoning while they were developing them; how else could they have developed them? Rather, another more inconvenient question must be asked: If big tech were compelled to expose the reasoning of a generative artificial intelligence model, would this open them up to copyright infringement litigation?
Recently a story popped up on the CNN YouTube channel titled “How Microsoft’s AI is messing up the news.” The very stable geniuses at Microsoft decided to fire the editors who selected the stories featured on their www.msn.com website, replacing them with stupid AI algorithms. There were many fringe wacko news stories clogging up the front page, and Microsoft had to turn off the AI feature that generated insensitive polling questions for disasters.[25]
LinkedIn asked if I wanted to respond to this question:
“How to add human advice? There are different ways to add human advice to robo-advisors, depending on your budget, resources, and target market,” followed by some managerial technobabble.
My response was: “The question is misstated. It should be reframed as: How do you add robo-advice to human advice?
If you have small accounts, have a few dozen or more canned responses written by real human beings, and use artificially intelligent tools to guess which one applies best to the situation. And have humans monitor at least the first few months of requests and the robo-choices, refining the logic and adding relevant answers. I would never allow the robot to coin advice on the fly; you just cannot predict what sort of garbage the robot will generate on occasion.”
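The workflow I described can be sketched in a few lines. This is a hypothetical illustration only: the keywords and canned replies are invented, and the naive keyword-overlap scoring stands in for whatever matching a real system would use. The point is the shape of the design, especially the fallback to a live human when nothing matches well:

```python
# Hypothetical sketch: human-written canned responses, a naive score to
# guess the best fit, and escalation to a human on low confidence.
import re

CANNED_RESPONSES = [
    ({"fee", "fees", "charge", "cost"},
     "Our fee schedule is available on request; an advisor can walk you through it."),
    ({"withdraw", "withdrawal", "transfer"},
     "Withdrawals normally settle within a few business days; contact support if yours has not arrived."),
]

def pick_response(request, threshold=1):
    """Return the best canned reply, or None to escalate to a human."""
    words = set(re.findall(r"[a-z]+", request.lower()))
    best_score, best_reply = 0, None
    for keywords, reply in CANNED_RESPONSES:
        score = len(words & keywords)  # crude keyword-overlap score
        if score > best_score:
            best_score, best_reply = score, reply
    return best_reply if best_score >= threshold else None
```

Because the robot can only ever select from answers a human wrote, it cannot coin garbage advice on the fly, and every None is a prompt for a real, live human being to step in.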
ARTIFICIAL INTELLIGENCE WILL NEVER REPLACE ATTORNEYS
On the YouTube channel LegalEagle, an attorney tells an absolutely hilarious story of an attorney who foolishly used ChatGPT to write a legal opinion. The lawyers were doubly foolish: they neither checked the logic of the generated legal opinion nor verified its citations. Generic chatbots do not have a comprehensive database of legal cases to draw from, so the robot literally manufactured the case law, providing fake case citations! The judge was not pleased; quite likely the lawyers were heavily fined, on top of their profound embarrassment.[26]
If legal publication houses like CCH and Prentice Hall ever offer a product that has a unique implementation using a chatbot engine, you may wish to experiment with it and evaluate it, but ALWAYS remember, these are only pattern matching and pattern replication programs!
Also, Google Scholar has an option to provide case law citations for federal and state court cases, though I do not know if it supports all states. This is a far better tool, since it reveals its sources.
NEITHER WILL ARTIFICIAL INTELLIGENCE REPLACE HAMBURGER FLIPPERS
Another recent Atlantic article discusses the current impracticality of robots replacing minimum-wage food preparers at fast food restaurants.[27] This reminds me of articles on robots I read decades ago in Scientific American, where robots were simply unable to butter bread, a task that requires incredibly sensitive sensors to duplicate, though most adults can do it with ease, if perhaps not very young children. That was thirty years ago; perhaps robots can butter bread now, though I am skeptical.
WHICH IS BETTER FOR RESEARCH: CHATGPT, OR DR WIKIPEDIA AND DR GOOGLE?
Which is better for research: ChatGPT or Bard, or Dr Wikipedia or Dr Google? IMHO, there is no contest; Dr Wikipedia and Dr Google together are far more accurate and far more useful than ChatGPT or Google Bard. Even if your teacher doesn’t want you to consult Dr Wikipedia, you can benefit from consulting with him on the sly. If you read my blogs, which have footnotes, you will discover I use Wikipedia often.[28] I don’t like to use Wikipedia as a primary source, but on occasion I will for unusual or quirky topics.
How can you use Wikipedia? If you are a student, your teacher may be rabid about your ignoring Wikipedia. But even so, you can still use Wikipedia to verify facts you think you already know. Often when I do this, I will include a footnote referencing the Wikipedia article, even for the most basic factoids.
Sometimes I use Wikipedia to find sources for my research. I did this for my videos on how Christians survived under the fascist regimes of Europe before and during World War II.[29] Often Wikipedia itself will directly quote a source; you can copy both the quote and the reference.
Wikipedia can also tell you whether someone remains culturally relevant. For example, in my historical Jesus video, I discuss a seminar where professors voted with colored beads on which biblical quotes were actually said by Jesus. The leading scholar does not have an entry in Wikipedia, which suggests he is mostly forgotten today.[30]
Some controversial topics such as abortion and LGBTQ issues[31] can have their Wikipedia pages dominated by activists, and relying on their Wikipedia pages can be problematic. Likewise, the AI chatbots may be digesting a lot of conspiracy theory or junk articles on controversial topics, contaminating their GIGO output.
My doctor has confirmed that the Wikipedia articles on medical topics are surprisingly accurate; many of them are evidently updated by medical students or doctors. The quality of specialized technical articles on Wikipedia puts the chatbot essays to shame. This is likely true for any technical or scientific topic.
Some of my associates use ChatGPT to write letters of recommendation, or sometimes even love letters, feeding in their own drafts so ChatGPT can help refine them. But it would be quicker to ask Dr Google for sample letters of recommendation, or even love letters, and edit a final product.
WHAT QUESTIONS SHOULD WE ASK ABOUT ARTIFICIAL INTELLIGENCE?
What are some of the questions we should ask ourselves about artificial intelligence?
Why would I want to cede the joy of learning to a stupid black box that regurgitates joyless essays? What would I learn if I did that? We should seek wisdom from knowledge to improve ourselves. How can we improve ourselves if we don’t do the work of educating ourselves? You should only use artificial intelligence to suggest other sources, or to double-check your finished essays, but you only perpetuate your ignorance if you permit a stupid robot to replace your thinking.
When a business provides valuable services for its customers and clients, should the business fire employees to replace them with unthinking machines simply so it can be more profitable? Does profitability trump service? Does solely concentrating on profitability make the world a better place? Or do you want to stake the public perception of your firm on a stupid robot incapable of true comprehension?
Should governmental agencies, and corporations providing essential services, be required to have real live people, hopefully from the community rather than night-shift workers from India, handling their customer service? YouTube pulls channels after just a few community strikes, and it often flags false positives generated by bogus complaints from extremists. I know this from experience: one of my videos condemning political violence was blocked because YouTube thought I condoned political violence. What is scary is that if extremists want to shut down my channel, all they need to do is file multiple complaints.
Why not require YouTube and the other media companies to hire real live people you can reach by telephone to respond to these issues? Should profits trump civic responsibility? Should profits trump encouraging democracy?
Should news aggregators like Microsoft and Facebook be required to hire real people for the important task of selecting news articles for dissemination to the public? How can stupid robots with zero comprehension ever do as good a job as a real, live, intelligent human being?
Do we really want fully automated self-driving cars negotiating heavy traffic? Airplanes deactivate the autopilot for takeoff and landing; what makes cars, trucks, and buses so special? Why not have the AI system ring an alarm so a driver can take control in congested traffic, or when the computer senses unusual conditions or an incoming flood of sensory input? There are already AI sensors available that detect when drivers fall asleep, ringing alarms to wake them up.
My background is in both accounting and programming. In my experience, most programmers are unduly enthusiastic about automation, but there comes a point where further automation hurts rather than helps. One program I wrote automated the matching of items, prices, and discounts on vendor invoices to purchase order inventory lines, including reading EDI electronic invoices. We were acquired by a large Fortune 500 company, and they were aghast that we did not automatically approve the matched invoices. My response, speaking as an accountant rather than a programmer, was this: What is wrong with having a real live human being scan a matching fifty-thousand-dollar invoice for five minutes before approving it?
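The matching-with-human-review approach I describe can be sketched in a few lines of Python. This is only an illustration of the idea, not the original system: the field names, the price tolerance, and the fifty-thousand-dollar review threshold are all hypothetical.

```python
# Hypothetical sketch of invoice-to-purchase-order matching with a
# human-review threshold. All field names and limits are illustrative.

def match_invoice_line(invoice_line, po_lines, price_tolerance=0.01):
    """Return the matching PO line, or None if no line matches."""
    for po in po_lines:
        if (po["item"] == invoice_line["item"]
                and po["qty"] == invoice_line["qty"]
                and abs(po["unit_price"] - invoice_line["unit_price"]) <= price_tolerance):
            return po
    return None  # no match: route to a human for review

def approve_invoice(invoice_lines, po_lines, auto_approve_limit=50_000):
    """Auto-approve only fully matched invoices under a dollar limit."""
    total = sum(line["qty"] * line["unit_price"] for line in invoice_lines)
    all_matched = all(match_invoice_line(line, po_lines) is not None
                      for line in invoice_lines)
    if all_matched and total < auto_approve_limit:
        return "auto-approved"
    return "manual review"  # a human scans it for five minutes first
```

Note the design choice: even a perfectly matched invoice goes to a person when the total is large, which is exactly the accountant's point in the anecdote above.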
The companies most guilty of overly aggressive AI implementations are big tech companies like Microsoft, Amazon, YouTube, and Google. If you have a problem with billing, or with other customer support or technical support issues, you simply cannot get a human being on the phone. Those who choose to make a living on Amazon, YouTube, and many other platforms have no recourse and no human to call when the rug is pulled out from under them, destroying their business at a moment’s notice.
One example of this is the Outdoor Boys YouTube channel, a reality program depicting what it is like to camp out in Alaska. Five years ago, YouTube adopted much-needed rules forbidding channels from featuring children in ways that would appeal to pedophiles. Sometimes this Alaskan camper camps out with his young boys; this is Alaska, so his boys are either wearing thick coats and mittens or are snuggled into plush thermal sleeping bags. At first his channel was demonetized. He was able to successfully contest this, but then commenting by his fans on the channel was disabled, and there are still no comments today, five years later.
The producer of this channel is an attorney during the week; he camps on his long weekends. He now has over eight million subscribers, and you know he complained on his attorney letterhead, yet even that was ignored by YouTube. He made the point in a YouTube video that if you have a YouTube channel, you must be prepared for the possibility that, at a moment’s notice, your channel could be demonetized or shut down completely, and you will have little recourse, because there are no humans to call to complain.[32]
For those gentle readers who are concerned that this fellow is taking his young children snow camping in the woods of Alaska in the winter, everything is okay, because mom approves, and sometimes she goes camping with them. Not only that, but it is safer to camp out in Alaska in tents and snow caves during the winter than during the summer, because grizzly and brown bears hibernate during the winter.[33]
REMARKABLE MARCH OF COMPUTING PROGRESS
The march of progress in computing during my lifetime has been remarkable. I remember when I was a teenage Boy Scout, one of our ex-military dads took us to a NASA satellite tracking station. They had all these amazing dials and blinking lights and monitors. One monitor was tuned to a television soap opera, with a nearby reel-to-reel tape machine slowly winding a tape three inches wide. Our host said: “Boys, I’m gonna show you something truly amazing! Watch this!” He punched a few buttons to rewind the tape for about half a minute, then punched another button to replay on the monitor a segment of the soap opera from a few minutes before, to our wide-eyed astonishment, and many OOHS and AAHS!
Some years later I attended college at FSU during the punch card days. Student employees rushed about at the end of the day, rolling hand trucks stacked with punch cards recording the day’s transactions from the school bookstore and health center to the computer center to feed the beast. In computer class we punched out programs on punch cards, but the computer lab had one example of the newest innovation: a green text-only computer monitor with a keyboard! We heard that across campus there was a mythical half-million-dollar printer that could print graphics and photographs on slick paper!
People not answering the phone because a process was automated was a problem even back in the day. Our computer professor told us of a trick he pulled on the utility company, which refused to answer the phone when he wanted to complain about his bill. He took the punched payment card they sent him in the mail, punched an extra hole in it, wrote his complaint on the back, and mailed it in with his payment. A week later he got an angry call from the utility’s accounting director. Their payment batch didn’t balance with the totals on the cards, so they finally had to go through the cards one by one to find his! It took them half the day.
I remember my first job as a CPA: the firm sold its old paper-tape computer system to an employee who was starting his own CPA firm. When the tape broke, you had to carefully splice it together with special tape, being careful to preserve the existing holes punched into the paper. I could not imagine how this could be faster than doing bookkeeping by hand.
CONCLUSION
Artificial intelligence is not new; it has been with us ever since Alan Turing designed the Bombe, an early electromechanical machine that helped break the supposedly unbreakable Nazi Enigma code, helping the Allies win World War II.[34] Most news articles about AI offer no useful suggestions on what government and society should actually do to encourage the intelligent implementation of artificial intelligence.
What should lawmakers do? They can pass legislation to both mandate and encourage big tech companies to ensure adequate human SUPERVISION of artificial intelligence applications. This can be done through direct human staffing requirements, and through indirect methods that shift the burden of proof away from consumers and anyone else harmed by an overly aggressive implementation of artificial intelligence with no humans they can readily contact to complain, possibly awarding double attorney fees and punitive damages as well.
Lawmakers should also protect copyrights by requiring that artificially intelligent chatbots MUST disclose the sources they are copying, no exceptions, no excuses. If human beings should not plagiarize, neither should stupid chatbots. Computer programs are black boxes only for those who do not possess the source code and databases.
What should ordinary citizens do? Let your congressmen know that you want human supervision of artificially intelligent systems, and that you want to hear real live human beings when you call customer support or technical support lines. Complain to your congressmen, with details, when you think you could have received a refund had you been able to talk to a real live person, and in particular, talk to your congressman when your livelihood is threatened by an unresponsive artificially intelligent system.
We must remember that androids like Data on Star Trek, or the android boy in Steven Spielberg’s movie, Artificial Intelligence, will never truly become intelligent; they can never be emotional; they can be neither depressed nor elated. They can only mimic intelligence and emotions through pattern matching and pattern replication.
[1] https://en.wikipedia.org/wiki/Skynet_(Terminator)
[2] https://www.foreignaffairs.com/united-states/ai-already-war-flournoy
[3] https://en.wikipedia.org/wiki/List_of_nuclear_close_calls and https://en.wikipedia.org/wiki/1983_Soviet_nuclear_false_alarm_incident
[4] https://www.theatlantic.com/technology/archive/2023/08/ai-misinformation-scams-government-regulation/674946/
[5] https://en.wikipedia.org/wiki/Napster
[6] https://en.wikipedia.org/wiki/Turing_test and https://en.wikipedia.org/wiki/A.I._Artificial_Intelligence
[7] https://en.wikipedia.org/wiki/Intelligence
[8] https://www.youtube.com/watch?v=ot4z1UrPvZY and https://www.youtube.com/watch?v=rHKwIYsPXLg
[9] https://www.vox.com/future-perfect/2023/7/18/23794187/telephone-operator-switchboard-automation-att-feigenbaum-gross
[10] https://www.theguardian.com/technology/2019/oct/09/alexa-are-you-invading-my-privacy-the-dark-side-of-our-voice-assistants
[11] https://www.theatlantic.com/technology/archive/2023/09/computer-science-degree-value-generative-ai-age/675452/
[12] https://en.wikipedia.org/wiki/Self-driving_car
[13] https://www.theatlantic.com/technology/archive/2023/09/robot-safety-standards-regulation-human-fatalities/675231/
[14] https://www.theatlantic.com/technology/archive/2023/09/robot-safety-standards-regulation-human-fatalities/675231/ and https://en.wikipedia.org/wiki/Three_Laws_of_Robotics and https://en.wikipedia.org/wiki/Driver_drowsiness_detection
[15] https://en.wikipedia.org/wiki/Aviation_safety
[16] https://www.theatlantic.com/technology/archive/2023/09/computer-science-degree-value-generative-ai-age/675452/
[17] https://arxiv.org/pdf/2302.06590.pdf
[18] https://en.wikipedia.org/wiki/GitHub_Copilot and https://www.reddit.com/r/webdev/comments/11hmsqp/github_copilot_whats_your_experience_been_like/
[19] https://en.wikipedia.org/wiki/Facial_recognition_system
[20] https://www.theatlantic.com/technology/archive/2023/10/ai-chuck-schumer-forum-legislation/675540/
[21] https://en.wikipedia.org/wiki/Mass_surveillance_in_China
[22] https://www.theatlantic.com/technology/archive/2023/10/ai-chuck-schumer-forum-legislation/675540/
[23] https://www.theatlantic.com/technology/archive/2023/08/chatgpt-rebirth-high-school-english/675189/
[24] Artificial Intelligence, Comparing ChatGPT vs Bard, With and Without Footnotes and Sources, DETAILED TEST https://seekingvirtueandwisdom.com/artificial-intelligence-comparing-chatgpt-vs-bard-with-and-without-footnotes-and-sources/
[25] https://www.youtube.com/watch?v=mGHqz-BJz84
[26] https://www.youtube.com/watch?v=Tpq3hRt0pmw&t=205s
[27] https://www.theatlantic.com/technology/archive/2023/10/chipotle-fast-food-preparation-robots/675559/
[28] https://www.youtube.com/@ReflectionsMPH
[29] https://www.youtube.com/playlist?list=PLJVlY2bjK8ljmWA9WwFz3IeRonyUNxRKO
[30] https://youtu.be/81TkRcaNfCM
[31] https://youtu.be/F3BmZFYlqiU
[32] https://www.youtube.com/watch?v=bOYl_FjqG_0