Posts Tagged ‘AI’

When You Wish Upon a Star

Friday, November 8th, 2024


By Bob Gaydos

     I saw a shooting star last night. Spoiler alert: Yes, this is going to be one of those “synchronicity strikes again, isn’t that something, and it can only be a good sign” columns.

     To start with, I’ve never seen a shooting star before. The only reason I saw this one is that Prince, our resident beagle/Australian shepherd mix, decided he needed to take one more trip outside around midnight. Since he was already over his usual daily allotment of such outings, this was rare.

   We walked outside, I looked up at the sky, and said to myself, Wow, that is spectacular, referring to all the stars visible. When you live in the country, the lights of so-called civilization don’t interfere. Then I looked off to my right, to the east, and sonofagun: shooting star. Cool.

     Of course, when I came back inside, I immediately posted my experience on Facebook. A good omen, I called it. 

     But of course, I checked with a reliable source. The Farmers’ Almanac told me: “With many people of all cultures looking to the heavens for signs, symbols, and answers for eons, it is no surprise that shooting star superstitions exist. The most prevalent superstition is that it is good luck to wish upon a star. …

    “In the second century, the Greek astronomer Ptolemy hypothesized that they were a result of the gods peering down from heaven, having parted the heavens to do so and therefore dislodging a star in the process. Because a shooting star was a tangible symbol of the gods looking down at that moment, it was believed that a wish or request made upon seeing the star was more likely to be heard and granted. …

    “In the 1830s, the idea of wishing upon stars became even more prominent in modern beliefs. Seeing a meteor was believed to be a sign of promise, luck, and good fortune.”

      Looking for a second source, I turned to current science. Google AI told me this: “Some believe that seeing a shooting star is a sign of good fortune or luck. An old superstition suggests that wishing upon a shooting star will grant your wish.”

      Well, one man’s superstition is another man’s omen. And what some call coincidence, others see as synchronicity. It is all connected. One need only pay attention.

      Friends, trying to be helpful, pointed out to me that what I had seen was actually a meteor, part of a meteor shower expected last night. And scientists point out that if I were to go outside and lie down on my back and stare up at the sky for 15 minutes on a dark night, I might well see a dozen “shooting stars.”

     But I didn’t. I went out on this particular night, looked to my right (to the East, the good luck direction) and saw something I had never seen before, even out here in the country. It was like that black squirrel I wrote about a while back. Apparently, just not as rare.

      Anyway, I had a couple of wishes. I made them. I guess you’re supposed to keep the wishes secret so as not to jinx them. I will say that one of them concerned a legendary New York City baseball team located in the Bronx and a stroke of fortune that might befall them concerning another kind of shooting star if they look to the right.

    The other wish was political in nature. Any regular reader of my column could probably look to the right and voice some version of it. In fact, go ahead and do it on my star.

     Meteor, shmeteor, who am I to argue with Ptolemy?

    The gods are with us.

rjgaydos@gmail.com





Everybody, Even AI, Needs an Editor

Wednesday, August 28th, 2024

By Bob Gaydos

Image from Storybench, Northeastern University School of Journalism


  That was fast. A while back, I wrote a column about how AI was coming to take my job and the jobs of maybe millions of other people lovingly referred to as “knowledge workers” by the CEOs of the companies who are rushing to make it happen.

     Well, it happened, in, of all places, Wyoming.

      A reporter, new to the trade and no longer with the paper, admitted to using artificial intelligence to create quotes, even whole stories, for the Cody Enterprise, a newspaper founded by Buffalo Bill Cody, who needed no genius computer to create his legendary story.

      The phony reporter was busted by a veteran reporter for a competing newspaper, the Powell Tribune, who said he started asking around when he noted some of the phrases in the other guy’s stories seemed to be a bit off, or robotic. Bad writing.

       No surprise there. YouTube is replete with documentaries and special reports full of inappropriate or outdated or trite, slightly off phrasing narrated by “people” who mispronounce basic words. 

       At such times, I can be heard complaining agitatedly, “AI!”

       Also, preaching: “Everybody needs an editor.”

       It’s my favorite response and basic rule for any writer. But the YouTube videos go on, their producers seemingly unaware or unconcerned with the amateurish product they’re presenting. Artificial mediocrity suffices, probably because it draws an audience and it’s cheaper than employing the real thing. People.

         Which brings me back to Wyoming. Things were different in Wyoming. The governor and other people were saying they never said what the newspaper said they said, although they admitted it sounded like something they might have said.

          Classic AI. Scan the past and take a plausible shot at recreating it in the present. Chatbots always aim to please.

          But unlike YouTube shows, newspapers can get into trouble making stuff up, with or without AI. The publisher of The Enterprise said AI is “the new, advanced form of plagiarism and in the field of media and writing, plagiarism is something every media outlet has had to correct at some point or another.”

           She said the paper now has a policy in place to recognize AI-generated stories. That’s good. With no official controls on this new, still-developing technology, all news media should have a policy on the proper and improper use of artificial intelligence and make it known to the public as well as their staff.

           The editor of the Enterprise, Chris Bacon, said, “The Enterprise didn’t have an AI policy because it seemed obvious that journalists shouldn’t use it to write stories.”

          Yeah, one would think, right? But these are different times. Times of stolen user names, online dating scams, spam emails. Progress. While the recognized practice in journalism always has been not to steal other people’s writing and not to make stuff up, some have tried and some have been caught. Newspapers have been sued. But AI apparently makes it harder to spot, especially for less-experienced eyes.

        The AP says Bacon is “a military veteran and former air ambulance pilot who was named editor in May after a few months working as a reporter.” Swift promotion. 

        He said he “failed to catch” the AI copy and false quotes and apologized that “AI was allowed to put words that were never spoken” into stories in his newspaper. At least seven stories, seven people falsely quoted.

      I don’t know. Apparently one AI-generated story about a shooting in Yellowstone National Park included this sentence: “This incident serves as a stark reminder of the unpredictable nature of human behavior, even in the most serene settings.”

       In nearly half a century working in newspapers, I can’t recall a more unlikely sentence in a news story to have been allowed to pass unquestioned by a copy editor. No way Moe or Dennis or Linda or Tim lets me get away with that hackneyed life lesson without at least a, “Hey, Bob …” 

       Maybe my basic rule for writers needs to be modified: Everybody needs a really fussy human editor. 

rjgaydos@gmail.com

Yikes! AI Wants My Job!

Monday, June 3rd, 2024

By Bob Gaydos

How will AI affect knowledge workers?


 I accidentally (by not being in charge of the remote) wandered into a YouTube TED Talk by Cathie Wood the other day and, realizing I was a hostage, I half-listened for a while.

      Wood is founder and CEO of ARK, an investment company that in recent years has made her millions as well as making her the darling genius of every stock market/investment show on regular TV and YouTube. Tesla was her not-so-secret word. She’s soured on Nvidia. But that’s not what grabbed my attention this night. This talk wasn’t about what stock to buy. It was about artificial intelligence. AI.

    “Did she just say ‘knowledge workers’?” I asked the person in charge of the remote.

      “Uh huh.”

      “What the heck are ‘knowledge workers’?” I said quietly to myself, so as not to disturb anyone actually listening to the talk. Google will know.

       And it did.

       A variety of Human Resources sources told me pretty much the same thing. “Knowledge work” requires a high degree of cognitive skill, competence, knowledge, curiosity, expertise and creativity in problem-solving, critical thinking, gathering data, analyzing trends and decision-making. The work involves solving issues, making judgments. Applying knowledge.

     It sounded important.

     “Heck,” I thought to myself, “I was a knowledge worker.”

      One source# confirmed that with this list of professional knowledge workers:

  • Accountant
  • Computer Programmer
  • Consultant
  • Data/Systems Analyst
  • Designer
  • Engineer
  • Lawyer
  • Marketing/Financial Analyst
  • Pharmacist
  • Physician
  • Researcher
  • Scientist
  • Software Developer
  • Web Designer
  • Writer/Author

    There I was. At the bottom of the list, but it was alphabetical. I was and still am a knowledge worker, at least in the words and world of Cathie Wood and all those other CEOs of hedge funds and Big Tech companies. 

      I used to be content being identified as a newspaperman or journalist. It was simple and understandable to everyone for about half a century. I wrote stuff to let people know what was going on in the world and maybe help them make sense of it. I tried.

      But the Internet introduced a new brand of people doing the same thing. Sort of. First, there came “influencers.” These are people who post information on social media platforms for others to view or read and react to. Well, I did that. Still do. But I didn’t get any contracts from companies to push their jeans or sneakers or other products. I guess I was not a very influential influencer.

      Then came the most insulting of all terms, the one so many professional HR people on LinkedIn seem to be looking for daily: “Content creators.”

        The operating philosophy here seems to be, “We don’t really care how good or accurate or timely or well-written or even creative your content is, as much as we care that there’s enough of it to occupy our platform daily. Click bait is acceptable.”

        Some of the “content” is readable. Much is not, at least in the judgment of this knowledge worker.

        However, the salient point in this discussion is not so much who is or is not a knowledge worker, but whether this is a job title in danger of disappearing, not because the titans of industry have figured out yet another way to label mere mortals in a condescending manner, but because these seemingly vital jobs will be filled by computer chips.

      Wood, remember, was talking about AI. The question being, how will AI affect the need for all these knowledge workers in the future? Can these big firms save a bundle of money by having AI do the work of knowledgeable, creative people who are good at solving problems and decision-making? 

    To which I reply, “How can such a knowledge worker today even recommend a change that may eliminate his or her job?”

     AI is far from there, as anyone who watches some of the prepared programming on YouTube about how to make your life better, or what country to move to or Medieval history is aware. The content is often comparable to a poorly written fifth-grade essay plagiarized from a variety of sources and a “narrator” who often can’t pronounce the words correctly.

   It’s clear no human had a hand in presenting this program and, apparently, no human ever bothered to edit it to make it less amateurish. Because, you know, money saved. The lure of AI.

Cathie Wood


           But this is just the beginning, as Wood reminds us, and the Big Techs will go as far as they can, unless someone (Congress?) says “That’s too far.”

      The HR specialists I found in my knowledge worker capacity noted that “knowledge work” is intangible. This means it does not include physical labor or manual tasks. But if you work with your hands and you’re good at it, don’t get too cocky regarding artificial intelligence and your future. Wood has another scary word in her vocabulary: Robots. She loves them.

      Now, to be fair and thorough, I must note that there’s also another word that has been applied to people who do what I do, which included writing daily newspaper editorials for 23 years: Pundit.

        Here’s how Wikipedia defines it: “A pundit is a learned person who offers opinion in an authoritative manner on a particular subject area (typically politics, the social sciences, technology or sport), usually through the mass media.”

        I’m not trying to beef up my obituary, but I think that fits me and this pundit suggests that other knowledge workers pay close attention when millionaire influencers like Cathie Wood start talking about replacing them with computerized content creators. Eventually it won’t be just rising stock prices and amateurish YouTube shows.

       And that’s my TED Talk today.

(# Much of the information on knowledge workers in this column is from a piece by Robin Modell for Flexjobs. She is an experienced journalist, author and corporate writer and a contributor to the On Careers section of U.S. News & World Report. Clearly, a knowledge worker.)

rjgaydos@gmail.com


The Economy? None of Your Business

Wednesday, February 28th, 2024

By Bob Gaydos


My “smart” TV. RJ Photography

   So the very smart TV made an unscheduled stop the other night on one of those “business” news shows with a bunch of well-dressed, middle-aged men and younger women talking to each other about money. I think. 

    They were talking about the day on Wall Street and they all sounded very smart, like the TV, but, I don’t know, maybe something got lost in the translation for me.

     What I can recall of their stream of consciousness conversation that day went something like this: “Nvidia … AI … Magnificent Seven … Tesla … Earnings … Inflation … Nvidia … Cathie Wood … Tesla … Fed … Rates … AI … Microsoft … Shorts … Inflation … Techs … Bubble … AI … Nvidia … Fed … Tesla … Apple … Trillion … Inflation … Fed … Nvidia … Over-Priced … Tesla … AI … China … Apple … Nvidia … Price Target … Shorts … Rates … Inflation … Amazon … Fed … Techs … Index … AI … Dow … Tesla … Cathie Wood … Nvidia … Google … Shorts … Inflation … Earnings … Recession … Fed … AI … META … Index … Fed … Nvidia.”

     That’s pretty accurate, I think. So it sounds like something to do with money, right? But not the economy because that word was never mentioned. Well, maybe someone said “consumer” one time in a passing remark on inflation.

      The thing is, they all seemed to understand each other and mostly agreed with each other, especially about Nvidia and Tesla and AI and Cathie Wood. But after listening, I wasn’t sure how the economy was doing or even what stock I should buy or sell, if I were in the market to do so and maybe couldn’t afford Nvidia. Or maybe I couldn’t afford not to afford Nvidia.

      Confused, I looked around and heard pretty much the same conversation on every TV business show, so I figured they got paid to talk to each other about Nvidia and inflation, but weren’t interested in telling me anything useful. Certainly not about business.

       Luckily, I finally found the “I-know-every-stock-out-there” savant, Jim Cramer, whose message, as usual, was clear: “Buy! Buy! Buy!” or “Sell! Sell! Sell!” But don’t trade Apple. Still. Oh, and the economy’s doing fine.

       There’s something quietly reassuring about being talked to directly, rather than eavesdropping on some private conversation. Especially about money.

      Smart TV take note.

rjgaydos@gmail.com

      

Artificial Ethics and Artificial Intelligence

Sunday, November 26th, 2023

       By Bob Gaydos 

Justice Clarence Thomas … the reason for the Supreme Court’s new code of conduct.


     There used to be a regular newspaper feature called “Ripley’s Believe It Or Not,” which some younger people might not be aware of, given (1.) the rapid disappearance of community newspapers across the country, but (2.) there are still about 20 museums of the same name scattered across the United States in tourist areas, from New York to Los Angeles, although (3.) the ones in Atlantic City and Baltimore have permanently closed, presumably because of economic factors, not the absence of unusual stories people might have trouble believing, or, in this era of “fake news,” simply accepting as true, which would be the case with (4.) the U.S. Supreme Court making a big deal recently about finally adopting a code of ethics for the nine justices, who hitherto have been bound only by their own sense of morality in rendering opinions, unlike all other judges in the country, the code being a step the high court took only because of real news stories about (5.) Justice Clarence Thomas getting expensive gifts, vacations, education expenses for a young relative, all from individuals with issues coming before the court and (6.) his wife, Ginni, being financed by ultra-conservative groups as she actively fought the phony Trump fight to undo the legitimate 2020 election results, (7.) which did not stop her hubby from sitting in court and hearing cases about the legitimacy of the “stop the steal” campaign, apparently not seeing any conflict of interest, which was the most glaring, but not only, reason for a need for a code of ethics for the justices, which would be legitimately good news if it were, well, real, which (8.) it is not, because there is no official process for an individual citizen to file a complaint nor any clear way given for justices to enforce the code among themselves, relying strictly on each justice’s own, ahem, sense of honor to recuse him or herself from a case in which there could be a conflict of interest or to avoid accepting expensive favors or doing anything else that could cast doubt on the court’s independence, all of which (9.) argues for Congress to set some legitimate ethics standards for the justices, given its power of approval of appointments to the court and control of its budget, two factors which apparently didn’t matter (10.) to the geniuses at OpenAI, the makers of the artificial intelligence product ChatGPT, when the nonprofit board that governs the for-profit company (a system set up supposedly to protect against greed driving the new technology into dangerous territory) voted (11.) to fire Sam Altman, the genuine brain behind OpenAI and the company’s chief executive, a decision that was unexpected and laid to Altman not being fully forthcoming with the board, but not even AI could predict that (12.), in less than a week, Altman would be back as the boss of OpenAI and the nonprofit board of directors had been replaced by a whole new board, a development that was inevitable when Microsoft, sensing a way to dominate AI, quickly hired Altman after his firing and the next top OpenAI executive and a bunch of employees all quit, also being hired by Microsoft, leaving the nonprofit board with pretty much nothing to direct, so the members resigned and Altman and everyone else came back to OpenAI, signaling (13.) a victory for greed over prudent concern and (14.) giving more credence and urgency to the Biden administration’s creating a team to study how to deal with artificial intelligence before it’s too late and the whole human race winds up (15.) as an exhibit in an AI robot-built version of Believe It or Not.

    It’ll be big on AI TikTok.

Bob Gaydos is writer-in-residence at zestoforange.com.

Musk, Killer Robots, Trump, the Eclipse

Wednesday, August 23rd, 2017

By Bob Gaydos

Donald Trump looking at the solar eclipse.


Elon Musk and Donald Trump made significant scientific statements this week. Digest that sentence for a second. …

OK, it’s not as strange as it sounds because each man was true to himself. That is, neither message was surprising, considering the source, but each was important, also considering the source.

Monday, Musk and 115 other prominent scientists in the field of robotics and artificial intelligence attending a conference in Melbourne, Australia, delivered a letter to the United Nations urging a ban on development and use of killer robots. This is not science fiction.

Responding to previous urging by members of the group of AI and robotics specialists, the UN had recently voted to hold formal discussions on so-called autonomous weapons. With their open letter, Musk and the others, coming from 26 countries, wanted the UN to be clear about their position — these are uniquely dangerous weapons and not so far off in the future.

Also on Monday, on the other side of the planet, as millions of Americans, equipped with special glasses or cardboard box viewers, marveled at the rare sight of a solar eclipse, Trump, accompanied by his wife, Melania, and their son, Barron, walked out onto a balcony at the White House and stared directly at the sun. No glasses. No cardboard box. No problem. I’m Trump. Watch me give the middle finger to science.

Of course, the only reason Trump shows up in the same sentence as Musk in a scientific discussion is that the man with the orange hair holds the title of president of the United States and, as such, has the power to decide what kind of weapons this nation employs and when to use them. Also, the president — any president — has the power, through words and actions, to exert profound influence on the beliefs, attitudes and opinions of people used to looking to the holder of the office to set an example. Hey, if it’s good enough for the president, it’s good enough for me. This is science fiction.

Please, fellow Americans, don’t stare at the sun during the next eclipse.

Trump’s disdain for science (for knowledge of any kind, really) and his apparently pathological need to do the opposite of what more knowledgeable people recommend, regardless of the topic, are a dangerous combination. When you’re talking about killer robots, it’s a potentially deadly one.

The U.S. Army Crusher robotic weapon.


How deadly? Here’s a quote from the letter the AI specialists wrote: “Once developed, lethal autonomous weapons will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.

“We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”

In fact, it’s already opened. On the Korean peninsula — brimming with diplomatic tension, the rattling of nuclear weapons by the North Koreans and the corresponding threats of “fire and fury” from Trump — a fixed-place sentry gun, reportedly capable of firing autonomously, is in place along the South Korean side of the Demilitarized Zone.

Developed by Samsung for South Korea, the gun reportedly has an autonomous system capable of surveillance up to two miles, voice-recognition, tracking and firing with mounted machine gun or grenade launcher. There is disagreement over whether the weapon is actually deployed to operate on its own, but it can. Currently, the gun and other autonomous weapons being developed by the U.S., Russia, Germany, China, the United Kingdom and others require a human to approve their actions, but usually in a split-second decision. There is little time to weigh the consequences and the human will likely assume the robot is correct rather than risk the consequences of an incorrect second-guess.

But it is precisely the removal of the human element from warfare that Musk and the other AI developers are worried about. Removing the calculation of deaths on “our side” makes deciding to use a killer robot against humans on the other side much easier. Too easy perhaps. And robots that can actually make that decision remove the human factor entirely. A machine will not agonize over causing the deaths of thousands of “enemies.”

And make no mistake, the robots will be used to kill humans as well as destroy enemy machines. Imagine a commander-in-chief who talks cavalierly about using nuclear weapons against a nation also being able to deploy robots that will think for themselves about who and what to attack. No second-guessing generals.

Musk, a pioneer in the AI field, has also been consistent with regard to his respect for the potential danger posed to humans by machines that think for themselves or by intelligences — artificial or otherwise — that are infinitely superior to ours. The Tesla CEO has regularly spoken out, for example, against earthlings sending messages into space to try to contact other societies, lest they deploy their technology to destroy us. One may take issue with him on solar energy, space exploration, driverless cars, but one dismisses his warnings on killer robots at one’s own risk. He knows whereof he speaks.

Trump is another matter. His showboating stunt of a brief look at the sun, sans glasses, will probably not harm his eyes. But the image lingers and the warnings, including one from his own daughter, Ivanka, were explicit: Staring directly at the sun during the eclipse can damage your retina and damage your vision. Considering the blind faith some of his followers display in his words and actions, it was yet another incredibly irresponsible display of ego and another insult to science.

Artificial intelligence is not going away. It has the potential for enormous benefit. If you want an example of its effect on daily life just look at the impact autonomous computer programs have on the financial markets. Having weapons that can think for themselves may also sound like a good idea, especially when a commander-in-chief displays erratic judgment, but their own creators — and several human rights groups — urge the U.N. to ban their use as weapons, in the same way chemical weapons and land mines are banned.

It may be one of the few remaining autonomous decisions humans can make in this area, and the most important one. We dare not wait until the next eclipse to make it.

rjgaydos@gmail.com