Posts Tagged ‘robots’

Yikes! AI Wants My Job!

Monday, June 3rd, 2024

By Bob Gaydos

How will AI affect knowledge workers?

I accidentally (by not being in charge of the remote) wandered into a YouTube TED Talk by Cathie Wood the other day and, realizing I was a hostage, I half-listened for a while.

      Wood is founder and CEO of ARK, an investment company that in recent years has made her millions as well as making her the darling genius of every stock market/investment show on regular TV and YouTube. Tesla was her not-so-secret word. She’s soured on Nvidia. But that’s not what grabbed my attention this night. This talk wasn’t about what stock to buy. It was about artificial intelligence. AI.

“Did she just say ‘knowledge workers’?” I asked the person in charge of the remote.

      “Uh huh.”

      “What the heck are ‘knowledge workers’?” I said quietly to myself, so as not to disturb anyone actually listening to the talk. Google will know.

       And it did.

       A variety of Human Resources sources told me pretty much the same thing. “Knowledge work” requires a high degree of cognitive skill, competence, knowledge, curiosity, expertise and creativity in problem-solving, critical thinking, gathering data, analyzing trends and decision-making. The work involves solving issues, making judgments. Applying knowledge.

     It sounded important.

     “Heck,” I thought to myself, “I was a knowledge worker.”

One source# confirmed that with this list of professional knowledge workers:

  • Accountant
  • Computer Programmer
  • Consultant
  • Data/Systems Analyst
  • Designer
  • Engineer
  • Lawyer
  • Marketing/Financial Analyst
  • Pharmacist
  • Physician
  • Researcher
  • Scientist
  • Software Developer
  • Web Designer
  • Writer/Author

    There I was. At the bottom of the list, but it was alphabetical. I was and still am a knowledge worker, at least in the words and world of Cathie Wood and all those other CEOs of hedge funds and Big Tech companies. 

      I used to be content being identified as a newspaperman or journalist. It was simple and understandable to everyone for about half a century. I wrote stuff to let people know what was going on in the world and maybe help them make sense of it. I tried.

      But the Internet introduced a new brand of people doing the same thing. Sort of. First, there came “influencers.” These are people who post information on social media platforms for others to view or read and react to. Well, I did that. Still do. But I didn’t get any contracts from companies to push their jeans or sneakers or other products. I guess I was not a very influential influencer.

Then came the most insulting of all terms, the one so many professional HR people on LinkedIn seem to be looking for daily: “Content creators.”

The operating philosophy here seems to be, “We don’t really care how good or accurate or timely or well-written or even creative your content is, as much as we care that there’s enough of it to occupy our platform daily. Clickbait is acceptable.”

        Some of the “content” is readable. Much is not, at least in the judgment of this knowledge worker.

However, the salient point in this discussion is not so much who is or is not a knowledge worker, but whether the title itself is in danger of disappearing, not because the titans of industry have figured out yet another way to label mere mortals in a condescending manner, but because these seemingly vital jobs will be filled by computer chips.

      Wood, remember, was talking about AI. The question being, how will AI affect the need for all these knowledge workers in the future? Can these big firms save a bundle of money by having AI do the work of knowledgeable, creative people who are good at solving problems and decision-making? 

    To which I reply, “How can such a knowledge worker today even recommend a change that may eliminate his or her job?”

AI is far from there, as anyone who has watched some of the prepackaged YouTube programming about how to make your life better, what country to move to, or medieval history can attest. The content is often comparable to a poorly written fifth-grade essay plagiarized from a variety of sources, delivered by a “narrator” who often can’t pronounce the words correctly.

It’s clear no human had a hand in presenting these programs and, apparently, no human ever bothered to edit them to make them less amateurish. Because, you know, money saved. The lure of AI.

Cathie Wood

           But this is just the beginning, as Wood reminds us, and the Big Techs will go as far as they can, unless someone (Congress?) says “That’s too far.”

      The HR specialists I found in my knowledge worker capacity noted that “knowledge work” is intangible. This means it does not include physical labor or manual tasks. But if you work with your hands and you’re good at it, don’t get too cocky regarding artificial intelligence and your future. Wood has another scary word in her vocabulary: Robots. She loves them.

      Now, to be fair and thorough, I must note that there’s also another word that has been applied to people who do what I do, which included writing daily newspaper editorials for 23 years: Pundit.

Here’s how Wikipedia defines it: “A pundit is a learned person who offers opinion in an authoritative manner on a particular subject area (typically politics, the social sciences, technology or sport), usually through the mass media.”

I’m not trying to beef up my obituary, but I think that fits me. And this pundit suggests that other knowledge workers pay close attention when millionaire influencers like Cathie Wood start talking about replacing them with computerized content creators. Eventually it won’t be just rising stock prices and amateurish YouTube shows.

       And that’s my Ted Talk today.

(# Much of the information on knowledge workers in this column is from a piece by Robin Modell for FlexJobs. She is an experienced journalist, author and corporate writer and a contributor to the On Careers section of U.S. News & World Report. Clearly, a knowledge worker.)

rjgaydos@gmail.com


Musk, Killer Robots, Trump, the Eclipse

Wednesday, August 23rd, 2017

By Bob Gaydos

Donald Trump looking at the solar eclipse.

Elon Musk and Donald Trump made significant scientific statements this week. Digest that sentence for a second. …

OK, it’s not as strange as it sounds because each man was true to himself. That is, neither message was surprising, considering the source, but each was important, also considering the source.

Monday, Musk and 115 other prominent scientists in the field of robotics and artificial intelligence attending a conference in Melbourne, Australia, delivered a letter to the United Nations urging a ban on development and use of killer robots. This is not science fiction.

Responding to previous urging by members of the group of AI and robotics specialists, the UN had recently voted to hold formal discussions on so-called autonomous weapons. With their open letter, Musk and the others, coming from 26 countries, wanted the UN to be clear about their position — these are uniquely dangerous weapons and not so far off in the future.

Also on Monday, on the other side of the planet, as millions of Americans, equipped with special glasses or cardboard box viewers, marveled at the rare sight of a solar eclipse, Trump, accompanied by his wife, Melania, and their son, Barron, walked out onto a balcony at the White House and stared directly at the sun. No glasses. No cardboard box. No problem. I’m Trump. Watch me give the middle finger to science.

Of course, the only reason Trump shows up in the same sentence as Musk in a scientific discussion is that the man with the orange hair holds the title of president of the United States and, as such, has the power to decide what kind of weapons this nation employs and when to use them. Also, the president — any president — has the power, through words and actions, to exert profound influence on the beliefs, attitudes and opinions of people used to looking to the holder of the office to set an example. Hey, if it’s good enough for the president, it’s good enough for me. This is science fiction.

Please, fellow Americans, don’t stare at the sun during the next eclipse.

Trump’s disdain for science (for knowledge of any kind, really) and his apparently pathological need to do the opposite of what more knowledgeable people recommend, regardless of the topic, are a dangerous combination. When you’re talking about killer robots, it’s a potentially deadly one.

The U.S. Army Crusher robotic weapon.

How deadly? Here’s a quote from the letter the AI specialists wrote: “Once developed, lethal autonomous weapons will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.

“We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”

In fact, it’s already opened. On the Korean peninsula — brimming with diplomatic tension, the rattling of nuclear weapons by the North Koreans and the corresponding threats of “fire and fury” from Trump — a fixed-place sentry gun, reportedly capable of firing autonomously, is in place along the South Korean side of the Demilitarized Zone.

Developed by Samsung for South Korea, the gun reportedly has an autonomous system capable of surveillance up to two miles, voice recognition, tracking and firing with a mounted machine gun or grenade launcher. There is disagreement over whether the weapon is actually deployed to operate on its own, but it can. Currently, the gun and other autonomous weapons being developed by the U.S., Russia, Germany, China, the United Kingdom and others require a human to approve their actions, usually in a split-second decision. There is little time to weigh the options, and the human will likely assume the robot is correct rather than risk the consequences of an incorrect second-guess.

But it is precisely the removal of the human element from warfare that Musk and the other AI developers are worried about. Removing the calculation of deaths on “our side” makes deciding to use a killer robot against humans on the other side much easier. Too easy perhaps. And robots that can actually make that decision remove the human factor entirely. A machine will not agonize over causing the deaths of thousands of “enemies.”

And make no mistake, the robots will be used to kill humans as well as destroy enemy machines. Imagine a commander-in-chief who talks cavalierly about using nuclear weapons against a nation also being able to deploy robots that will think for themselves about who and what to attack. No second-guessing generals.

Musk, a pioneer in the AI field, has also been consistent with regard to his respect for the potential danger posed to humans by machines that think for themselves or by intelligences — artificial or otherwise — that are infinitely superior to ours. The Tesla CEO has regularly spoken out, for example, against earthlings sending messages into space to try to contact other societies, lest they deploy their technology to destroy us. One may take issue with him on solar energy, space exploration, driverless cars, but one dismisses his warnings on killer robots at one’s own risk. He knows whereof he speaks.

Trump is another matter. His showboating stunt of a brief look at the sun, sans glasses, will probably not harm his eyes. But the image lingers and the warnings, including one from his own daughter, Ivanka, were explicit: Staring directly at the sun during the eclipse can damage your retina and damage your vision. Considering the blind faith some of his followers display in his words and actions, it was yet another incredibly irresponsible display of ego and another insult to science.

Artificial intelligence is not going away. It has the potential for enormous benefit. If you want an example of its effect on daily life, just look at the impact autonomous computer programs have on the financial markets. Having weapons that can think for themselves may also sound like a good idea, especially when a commander-in-chief displays erratic judgment, but their own creators — and several human rights groups — urge the UN to ban their use as weapons, in the same way chemical weapons and land mines are banned.

It may be one of the few remaining autonomous decisions humans can make in this area, and the most important one. We dare not wait until the next eclipse to make it.

rjgaydos@gmail.com