Posts Tagged ‘AI’

The Economy? None of Your Business

Wednesday, February 28th, 2024

By Bob Gaydos


My “smart” TV. RJ Photography

   So the very smart TV made an unscheduled stop the other night on one of those “business” news shows with a bunch of well-dressed, middle-aged men and younger women talking to each other about money. I think. 

    They were talking about the day on Wall Street and they all sounded very smart, like the TV, but, I don’t know, maybe something got lost in the translation for me.

     What I can recall of their stream of consciousness conversation that day went something like this: “Nvidia … AI … Magnificent Seven … Tesla … Earnings … Inflation … Nvidia … Cathie Wood … Tesla … Fed … Rates … AI … Microsoft … Shorts … Inflation … Techs… Bubble… AI … Nvidia … Fed … Tesla … Apple … Trillion … Inflation … Fed … Nvidia … Over-Priced … Tesla … AI … China … Apple … Nvidia … Price Target… Shorts … Rates … Inflation … Amazon … Fed … Techs… Index… AI … Dow … Tesla … Cathie Wood … Nvidia … Google … Shorts … Inflation … Earnings… Recession … Fed … AI … META … Index … Fed … Nvidia.”

     That’s pretty accurate, I think. So it sounds like something to do with money, right? But not the economy because that word was never mentioned. Well, maybe someone said “consumer” one time in a passing remark on inflation.

     The thing is, they all seemed to understand each other and mostly agreed with each other, especially about Nvidia and Tesla and AI and Cathie Wood. But after listening, I wasn’t sure how the economy was doing or even what stock I should buy or sell, if I were in the market to do so and maybe couldn’t afford Nvidia. Or maybe I couldn’t afford not to afford Nvidia.

      Confused, I looked around and heard pretty much the same conversation on every TV business show, so I figured they got paid to talk to each other about Nvidia and inflation, but weren’t interested in telling me anything useful. Certainly not about business.

       Luckily, I finally found the “I-know-every-stock-out-there” savant, Jim Cramer, whose message, as usual, was clear: “Buy! Buy! Buy!” or “Sell! Sell! Sell!” But don’t trade Apple. Still. Oh, and the economy’s doing fine.

       There’s something quietly reassuring about being talked to directly, rather than eavesdropping on some private conversation. Especially about money.

      Smart TV, take note.

rjgaydos@gmail.com

      

Artificial Ethics and Artificial Intelligence

Sunday, November 26th, 2023

       By Bob Gaydos 

Justice Clarence Thomas … the reason for the Supreme Court’s new code of conduct.


     There used to be a regular newspaper feature called “Ripley’s Believe It Or Not,” which some younger people might not be aware of, given (1.) the rapid disappearance of community newspapers across the country, but (2.) there are still about 20 museums of the same name scattered across the United States in tourist areas, from New York to Los Angeles, although (3.) the ones in Atlantic City and Baltimore have permanently closed, presumably because of economic factors, not the absence of unusual stories people might have trouble believing, or, in this era of “fake news,” simply accepting as true, which would be the case with (4.) the U.S. Supreme Court making a big deal recently about finally adopting a code of ethics for the nine justices, who hitherto have been bound only by their own sense of morality in rendering opinions, unlike all other judges in the country, the code being a step the high court took only because of real news stories about (5.) Justice Clarence Thomas getting expensive gifts, vacations, education expenses for a young relative, all from individuals with issues coming before the court and (6.) his wife, Ginni, being financed by ultra-conservative groups as she actively fought the phony Trump fight to undo the legitimate 2020 election results, (7.) which did not stop her hubby from sitting in court and hearing cases about the legitimacy of the “stop the steal” campaign, apparently not seeing any conflict of interest, which was the most glaring, but not only, reason for a need for a code of ethics for the justices, which would be legitimately good news if it were, well, real, which (8.) 
it is not because there is no official process for an individual citizen to file a complaint nor any clear way given for justices to enforce the code among themselves, relying strictly on each justice’s own, ahem, sense of honor to recuse him or herself from a case in which there could be a conflict of interest or to avoid accepting expensive favors or doing anything else that could cast doubt on the court’s independence, all of which (9.) argues for Congress to set some legitimate ethics standards for the justices, given its power of approval of appointments to the court and control of its budget, two factors which apparently didn’t matter (10.) to the geniuses at OpenAI, the makers of the artificial intelligence product ChatGPT, when the non-profit board that governs the for-profit company (a system set up supposedly to protect against greed driving the new technology into dangerous territory) voted (11.) to fire Sam Altman, the genuine brain behind OpenAI and the company’s chief executive, a decision that was unexpected and attributed to Altman not being fully forthcoming with the board, but not even AI could predict that (12.), in less than a week, Altman would be back as the boss of OpenAI and the nonprofit board of directors had been replaced by a whole new board, a development that was inevitable when Microsoft, sensing a way to dominate AI, quickly hired Altman after his firing and the next top OpenAI executive and a bunch of employees all quit, also being hired by Microsoft, leaving the non-profit board with pretty much nothing to direct, so the members resigned and Altman and everyone else came back to OpenAI, signaling (13.) a victory for greed over prudent concern and (14.) giving more credence and urgency to the Biden administration’s creating a team to study how to deal with artificial intelligence before it’s too late and the whole human race winds up (15.) as an exhibit in an AI robot-built version of Believe It or Not.

    It’ll be big on AI TikTok.

Bob Gaydos is writer-in-residence at zestoforange.com.

Musk, Killer Robots, Trump, the Eclipse

Wednesday, August 23rd, 2017

By Bob Gaydos

Donald Trump looking at the solar eclipse.


Elon Musk and Donald Trump made significant scientific statements this week. Digest that sentence for a second. …

OK, it’s not as strange as it sounds because each man was true to himself. That is, neither message was surprising, considering the source, but each was important, also considering the source.

Monday, Musk and 115 other prominent scientists in the field of robotics and artificial intelligence attending a conference in Melbourne, Australia, delivered a letter to the United Nations urging a ban on development and use of killer robots. This is not science fiction.

Responding to previous urging by members of the group of AI and robotics specialists, the UN had recently voted to hold formal discussions on so-called autonomous weapons. With their open letter, Musk and the others, coming from 26 countries, wanted the UN to be clear about their position — these are uniquely dangerous weapons and not so far off in the future.

Also on Monday, on the other side of the planet, as millions of Americans, equipped with special glasses or cardboard box viewers, marveled at the rare sight of a solar eclipse, Trump, accompanied by his wife, Melania, and their son, Barron, walked out onto a balcony at the White House and stared directly at the sun. No glasses. No cardboard box. No problem. I’m Trump. Watch me give the middle finger to science.

Of course, the only reason Trump shows up in the same sentence as Musk in a scientific discussion is that the man with the orange hair holds the title of president of the United States and, as such, has the power to decide what kind of weapons this nation employs and when to use them. Also, the president — any president — has the power, through words and actions, to exert profound influence on the beliefs, attitudes and opinions of people used to looking to the holder of the office to set an example. Hey, if it’s good enough for the president, it’s good enough for me. This is science fiction.

Please, fellow Americans, don’t stare at the sun during the next eclipse.

Trump’s disdain for science (for knowledge of any kind, really) and his apparently pathological need to do the opposite of what more knowledgeable people recommend, regardless of the topic, are a dangerous combination. When you’re talking about killer robots, it’s a potentially deadly one.

The U.S. Army Crusher robotic weapon.


How deadly? Here’s a quote from the letter the AI specialists wrote: “Once developed, lethal autonomous weapons will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.

“We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”

In fact, it’s already opened. On the Korean peninsula — brimming with diplomatic tension, the rattling of nuclear weapons by the North Koreans and the corresponding threats of “fire and fury” from Trump — a fixed-place sentry gun, reportedly capable of firing autonomously, is in place along the South Korean side of the Demilitarized Zone.

Developed by Samsung for South Korea, the gun reportedly has an autonomous system capable of surveillance up to two miles, voice recognition, tracking and firing with a mounted machine gun or grenade launcher. There is disagreement over whether the weapon is actually deployed to operate on its own, but it can. Currently, the gun and other autonomous weapons being developed by the U.S., Russia, Germany, China, the United Kingdom and others require a human to approve their actions, but usually in a split-second decision. There is little time to weigh the consequences, and the human will likely assume the robot is correct rather than risk the consequences of an incorrect second-guess.

But it is precisely the removal of the human element from warfare that Musk and the other AI developers are worried about. Removing the calculation of deaths on “our side” makes deciding to use a killer robot against humans on the other side much easier. Too easy perhaps. And robots that can actually make that decision remove the human factor entirely. A machine will not agonize over causing the deaths of thousands of “enemies.”

And make no mistake, the robots will be used to kill humans as well as destroy enemy machines. Imagine a commander-in-chief who talks cavalierly about using nuclear weapons against a nation also being able to deploy robots that will think for themselves about who and what to attack. No second-guessing generals.

Musk, a pioneer in the AI field, has also been consistent with regard to his respect for the potential danger posed to humans by machines that think for themselves or by intelligences — artificial or otherwise — that are infinitely superior to ours. The Tesla CEO has regularly spoken out, for example, against earthlings sending messages into space to try to contact other societies, lest they deploy their technology to destroy us. One may take issue with him on solar energy, space exploration, driverless cars, but one dismisses his warnings on killer robots at one’s own risk. He knows whereof he speaks.

Trump is another matter. His showboating stunt of a brief look at the sun, sans glasses, will probably not harm his eyes. But the image lingers and the warnings, including one from his own daughter, Ivanka, were explicit: Staring directly at the sun during the eclipse can damage your retina and impair your vision. Considering the blind faith some of his followers display in his words and actions, it was yet another incredibly irresponsible display of ego and another insult to science.

Artificial intelligence is not going away. It has the potential for enormous benefit. If you want an example of its effect on daily life just look at the impact autonomous computer programs have on the financial markets. Having weapons that can think for themselves may also sound like a good idea, especially when a commander-in-chief displays erratic judgment, but their own creators — and several human rights groups — urge the U.N. to ban their use as weapons, in the same way chemical weapons and land mines are banned.

It may be one of the few remaining autonomous decisions humans can make in this area, and the most important one. We dare not wait until the next eclipse to make it.

rjgaydos@gmail.com