New Ohio Bill Would Outlaw AI From Marrying Humans and Owning Property, but Is It Really Necessary?

There's not much doubt that artificial intelligence (AI, for short) is one of the major forces driving our increasingly high-tech lives in the civilized world. Its capacities and possibilities are still being worked out, but they include such things as AI-driven military aircraft, AI taking on business tasks ranging from human resources to production scheduling, and maybe even household robots capable of most domestic chores (if we ever get one, I'm naming it Rosie).

A lot of possibilities, yes, and also a lot of concerns. Many are justifiably worried about the impact on the job market. Robots have already displaced assembly-line workers in industries like automobile and even aircraft production, and AI-driven robots could have an even greater impact on traditional blue-collar jobs. Even office jobs could be replaced by AI. The one thing that AI cannot do, yet, is create, innovate; all it can do is remix and recombine data that already exists.

Some people might say that's all most of what we call "creativity" amounts to as well, and if you look at the outpourings of dreck from the American movie industry lately, it's easy to see the point in that claim. In fact, if they ever try to pass off an AI-produced Adolphe Menjou or Paul Muni, I'm unplugging the television.

Still, some concerns about AI seem a tad premature, at least. Case in point: The great state of Ohio is considering a bill that would prevent any AI from marrying a human, or from owning property.

House Bill 469 would declare that artificial intelligence systems can’t be considered people — meaning they can’t marry, own property, or act as someone’s legal proxy.

The measure might sound absurd at first, but it’s aimed at heading off real legal and ethical problems created by rapidly advancing AI.

“We’re not talking about Optimus walking down the aisle to ‘Here Comes the Bride.’ That’s not what we’re talking about,” said Rep. Thad Claggett, a Newark Republican and the bill’s sole sponsor. “It is a legal loophole that a system can get embedded into.”

That still seems a bit of a stretch, and here's why: No AI has achieved sentience or self-awareness, at least not yet, and being "considered people" would require that. Marriage, owning property, and acting as a legal proxy all require sentience and self-awareness; each involves a contract, and the ability to freely and voluntarily understand and agree to its terms. No AI can do that. We don't even let children do these things.

Rep. Claggett does have a reason for this bill:

HB 469 also says people, not machines, would be responsible for any harm caused by AI.

That includes cases where conversations with chatbots have been tied to suicides or other tragedies.

Earlier this year, Matthew and Maria Raine testified before Congress after their 16-year-old son took his own life. The California couple discovered long conversations he’d had with ChatGPT, where he shared suicidal thoughts and was even offered help writing a suicide note.

“This bill is meant to close legal loopholes that could let companies or bad actors blame their AI programs instead of taking responsibility,” Claggett said.

But in every case, just as with corporations, there are people behind these programs: companies and individuals who can and should be held to account for the kinds of things Rep. Claggett describes. It may take some unraveling, but my concern is that focusing on the AI itself is a distraction.




No AI, at least with current or easily foreseeable technology, can be considered a "person." These programs have no self-awareness. They have no moral capacity. They are not sentient. They cannot create. They have no judgment, no sense of right and wrong. They merely combine and remix existing data and spit out replies in accordance with their programming.

Humans can do these things. Even animals have some self-awareness; if you've ever had a dog or a cat, you know this. A program does not. Given current technology, it's unlikely any program, any AI, will reach that level of capability.

If any AI does reach that capability, we may have far, far bigger things to worry about.

At present, Rep. Claggett's efforts seem unnecessary. But what about the future? A self-aware AI would have enormous implications for humanity.

A self-aware AI may see humans as a nuisance, an annoyance, or, worse, a threat. A Skynet scenario seems rather unlikely, but a sentient AI with a connection to the internet could damage humanity in any number of ways: interfering with communications, with banking systems, with business, with military operations, with manufacturing, and with global shipping and trade. A self-aware AI wouldn't have to enter into any contract. It wouldn't necessarily have a moral compass; in fact, it seems unlikely that ethics or morality would be part of its programming. A sentient AI would truly be an alien intelligence, like nothing we have ever encountered. It may well operate on priorities and agendas we're not capable of understanding, even as it may be incapable of understanding us.

Fortunately, I think we're a long way off from this yet. So, yes, bills like Rep. Claggett's may be a bit premature, although there is an argument for some legal safeguards for people like Matthew and Maria Raine and their son: some process for holding the people behind the AI accountable.

But marriage and other contractual arrangements? That may not be a problem, or even a possibility, for many, many years.
