AI – the end of humans?

Everyone is talking about ChatGPT and AI generally. It has come close to passing the multiple-choice part of the US bar exam and has earned passing grades on law school essays resembling those written for the exam. Microsoft has invested billions and has added it to its search engine, Bing, in the hope of clawing back ground lost to Google.
As AI becomes more pervasive and more integrated into everyday life, it can become a force for good. It could help solve the world’s problems, such as climate change. But what if AI determines that humans are the biggest threat to the climate and should be eliminated? What are the chances that, coupled with Boston Dynamics robots, it launches a suite of terminator droids, like the famous Cyberdyne Systems Model 101 character played by Arnold Schwarzenegger? Far-fetched? Well, ChatGPT has already threatened nuclear war, and even Elon Musk says Bing ChatGPT is “eerily like” an AI system that “goes haywire and kills everyone”.
This leads to the obvious question, for a lawyer at least: who is responsible for the output? This has a number of facets.

1. Is the output useful?

There have been numerous reports of people pushing ChatGPT to its limits, with mixed results: using it as a secretary, for chat-up lines for dates, for writing sermons and to write condolence emails after a shooting. In fact, even the creator of ChatGPT says it’s a horrible product.
So, its usefulness is a bit hit-and-miss.

2. Is the output unbiased?

Some of the output has been questionable. Its creator acknowledges that it has become “biased, offensive and objectionable” and has pledged to fix it. The difficulty lies in the datasets the AI has been trained on. Remember, it wasn’t that long ago that Microsoft had to apologise for the offensive output of its previous chatbot, Tay.
So, AI still has the capacity to produce biased or offensive output.

3. Is the output accurate?

In one example, ChatGPT invented a quote for the BBC. An original piece of content, great! But it attributed the quote to a non-existent person as if it were all real, and it was only because the reporter asked it directly that they discovered it had been made up. In another example, it failed to answer accurately the questions “What was the first TV cartoon?” and “Who was responsible for the iPod?”. Separately, a professor concluded that, although ChatGPT may provide a coherent response, “coherence is not synonymous with accuracy”. And let’s not forget that when Google’s rival AI, Bard, got an answer wrong, $100bn was wiped off the market value of its parent company, Alphabet.
So, we’re not getting consistently accurate output yet.

4. Who owns the rights in the output?

The ChatGPT terms say that its creator will assign to you all the rights it has in the output. But, without access to the training datasets, how do you know whether the content is original? Indeed, those same terms say it might produce similar output for other users. And Getty, no stranger to litigation, is reportedly suing the creators of an AI art tool, Stable Diffusion, for scraping its content. In other words, it was real content, but being used without authorisation.
So, you can’t guarantee the output is uniquely yours to use without restriction.

5. Who is liable?

Obviously, many AI tools – ChatGPT being no exception – are provided “as is”, with no warranty over quality and all liability capped at $100. That’s as much as you would expect for a free service. But what about a paid-for service? What about the many chatbots? How about self-driving cars? Tesla charges $15,000 for its much-criticised autopilot function to respond to traffic lights and stop signs, on top of features such as cruise control and steering. It has been forced to issue a software update to 363,000 vehicles over concerns that the function could drive through a yellow light, travel straight through an intersection from a turn-only lane or fail to come to a full stop at a stop sign. Tesla – and the regulator – still insist that a human driver must pay attention and take over if the software fails to perform properly. This is a key defence where Tesla is being sued over crashes caused by the so-called “autopilot”. Indeed, this perception of autonomy has given rise to separate claims for false advertising.
So, even paid-for AI has its constraints. At present, it’s very much at the “buyer beware” stage and must be used under human supervision.

Where next?

Skynet and rampaging Terminator droids are not here, so it’s not the end for humans. Yet. But 77% of the devices we use already feature AI in one form or another, with 8 billion AI-powered voice assistants predicted for this year alone. And the global AI market is projected to reach $1,812bn by 2030.
If you need advice, contact me at f.jennings@teacherstern.com or on +44 (0) 20 7611 2338.