ARTIFICIAL Intelligence (AI), often discussed alongside machine learning, is the practice of programming computers to make decisions for us. It has existed in one form or another for decades but has only recently become a topic of widespread public discussion.
John McCarthy, an American computer scientist, coined the term “AI” in 1956 at a conference at Dartmouth College, defining AI as “the science and engineering of making intelligent machines”.
This recent surge in attention is mainly due to the rapid pace of technological advancement, which has made AI seem a more imminent reality than ever before.
AI will become even more important in the future as businesses look to automate more tasks. For example, some estimates suggest that AI will manage about 30% of all customer service interactions in 2023.
As AI becomes increasingly prevalent in our society, a question arises: Can AI make any mistakes?
Yes, AI can make mistakes. Indeed, an AI system may be more error-prone than a human when it relies on data that is incomplete or inaccurate.
To mitigate this, AI systems constantly learn and evolve as they interact with more data. The more data they have to work with, the more accurate their predictions and recommendations become.
This is why businesses are always looking for ways to collect more data. One way they do this is by using AI-powered chatbots to interact with customers.
Chatbots can collect data about customer preferences and behaviour. This data can improve the customer experience by making better recommendations or providing more personalised service.
More accurate data, combined with better algorithms, may reduce (at least to some extent) AI’s mistakes and inaccuracies.

The most contentious question is who will be held liable if an AI system makes a mistake: the user, the programmer, the owner, or the AI itself?
Sometimes, the AI system may be solely responsible for the mistake. In other cases, the humans who created or are using the AI system may be partially or fully responsible for the mistake.
Determining who is responsible for an AI mistake can be difficult, and it may require legal experts to determine liability on a case-by-case basis.
Arguably, it may be difficult to hold individuals liable without a direct link between the AI’s mistake and their conduct. In such cases, it may seem rational and fair to hold the AI itself liable instead of individuals.
How can we hold AI liable? Can we file lawsuits against AI? We can do so only if it is settled that AI is a legal person.
The legal system permits lawsuits only against persons, whether natural or legal. Whether AI should be treated as a legal entity, like a company, or merely as an agent remains a disputed question.
One side of the debate points out that legal personhood is a concept that grants certain rights and responsibilities to entities such as corporations and natural persons.
AI systems, by contrast, are treated as property and do not have the same legal rights and responsibilities as humans or legal entities.
On this view, AI should not be held liable for its mistakes because it is not a conscious being and therefore cannot be held responsible for its actions in the way a human can.
On the other side of the argument, some believe that AI should be held accountable for its actions just like any other entity.
After all, if AI is capable of making decisions, then it should also be responsible for the consequences of those decisions.
But can AI make any decision without a natural person working behind it? If not, why should the AI alone bear responsibility for its mistakes?
Instead, we frequently see that principals or employers are held responsible (with some exceptions) for the actions of their agents or employees.
This is the doctrine of vicarious liability, developed in the UK in the 1842 case of R v Birmingham & Gloucester Railway Co, the first case to hold a corporation liable for the actions of its employees.
Can we treat AI as a corporation or company? It is widely understood that AI involves machine learning, for which scientists or programmers have written the underlying code.

AI cannot function without the systematic coding set by its programmers. ChatGPT, currently the most prominent AI tool globally, is a case in point.
Human curiosity about its immense capabilities drives its popularity, and as a relatively new technology it can be applied across a broad range of industries, including healthcare.
If its algorithms misdiagnose a disease, is it not reasonable to consider taking legal action against OpenAI, the company behind ChatGPT?
In line with this reasoning, some jurisdictions are starting to explore granting legal personhood to AI systems in certain circumstances.
In addition to the legal entity’s liability, responsibility may be extended to natural persons where mistakes or errors are attributable to their explicit consent, connivance, or neglect.
The case of Salomon v Salomon & Co Ltd [1897] UKHL 1, [1897] AC 22 established that a corporation is a legal person, distinct from the individuals who own or operate it.
Whether liability ultimately falls on the AI or on individuals, AI remains a versatile tool with numerous benefits, such as automation, productivity and decision-making.
However, there are also some risks associated with its use. It is crucial to weigh both the benefits and drawbacks of AI carefully and implement it responsibly to maximise its opportunities while minimising the risks. – April 7, 2023
Robayet Ferdous Syed is a PhD candidate in the Department of Business Law and Taxation at Monash University Malaysia.
The views expressed are solely of the author and do not necessarily reflect those of Focus Malaysia.
Main pic credit: Geospatial World