The rise of artificial intelligence and its impact on our lives has been one of the most hotly debated issues of recent years. This essay examines two questions in particular: first, whether artificial intelligence should have a separate legal identity, and second, the extent of AI's liability. In the first part, I shall argue that AI should be given a separate legal identity, examining how legal identity has been conferred in the past and how changing social conditions call for AI to have an identity of its own.
In the second part, I shall argue that a “risk-utility” or “negligence” based approach to ascertaining liability in cases involving AI would be flawed. I shall argue instead for a “strict liability” approach, and set out the reasons why it is better suited to such cases than the older approaches.
Separate Legal Identity of AI
The very essence of AI identity has been a matter of significant controversy. Often dismissed as an oxymoron, thinking machines are attributed with theories of communication, internal knowledge, external knowledge, goal-driven behavior, and creativity. ‘Person’, on the other hand, has been defined to include both natural persons and juridical persons, such as corporations, that are recognized by law as having the rights and duties of a human being. As Salmond puts it, a person is any being “whom the law regards as capable of rights and duties”; any being that is so capable is a person, “whether a human being or not, and no being that is not so capable is a person”. Such duties, however, are not a necessary condition, as the example of minors illustrates.
Arguably, rights have been bestowed on humans because they possess an inherent interest in liberty not found in any other entity. However, rights such as habeas corpus have in the recent past been extended to other entities, such as chimpanzees. Many entities not previously recognized are thus now treated as ‘non-human’ persons on whom rights have been conferred. It is argued that the intersection between a rapidly changing society and its legal needs has led to the recognition of various ‘non-human’ entities as ‘persons’. This suggests that changing social conditions can confer legal rights on different entities, often classifying them as ‘legal persons’.
Companies, ships, and temples, despite not having an intelligence of their own, are regarded as non-human legal persons by virtue of their association with human owners or representatives who display those attributes.
It has often been argued that no unitary theory can explain all possible instances of legal personality. One way of approaching the question is from an “entity-centric or a consequence based point of view”. Where an entity carries legal rights and obligations with it, the consequence-based perspective describes those obligations as the “essence of juristic personality”: legal obligations are imposed on an entity because the law treats the entity as a legal person. This follows from the premise that every legal person has free will, and legal obligations attach to that free will.
However, this does not successfully explain why a Hindu idol, which possesses no philosophical personal identity, has been attributed free will, while minors, who do possess philosophical personality, are deprived of the same. Yet all of these entities are recognized as legal persons and subjected to legal obligations. Even though it is questionable whether the idol ‘enjoys’ or ‘exercises’ the rights conferred on it, those rights have nonetheless been afforded and cannot be taken away. This establishes that legal personality need not necessarily entail both legal rights and legal obligations.
Artificial intelligence is autonomous and possesses the capacities of phenomenal experience and higher intelligence (sentience and sapience). Artificial agents can also be imagined to have a moral sense, in that they respond to the threat of punishment by modifying their behavior. On these grounds, AI can be attributed a separate legal existence.
Corporations, which have been accorded legal personality, cannot be incarcerated but can be subjected to financial penalties. The growing intelligence of artificial intelligence entities should likewise subject them to legal social control, as with any other legal entity.
Extent of Liability
The question of the liability of autonomous thinking machines has been one of the most complicated in this field. It is often argued that the ways in which machines perform their functions and make decisions can be traced directly back to the design of the machine, its programming, and the knowledge “humans (have) embedded in the machine”. Further, it is asserted that it is the human hand that “guides, controls and defines the process”. As the Third Circuit stated in United States v. Athlone Indus., Inc., robots cannot be sued even though “they can cause devastating damage”. Ultimate authority, on this view, rests with the human hand, which has the final power to override the machine and seize control. It is important to note, however, that the underlying presumption of the court in ruling in such cases was that the machines do not have a separate legal identity.
In the opinion of the researcher, in instances where the involvement of the human hand in the machine’s decision-making process is this evident, there is no need to reexamine the liability rules. Where humans play an important role in the machine’s decision-making, they can be held liable for the wrongful acts, whether negligent or intentional. This is so because such machines, unlike the autonomous AI discussed above, do not have a separate legal identity.
However, it is important to highlight that artificial intelligence machines have more functional autonomy than the machines discussed above. It is argued that liability for an aircraft’s autopilot cannot be extended in the same manner as liability for autonomous drones. Such systems act independently on the basis of their own information and analysis, without human instruction, so the decisions involved cannot be anticipated by the manufacturers of these machines. The entire analysis of liability thus hinges on whether machines are treated as individuals or entities, or whether the legal system will need to decide these issues on a basis other than agency.
It is argued that the principal-agent relationship between an autonomous machine and a human being is terminated at the very instant the autonomous machine takes a course of action of its own (other than the manner in which it was planned for).
Thus, in the opinion of the researcher, there are two approaches to ascertaining liability. The first is the products liability approach, which attributes causality for the accident to some act of error or omission. In an ordinary fault-based system, drivers bear the losses for the damages, and manufacturers do so where the design fails. In such cases it is often easy to avoid the question of agency, because some human being is responsible for these acts as a causal link in the chain of events that led to the final injury.
In the second set of cases, it must be assumed that the responsibility for the accident cannot reasonably be assigned to a human being. In the absence of settled jurisprudence and judicial precedent on the issue, the researcher draws an analogy with cases where causality cannot be attributed to human factors. When Toyota recently recalled its cars on grounds of ‘unintended acceleration’, the cars accelerated through no act of the driver. The company did not contest the suit and paid $1.2 billion in liability. In another similar lawsuit against the company, the principle applied was res ipsa loquitur, with the jury ruling that even though the cause of the sudden acceleration could not be singled out, the accident was more likely to have been caused by the car than by the driver. This leads to the conclusion that liability is still ascribed to the manufacturer.
In both sets of cases, it is hard to account for the range of scenarios that may arise, most commonly with self-driving cars, where the artificial intelligence system must choose between injuring a person outside the car and avoiding that accident at the cost of injuring the passengers inside it. In this hypothetical, it is often hard to ascertain whether the artificial intelligence mechanism has been taught to choose between the two.
In the opinion of the researcher, it is unreasonable to attribute the failure of the vehicle to the manufacturer, as the law needs to be fashioned so that it best serves the “collective interests of the affected parties”. A system of strict liability would therefore be better placed, for the very reason that such vehicles are supposed to be technologically advanced and thus to cause fewer errors than their human counterparts. Further, a strict liability regime wins over the other tests, including the “risk-utility” test and the “negligence standard” test, which are often very difficult to satisfy in such cases.
It is argued that this strict liability test is efficacious for three reasons. First, attributing fault to persons who are not at fault (as in the Toyota case) is problematic when the causal failure is incomprehensible; it also runs counter to basic notions of justice, fairness, and risk allocation and sharing in society.
Second, the manufacturer in such cases can often absorb the shocks arising from lawsuits more easily than the victims can, thereby distributing the burden of loss widely; it would be unfair to concentrate the costs of such inexplicable acts on any one of these parties.
Third, a stable regime would have twofold benefits: (a) innovators would enjoy greater certainty as predictability rises, consequently spurring innovation; and (b) litigation costs would fall in cases where it is difficult for either party to establish fault.
In such a strict liability regime, it is argued that the costs should be borne by the artificial-intelligence-powered cars themselves, in the form of insurance costs covering such damages. This stems from the idea that such vehicles would be separate legal identities, and thus capable of bearing costs for themselves.
The possibility of creating machines that can think for themselves raises many ethical issues, ranging from ensuring that such machines do not harm human beings and other morally relevant beings, to their liability in cases where they cause harm to individuals. This essay has attempted to answer two such questions.
In the first part, the researcher concludes that artificial intelligence machines can be said to have separate legal identities. This stems from the idea that legal personhood has been shaped by prevailing social conditions. Thus, from slaves and fetuses once lacking legal personhood to Hindu idols now being recognized, social conditions and the rise of artificial intelligence call for AI to be brought within the ambit of separate legal personality.
In the second and final part, it is argued that, given the transition from ordinary machines to artificial intelligence machines, it is not feasible to continue using the same “risk-utility” or “negligence” tests; a strict liability test should be used instead, with the costs borne by the artificial intelligence itself.
It would be premature to treat these tests as definitive, for the simple reason that the jurisprudence regarding artificial intelligence is limited, and it remains to be seen how the law deals with the various questions that arise in the future.
 Gabriel Hallevy, The Criminal Liability of Artificially Intelligence Entities, 4(2) Akron Intellectual Property Journal 170, 178 (2010).
 Bryan A. Garner, Black’s Law Dictionary, 1510 (7th edn. 1999).
 P.J. Fitzgerald, Salmond on Jurisprudence, 299.
 Alasdair Cochrane, Do animals have an interest in liberty?, 57(3) Political Studies 660, 679 (2009).
 Alan Yuhas, Chimpanzees granted petition to hear ‘legal persons’ status in court, 22nd April, 2015 The Guardian available at: http://www.theguardian.com/world/2015/apr/21/chimpanzees-granted-legal-persons-status-unlawful-imprisonment (Last visited on 23rd May, 2016).
 Benjamin D Allgrove, Legal Personality for Artificial Intellects: Pragmatic Solution or Science Fiction? 1, 48 (2004).
 Allgrove, Supra note 7, at 44.
 Allgrove, Supra note 7, at 46.
 Allgrove, Supra note 7, at 45.
 Gerald Dworkin, The Theory and Practice of Autonomy 62 (1988).
 Nick Bostrom, The Ethics of Artificial Intelligence 6, 20 (2011).
 Samir Chopra & Laurence White, Artificial Agents – Personhood in Law and Philosophy, 16 ECAI 1, 4 (2004).
 John C. Coffee, Jr., No Soul to Damn: No Body to Kick: An Unscandalised Inquiry Into the Problem of Corporate Punishment, 79 Mich. L. Rev. 386 (1981); Steven Box, Power, Crime and Mystification 16-79 (1st ed. Routledge 1983); Brent Fisse and John Braithwaite, The Allocation of Responsibility for Corporate Crime: Individualism, Collectivism and Accountability, (1988) SydLawRw 3.
 Thorne L. McCarty, Reflections on Taxman: An Experiment in Artificial Intelligence and Legal Reasoning, 90 Harvard Law Review 831, 837 (1977).
 There has been a lot of litigation in cases involving surgical robots and the safety of their operations; See O’Brien v. Intuitive Surgical, Inc., No. 10 C 3005, 2011 WL 304079.
 David C. Vladeck, Machines Without Principles, 89(3) Washington Law Review 117, 120 (2014).
 United States v. Athlone Indus., Inc., 746 F.2d 977, 979 (3d Cir. 1984).
 Dylan LeValley, Autonomous Vehicle Liability–Application of Common Carrier Liability 5(7) Seattle University Law Review 1, 36 (2013).
 Vladeck, Supra note 18, at 142.
 Bob Fredericks, Toyota to pay $1.2B for hiding deadly ‘unintended acceleration’, March 19, 2014 ABC News available at: http://abcnews.go.com/Blotter/toyota-pay-12b-hiding-deadly-unintended-acceleration/story?id=22972214 (Last visited on May 23, 2016).
 Sharon Carty, Toyota’s sudden acceleration problem may have been triggered by tin whiskers, January 22, 2014 Huffington Post available at: http://www.huffingtonpost.com/2012/01/21/toyota-sudden-acceleration-tin-whiskers_n_1221076.html (Last visited on May 23, 2016).
 Vladeck, Supra note 18, at 142.
 Vladeck, Supra note 18, at 146.
 Vladeck, Supra note 18, at 146.
 James M. Anderson et al., The U.S. Experience with No-fault Automobile insurance: A retrospective, xiii (2010).