Hasta la vista, baby: reflections on the risks of algocracy, killer robots, and artificial superintelligence

Pedro Rubim Borges Fortes

Abstract

Algocracy is a neologism that may mean government or governance by algorithms. Architects of artificial intelligence hold views on killer robots and on government by artificial superintelligence, and they are engaged in public debate on both themes. The risks of being dominated by an artificial superintelligence and of being subjected to undemocratic, unconstitutional, or illegal algorithmic norms inspire this reflection. Institutions should organize rules of the game that prevent machine-learning algorithms from learning how to dominate humans. Algorithms need new design requirements that incorporate responsibility, transparency, auditability, incorruptibility, and predictability. The algorithmic responsibility of the state, national public policies for developing trustworthy AI, and an algorithmic law of killer robots and artificial superintelligence could reduce the risks of algocracy. The particular character of algorithms demands a special discipline to control their power, architecture, and commands. Law and government can channel the development and use of killer robots, eventually even establishing a global prohibition of autonomous weapons. Likewise, the threat of government by algorithms posed by the emergence of an artificial superintelligence that dominates humankind also requires a new algorithmic law that establishes checks and balances and controls the technological system.

Article Details

How to Cite
Borges Fortes, P. R. (2021). Hasta la vista, baby: reflections on the risks of algocracy, killer robots, and artificial superintelligence. Revista De La Facultad De Derecho De México, 70(279-1), 45–72. https://doi.org/10.22201/fder.24488933e.2021.279-1.78811
Author Biography

Pedro Rubim Borges Fortes, University of Oxford

DPHIL (Oxford), JSM (Stanford), LLM (Harvard), MBE (Coppe-UFRJ), BA (PUC-Rio), LLB (UFRJ). Visiting Professor at the Doctoral Program of the National Law School at UFRJ. E-mail: pfortes@alumni.stanford.edu. ORCID ID: 0000-0003-0548-4537. Chair of the Working Group Law and Development at the Research Committee of Sociology of Law. Chair of the Collaborative Research Network Law and Development at the Law and Society Association. Convenor of the stream Exploring Legal Borderlands: Empirical and Interdisciplinary Approaches at the Socio-Legal Studies Association. International Director of the Brazilian Institute for Studies of Tort Law (IBERC). Research Associate at the Laboratory of Institutional Studies (LETACI).

References

BORGES FORTES, Pedro Rubim, “How Legal Indicators influence a justice system and judicial behavior: The Brazilian National Council of Justice and ‘Justice in Numbers’”, The Journal of Legal Pluralism and Unofficial Law, vol. 47, n. 1, 2015.

BORGES FORTES, Pedro Rubim, “Paths to Digital Justice: Judicial Robots, Algorithmic Decision-Making, and Due Process”, Asian Journal of Law and Society, 2020, pp. 1-17.

BORGES FORTES, Pedro Rubim, MAGALHÃES MARTINS, Guilherme and FARIAS OLIVEIRA, Pedro, “A Case Study of Digital Geodiscrimination: How Algorithms May Discriminate Based on the Geographical Location of Consumers”, Droit et Société, forthcoming.

BORGES FORTES, Pedro Rubim, “Responsabilidade Algorítmica do Estado: Como as Instituições Devem Proteger Direitos dos Usuários nas Sociedades Digitais?”, in MAGALHÃES MARTINS, Guilherme and ROSENVALD, Nelson (eds.), Responsabilidade Civil e Novas Tecnologias, Indaiatuba, Foco, 2020.

BORGES FORTES, Pedro Rubim, “AI Policy in Portugal: Ambitious Yet Laconic About Legal Routes Towards Trustworthy AI”, in LARSSON, Stefan, INGRAM BOGUSZ, Claire and SCHWARZ, Jonas Andersson (eds.), Human-Centred AI in the EU: Trustworthiness as a Strategic Priority in the European Member States, Elf, 2020.

BOSTROM, Nick and YUDKOWSKY, Eliezer, “The Ethics of Artificial Intelligence”, in RAMSEY, William and FRANKISH, Keith (eds.), Cambridge Handbook of Artificial Intelligence, Cambridge, Cambridge University Press, 2011.

BOSTROM, Nick, Superintelligence: Paths, Dangers, Strategies, Oxford, Oxford University Press, 2014.

CORCOS, Christine A., "More Human Than Human: How Some SF Presents AI's Claims to the Right to Life and Self-Determination", Oxford Journal of Socio-Economic Studies, Hilary Term, 2017.

CROOTOF, Rebecca, “The Killer Robots are Here: Legal and Policy Implications”, Cardozo Law Review, vol. 36, 2015.

DANAHER, John, "The threat of algocracy: Reality, resistance and accommodation", Philosophy & Technology, 29.3, 2016.

DANAHER, John, “Freedom in an age of Algocracy,” in VALLOR Shannon (ed.), Oxford Handbook on the Philosophy of Technology, Oxford, Oxford University Press, forthcoming.

ECO, Umberto, Apocalittici e integrati, vol. 27, T. Bompiani, 1984.

ELLIOTT, Anthony, “Automated Mobilities: From Weaponized Drones to Killer Bots”, Journal of Sociology, vol. 55, n. 1, 2019.

FORD, Martin, Architects of Intelligence: The truth about AI from the people building it, Packt Publishing Ltd, 2018.

FRANKE, Ulrike Esther, “Drones, Drone Strikes, and U.S. Policy: The Politics of Unmanned Aerial Vehicles”, Parameters, vol. 44, n. 1, 2014; “A World of Killer Apps”, Nature, vol. 477, 2011.

GORDON, John-Stewart, “Artificial Moral and Legal Personhood”, AI & Society, 2020.

HYDÉN, Håkan, “Sociology of Digital Law and Artificial Intelligence”, in PRIBAN, Jiri (ed.), Research Handbook of Sociology of Law, Cheltenham, Edward Elgar Publishing, 2020.

LARSSON, Stefan, INGRAM BOGUSZ, Claire and SCHWARZ, Jonas Andersson (eds.), Human-Centred AI in the EU: Trustworthiness as a Strategic Priority in the European Member States, Elf, 2020.

LESSIG, Lawrence, Code and Other Laws of Cyberspace, New York, Basic Books, 1999.

MAYER, Michael, “The New Killer Drones: Understanding the Strategic Implications of Next-Generation Unmanned Combat Aerial Vehicles”, International Affairs, vol. 91, 2015.

HOROWITZ, Michael C., “Public Opinion and the Politics of the Killer Robots Debate”, Research and Politics, 2016.

MÜLLER, Vincent, “Autonomous Killer Robots Are Probably Good News”, in DI NUCCI, Ezio and SANTONI DI SIO, Filippo (eds.), Drones and Responsibility: Legal, Philosophical, and Socio-Technical Perspectives on the Use of Remotely Controlled Weapons, London, Ashgate, 2016.

RAMAZANI, Vaheed, “Killer Drones, Legal Ethics, and the Inconvenient Referent”, Lateral, vol. 7, n. 2, 2018.

SANDVIK, Kristin Bergtora, “The Political and Moral Economies of Dual Technology Transfers: Arming Police Drones”, in A. ZAVRSNIK (ed.), Drones and Unmanned Aerial Systems, Cham, Springer, 2016.

O’CONNELL, Mary Ellen, “21st Century Arms Control Challenges: Drones, Cyber Weapons, Killer Robots, and WMDs”, Washington University Global Studies Law Review, vol. 13, 2015.

SCHIRMER, Jan-Erik, “Artificial Intelligence and Legal Personality: Introducing Teilrechtsfähigkeit, a Partial Legal Status Made in Germany”, in Regulating Artificial Intelligence, Cham, Springer, 2020.

SPARROW, Robert, “Killer Robots”, Journal of Applied Philosophy, vol. 24, n. 1, 2007.

STATMAN, Daniel, “Drones and Robots: On the Changing Practice of Warfare”, in LAZAR, Seth and FROWE, Helen (eds.), The Oxford Handbook of Ethics and War, Oxford, Oxford University Press, 2015.

TURNER, Jacob, "Legal personality for AI."Robot Rules. Palgrave Macmillan, Cham, 2019.

TURNER, Jacob, Robot Rules: Regulating Artificial Intelligence, Cham, Palgrave Macmillan, 2019; LIN, Patrick, JENKINS, Ryan and ABNEY, Keith (eds.), Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence, Oxford, Oxford University Press, 2017.

VAN DEN HOVEN VAN GENDEREN, Robert, “Do We Need New Legal Personhood in the Age of Robots and AI?”, in Robotics, AI and the Future of Law, Singapore, Springer, 2018.

WHETHAM, David, “Killer Drones: The Moral Ups and Downs”, RUSI Journal, vol. 158, n. 3, 2013.