The Future of the Corporation: Law and AI

In Conversation with Professor Richard Susskind and Professor Colin Mayer, 20 March 2018; co-hosted by the British Academy Future of the Corporation and the European Corporate Governance Institute at Hogan Lovells in Brussels

Law • Business and management • Philosophy • Henry Richards

"If you ask the question, what is the future of the professions, you assume there is a future for professions.  In the same way, if you ask the question, what is the future of the corporation, you assume there is a future for the corporation."  Richard Susskind explained his premise that people need to change the way they think about the impact of Artificial Intelligence, or rather, "increasingly capable systems", and look at outcomes.  Instead of deliberating over whether computers will replace people and the tasks they currently carry out, ask what outcomes people want and consider how computers might deliver these outcomes in better even if radically different ways.  When it comes to business, ask what social and economic roles corporations play and how these can be improved upon through AI.

Professor Susskind highlighted four areas in which machines are influential: systems that can answer questions; systems that can make better predictions; robotics; and systems that can detect and express human emotions.  He highlighted two examples: the court room and the board room.

For low-value problems and disputes, Susskind said there must be better ways to find resolution than using the courts: "Is court a service or a place? Do you need to congregate together to have a state-based dispute resolution system?"  He talked about systems that deliver similar outcomes.  For example, the Lex Machina system claims that it can predict the outcome of a patent dispute in the US more accurately than a patent lawyer.  Although this may sound threatening to many lawyers, it does provide a better outcome.

In terms of board rooms, in 2014 a Hong Kong venture capital fund appointed an algorithm to its board and allowed it a vote on investment decisions.  Susskind stressed that people often wonder what such an arrangement would look and feel like.  But if you take a step back and consider that there is an entity that can more reliably and accurately review data and make predictions, you can imagine shareholders and decision-makers wanting this input.

The conversation moved to judgement and discretion, and Professor Mayer asked whether an algorithm - with its vast store of data - can be relied on to exercise judgement and discretion.  Susskind responded with a question: "But to what problem is judgement the solution?  Currently the best way to solve certain categories of problem is to ask a human to exercise judgement.  But if you ask why you need to call on judgement, it's because you're uncertain about your situation - you have an issue you are uncertain about and you don't have the knowledge, expertise and experience to find the best answer.  So the question should be: how can a computer programme handle uncertainty?  Can we deliver the outcome in different ways?"

Professor Susskind picked up the discussion by distinguishing narrow AI from general AI - contrasting a system capable of solving a specific category of problem with one that is generally intelligent.  He explained that commentators are worried about general AI.  If a system is recursively self-improving, what happens the day after it achieves the same level of intelligence as humans?  He concluded that much of the thinking so far has been done in the field of science, but we need more philosophers, economists, lawyers and sociologists as a 'rapid response team' to answer these questions.

In the Q&A that followed, there was a lively discussion covering data, ownership, incentives and the rules of Go.  Professor Mayer wrapped up with a question about policy recommendations for the European Commission, and Professor Susskind clearly described the challenge, referring to his policy paper for the House of Lords Committee on AI.  Firstly, European industrial strategy should aim to lead the world in developing systems that replace human workers - focusing on building the machines rather than competing with them.  Secondly, Europe should think about what is being taught and how it is being taught.  Europe is still producing 20th-century graduates, and many degree courses are in subjects machines are already better at.
