The Edge of Risk
New thinking on corporate risk and resilience in the global economy.
Technology

Will AI Board Members Run the Companies of the Future?

An interview with Mark van Rijmenam, founder and CEO of Datafloq

This is the fourth article in a weeklong series about artificial intelligence. The previous installments can be read here, here, and here.

Much has been written about the impact of the Fourth Industrial Revolution on blue-collar jobs, citing the effects of the Internet of Things, automation, and artificial intelligence on manufacturing, service industries and other sectors. Comparatively less time has been spent thinking about how these technologies will disrupt the upper echelons of corporate hierarchies. Yet AI can already be found in boardrooms around the world, interpreting vast amounts of data and helping to inform key decisions about organizational structure and governance.

Mark van Rijmenam, founder and CEO of Datafloq.com, author of Think Bigger: Developing a Successful Big Data Strategy for Your Business, and co-author of Blockchain: Transforming Your Business and Our World, has given keynote addresses on big data and artificial intelligence and has written at length about technology trends. BRINK spoke with Mr. Van Rijmenam about how AI will reshape the boardrooms of the future.

BRINK: Do you feel that AI is going to have a major impact on corporate governance?

Mark van Rijmenam: Yes, especially because AI will be better able to understand the context in which an organization operates. Generally, when a decision is made within an organization, it is made based on a limited amount of data. But with AI, you are able to include more data sources and get a much better picture of what’s going on and how the organization’s context is changing and influencing decision-making.

So, I think on one hand, it will help you improve decision-making within an organization. When AI is involved in decision-making, it might be that decisions will become less emotional and more based on the facts of what’s going on.

BRINK: The idea of “less emotional” decision-making is interesting. How will AI address questions that are related to difficult human issues, such as harassment or gender equality?

Mr. Van Rijmenam: AI will have difficulty making these kinds of decisions in the coming years. First of all, we need to teach AI ethics, which is very, very difficult. It’s called machine ethics, and it’s a highly challenging and even philosophical debate about what ethics are today, and how ethics change over time, and what happens if AI is more ethical than us humans.

These are all very philosophical questions. Unless AI is capable of being ethical—which at the moment it’s not, because it’s just logical—I don’t think it will be able to address these questions. I think we should leave those kinds of decisions to humans.

BRINK: In one of your articles, you described how there are actually AI entities and robots on boards, sitting as members of the board. Is that the case?

Mr. Van Rijmenam: There is a company in Hong Kong, which I think in early 2014 introduced an algorithm that takes into account many more data sources than us humans can and, based on that, makes a decision about whether or not the organization should invest in a certain company. That algorithm is on the board of directors and, just like any other member of the board, has a vote, which I think is quite fascinating. The AI doesn’t make a decision for the entire board, but it has some influence on what’s going on. I think that’s such a great way forward, because you are able to incorporate a lot more data into decisions than you would otherwise be able to do.

BRINK: You’ve also written about examples in which AI is used as an assistant to the CEO. How is it used in that context?

Mr. Van Rijmenam: It’s all about broadening context. In this case, Einstein, the AI assistant, was hired by Marc Benioff of Salesforce. An AI assistant helps him do whatever he needs to do. The advantage is that such an AI can see and take into account things that a human cannot; if an AI helps you understand a broader context, you’re better able to make a decision.

BRINK: Do you think there will be pushback from boards as AI starts to become more common in the boardroom?

Mr. Van Rijmenam: I don’t think so. The benefits AI can offer boards are quite significant: If you can make better decisions as a board, why wouldn’t you? I think there will be pushback if decision-making is taken over completely by AI, but I don’t see that happening anytime soon.

BRINK: Is this the beginning of a trend of handing over the management or the running of a company to AI?

Mr. Van Rijmenam: I do see that happening. It’s called a decentralized autonomous organization: an organization that is completely run by code. That’s definitely coming. There are a lot of developments around that, but I don’t see, for example, large banks or large telecom organizations completely handing over their boards of directors to AI. I don’t think that will happen anytime soon.

BRINK: It sounds like you’re an AI optimist. Would you call yourself that?

Mr. Van Rijmenam: Yes, definitely, but I also see the pitfalls. AI doesn’t always do what we expect it to do. There are already many examples where AI behaved differently from what we wanted, and that can cause significant harm to an organization, or even worse, to humans. So, AI in itself requires governance, and I predict that the more AI is implemented in organizations, the more important AI governance will become.

BRINK: How do you implement AI governance? What needs to be done to ensure there is effective governance of AI?

Mr. Van Rijmenam: In the course of my research, I’ve interviewed roughly 20 organizations that have built AI. What came out of that is that the monitoring, controlling and supervising that normal corporate governance includes can also be applied to AI. Controlling is understanding what kind of code is being written and how it is being made. Monitoring is using analytics to understand what’s going on and improving the AI based on that. Supervising is using supervised learning instead of unsupervised learning to help the AI improve over time.

In addition to that, what you really want is some sort of explainable AI: an AI that can explain its own actions. At the moment, that capability is very limited, because it’s very difficult to have an AI explain its own actions. If a self-driving car drives into a wall, I want it to say, “The AI drove into the wall because there’s a problem in the code on line 335,000. If you change that, it will be fine again.” That’s ideally what you want to achieve, but it’s highly challenging.
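The core idea of explainable AI, a system that reports why it decided something alongside the decision itself, can be sketched in a few lines. The loan-approval scenario, rule thresholds, and function name below are hypothetical illustrations, not drawn from the interview; a real explainable-AI system would generate explanations for learned models rather than hand-written rules.

```python
# Minimal sketch of explainable decision logic: every outcome comes
# with the reason that produced it. Thresholds are illustrative only.

def approve_loan(income, debt_ratio):
    """Return (decision, explanation) so every outcome is traceable."""
    if debt_ratio > 0.5:
        return False, f"rejected: debt_ratio {debt_ratio:.2f} exceeds 0.5"
    if income < 30_000:
        return False, f"rejected: income {income} below 30,000 floor"
    return True, "approved: all rules passed"

decision, why = approve_loan(income=45_000, debt_ratio=0.3)
print(decision, "-", why)
```

The design choice is simply that the explanation is a first-class output, which is easy for hand-written rules and hard for opaque learned models, which is exactly the gap Mr. Van Rijmenam describes.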

BRINK: Is that because we are reaching a position where it may not be possible for humans to understand why AI did a certain thing in a certain way?

Mr. Van Rijmenam: Yes. There are already some examples online or in research papers where AI created its own language or used tactics that we didn’t expect and then have difficulty understanding. We already see AI developing its own AI. You combine those two and you have an AI that creates its own language and then develops its own AI. It will be impossible for humans to understand what’s going on and I think we should aim to prevent that.
