Black box, black box in my hand, who is the fairest in the land?

Some technologies do not make transparency easy, but that may be the least complex part of the problem. More often, a Black Box is the engineering artefact of intellectual property. Isn’t the Coca-Cola formula one of the most famous Black Boxes in history? However, as we move toward a future where reputation, reliability, privacy, credit risk and employability (just to name a few) will depend on what algorithms say, Black Boxes represent the dark side of the force. And since avoiding Black Boxes isn’t always feasible, we must at least be aware of their side effects and commit to reducing them. In a word, we need responsibility, which is the foundation of sustainability and value. We will also need new tools to help us: one example is a new “sociological” approach aimed at studying Black Boxes the same way we study the most mysterious behavioural agent in the world: our brain.

Riccardo Zanardelli
Nov 27, 2019 · 5 min read

The increasing interest in Artificial Intelligence has brought into focus the risks of the “Black Box”: a system whose inputs and outputs we can observe, but whose inner workings we cannot.

As we were saying, an algorithm working in Black Box mode is risky, first of all because it doesn’t let us understand how it works. Why does your supermarket stop you for inspection at the self-service checkout? Why do you see this sponsored message and not another one on Instagram? And again, why does Spotify offer you this particular playlist? Apparently trivial questions, but complex ones when approached through the lens of computer science applied to algorithms and machine learning.

But let’s stop for a while… hasn’t the Black Box always existed? When we were denied that bank loan 20 years ago, did we interact with an equation or, simply, with a human being? Why does Alice like this film while Bob doesn’t at all? There are lots of situations where human beings simply fail to be transparent, or even cannot be. And this was true well before the advent of machine learning.

Yes, but now there is a new problem… when decisions were made exclusively by human beings, who are bound by values, conventions and laws, it all seemed more acceptable. Yes, because the pace of decisions and their consequences has always been compatible with human oversight, and therefore more easily controlled.

Now that we are progressively delegating more important decisions to machines, the frequency has scaled to thousands of decisions per second, and the domino effect of this speed makes decisions difficult to track and understand, and therefore to control.

“Houston, we have a problem!” …yes, but Houston is still us, so what do we do?

What happens if an opaque and faulty algorithm believes I have a disease that I don’t really have? And what if that algorithm were connected to the dashboard of my next recruiter? The consequences of the Black Box can be catastrophic, but even without imagining apocalyptic scenarios, if we want to rely more and more on Artificial Intelligence we need to find a way to deal with this.

We will progressively delegate more and more to algorithms and, inevitably, to Black Boxes too. The risk that humans feel no responsibility for decisions just because they are delegated is real, and we have to prevent this from happening.

With this in mind, maybe we can start by considering these three opportunities:

  1. Define logics of responsibility. The creator of a Black Box must be responsible for its effects and must take action to minimise bad outcomes over time. Do Black Boxes need civil liability insurance? Maybe.
  2. Cookies for the Black Box? (hey, I didn’t say blockchain ;-) The output of a Black Box should be traceable through “space” and time. Assigning each outcome a unique ID and creating a standard for logging Black Box behavioural metadata would enable internal and external audits (see the sketch after this list). The history of processing is the “cognitive history” of the Black Box algorithm… losing it means giving up on understanding autonomous agents and abandoning ourselves irresponsibly in their arms.
  3. Study the behaviour of the Black Box more than the Black Box itself. Through the Turing Box project, MIT promotes a super interesting approach: analysing artificial agents in terms of behaviours, instead of engineering artifacts, to identify possible bias and to understand how they operate and co-operate (a toy illustration follows below). This is a very practical way to look at this issue, democratising audit methodology and tools for everyone, not just the researcher and developer community.
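
To make the second idea concrete, here is a minimal sketch, in Python, of what decision logging with unique IDs could look like. The `DecisionLogger` class, its field names and the file format are my own assumptions for illustration, not an existing standard.

```python
import hashlib
import json
import time
import uuid

class DecisionLogger:
    """Hypothetical audit log: one append-only record per Black Box decision."""

    def __init__(self, model_name, model_version, path="decisions.log"):
        self.model_name = model_name
        self.model_version = model_version
        self.path = path

    def log(self, inputs, output):
        record = {
            "decision_id": str(uuid.uuid4()),   # unique ID for each outcome
            "timestamp": time.time(),            # the "time" dimension
            "model": self.model_name,            # the "space": which agent decided
            "model_version": self.model_version,
            # store a hash instead of raw inputs, keeping personal data out of the log
            "input_hash": hashlib.sha256(
                json.dumps(inputs, sort_keys=True).encode()
            ).hexdigest(),
            "output": output,
        }
        with open(self.path, "a") as f:
            f.write(json.dumps(record) + "\n")
        return record["decision_id"]

# usage: wrap any opaque model call
logger = DecisionLogger("credit-scoring", "1.4.2")
decision_id = logger.log({"age": 41, "income": 52000}, {"loan_approved": False})
print(decision_id)  # the ID an auditor (or the applicant) can reference later
```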
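And a toy illustration of the behavioural approach of the third idea (my own sketch, not the actual Turing Box code): treat the model as an opaque function, probe it with matched pairs of inputs that differ only in a sensitive attribute, and measure how often the outcome flips.

```python
import random

def black_box(applicant):
    """Stand-in for an opaque model we cannot inspect (hypothetical)."""
    score = 0.5 + 0.02 * (applicant["income"] // 10_000)
    if applicant["gender"] == "F":   # hidden bias we want to detect
        score -= 0.15
    return score > 0.6

def behavioural_audit(model, n=10_000):
    """Probe the model with matched pairs differing only in gender."""
    flipped = 0
    for _ in range(n):
        income = random.randint(20_000, 100_000)
        a = model({"income": income, "gender": "M"})
        b = model({"income": income, "gender": "F"})
        if a != b:
            flipped += 1
    return flipped / n

print(f"Decisions changed by gender alone: {behavioural_audit(black_box):.1%}")
```

Note that the audit never looks inside the model: it sees exactly what any user would see, which is the whole point of the behavioural approach.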

The challenge is clear. We have ideas. Now it’s time to put them into practice and to ensure that every company or developer who creates an intelligent agent takes care of the consequences of their work. This isn’t a secondary duty, but an integral part of the creation process.

If tomorrow our safety and well-being depend on Black Boxes, we may simply refuse to accept the irresponsible and socially unsustainable ones.

What’s the point of developing an artificial intelligence that is not going to be allowed to run?

If an out-of-control Black Box is worth nothing, then maybe it’s time for an upgrade.

And this upgrade is called Responsibility.

Riccardo is Beretta’s Digital Business Development Manager. A graduate in Engineering, he served in various marketing roles before turning to business transformation and digital platforms in 2016. Over the last decade, he has developed a personal interest in exploring the potential of computational privacy/trust towards a more effective and sustainable data-driven society. With the aim of contributing to a wide and open conversation about MIT’s OPAL project, he published “The end of Personalinvasion” (2019) and “OPAL and Code-Contract: a model of responsible and efficient data ownership for citizens and business” (2018). He is a member of the advisory board of “Quota 8000 — Service Innovation Hub” at TEH Ambrosetti. Since 2000 he has experimented with digital art as an independent researcher. Some of his projects have been acquired for the permanent ArtBase collection of Rhizome.org — NY (2002) and exhibited at the Montreal Biennial of Contemporary Art (2004), as well as at Interface Monthly (London, 2016, by The Trampery and Barbican). In 2015, he released FAC3, one of the first artworks in the world to use artificial intelligence. He is married and a father of two. Want to drop a line? → riccardo [d ot) zanardelli {at} gmail [ do t} com

Cover Photo by Noah Buscher / Unsplash
