Are we turning a blind eye to who is really responsible for what AI can do?

There is a lot of talk about the dangers of AI. From AI controlling critical infrastructure to eliminating humanity, the fantasies know no limits.

As of today, and that is a critical distinction, any AI being deployed, whether in a self-driving car or in an unmanned, weaponised drone, has to be commissioned by a human.

There may come a time when even the decision to deploy or enhance AI algorithms is made by AI itself. But we are not there yet.

What this means is that the dangers of AI come down to responsible and accountable leadership.

Yes, as of today most algorithms are the product of code written by a software developer. But as in any well-managed IT project, the quality of that code needs to be checked and approved before it is deployed.

In the end, responsibility and accountability come down to the person commissioning the deployment.

The problem lies in the complex capabilities these AIs possess. Since they are built to learn, nobody can tell what kind of conclusions they may come up with in the future. So far, any AI is merely a very smart engine for projecting probabilities. However, those probabilities change depending on the data the AI is fed. The data an AI is trained with therefore becomes a very important variable.

This also means that whoever decides what data an AI is trained with ultimately decides the foundation that underlies every decision it makes.
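To make this concrete, here is a minimal sketch (using scikit-learn and made-up numbers, purely for illustration): the same algorithm, trained on two different datasets, assigns very different probabilities to the exact same input.

```python
# Illustrative only: same algorithm, same question, different training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

x_new = np.array([[0.8]])  # the case we ask both models to judge

# Dataset A: positive outcomes cluster at higher values
X_a = np.array([[0.1], [0.2], [0.3], [0.7], [0.8], [0.9]])
y_a = np.array([0, 0, 0, 1, 1, 1])

# Dataset B: the same inputs, but the labels are reversed
X_b = X_a
y_b = 1 - y_a

model_a = LogisticRegression().fit(X_a, y_a)
model_b = LogisticRegression().fit(X_b, y_b)

print(model_a.predict_proba(x_new)[0, 1])  # high probability of class 1
print(model_b.predict_proba(x_new)[0, 1])  # low probability of class 1
```

Same algorithm, same input, opposite conclusions, purely because of the data it was trained on.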

In summary, there are three kinds of decisions that determine the potential danger:
1. The algorithm, i.e. the model / neural network used to calculate probabilities
2. The data the algorithm is trained with
3. The commissioning process, which decides the extent of the AI's deployment.

In the end, the decision still comes down to leadership.

Tomorrow’s leaders will have to consider algorithms as critical and efficient sources of labour. There will be no way around using AI, since it provides a clear and present advantage.

Ultimately, this means we need a new breed of leadership, one capable of gauging the implications that the use and deployment of new technology brings.

If you find this inspiring, send an IM and let’s discuss.

#ai #technology #leadership #data #future #change #algorithms #infrastructure #artificialintelligence #growth #digital #motivation #society #collaboration #neuralnetwork #anthropomorphism