Martin Heidegger already told us. In his masterpiece ‘Die Frage nach der Technik’ (1954), he shows that technology is never neutral. It shapes the way we live and the expectations we have of ourselves, of nature and of life. It takes little imagination to foresee that the current digital transformation will have major consequences in the economic domain. Smart technology plays a central part here, with a leading role for algorithm-based Artificial Intelligence (AI). Thoughtful and innovative leadership is needed to deal with this in a meaningful way, with strong moral foundations as a starting point. The question should therefore not be how we shape the digital transformation in an organization, but why.
For several decades there has been a broad debate about the social responsibility of the business community: a responsibility not only to make a profit, but also to take care of people and the environment. For many companies, terms such as ‘The Triple Bottom Line’, ‘Sustainability’ and ‘Integrated Reporting’ have become commonplace. However, such terms often function more as part of a business model than as guiding principles. A complicating factor in that debate is that the abstract collective of people that makes up a company is sometimes too far removed from the moral experience of individuals. This becomes even more difficult when increasingly complex technology mediates the interaction with people: AI has no moral awareness, yet it can express moral preferences. As a result, there is a danger of a double moral alienation between company and individual.
Moral challenges in the economic domain
It is important to think about an organization’s relationship with its various stakeholders when digital transformation leads to a renewed way of doing business. Not everyone understands how an algorithm works or recognizes the hand of AI in a particular application. Digital transformation can therefore create a knowledge gap between the person who deploys the technology and the person who uses it or experiences its consequences. This creates an unequal relationship in which the consumer in particular becomes increasingly vulnerable. Leaders and managers must deal with this vulnerability in a morally sound way.
“Not everyone understands how an algorithm works or recognizes the hand of AI in a certain application.”
We could say that the person who deploys the technology bears responsibility for the interests of, for example, the consumer. The average Volkswagen driver will not know much about the software used to measure emissions, and the average Facebook user will not be able to fully understand how what appears in their timeline is determined. And what about training a chatbot to come across as humane or reliable as possible? Could this affect the autonomy of the person on the other side of the line?
At the same time, the good intention of taking responsibility is also a serious pitfall: it can lead to undesirably paternalistic relationships between the person developing the technology and the person who has to deal with it. Can you impose sustainability on consumers by showing them greener alternatives earlier in a webshop’s sales process? Who decides when a chatbot may tell a white lie because this leads to better results for its conversation partner? May an algorithm block information that is not in line with certain scientific insights? We need to think about these questions carefully, and not only after the digital transformation is complete.
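To make that paternalism concrete: the decision to show greener alternatives earlier often comes down to a single weighting parameter buried in a ranking function. The sketch below is purely illustrative (the product names, scores and weights are invented, not taken from any real webshop), but it shows how quietly a moral preference can be encoded on the customer’s behalf.

```python
# Illustrative sketch: re-ranking webshop search results so that
# "greener" products surface earlier. All names and numbers here are
# invented for illustration -- and that is exactly the point: one
# parameter silently encodes a moral choice made for the user.

from dataclasses import dataclass


@dataclass
class Product:
    name: str
    relevance: float   # how well the product matches the query (0..1)
    eco_score: float   # a sustainability rating (0..1), however defined


def rank(products, eco_weight=0.0):
    """Order products by a blend of relevance and sustainability.

    eco_weight=0.0 ranks purely on relevance; raising it nudges the
    customer toward greener options without their explicit consent.
    """
    return sorted(
        products,
        key=lambda p: (1 - eco_weight) * p.relevance + eco_weight * p.eco_score,
        reverse=True,
    )


catalog = [
    Product("Budget kettle", relevance=0.9, eco_score=0.2),
    Product("Eco kettle", relevance=0.8, eco_score=0.9),
]

# With eco_weight=0.0 the budget kettle wins on relevance alone;
# with eco_weight=0.5 the eco kettle is pushed to the top.
print([p.name for p in rank(catalog, eco_weight=0.0)])
print([p.name for p in rank(catalog, eco_weight=0.5)])
```

Nothing in such code announces itself as an ethical decision; whether `eco_weight` is a nudge, a service or paternalism is exactly the kind of question that should be settled before deployment, not after.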
So we will have to look for ways to map out the moral expectations surrounding digital transformation, not only inside but especially outside the commercial organization. When technology has a major impact on the life of a user, that user must be able to act on that knowledge. And when the impact extends beyond the individual user to society as a whole, political processes should be involved in the design of technological innovation.
“It is precisely companies with a strong moral foundation that will be better able to cope with rapid technological progress and achieve more sustainable economic success.”
It will be a challenge to determine, in a careful way, which values we want to pursue within which relationships, and to implement that input honestly in the vision and management of a commercial organization. This requires much more than formulating a well-meaning code of conduct and adding a pleasant annex to the annual figures in which a company indicates that it is working on a positive social contribution. In practice, that is a bit like the butcher inspecting her own meat, and it can leave users of technologically sophisticated products or services with a ‘take it or leave it’ feeling, where ‘leave it’ is sometimes not a real option. In some sectors, for example, you simply cannot survive without using certain established social media channels. If you want to create ‘shared value’, you will have to do serious research into what that desired value could be. Only then can you build a relationship based on trust, because you show that you understand the interests of users and always take those interests as your starting point. That is the very essence of trust.
Digital transformation is not an end but a means. It can lead to fundamental changes in customer interaction, customer experience, value propositions and business models. The question asked too often is how to do this, while we should mainly ask ourselves why we want to embrace such changes. What is your ‘Massive Transformative Purpose’ (MTP), as Singularity University puts it? Companies should not reason from their product or service, but from the function or ‘role’ they want to have in society. You don’t supply power; you bring energy to society. You don’t sell cars; you improve mobility in society. You don’t rent out DVDs; you contribute to a movie experience. And, in our own work: we don’t offer education; we offer knowledge development and personal development to people in society.
“Companies should not reason on the basis of their product or service, but on the basis of the function or ‘role’ they want to have in society.”
So the question is what goal we have in mind when applying smart technology. Determining this goal requires a joint effort of the relevant stakeholders within an organization. At the helm of this process must be innovative leaders who are brave enough to steer by a moral compass rather than a digital one. It is precisely then that a company can become an inspiring digital leader. The idea of seeing values not as part of a business model but as an unshakeable foundation may sound like a contradiction to some. What if your core values become too expensive? In our view, however, the contradiction is only apparent: companies with a strong moral foundation will be better able to cope with rapid technological progress and achieve more sustainable economic success.
Digital transformation also has consequences for our education. Universities and colleges need to engage with companies to discuss the implications for training future business professionals. We are affiliated with the Fontys School of Business and Communication and see that our economics, marketing, accountancy and real estate students need more than professional training alone. Above all, they need a moral framework for thinking and acting: what does ethical AI mean for their field, and how can they achieve it strategically within an organization? After all, a successful digital transformation is not realized by technology but by people. And that requires widely shared trust.
Innovation Origins is an independent news platform that has an unconventional revenue model. We are sponsored by companies that support our mission: to spread the story of innovation. Read more.