About Immanence
- Founders: Luna Bianchi, Diletta Huyskes
- Founded in: 2023
- Employees: 2
- Money raised: -
- Ultimate goal: Supporting tech companies by making AI ethics accessible, modular, personalized, and easy to plan, so that businesses finally become responsible for their impact, with ethics as a key performance indicator
Immanence is defined as “the state of being present as a natural and permanent part of something”. And that is how ethics should find space in artificial intelligence, claims Luna Bianchi, co-founder of the company whose aim is to assist businesses and public administrations in making their AI systems as ethical as possible. “Respectful of the people, respectful of what surrounds them”, she specifies.

AI and AI ethics have become the hot topic of the year. Since ChatGPT became available to the public, questions have arisen: should it be used in schools? Is it a dangerous crafter of fake news? Why, if you ask Midjourney to draw a maid, does the software likely come up with the image of a non-white woman? The issues are many. It is important to ask these questions, but it is also important to understand that these are specific issues stemming from the same root: AI amplifies human biases, racism and sexism included. However, just because the problems have the same source, it does not mean the solution is universal.
A little more than a year ago, in Rome, Immanence was born as a concept, an attempt to find a way to make AI systems ethical. One of its founders, Diletta Huyskes, had just given a speech to the Italian House of Representatives. It was the 26th of February 2022, and Luna Bianchi walked up to her; they discussed the idea. A year later, they are in business with Immanence. Innovation Origins spoke to Luna Bianchi to understand their mission.
What’s your guiding principle?
“We want to change the idea that the outcomes of technology depend on how we use it. Today, they depend rather on how we create it. That is why we assist businesses as they craft their algorithms. You can also intervene later – we call it ‘ethical maintenance’ – but prevention is better than treatment. Right now there is not much legislation about AI algorithms. Internationally, it’s mostly soft law, and it’s regarded as a checklist to fulfill, not as something fundamental.”
It sounds like Immanence is a consultancy company, even though you never used this word. What do you think?
“If we want to frame it in a type of service that we are used to, we should actually call it consultancy. But I don’t like this term very much, because it seems like I tell you how to do something and that I’m just solving a problem you have. However, what we do is more about reasoning on things and finding potential problems that entrepreneurs sometimes don’t know they might have. We guide and explain, rather than just solve. We create, in the entrepreneur’s head, an understanding of ethics in technology and why we all need them. So yeah, technically it’s a consultancy. But I think a more correct term is ‘co-design’.”
So, what is the method you follow?
“We follow businesses from the beginning of their development. We like to talk to the different departments of the company to understand what everyone’s priority is. As far as the practicalities go, some companies just ask for an ethics assessment, and that is – to go back to the previous question – very much like a consultancy. However, most of our work is providing companies with an option to outsource their AI ethics development. We take care of that on a continuous basis for as long as the company wants, and we reassess the situation every time something changes: when the company receives reports from users, when it changes the algorithm, or when it hires somebody new for a certain position. An EU law is due to come, and we are trying to anticipate it. At the moment I cannot share much about specific issues, because we are just starting all of our projects. In all cases, our experts review the algorithm and determine whether it is the right kind for the company to use. Then we reason about which AI model could work best. Sometimes, for example, people go for machine learning without thinking it through too much. But they don’t necessarily need it, and that means they have to put more effort into keeping it under control than they would with another model. We also research who developed the algorithm and who tested it, and – of course – we go through datasets to ensure they are fair to people. We want to be sure to evaluate all potential problems, together with solutions and consequences.”
You said there is a common misconception about AI ethics, which one?
“We all tend to think about AI ethics as something general. As if they were one-size-fits-all, universally applicable principles. And that is true to an extent. What we forget, though, is how much AI relies on context and human inputs, and how fast this world moves. The systems change, the context changes, and people’s conscience changes. That’s why you cannot rely on fixed formulas; you need to constantly monitor and adapt. Ethics must be present in AI as a natural part of it. That’s why we called our company ‘Immanence’.”