About this column:

In a weekly column, alternately written by Eveline van Zeeland, Eugène Franken, PG Kroeger, Katleen Gabriels, Carina Weijma, Bernd Maier-Leppla, Willemijn Brouwer and Colinda de Beer, Innovation Origins tries to figure out what the future will look like. These columnists, sometimes joined by guest bloggers, are all working in their own way on solutions to the problems of our time, so that tomorrow will be good. Here are all the previous articles.

A new module was recently launched as part of the Dutch National AI Course: a curriculum on ‘AI and Ethics.’ Like the earlier publications on AI for the healthcare and agricultural sectors, among others, it aims to give every Dutch citizen the chance to learn more about the impact of AI on their lives. After all, this technology is going to affect all of us, and it seems to me that you should therefore have a better understanding of it.

You tend not to see AI on the ‘front end’ of things but rather as part of a product or service. For example, Spotify uses AI to suggest songs it expects you will like, while Meta shows us ads based on the same sort of model. We often have no idea how these kinds of models work or why we are presented with that particular ad. The question is: how important is it to know what models are behind these various services?

Models are not new

Incidentally, the issue of those models and whether we want and need to fully understand them is nothing new. Late last century, Hoogendoorn developed the Econaut CTI scheme for greenhouse horticulture to make heating as economical as possible. Because a plant has a so-called ‘temperature integration buffer’, it is possible to steer on an average 24-hour temperature: a lower heating temperature can be maintained at certain times of the day and a higher one at others. The advantage is that you can heat at the times when that energy is used most efficiently. For example, at night, when an energy-conserving double screen is closed and less heat is lost as a result. Given current conditions and the energy crisis, that could also be at the times when the price of gas is at its lowest.

Assimilation lamps

The integration buffer depends on the type of crop and operates within a minimum and maximum range set over a number of days. What the system did was calculate, several times a day and for each hour, what the savings would be if different temperatures were maintained within the minimum and maximum range set by the grower.

For this, the multi-day weather forecast was used to predict outdoor sunshine and temperatures. The system also took into account, for each point in time, whether the greenhouse screens were closed, whether the assimilation lamps were on or would be turned on soon, and when a temperature change was imminent because, for example, the grower wanted to keep a higher daytime temperature than at night. The assimilation lamps used at the time gave off a lot of heat, which you could subtract from the amount of heat you would otherwise have needed to bring into the greenhouse.
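To make that mechanism concrete, here is a minimal sketch in Python of the underlying idea, not Hoogendoorn’s actual Econaut algorithm. It assumes a single per-hour cost figure that already folds in gas price, screen position and expected lamp heat, plus a fixed band around a 24-hour target average; heating is then shifted to the cheapest hours.

```python
# A minimal sketch of the temperature-integration idea (an assumption
# for illustration, not the real Econaut CTI implementation).

def plan_setpoints(hourly_cost, t_min, t_max, t_avg_target):
    """Spread heating over 24 hours: more in cheap hours, less in
    expensive ones, while keeping the 24-hour average on target."""
    hours = sorted(range(24), key=lambda h: hourly_cost[h])  # cheapest first
    setpoints = [t_min] * 24
    budget = 24 * (t_avg_target - t_min)  # degree-hours still to allocate
    for h in hours:
        extra = min(t_max - t_min, budget)
        setpoints[h] = t_min + extra
        budget -= extra
        if budget <= 0:
            break
    return setpoints

# Example: cheap gas at night (hours 0-6), expensive in the evening.
cost = [1.0] * 7 + [2.0] * 10 + [3.0] * 7
plan = plan_setpoints(cost, t_min=16.0, t_max=22.0, t_avg_target=19.0)
print(plan, sum(plan) / 24)  # cheap night hours get 22 °C, average stays 19 °C
```

A real implementation would re-plan several times a day as the forecast changes; the greedy allocation above only shows why the integration buffer creates room to move heating to the cheap hours.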

All in all, it was a rather intricate model, and it was hard for a grower to verify whether the regulation was actually the right one. Just letting go of the helm and letting the computer decide was often fairly difficult for them at first!

Making things explainable

It was therefore important to explain as clearly as possible the grounds on which certain decisions were made, so that growers could square them with their own line of reasoning. If they understood how it worked and it corresponded to their gut feeling, the step from regulating the temperature manually under their own control to having it ‘automated’ was often taken readily.

But what if things are much more complicated and the best outcome cannot be rationalized in line with your own gut feelings after all?

Vision technology versus green fingers

Take as an example a tomato grower’s gut feeling about when and how to adjust their climate computer settings. The grower ‘sees’ abnormalities and acts accordingly. When growing tomatoes, for instance, you would look at the ‘purpleness’ of the top of the tomato plant. A purple top says something about the growth vigor of a crop. In tomato cultivation, it is important to find the right balance between the number of fruits and the amount of foliage. Only the tomatoes themselves are sold, but to produce good tomatoes there has to be enough foliage. It takes the observations of an experienced grower to see whether that balance is right. Up until now, this ‘gut feeling’ of the grower could not be converted into an objective measurement.

Recently, two tech suppliers partnered with a grower to use a multispectral camera to measure the purpleness of the crop. The idea is that this would enable the grower’s knowledge to be captured in a model that decides how to adjust the climate strategy based on objective measurements.
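As a toy illustration of what such a measurement could look like, the sketch below reduces two reflectance bands to a single ‘purpleness’ score. The band choice and the normalized-difference formula are assumptions made purely for illustration; the index the suppliers actually use is, as far as I know, not public.

```python
# A toy 'purpleness' score from multispectral data. The bands and the
# normalized-difference formula are illustrative assumptions, not the
# suppliers' actual method.

import numpy as np

def purpleness_index(red, blue):
    """Normalized difference of hypothetical blue and red reflectance
    bands over the head of the plant: higher means 'more purple'."""
    red = red.astype(float)
    blue = blue.astype(float)
    return (blue - red) / (blue + red + 1e-9)  # avoid division by zero

# Fake 4x4 reflectance patches standing in for a camera crop of the top.
rng = np.random.default_rng(0)
red = rng.uniform(0.2, 0.6, (4, 4))
blue = rng.uniform(0.2, 0.6, (4, 4))
score = purpleness_index(red, blue).mean()  # one number per plant top
print(f"purpleness score: {score:.3f}")
```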

Correlation

But what if it turns out that there is no clear correlation at all between the purpleness of the top and the growth vigor of the crop? In that case, has the grower always been wrong, and is the model flawed? Are growers biased, thinking that whenever they see a purplish color they ought to adjust their climate settings in some way? That could be the case, but it is also possible that, on account of their many years of experience, the grower notices other things besides that purple top. Apart from the purplish color, for example, the feel of the fuzz on the plant and slightly thicker or thinner stems may also be taken into account.
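Establishing whether such a correlation exists at all is itself straightforward arithmetic. A small sketch, with invented numbers purely for illustration:

```python
# Pearson correlation between two paired series. The measurements
# below are hypothetical, invented only to show the check.

import numpy as np

def pearson_r(x, y):
    """Plain Pearson correlation coefficient between two series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

# Illustrative data only: weekly purpleness scores vs. stem growth in mm.
purpleness = [0.12, 0.18, 0.25, 0.22, 0.30, 0.35]
stem_growth = [14.0, 13.2, 12.5, 12.8, 11.9, 11.1]
print(f"r = {pearson_r(purpleness, stem_growth):.2f}")  # near -1: strong link
```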

Then again, it is also possible that a model arrives at a much better growing strategy and yield based on lots of other parameters. What do you do then? You don’t understand it, and the feedback on whether things are really going well is neither straightforward nor immediate. A ‘mistake’ at the outset of a crop’s cultivation can have long-lasting effects on production and quality.

Trust in models

How important is it to fully grasp the rationale behind a model before accepting it? That depends, of course, on the consequences. If Netflix gives a wrong movie suggestion, or if you have to make a detour because of an old map in your navigation system, it is usually not such a big deal. When it comes to judicial or healthcare decisions, however, the consequences are of a different order!

Because of this, it is crucial, especially in AI models for matters such as healthcare and jurisprudence, to avoid bias as much as possible. Suppose choices have to be made about who gets what type of healthcare, and the model used for that purpose was built on data collected from men only. The moment the outcomes differ markedly for women, the decisions can turn out very unfavorably for them.
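A basic safeguard is to score a model separately per group before trusting it. A minimal sketch, assuming a labeled evaluation set that records each patient’s sex; all names and records here are hypothetical:

```python
# Per-group error rates as a simple bias check. The evaluation records
# are hypothetical; a sharp gap between groups is a signal to inspect
# the training data (e.g. men only).

from collections import defaultdict

def error_rate_by_group(records):
    """records: iterable of (group, prediction, actual) tuples."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        errors[group] += predicted != actual
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical triage decisions scored against the right outcome.
eval_set = [
    ("m", "treat", "treat"), ("m", "wait", "wait"), ("m", "treat", "treat"),
    ("f", "wait", "treat"), ("f", "wait", "treat"), ("f", "treat", "treat"),
]
print(error_rate_by_group(eval_set))  # -> {'m': 0.0, 'f': 0.666...}
```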

Preventing bias

Obviously, these kinds of situations already occur today. A great deal of medical research, for example, has not included women, and that flaw carries over into the models built on it. What AI adds is that the consequences ‘scale’ very quickly. Whereas before one employer may have been prejudiced against women in technical occupations, an AI system that automatically screens resumes can scale that prejudice up in an instant! On the other hand, AI models can also be used to avoid bias that people were sometimes unaware of.

So where it was already complicated 25 years ago to determine whether you could trust a comparatively ‘simple’ model, things have not become any easier in recent years. AI and machine learning have made models ever more complicated and often difficult or impossible to explain, even to and by scientists.

Ethical dilemmas

We all have to deal with these models, whether it is the Inland Revenue Service, the healthcare system, or the music that Spotify presents to you.

By taking the AI and Ethics module of the Dutch National AI Course, you will gain a better understanding of how AI impacts our daily lives. You can find the AI and Ethics course here (in Dutch). In this course, you will also learn what technology companies can do to mitigate the risk of bias.

And would you like to tackle the ethical dilemmas surrounding self-driving cars yourself right away? Then check out the ‘Moral Machine’ website, which is available in various languages.