But to get rid of that, I have to give unknown companies insight into everything that does interest me. By allowing cookies to be set, I consent to having my movements on the Internet, business and private alike, tracked. I have no idea what data this collects, who gets it, or what is then done with it. In any case, sifting through pages of unnecessarily (and deliberately) complicated terms of use on every site is not among my interests. The only thing I did notice was that I kept getting advertisements for products or services I had already ordered, sometimes for weeks or months afterwards. Since then, I block every cookie I can.
Call for Explainable AI
But suppose I could better understand the mechanisms by which an ad appears on my screen: what data goes to whom and why, and how the algorithm that decides which ad I see actually works? The call for Explainable AI (XAI) is growing louder in more and more industries. Medical specialists were the first to use AI for image analysis and then for a second opinion when making a diagnosis; soon they wanted to know how the algorithms arrive at their conclusions. Where ethical issues are at stake, understanding the steps an algorithm takes is essential. This is also why XAI is indispensable for the use of artificial intelligence in police work, legal investigations and fraud detection: without an explanation, evidence does not hold up in court. XAI matters in aviation too. Think of engineers and pilots who want to know how AI systems arrive at certain recommendations, for example when testing systems in simulated environments before they are deployed operationally.
Still little light in the black box
Full steam ahead for XAI, then, you might think. Yet there are a few snags. Making AI explainable requires additional steps in an already energy-consuming process. Explainability also increases the complexity of AI models, making implementation and maintenance more difficult. Moreover, producing faithful explanations becomes harder as models grow larger and more complex. And the popular Large Language Models (LLMs), on which essentially all chat apps are based, are black-box models in which creating transparency is inherently difficult. There are developments such as LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations), but as a consumer I have yet to encounter their results; the sketch below gives an impression of what such an explanation looks like. Perhaps the European AI Act, with its stricter transparency requirements, will encourage web companies to integrate XAI methods into their LLMs. At any rate, until I know how the ads get onto my screen, I'll keep my cookie jar shut.
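To give an idea of what such an explanation could look like, here is a minimal sketch using LIME on an invented "will this user click the ad?" model. Everything in it is hypothetical: the feature names, the data and the model stand in for the proprietary systems ad networks actually run, and serve only to show the kind of per-feature breakdown an XAI method produces.

```python
# Minimal LIME sketch on an invented ad-click model.
# All feature names, data and the model itself are hypothetical,
# purely to illustrate what a local explanation looks like.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)

# Hypothetical tracking signals an ad network might collect.
feature_names = ["pages_visited", "past_purchases", "minutes_on_site"]
X = rng.random((500, 3)) * np.array([50.0, 10.0, 120.0])
# Hypothetical label: did the user click the ad?
y = ((0.02 * X[:, 0] + 0.3 * X[:, 1] + 0.01 * X[:, 2]) > 2.0).astype(int)

# Stand-in for the ad platform's (black-box) click-prediction model.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# LIME perturbs one instance, fits a simple local surrogate model, and
# reports how much each feature pushed the prediction up or down.
explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["no_click", "click"],
    mode="classification",
)
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=3)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

Running this prints a few lines such as "past_purchases > 7.43: +0.210", i.e. which tracked signals pushed the model towards showing the ad and by how much. That per-decision breakdown, rather than the model's inner weights, is what methods like LIME and SHAP offer, and what a consumer-facing explanation could in principle be built on.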