In the context of connected mobility and artificial intelligence, data is our primary point of reference.
We collect millions of data points every day from vehicles, sensors, user feedback, and external sources. This is the raw material we use to build predictive models, craft business strategies, design more tailored insurance products, and develop fairer scoring systems. Yet this same material can mislead us: data doesn't lie, but it can easily take us down the wrong path if we don't approach it with a critical mindset.
One of the most insidious problems is bias—those hidden prejudices in data or models that end up distorting the conclusions we reach. Sometimes, the issue stems from placing too much trust in numbers. For example, we might notice that a particular user profile scores lower in risk assessment systems and hastily conclude that they are worse drivers. Only a deeper analysis might reveal that these lower scores are due not to behavior, but simply to the number of miles driven. Such distortion can unfairly penalize people who drive a lot for work or necessity, even if they maintain impeccable driving styles.
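To make the exposure problem concrete, here is a minimal sketch in Python of how a score based on raw event counts differs from one normalized by miles driven. The field names, the numbers, and the per-1,000-miles convention are illustrative assumptions, not a real scoring formula.

```python
from dataclasses import dataclass

@dataclass
class DriverRecord:
    driver_id: str
    harsh_events: int    # hard braking, sharp cornering, etc.
    miles_driven: float  # total exposure in the scoring period

def raw_event_score(record: DriverRecord) -> float:
    """Naive score: more recorded events means higher risk.
    This penalizes high-mileage drivers regardless of how they drive."""
    return float(record.harsh_events)

def exposure_adjusted_score(record: DriverRecord, per_miles: float = 1000.0) -> float:
    """Events per 1,000 miles: compares drivers on behavior, not on mileage."""
    if record.miles_driven <= 0:
        return 0.0
    return record.harsh_events / record.miles_driven * per_miles

commuter = DriverRecord("A", harsh_events=30, miles_driven=20_000)   # drives a lot, rarely brakes hard
occasional = DriverRecord("B", harsh_events=10, miles_driven=1_000)  # drives little, brakes hard often

# Raw counts rank the commuter as riskier; the exposure-adjusted rate reverses that.
print(raw_event_score(commuter), raw_event_score(occasional))                  # 30.0 10.0
print(exposure_adjusted_score(commuter), exposure_adjusted_score(occasional))  # 1.5 10.0
```

The point is not the specific formula but the denominator: once behavior is measured per unit of exposure, drivers are no longer penalized simply for driving more.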
Then there are situations where context completely changes the meaning of data. Models built on historical data can become unreliable if they fail to account for how human behavior evolves over time, or for regional differences. An extreme example was the pandemic, which disrupted mobility habits and rendered many pre-existing predictive models obsolete.
The sample we use to build our analyses can also act as a misleading filter. If we focus only on users participating in certain programs—such as traditional loyalty schemes—we risk excluding entire population segments, thereby compromising the model’s representativeness and, with it, its fairness. This is a classic case of “survivorship bias”: focusing only on those already in the system, while ignoring those who remain outside.
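One way to detect, and partially correct, this kind of sampling gap is to compare segment shares in the enrolled sample against the full customer base and reweight under-represented segments during training. The segment names and shares below are invented purely for illustration.

```python
# Hypothetical segment shares: program participants vs. the full customer base.
population_share = {"urban_young": 0.25, "urban_senior": 0.15,
                    "rural_young": 0.30, "rural_senior": 0.30}
sample_share     = {"urban_young": 0.45, "urban_senior": 0.25,
                    "rural_young": 0.20, "rural_senior": 0.10}

def reweighting_factors(population: dict, sample: dict) -> dict:
    """Weight each sampled record by population share / sample share,
    so under-represented segments count more during model training."""
    return {segment: population[segment] / sample[segment] for segment in population}

weights = reweighting_factors(population_share, sample_share)
for segment, weight in sorted(weights.items()):
    print(f"{segment:>13}: weight {weight:.2f}")
# rural_senior gets weight 3.00: it is 30% of customers but only 10% of the sample.
```

Reweighting mitigates the distortion, but it cannot conjure up information about segments that are entirely absent from the data; reaching those still requires new collection efforts.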
All of this brings us to a key point: artificial intelligence is never truly neutral. Algorithms learn from what we feed them, and if our data is flawed or biased, the outputs will be too. In some cases, models can even amplify these biases, creating a feedback loop in which the initial prejudice becomes systemic.
To avoid these risks, we need to adopt a more conscious and critical approach to model development. This means integrating diverse data sources, regularly comparing predictions to actual outcomes to spot drift, ensuring algorithmic decision-making is traceable, and—most importantly—maintaining human oversight in the most critical processes.
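For the part about comparing predictions to actual outcomes, one simple and widely used drift signal is the Population Stability Index (PSI) between the score distribution seen at training time and the one observed today. The bin values and the alert threshold in this sketch are assumptions, not figures from any real deployment.

```python
import math

def population_stability_index(expected: list, actual: list) -> float:
    """PSI between two binned distributions (proportions that each sum to 1).
    Common rule of thumb (an assumption, not a universal standard):
    < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 significant drift."""
    psi = 0.0
    for e, a in zip(expected, actual):
        e = max(e, 1e-6)  # guard against log(0) on empty bins
        a = max(a, 1e-6)
        psi += (a - e) * math.log(a / e)
    return psi

# Share of drivers per score band at training time vs. in the latest month (hypothetical).
training_bins = [0.10, 0.20, 0.40, 0.20, 0.10]
current_bins  = [0.05, 0.10, 0.30, 0.30, 0.25]

print(f"PSI = {population_stability_index(training_bins, current_bins):.3f}")
# Prints PSI = 0.311, which in this sketch exceeds the threshold that triggers a model review.
```

A check like this is cheap to run on a schedule, and it gives the human reviewers mentioned above a concrete signal to act on rather than a vague sense that something has changed.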
A positive example comes from the insurance sector, where a provider recently revised its scoring system by introducing a corrective algorithm to compensate for geographic bias. A deeper analysis of urban mobility data showed that users in more congested areas were unfairly penalized in their risk scores: the frequent stop-and-go and sudden braking in their trips were caused by traffic conditions, not by their own driving choices. By adopting a dynamic metric that reads the urban context in real time, the system began to reward careful behavior even in difficult conditions. The result: a fairer model, greater user satisfaction, and more accurate claims prediction.
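Without claiming to reproduce that provider's actual algorithm, the idea of a context-aware correction can be sketched very simply: instead of judging a driver against one global baseline, judge them against the typical rate for their traffic environment. The zone names and baseline rates below are hypothetical.

```python
# Hypothetical baseline rates of harsh braking per 100 km by congestion level,
# as they might be estimated from fleet data; a single global baseline ignores this context.
ZONE_BASELINE = {"low_congestion": 0.8, "medium_congestion": 2.0, "high_congestion": 4.5}

def context_adjusted_ratio(driver_rate: float, zone: str) -> float:
    """Ratio of the driver's harsh-braking rate to the local baseline.
    1.0 means 'typical for this environment'; below 1.0 means smoother than local peers."""
    return driver_rate / ZONE_BASELINE[zone]

# The same raw rate means very different things depending on context:
print(context_adjusted_ratio(3.0, "low_congestion"))   # 3.75: rough driving on free-flowing roads
print(context_adjusted_ratio(3.0, "high_congestion"))  # ~0.67: smoother than typical in heavy traffic
```

Whether the correction is a simple ratio like this or a richer real-time model, the principle is the same: the context sets the baseline, and behavior is judged relative to it.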
Building fair, useful, and adaptive AI means accepting that models will never be perfect, but that they can and should improve over time. Only with a mindset of continuous improvement and constant vigilance toward bias can we transform data into a real competitive advantage. Recognizing and managing data bias is not just a technical challenge; it is a strategic opportunity. Companies that approach it with transparency, flexibility, and a critical eye will have a real edge in the market.
When interrogated properly, data becomes a valuable ally for making better decisions, building more effective products, and earning stakeholder trust. Artificial intelligence doesn’t replace human judgment—but it can amplify its positive impact if fueled by method, rigor, and collective intelligence. It is this combination that will shape the future of data-driven businesses.