Machine Learning

Machine Learning and Strategic Behaviour

The machine learning (ML) training pipeline is becoming increasingly fragmented, as more and more stakeholders (companies, institutions, individuals) become involved in one or more stages of the model production process. As these participants have differing incentives, the resulting ML model may not benefit all parties equally. Moreover, conflicting incentives can lead to actions that harm the downstream ML model.

At INSAIT, we work on making machine learning resilient to gaming and strategic behaviour. How can data owners be incentivized to provide their data for model training? How do we moderate interactions between data owners and model trainers to ensure the quality of the downstream ML model? How do we build models that are resilient to gaming by consumers? These are some of the questions researchers at INSAIT are currently working on.

Researchers involved in this area:

Social and Long-Term Impact of Machine Learning

Machine learning models are now part of numerous real-world systems and therefore have a growing impact on our society. For example, ML-based recommender systems control the content we see on social media, and large language models such as ChatGPT increasingly shape everyday workflows.

At INSAIT, we seek to understand how ML models perform in a stateful world that is shaped by the very models we deploy. How can we build models that perform well in a stateful environment while complying with formal ethical requirements? How do deployed AI models affect the data we will observe, and how can this data be used to train accurate ML models going forward? These are some of the questions researchers at INSAIT are currently working on.

Researchers involved in this area: