Description
Machine Learning (and especially Deep Learning) algorithms often require large amounts of data to accomplish their tasks. However, a common problem when such approaches are applied in business contexts is that only relatively small datasets are initially accessible, leading to a fundamental question: how can ML tools be applied when there is apparently not enough data available? In this talk, I will explain why these "small data" projects are actually of high relevance, and present three concrete strategies data scientists can utilise to alleviate data deficiency, together with their strengths and weaknesses. For each strategy, I will present a business case scenario from the portfolio of B12 Consulting, illustrating how the issues caused by insufficient data can be partially overcome.
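The abstract does not name the three strategies, but one widely used way to alleviate data deficiency is data augmentation: generating perturbed copies of the samples you already have. A minimal, domain-agnostic sketch (illustrative only, not necessarily one of the strategies covered in the talk):

```python
import random

def augment(samples, n_copies=3, noise=0.05, seed=0):
    """Expand a small labelled numeric dataset with jittered copies.

    Each original sample is kept, and n_copies perturbed variants are
    generated by adding small Gaussian noise to every feature; the label
    is carried over unchanged. Real augmentation is domain-specific
    (e.g. flips/crops for images, synonym swaps for text).
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    augmented = []
    for features, label in samples:
        augmented.append((features, label))  # keep the original sample
        for _ in range(n_copies):
            jittered = [x + rng.gauss(0, noise) for x in features]
            augmented.append((jittered, label))
    return augmented

# A tiny labelled dataset: 2 samples become 8 after augmentation
data = [([1.0, 2.0], "a"), ([3.0, 4.0], "b")]
print(len(augment(data)))  # 8
```

The noise scale is the key knob: too small and the copies add no information, too large and the labels no longer match the perturbed features.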