My research on "Economic Machine Learning" focuses on developing predictive induction techniques that account for the economic constraints and objectives of business context in which they are applied. My co-authors and I developed novel and generic active learning algorithms that reason intelligently about opportunities to cost-effectively acquire information that is particularly beneficial to data-driven learning as well as for the decisions the machine-learned predictive models inform.
“Collaborative Information Acquisition for Data-Driven Decisions”, with Danxia Kong, Machine Learning, 2014.
In this paper, my former Ph.D. student, Danxia Kong, and I identified a new information acquisition problem and developed a method to address it, inspired by efforts around the world to recover billions in losses due to fraud and abuse. This research addresses challenges that significantly impact societies: it develops a novel framework for active learning and demonstrates that its application to sales tax audit decisions significantly improves revenue recovery; the method we develop also applies more broadly to similar decision problems.
The field of active learning has received a good deal of attention in machine learning. Active learning considers settings where the training instances from which predictive models are induced are costly to acquire. For example, learning models that predict which firms under-report their earnings, or by how much, requires training instances for which it is known whether or not the firm has accurately reported its earnings. Yet acquiring such information requires costly audits of firms.
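To make the setting concrete, the sketch below shows a minimal version of pool-based active learning via uncertainty sampling, a standard strategy in this literature; labeling an instance stands in for a costly audit. The data, model choice, and function names are illustrative assumptions, not the method developed in the paper.

```python
# Minimal pool-based active learning sketch: uncertainty sampling with a
# logistic regression classifier (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

def uncertainty_sampling_round(model, X_labeled, y_labeled, X_pool, budget=10):
    """Select the `budget` pool instances the current model is least certain about."""
    model.fit(X_labeled, y_labeled)
    p = model.predict_proba(X_pool)[:, 1]
    uncertainty = -np.abs(p - 0.5)            # probabilities near 0.5 = most uncertain
    return np.argsort(uncertainty)[-budget:]  # indices to label ("audit") next

# Usage with synthetic data standing in for firm features and audit outcomes.
rng = np.random.default_rng(0)
X_labeled, y_labeled = rng.normal(size=(20, 5)), rng.integers(0, 2, 20)
X_pool = rng.normal(size=(500, 5))
to_audit = uncertainty_sampling_round(LogisticRegression(), X_labeled, y_labeled, X_pool)
```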
"Traditional” active learning considers the problem of acquiring particularly informative training examples so as to cost-effectively improve the predictive accuracy of a predictive model trained on the acquired training instances. In contrast to the current active learning paradigm, in this paper we proposed an entirely new framework for active learning, Collaborative Information Acquisitions (CIA), which address settings where multiple predictive models are induced from the acquired data, and when these models are used together to inform a common decision-making task. For example, in the example above, it would be useful to learn a (classification) model that estimates the likelihood of non-compliance, and another (regression) model that estimates the amount that can be recovered from an audit to allow a tax authority target the firms with the largest expected value from an audit. However, different training instances may be more informative for each of the models.
The goal of our CIA framework is to allow multiple predictive machine learning models to collaboratively identify advantageous training instances for acquisition: instances that benefit the decisions the models jointly inform, rather than the average accuracy of any one model (as traditional active learning methods do). CIA departs from the existing active learning paradigm's focus on improving a single model's average predictive accuracy; instead, we proposed and demonstrated the benefits of an active learning algorithm that acquires training instances that cost-effectively improve the decisions the models inform. Using data from the tax authority of a large U.S. state, we show that using CIA to select the training instances from which the predictive models are learned leads to significantly higher sales tax revenues than is possible with alternative approaches for selecting audits.
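A generic, heavily simplified sketch of decision-oriented acquisition appears below; it is not the published CIA algorithm. A candidate firm is scored by the expected improvement in decision value, i.e., how much better the top-k audit selection becomes once the candidate's outcome is known, averaging over the outcomes the current models consider plausible. All names and modeling choices are illustrative assumptions.

```python
# Generic sketch of decision-oriented acquisition scoring (illustrative; not the
# CIA algorithm). Assumes numpy arrays and a labeled set that already contains
# both compliant (y=0) and non-compliant (y=1) firms.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

def fit_and_predict(X_lab, y_lab, amt_lab, X_pool):
    """Fit both models on the labeled audits and predict over the audit pool."""
    clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    reg = LinearRegression().fit(X_lab[y_lab == 1], amt_lab[y_lab == 1])
    p = clf.predict_proba(X_pool)[:, 1]          # non-compliance probabilities
    amt = np.maximum(reg.predict(X_pool), 0.0)   # predicted recoverable amounts
    return p, amt

def acquisition_score(cand_idx, X_lab, y_lab, amt_lab, X_pool, top_k=50):
    """Expected gain in decision value from auditing pool firm `cand_idx` next."""
    p0, amt0 = fit_and_predict(X_lab, y_lab, amt_lab, X_pool)
    current_sel = np.argsort(p0 * amt0)[::-1][:top_k]
    gain = 0.0
    # Average over the candidate's possible audit outcomes under current beliefs:
    # compliant (recover nothing) vs. non-compliant (recover the predicted amount).
    for prob, y_new, amt_new in [(1 - p0[cand_idx], 0, 0.0), (p0[cand_idx], 1, amt0[cand_idx])]:
        X2 = np.vstack([X_lab, X_pool[cand_idx]])
        y2, amt2 = np.append(y_lab, y_new), np.append(amt_lab, amt_new)
        p1, amt1 = fit_and_predict(X2, y2, amt2, X_pool)
        new_sel = np.argsort(p1 * amt1)[::-1][:top_k]
        # Improvement of the new selection over the old one, judged by the updated models.
        gain += prob * ((p1[new_sel] * amt1[new_sel]).sum()
                        - (p1[current_sel] * amt1[current_sel]).sum())
    return gain
```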
More broadly, our research also underscores that standard machine learning objectives and measures of performance, such as average predictive accuracy, do not always correlate with meaningful goals in practice. We demonstrate how traditional active learning algorithms, which aim to improve standard machine learning metrics, may not only fail to deliver the desired benefits in practice but may even undermine them.
“Active Learning with Multiple Localized Models for Heterogeneous, Dyadic Data”, with Meghana Deodhar and Joydeep Ghosh, INFORMS Journal on Computing, 2017.
While almost all work on information acquisition has focused on global predictive modeling (i.e., using the entire training data to build a single predictive model), a growing number of key business intelligence tasks have been shown to benefit from partitioning heterogeneous data into cohesive subsets, with the objective of either improving predictive performance via local modeling of each subset or simply deriving managerial insights. In this work we develop new methods to exploit opportunities to acquire information and improve these business intelligence tasks cost-effectively. We found that naïve application of existing acquisition policies performs poorly in these settings, and we developed new policies that account for the unique properties of local predictive modeling and clustering to identify informative acquisitions.

Local predictive modeling has received much attention recently for its improved predictive performance on large, heterogeneous data sets, such as the data used by collaborative-filtering-based recommender systems. The success with which product recommendations have boosted online retail sales has made retailers eager to improve the accuracy of consumer-preference predictions (WSJ, 2008; Forbes, 2007). While efforts have focused exclusively on improving the predictive models, such improvements are limited by sparse data (Forbes, 2007): many consumers are unlikely to rate items without clear incentives. Cost-effective acquisition of consumer ratings is therefore a promising avenue to explore.

We developed new information-acquisition policies that cost-effectively improve partitioning and local regression modeling in these settings. The problem poses computational challenges but also offers new opportunities for information acquisition that we show can be exploited effectively. For example, in addition to acquiring informative data, one of the policies we developed dynamically adapts the model complexity to improve predictive accuracy as newly acquired data become available. We show empirically that the policies are advantageous when applied to recommender system and customer purchase choice data.
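The sketch below illustrates the general setup of local predictive modeling with a simple acquisition heuristic: instances are partitioned with k-means, a separate regression model is fit per partition, and new ratings are requested from the partition whose local model currently fits its data worst. This is only an illustration; it abstracts away the dyadic (user-item) structure, and the policies developed in the paper are more sophisticated, for instance also adapting model complexity.

```python
# Local predictive modeling sketch: partition the data, fit one regression model
# per partition, and query new labels for the worst-fit partition (illustrative
# heuristic; not the paper's acquisition policies).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

def fit_local_models(X, y, n_clusters=5, seed=0):
    """Cluster instances and fit a local regression model within each cluster."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X)
    models, errors = {}, {}
    for c in range(n_clusters):
        members = km.labels_ == c
        models[c] = Ridge().fit(X[members], y[members])
        errors[c] = np.mean((models[c].predict(X[members]) - y[members]) ** 2)
    return km, models, errors

def cluster_to_query_next(errors):
    """Acquire new ratings from the partition whose local model fits worst."""
    return max(errors, key=errors.get)

# Usage with synthetic data standing in for (user, item) feature vectors and ratings.
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 8)); y = X @ rng.normal(size=8) + rng.normal(size=400)
km, models, errors = fit_local_models(X, y)
target_cluster = cluster_to_query_next(errors)
```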