
Learning to Prospect---New Algorithms for Modeling Risk-Sensitive Decision-Making

Start:

3/24/2017 at 2:00PM

End:

3/24/2017 at 3:00PM

Location:

117 DeBartolo Hall

Host:

College of Engineering

Vijay Gupta

Email: vgupta2@nd.edu
Phone: 574-631-2294
Website: http://ee.nd.edu/faculty/vgupta/
Office: 270 Fitzpatrick Hall

Affiliations

Professor and Associate Chair of Graduate Studies, Department of Electrical Engineering
Research Interests: Dr. Gupta's current research interests are in the analysis and design of cyberphysical systems. Such systems are the next generation of engineering systems and involve tightly coupled control, communication, and processing algorithms. Applications include structural health ...

The next-generation urban ecosystem empowered by the Internet of Things has at its core a shared economy in which physical resources and data are easily aggregated and exchanged. Technological advances have led to the proliferation of smart devices that provide access to streaming data and platforms for novel sharing mechanisms. One of the primary effects of greater connectivity, sensing, and actuation is the transition of humans from passive to active participants in a number of municipal services. For instance, they often affect both supply and demand, as in new ride-sharing markets where participants are both passengers and drivers, and in energy markets where participants both consume and produce energy (via, e.g., solar panels).

It is well known that humans are often less than rational; they are sensitive to risk and rely on perceptions, past experience, and other reference points when making decisions. In this talk we build on recent work that ties together models from behavioral economics/psychology (e.g., prospect theory) and neuroscience with classical learning and control techniques. We will discuss learning approaches to modeling human decision-makers amidst automation. In particular, we will present a new gradient-based risk-sensitive inverse reinforcement learning algorithm for estimating both policies and value function parameters. A key feature of the technique is that it applies non-linear transformations, derived from classical behavioral models, to the temporal differences arising in the agent's forward learning procedure. In addition, we will discuss an estimation scheme for recovering reference points from observations of decisions. Building on the results in this talk, we aim to develop incentive/control algorithms that account for salient features of human decision-making such as loss aversion and risk sensitivity. To demonstrate the performance of the learning algorithms, we apply them to a handful of examples, including both a passenger's and a driver's view of ride-sharing.
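To give a concrete flavor of how a behavioral transformation can enter a forward learning procedure, the sketch below passes the temporal-difference error of an ordinary Q-learning step through a Kahneman-Tversky style value function (concave for gains, convex and steeper for losses relative to a reference point). This is only an illustrative assumption, not the speaker's algorithm; the function and parameter names (prospect_value, alpha, beta, lam, reference) are hypothetical.

```python
import numpy as np

def prospect_value(td_error, reference=0.0, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory-style value function applied to a TD error:
    concave for gains, convex and loss-averse (steeper) for losses
    relative to a reference point. Parameters are illustrative only."""
    x = td_error - reference
    gains = np.abs(x) ** alpha
    losses = -lam * np.abs(x) ** beta
    return np.where(x >= 0, gains, losses)

def risk_sensitive_td_update(Q, s, a, r, s_next, gamma=0.95, lr=0.1):
    """One Q-learning step in which the temporal difference is passed
    through the nonlinear prospect-theoretic transformation before the
    table update (a sketch of a risk-sensitive forward learner)."""
    td = r + gamma * np.max(Q[s_next]) - Q[s, a]
    Q[s, a] += lr * prospect_value(td)
    return Q

# Toy usage on a 3-state, 2-action table: a negative reward is weighted
# more heavily than an equal-sized gain would be, reflecting loss aversion.
Q = np.zeros((3, 2))
Q = risk_sensitive_td_update(Q, s=0, a=1, r=-1.0, s_next=2)
print(Q)
```

An inverse reinforcement learning procedure of the kind described in the abstract would then estimate parameters such as the curvature, loss-aversion coefficient, and reference point from observed decisions, rather than fixing them as in this toy example.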

Seminar Speaker:

Dr. Lillian Ratliff

University of Washington

Lillian Ratliff is currently an Assistant Professor in Electrical Engineering at the University of Washington, Seattle. Prior to joining UW EE, she was a Postdoctoral Researcher in Electrical Engineering and Computer Sciences at UC Berkeley, where she also obtained her Ph.D. in 2015. Her research interests lie at the intersection of game theory, optimization, and statistical learning. She applies tools from these domains to address inefficiencies and vulnerabilities in next-generation urban infrastructure systems.