Bayesian Inverse Reinforcement Learning for Collective Animal Movement

Date:
Location:
https://uky.zoom.us/j/89799450783?pwd=em80U0ZsWm5hbm9rMGxDT2Q5bVh4dz09
Speaker(s) / Presenter(s):
Dr. Toryn Schafer - Cornell University


Abstract: The estimation of the spatio-temporal dynamics of animal behavior is complicated by nonlinear interactions among individuals and with the environment. Agent-based methods allow simple rules to generate complex group behaviors, but they are statistically challenging to estimate and assume the behavioral rules are known a priori. Rather than imposing simplifying assumptions across all anticipated scenarios, inverse reinforcement learning (IRL) infers the short-term (local) rules governing long-term behavioral policies or choices by exploiting properties of a Markov decision process. We use the computationally efficient linearly solvable Markov decision process (LMDP) to learn the local rules governing collective movement. The immediate and long-term behavioral decision costs are estimated in a Bayesian framework, with basis-function smoothing used to induce smoothness in the costs across the state space. We demonstrate the advantage of the LMDP for estimating the dynamics of a classic collective-movement agent-based model, the self-propelled particle model. We then present the first data application of IRL with the introduced methodology, modeling the collective movement of guppies in a tank and estimating trade-offs between social and navigational decisions. Lastly, a brief discussion of connections to traditional resource selection functions in ecology demonstrates the future potential of LMDPs for inference on behavioral decisions that result from an accumulation of behavioral costs.
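For context on the LMDP machinery the abstract refers to: in the linearly solvable formulation (Todorov), the exponentiated negative value function, the "desirability" z(s) = exp(-v(s)), satisfies a linear fixed-point equation z = exp(-q) ⊙ (P z) under the passive dynamics P and immediate costs q, which is what makes estimation tractable. The sketch below is a minimal illustration on a toy first-exit problem; the chain size, cost values, and random-walk passive dynamics are all hypothetical choices for demonstration, not taken from the talk.

```python
import numpy as np

# Toy first-exit LMDP on a 1-D chain of 5 states; state 4 is an absorbing goal.
# All numbers are hypothetical, chosen only to illustrate the linear solve.
n = 5
q = np.array([1.0, 1.0, 1.0, 1.0, 0.0])  # immediate state costs; zero at the goal

# Passive (uncontrolled) dynamics P: a random walk that stays put at the goal.
P = np.zeros((n, n))
for s in range(n - 1):
    P[s, max(s - 1, 0)] += 0.5
    P[s, min(s + 1, n - 1)] += 0.5
P[n - 1, n - 1] = 1.0

# Desirability z(s) = exp(-v(s)) satisfies z = exp(-q) * (P @ z) at non-goal
# states, with the boundary condition z = 1 at the goal. Because the equation
# is linear in z, a simple fixed-point iteration converges.
z = np.ones(n)
for _ in range(1000):
    z_new = np.exp(-q) * (P @ z)
    z_new[n - 1] = 1.0  # enforce the boundary condition at the absorbing goal
    if np.max(np.abs(z_new - z)) < 1e-12:
        z = z_new
        break
    z = z_new

v = -np.log(z)  # cost-to-go (value function)

# Optimal controlled transition probabilities: u*(s'|s) ∝ p(s'|s) z(s'),
# i.e. the passive dynamics reweighted by the desirability of the next state.
U = P * z[None, :]
U /= U.sum(axis=1, keepdims=True)
```

The resulting value function decreases monotonically toward the goal state, and each row of `U` is a proper transition distribution tilted toward higher-desirability neighbors; IRL in this setting works the inverse direction, recovering costs like `q` from observed trajectories.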

