TY - JOUR
T1 - Improving Zonal Fairness While Maintaining Efficiency in Rideshare Matching
AU - Kumar, Ashwin
AU - Vorobeychik, Yevgeniy
AU - Yeoh, William
N1 - Publisher Copyright:
© 2021 Copyright for this paper by its authors.
PY - 2022
Y1 - 2022
N2 - Order dispatching algorithms, which match passenger requests with vehicles (agents) in ridesharing systems, are able to achieve high service rates (percentage of requests served) using deep reinforcement learning techniques to estimate the relative values of the different combinations of passenger-vehicle matches. While the goal of such algorithms is to maximize the service rate, this may lead to unintended fairness issues (e.g., high disparity between the service rates of geographic zones in a city). To remedy this limitation, researchers have recently proposed deep reinforcement learning-based techniques that incorporate fairness components in the approximated value function. However, this approach suffers from the need to retrain should one wish to tune the degree of fairness or optimize for a different fairness function, which can be computationally expensive. To this end, we propose a simpler online approach that uses state-of-the-art deep reinforcement learning techniques and augments their value functions with fairness components during the matching optimization step. As no additional training is needed, this approach can be adapted to use any existing value function approximator and benefits from improved flexibility in evaluating different fairness objectives efficiently. In this paper, we describe several fairness functions that can be used by this approach and evaluate them against existing state-of-the-art deep RL-based fairness techniques on standard ridesharing benchmarks. Our experiments show that our fairness functions outperform existing fairness techniques (i.e., they find matching solutions that result in higher service rates and lower service rate disparity across zones), demonstrating the practical promise of this approach.
AB - Order dispatching algorithms, which match passenger requests with vehicles (agents) in ridesharing systems, are able to achieve high service rates (percentage of requests served) using deep reinforcement learning techniques to estimate the relative values of the different combinations of passenger-vehicle matches. While the goal of such algorithms is to maximize the service rate, this may lead to unintended fairness issues (e.g., high disparity between the service rates of geographic zones in a city). To remedy this limitation, researchers have recently proposed deep reinforcement learning-based techniques that incorporate fairness components in the approximated value function. However, this approach suffers from the need to retrain should one wish to tune the degree of fairness or optimize for a different fairness function, which can be computationally expensive. To this end, we propose a simpler online approach that uses state-of-the-art deep reinforcement learning techniques and augments their value functions with fairness components during the matching optimization step. As no additional training is needed, this approach can be adapted to use any existing value function approximator and benefits from improved flexibility in evaluating different fairness objectives efficiently. In this paper, we describe several fairness functions that can be used by this approach and evaluate them against existing state-of-the-art deep RL-based fairness techniques on standard ridesharing benchmarks. Our experiments show that our fairness functions outperform existing fairness techniques (i.e., they find matching solutions that result in higher service rates and lower service rate disparity across zones), demonstrating the practical promise of this approach.
KW - Fairness
KW - Matching
KW - Ridesharing
KW - Transportation
UR - https://www.scopus.com/pages/publications/85136103185
M3 - Conference article
AN - SCOPUS:85136103185
SN - 1613-0073
VL - 3173
JO - CEUR Workshop Proceedings
JF - CEUR Workshop Proceedings
T2 - 12th International Workshop on Agents in Traffic and Transportation, ATT 2022
Y2 - 25 July 2022
ER -