publications
2024
- Mitigating Goal Misgeneralization via Minimax Regret. Karim Abdel Sadek, Matthew Farrugia-Roberts, Hannah Erlebach, and 4 more authors. Conference Paper under Review, 2024.
Robustness research in reinforcement learning often focuses on ensuring that the policy consistently exhibits capable, goal-driven behavior. However, not every capable behavior is the intended behavior. Goal misgeneralization can occur when the policy generalizes capably with respect to a "proxy goal" whose optimal behavior correlates with the intended goal on the training distribution, but not out of distribution. Though the intended goal would be ambiguous if the two goals were perfectly correlated in training, we show progress can be made if the goals are only nearly ambiguous, with the training distribution containing a small proportion of disambiguating levels. We observe that the training signal from disambiguating levels could be amplified by regret-based prioritization. We formally show that approximately optimal policies on maximal-regret levels avoid the harmful effects of goal misgeneralization, which may exist without this prioritization. Empirically, we find that current regret-based Unsupervised Environment Design (UED) methods can mitigate the effects of goal misgeneralization, though they do not always entirely eliminate it. Our theoretical and empirical results show that, as UED methods improve, they could further mitigate goal misgeneralization in practice.
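As a rough illustration of the regret-based level prioritization idea described in the abstract, the sketch below samples training levels in proportion to their estimated regret, so that rare disambiguating levels are replayed far more often than their share of the training distribution. It is a toy sketch with assumed placeholder functions (estimated_regret, policy_return), not the UED method evaluated in the paper.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): regret-based level
# prioritization in a UED-style curriculum. Levels with higher estimated
# regret (gap between best attainable return and the current policy's return)
# are replayed more often, amplifying rare disambiguating levels.

rng = np.random.default_rng(0)

def policy_return(policy, level):
    """Placeholder for a rollout of the current policy on the level."""
    return rng.uniform(0.0, level["best_return"])

def estimated_regret(level, policy):
    """Placeholder regret estimate: best attainable return minus policy return."""
    return level["best_return"] - policy_return(policy, level)

def sample_levels_by_regret(levels, policy, batch_size, temperature=1.0):
    regrets = np.array([estimated_regret(l, policy) for l in levels])
    # Softmax prioritization: high-regret (e.g. disambiguating) levels are
    # sampled far more often than their proportion in the level pool.
    probs = np.exp(regrets / temperature)
    probs /= probs.sum()
    idx = rng.choice(len(levels), size=batch_size, p=probs)
    return [levels[i] for i in idx]

# 95 "ambiguous" levels plus 5 rare disambiguating levels with higher best return.
levels = [{"best_return": 1.0} for _ in range(95)] + \
         [{"best_return": 2.0} for _ in range(5)]
batch = sample_levels_by_regret(levels, policy=None, batch_size=8)
```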
- Algorithms for Caching and MTS with reduced number of predictions. Karim Abdel Sadek and Marek Elias. International Conference on Learning Representations (ICLR), 2024.
ML-augmented algorithms utilize predictions to achieve performance beyond their worst-case bounds. Producing these predictions might be a costly operation; this motivated Im et al. (2022) to introduce the study of algorithms which use predictions parsimoniously. We design parsimonious algorithms for caching and MTS with action predictions, proposed by Antoniadis et al. (2023), focusing on the parameters of consistency (performance with perfect predictions) and smoothness (dependence of their performance on the prediction error). Our algorithm for caching is 1-consistent, robust, and its smoothness deteriorates with the decreasing number of available predictions. We propose an algorithm for general MTS whose consistency and smoothness both scale linearly with the decreasing number of predictions. Without the restriction on the number of available predictions, both algorithms match the earlier guarantees achieved by Antoniadis et al. (2023).
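To make the setting concrete, here is a minimal, illustrative cache that consults an eviction "action prediction" only while a prediction budget remains and otherwise falls back to LRU. The class name, the budget interface, and the LRU fallback are assumptions chosen for illustration; this is not the paper's algorithm and carries none of its consistency, smoothness, or robustness guarantees.

```python
from collections import OrderedDict

# Sketch of prediction-augmented caching with a limited prediction budget:
# follow the predicted eviction while predictions are available, otherwise
# fall back to a classical (LRU) rule.

class PredictionAugmentedCache:
    def __init__(self, capacity, prediction_budget, predictor=None):
        self.capacity = capacity
        self.budget = prediction_budget   # how many predictions we may still request
        self.predictor = predictor        # callable: list of cached pages -> page to evict
        self.cache = OrderedDict()        # insertion/recency order for the LRU fallback

    def access(self, page):
        if page in self.cache:            # hit: refresh recency
            self.cache.move_to_end(page)
            return True
        if len(self.cache) >= self.capacity:
            self._evict()
        self.cache[page] = True
        return False                      # miss

    def _evict(self):
        victim = None
        if self.predictor is not None and self.budget > 0:
            self.budget -= 1
            victim = self.predictor(list(self.cache))
        if victim is None or victim not in self.cache:
            self.cache.popitem(last=False)   # LRU fallback when out of predictions
        else:
            del self.cache[victim]

# Example usage with a trivial (and deliberately naive) predictor.
cache = PredictionAugmentedCache(capacity=3, prediction_budget=2,
                                 predictor=lambda pages: pages[0])
for p in [1, 2, 3, 4, 2, 1, 5]:
    cache.access(p)
```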
- Dynamic Vocabulary Pruning in Early-Exit LLMs. Jort Vincenti, Karim Abdel Sadek, Joan Velja, and 2 more authors. ENLSP Workshop, NeurIPS, 2024.
Increasing the size of large language models (LLMs) has been shown to lead to better performance. However, this comes at the cost of slower and more expensive inference. Early-exiting is a promising approach for improving the efficiency of LLM inference by enabling next token prediction at intermediate layers. Yet, the large vocabulary size in modern LLMs makes the confidence estimation required for exit decisions computationally expensive, diminishing the efficiency gains. To address this, we propose dynamically pruning the vocabulary at test time for each token. Specifically, the vocabulary is pruned at one of the initial layers, and the smaller vocabulary is then used throughout the rest of the forward pass. Our experiments demonstrate that such post-hoc dynamic vocabulary pruning improves the efficiency of confidence estimation in early-exit LLMs while maintaining competitive performance.
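A toy numpy sketch of the general idea, assuming a stand-in unembedding matrix and random hidden states rather than a real early-exit LLM: the vocabulary is pruned once at an early layer by keeping the top-k candidate tokens, and later exits compute confidence over only that small set instead of the full vocabulary.

```python
import numpy as np

# Toy sketch (not the authors' implementation): prune the vocabulary at an
# initial layer, then reuse the pruned set for confidence estimation at later
# exits, reducing the per-exit cost from O(V) to O(k).

rng = np.random.default_rng(0)
vocab_size, d_model, k = 50_000, 64, 128
W = rng.standard_normal((d_model, vocab_size))   # stand-in unembedding matrix

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def prune_vocabulary(hidden, k):
    """At an early exit layer, keep only the ids of the top-k scoring tokens."""
    logits = hidden @ W
    return np.argsort(logits)[-k:]

def exit_confidence(hidden, kept_ids):
    """At a later exit, score only the pruned vocabulary."""
    logits = hidden @ W[:, kept_ids]
    probs = softmax(logits)
    return kept_ids[np.argmax(probs)], np.max(probs)

early_hidden = rng.standard_normal(d_model)   # hidden state at the pruning layer
later_hidden = rng.standard_normal(d_model)   # hidden state at a later exit
kept = prune_vocabulary(early_hidden, k)
token_id, conf = exit_confidence(later_hidden, kept)
print(token_id, conf >= 0.9)                  # exit early if confidence clears a threshold
```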
- "Explaining RL decisions with trajectories": A Reproducibility Study. Karim Abdel Sadek, Matteo Nulli, Joan Velja, and 1 more author. Transactions on Machine Learning Research (TMLR), 2024.
This work investigates the reproducibility of the paper "Explaining RL decisions with trajectories" by Deshmukh et al. (2023). The original paper introduces a novel approach in explainable reinforcement learning based on attributing the decisions of an agent to specific clusters of trajectories encountered during training. We verify the main claims from the paper, which state that (i) removing trajectories induces a lower initial state value, (ii) clusters present high-level behaviours, (iii) distant trajectories influence the decision of an agent, and (iv) humans correctly identify the trajectories attributed to the decision of the agent. We recover the environments used by the authors based on the partial original code they provided for one of the environments. While we confirm that (i), (ii), and (iii) partially hold, we extend the largely qualitative experiments from the authors by introducing a quantitative metric to further support (iii), and new experiments and visual results for (i). Moreover, we investigate the use of different clustering algorithms and encoder architectures to further support (ii). We could not support (iv), given the limited extent of the original experiments. We conclude that, while some of the claims can be supported, further investigations and experiments could be of interest. We recognize the novelty of the work from the authors and hope that our work paves the way for clearer and more transparent approaches.
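For readers unfamiliar with the setting, the sketch below illustrates the general trajectory-attribution pipeline being reproduced: encode trajectories, cluster the embeddings, and score how strongly each cluster is associated with a given decision. The mean-pooling encoder and the similarity-based attribution rule are simplified stand-ins, not the method of Deshmukh et al. (2023) or the code used in the reproduction.

```python
import numpy as np
from sklearn.cluster import KMeans

# Simplified stand-in for a trajectory-attribution pipeline:
# 1) embed trajectories, 2) cluster the embeddings, 3) rank clusters by how
# strongly they relate to the state whose decision we want to explain.

rng = np.random.default_rng(0)

def encode_trajectory(traj):
    """Stand-in encoder: mean-pool per-step features into one embedding."""
    return traj.mean(axis=0)

# 100 synthetic trajectories of 20 steps with 8 features each.
trajectories = [rng.standard_normal((20, 8)) for _ in range(100)]
embeddings = np.stack([encode_trajectory(t) for t in trajectories])

clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit(embeddings)

def attribution_scores(state_embedding, cluster_centers):
    """Toy attribution rule: closeness of the state embedding to each centroid
    (higher score = more attributed responsibility in this simplified sketch)."""
    dists = np.linalg.norm(cluster_centers - state_embedding, axis=1)
    return 1.0 / (1e-8 + dists)

state = rng.standard_normal(8)
scores = attribution_scores(state, clusters.cluster_centers_)
print(np.argsort(scores)[::-1])   # clusters ranked by attributed responsibility
```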