I am a Postdoctoral Scholar at the Stanford AI Lab, where I work with Emma Brunskill on Offline Reinforcement Learning and Deep RL algorithms. I am particularly interested in using Offline RL as a driving force for practical LLM alignment through RL from human feedback.
I completed my PhD at Inria, where my research focused on sample-efficient Deep RL algorithms for control, exploration, and safety. Before that, I earned two MSc degrees, one from DTU and one from École Centrale.
In industry, I have worked as an ML Engineer at iAdvize, specializing in NLP, and gained experience at three startups in Denmark in roles spanning NLP, Computer Vision, and Speech Processing.
I am honored to serve as an Associate Program Chair for the 40th International Conference on Machine Learning (ICML), which will be held in Hawaii.
Postdoc in Computer Science, 2022
Stanford University (Emma Brunskill lab), CA, USA
PhD in Computer Science, 2021
Inria (SequeL team), Lille, FR
MSc in Computer Science, 2017
Technical University of Denmark, Copenhagen, DK
MSc in General Engineering, 2017
École Centrale, Nantes, FR
A Reinforcement Learning Library for Research and Education (PyTorch)
AGAC: Adversarially Guided Actor-Critic (PyTorch & TensorFlow)
AVEC: Actor with Variance Estimated Critic (TensorFlow)
Materials for the Reinforcement Learning Summer School 2019: Bandits, RL & Deep RL (PyTorch)
Instructors: Alessandro Lazaric, Matteo Pirotta
Instructors: Felix Berkenkamp, Tristan Cazenave, Ludovic Denoyer, Gabriel Dulac-Arnold, Audrey Durand, Vincent François-Lavet, Matteo Hessel, Emilie Kaufmann, Marc Lanctot, Max Lapan, Alessandro Lazaric, Odalric-Ambrym Maillard, Jérémie Mary, Gerhard Neumann, Guillaume Obozinski, Olivier Pietquin, Bilal Piot, Matteo Pirotta, Bruno Scherrer, Florian Strub, Eleni Vasilaki, Oriol Vinyals
Projects, Summer Schools, etc.
2023 - Hawaii, USA
2020 - 2023
2022 - Milano, Italy
Code for RLSS | 2019 - Lille, France
2019 - London, UK
Exhibition pictures | 2019 - Lille, France