Abdelhakim Benechehab

Ph.D. student @ Huawei Noah's Ark Lab and EURECOM, a Sorbonne Université graduate school.


📫 Paris, France 🇫🇷

Hey, thanks for stopping by! 👋

I am Abdelhakim Benechehab, a second-year Ph.D. student at Huawei Noah’s Ark Lab and EURECOM, working on model-based reinforcement learning. I am jointly supervised by Giuseppe Paolo and Maurizio Filippone.

In my research, I explore methods to improve dynamics models in the context of model-based reinforcement learning. This includes developing models that remain accurate over long planning horizons and that are aware of their own errors through uncertainty estimation. I am also interested in in-context learning and large language models, particularly their applications to reinforcement learning and dynamics modeling.

Previously, I earned my master’s degree in 2021 from the Mathematics, Vision, Learning (MVA) program at ENS Paris-Saclay. I also hold an engineering degree in mathematics and computer science from École des Mines de Saint-Étienne.

Besides my Ph.D. work, I am an active member of the Moroccan NGO Math&Maroc, which promotes science and mathematics in my native country, Morocco 🇲🇦. As part of this endeavor, I organize and mentor at the ThinkAI Hackathon, and I used to host a bi-weekly podcast discussing the latest AI news in Moroccan dialect.

In my spare time, I enjoy sports (primarily volleyball and bouldering), traveling (30+ countries and counting), and learning new things (currently fishing 🎣 and Italian 🇮🇹).

news

Mar 05, 2025 🥳 1 workshop paper @ ICLR 2025: AdaPTS: Adapting Univariate Foundation Models to Probabilistic Multivariate Time Series Forecasting.
Feb 23, 2025 🎤 Invited talk @ KAUST on Adapting Foundation Models.
Feb 17, 2025 📑 New preprint and code: AdaPTS.
Jan 22, 2025 🥳 1 paper @ ICLR 2025: Zero-shot Model-based Reinforcement Learning using Large Language Models.
Oct 15, 2024 📑 New preprint and code: DICL.

selected publications

  1. AdaPTS: Adapting Univariate Foundation Models to Probabilistic Multivariate Time Series Forecasting
    Abdelhakim Benechehab, Vasilii Feofanov, Giuseppe Paolo, and 3 more authors
    ICLR 2025 Workshop SCOPE, Mar 2025
  2. Zero-shot Model-based Reinforcement Learning using Large Language Models
    Abdelhakim Benechehab, Youssef Attia El Hili, Ambroise Odonnat, and 6 more authors
    The Thirteenth International Conference on Learning Representations (ICLR 2025), Jan 2025
  3. Large Language Models as Markov Chains
    Oussama Zekri, Ambroise Odonnat, Abdelhakim Benechehab, and 3 more authors
    Preprint, Oct 2024
  4. Can LLMs predict the convergence of Stochastic Gradient Descent?
    Oussama Zekri, Abdelhakim Benechehab, and Ievgen Redko
    ICML 2024 Workshop ICL, Jun 2024
  5. A Study of the Weighted Multi-step Loss Impact on the Predictive Error and the Return in MBRL
    Abdelhakim Benechehab, Albert Thomas, Giuseppe Paolo, and 2 more authors
    RLC 2024 Workshop ICBINB, Jun 2024