Publications

My publications and preprints are listed below.

For a complete list, see my Google Scholar profile.

2025

  1. From Data to Rewards: a Bilevel Optimization Perspective on Maximum Likelihood Estimation
    Abdelhakim Benechehab, Gabriel Singer, Corentin Léger, and 5 more authors
    Preprint, Oct 2025
  2. In-Context Meta-Learning with Large Language Models for Automated Model and Hyperparameter Selection
    Youssef Attia El Hili, Albert Thomas, Abdelhakim Benechehab, and 3 more authors
    NeurIPS Workshop LLM-eval, Sep 2025
  3. AdaPTS: Adapting Univariate Foundation Models to Probabilistic Multivariate Time Series Forecasting
    Abdelhakim Benechehab, Vasilii Feofanov, Giuseppe Paolo, and 3 more authors
    ICML, May 2025
  4. Zero-shot Model-based Reinforcement Learning using Large Language Models
    Abdelhakim Benechehab, Youssef Attia El Hili, Ambroise Odonnat, and 6 more authors
    ICLR, Jan 2025

2024

  1. Large Language Models as Markov Chains
    Oussama Zekri, Ambroise Odonnat, Abdelhakim Benechehab, and 3 more authors
    Preprint, Oct 2024
  2. Can LLMs predict the convergence of Stochastic Gradient Descent?
    Oussama Zekri, Abdelhakim Benechehab, and Ievgen Redko
    ICML Workshop ICL, Jun 2024
  3. A Study of the Weighted Multi-step Loss Impact on the Predictive Error and the Return in MBRL
    Abdelhakim Benechehab, Albert Thomas, Giuseppe Paolo, and 2 more authors
    RLC Workshop ICBINB, Jun 2024
  4. Fair Model-Based Reinforcement Learning Comparisons with Explicit and Consistent Update Frequency
    Albert Thomas, Abdelhakim Benechehab, Giuseppe Paolo, and 1 more author
    ICLR Blogpost, Feb 2024

2023

  1. Multi-timestep models for Model-based Reinforcement Learning
    Abdelhakim Benechehab, Giuseppe Paolo, Albert Thomas, and 2 more authors
    Preprint, Feb 2023

2022

  1. Deep autoregressive density nets vs neural ensembles for model-based offline reinforcement learning
    Abdelhakim Benechehab, Albert Thomas, and Balázs Kégl
    Preprint, Feb 2022