This repository contains the official implementation of the paper "Learning of Population Dynamics: Inverse Optimization Meets JKO Scheme", accepted at ICLR 2026. The goal of this work is to learn population-level dynamics from snapshot observations using the theory of Wasserstein gradient flows and optimal transport.
This paper introduces iJKOnet, an inverse-optimization approach to the JKO scheme for learning population dynamics from snapshot data.
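The JKO scheme discretizes a Wasserstein gradient flow in time: each step solves rho_{k+1} = argmin_rho F(rho) + W2^2(rho, rho_k) / (2 tau). Below is a minimal, self-contained sketch of this objective for 1-D particle populations (illustrative only, not this repository's implementation, which handles higher dimensions with neural parameterizations; in 1-D the squared Wasserstein-2 distance between equal-size samples reduces to sorting both samples and matching them in order):

```python
def w2_squared_1d(xs, ys):
    """Squared 2-Wasserstein distance between two equal-size 1-D samples.
    In 1-D the optimal coupling is the monotone rearrangement: sort both
    samples and pair them in order."""
    xs, ys = sorted(xs), sorted(ys)
    return sum((a - b) ** 2 for a, b in zip(xs, ys)) / len(xs)

def jko_objective(candidate, previous, tau, potential):
    """One JKO-step objective: F(rho) + W2^2(rho, rho_k) / (2 * tau),
    with F taken as the mean potential energy of the particles."""
    energy = sum(potential(x) for x in candidate) / len(candidate)
    return energy + w2_squared_1d(candidate, previous) / (2.0 * tau)

# Quadratic potential V(x) = x^2 / 2; its gradient flow contracts toward 0.
V = lambda x: 0.5 * x * x
prev = [-2.0, -1.0, 1.0, 2.0]
tau = 0.5

# A crude explicit step toward the minimizer: move each particle by -tau * V'(x).
step = [x - tau * x for x in prev]  # V'(x) = x

print(jko_objective(prev, prev, tau, V))  # cost of staying put
print(jko_objective(step, prev, tau, V))  # the gradient step lowers the objective
```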
├── configs/ # training configuration files
│ ├── config-base.yaml
│ └── config-method-*.yaml
│
├── data/ # datasets
├── models/ # JKOnet, JKOnet*, iJKOnet implementations
├── networks/ # neural network architectures
│
├── notebooks/ # tutorial notebooks
│
├── scripts/ # experiment scripts
│ ├── bash/ # generated bash scripts
│ ├── generate_sc_w_lo.py
│ ├── generate_sc_wo_lo.py
│ └── optuna_search.sh
│
├── utils/ # helper utilities
│ ├── dataset/
│ ├── entropy_estimation/
│ └── evaluation/
│
├── train.py # main training entrypoint
└── optuna_search.py # hyperparameter search

Install the necessary packages for this repository by creating the Anaconda environment:
git clone https://github.com/.../iJKOnet.git
cd iJKOnet
conda env create -f environment.yml
conda activate ijkonet # Replace with actual environment name if different

The notebooks folder contains example notebooks demonstrating how to generate data and train models. In particular, notebooks/iJKOnet_usage.ipynb demonstrates how to:
- generate synthetic 2D data for learning a potential energy function,
- train the available models on this data.
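The data-generation step can be sketched as follows (a hypothetical illustration, not the notebook's actual code): sample an initial 2-D population, then record a snapshot after each step of noisy gradient descent on an assumed quadratic potential.

```python
import random

def make_snapshots(n=256, steps=4, tau=0.25, noise=0.05, seed=0):
    """Record population snapshots of particles following noisy gradient
    descent on the quadratic potential V(x) = ||x||^2 / 2, so grad V(x) = x."""
    rng = random.Random(seed)
    # Initial population: a Gaussian blob away from the potential's minimum.
    pts = [(rng.gauss(2.0, 0.5), rng.gauss(-2.0, 0.5)) for _ in range(n)]
    snaps = [pts]
    for _ in range(steps):
        pts = [(x - tau * x + noise * rng.gauss(0.0, 1.0),
                y - tau * y + noise * rng.gauss(0.0, 1.0)) for x, y in pts]
        snaps.append(pts)
    return snaps  # steps + 1 snapshots, each a list of n 2-D points

snapshots = make_snapshots()
```

A learner is then shown only these unpaired snapshots and must recover the underlying potential; the drift rate and noise level here are arbitrary choices for illustration.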
The scripts generate_sc_w_lo.py and generate_sc_wo_lo.py generate bash scripts (saved in scripts/bash) for running single-cell experiments:
• leave-one-out experiments
• full trajectory experiments (no leave-one-out)
The script scripts/optuna_search.sh is used to launch a SLURM job for hyperparameter search.
For the EB dataset, we followed the preprocessing pipeline described in the $\texttt{JKOnet}^\star$ tutorial.
For the Multi dataset, we used the preprocessing pipeline described in the paper “A Computational Framework for Solving Wasserstein Lagrangian Flows” and implemented in the corresponding repository.
To start training, use the following general pattern:
python train.py \
--solver <solver_name> \
--dataset <dataset_name> \
--config <path_to_base_config> \
--extra_config <path_to_additional_config> \
--K <K> \
--array-tau <tau_or_tau_list> \
--epochs <number_of_epochs> \
    --seed <seed>

If you find this repository useful in your research, please cite:
@inproceedings{
persiianov2026learning,
title={Learning of Population Dynamics: Inverse Optimization Meets {JKO} Scheme},
author={Mikhail Persiianov and Jiawei Chen and Petr Mokrov and Alexander Tyurin and Evgeny Burnaev and Alexander Korotin},
booktitle={The Fourteenth International Conference on Learning Representations},
year={2026},
url={https://openreview.net/forum?id=tVJIKd6CLF}
}

- jkonet-star — our code is primarily based on this repository, with some fixes to the data generation for synthetic experiments;
- mutinfo — the code in utils/entropy_estimation is based on code from this repository;
- POT and ott-jax — optimal transport toolkits;
- optuna — hyperparameter search toolkit;
- comet ML — experiment-tracking and visualization toolkit;
- inkscape — an excellent open-source editor for vector graphics;