Automatic analysis of human lower limb ultrasonography images
DeepACSA is an open-source tool to evaluate the anatomical cross-sectional area of muscles in ultrasound images using deep learning.
More information about the installation and usage of DeepACSA can be found in the online documentation, along with guidelines for contributing, issues, and bug reports.
Our trained models, training data, an executable, and example files can be accessed at .
If you find this work useful, please remember to cite the corresponding paper, where more information about the model architecture and performance can be found as well.
To quickly start DeepACSA, either open the executable or type
python -m Deep_ACSA
in your prompt once the package has been installed locally with
pip install DeepACSA==0.3.1
inside an activated DeepACSA environment, created for example with
conda create -n DeepACSA python=3.9
conda activate DeepACSA
Irrespective of how the software was started, the GUI should open, ready to be used.
To install DeepACSA from source, clone the repository with
git clone https://github.com/PaulRitsche/DeepACSA.git
then create the DeepACSA0.3.2 environment, e.g. with
conda env create -f environment.yml
in the root directory containing the environment.yml file.
Subsequently, install the package locally with
python -m pip install -e .
Then run the module or start the GUI with
python -m DeepACSA
or
cd DeepACSA
python deep_acsa_gui.py
With version 0.3.2, we included new models for the
- patellar tendon (taken from Guzzi et al. 2026)
- vastus medialis (taken from Tayfur et al. 2025)
Below you can find some overview tables. All models and the newest installer can be found here.
We provide a model (a UNet 3+ architecture) for the automatic segmentation of the patellar tendon anatomical cross-sectional area (ACSA) at 25%, 50%, and 75% of tendon length in healthy subjects. We evaluated two model architectures for patellar tendon segmentation: UNet-VGG16 and UNet 3+. Their performance was assessed by comparing automated predictions with manual segmentations. Overall, both models demonstrated good agreement with manual analysis, with UNet 3+ showing the most consistent performance. Detailed methodology and results are reported in our publication Guzzi et al. 2026.
We provide a model for the automatic segmentation of the vastus medialis anatomical cross-sectional area (ACSA) in healthy participants as well as in participants with ACL injuries. A UNet-VGG16 model was evaluated and compared to manual analysis. Comparability calculations and detailed methodology can be found in Tayfur et al. 2025.
In collaboration with the ORB Michigan, we developed models for the automatic segmentation of the biceps femoris. The dataset consisted of approximately 900 images from around 150 participants. Participants included youth and adult soccer players, adult endurance runners, adult track and field athletes, as well as adults with a recent ACL tear (in total 30% women). Images were captured across different muscle regions, including 33%, 50% and 66% of muscle length. We compared the performance of different models to manual analysis of the images. We used similar training procedures as described in our DeepACSA paper; however, we evaluated the models using 5-fold cross-validation to check for overfitting. We provide the model with the highest IoU scores for ACSA segmentation. We compared the model architectures VGG16-UNet, UNet 2+ and UNet 3+. Below we have outlined the analysis results, and the trained models can be found here.
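The IoU (intersection over union) score used to rank the models above can be sketched in a few lines of NumPy. This is a generic illustration of the metric, not DeepACSA's internal evaluation code; the function name `iou` and the toy masks are ours.

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over Union between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, truth).sum() / union

# Toy example: two partially overlapping 6x6 squares on a 10x10 grid
a = np.zeros((10, 10)); a[2:8, 2:8] = 1    # 36 foreground pixels
b = np.zeros((10, 10)); b[4:10, 4:10] = 1  # 36 foreground pixels
print(round(iou(a, b), 3))  # intersection 16 px, union 56 px -> 0.286
```

An IoU of 1.0 means the automated and manual masks are identical; values closer to 0 indicate poor overlap.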
Table 1. Comparison of model architectures throughout validation folds.
Table 2. Comparison of model architectures to manual evaluation on external test sets. all = all test sets; 1/2/3 = test set 1/2/3 only; p = panoramic; s = single image; 1+2 = test sets 1 and 2 only, i.e. without device-2 images (fewer images in training set); rm = with visual inspection; n = number of images.
DeepACSA workflow. a) Original ultrasound image of the m. rectus femoris (RF) at 50% of femur length that serves as input for the model. b) Detailed U-net CNN architecture with a VGG16 encoder (left path). c) Model prediction of muscle area following post-processing (shown as a binary image).
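The final step of the workflow, turning the post-processed binary prediction into an ACSA value, amounts to counting foreground pixels and applying the scanner calibration. The sketch below illustrates that idea only; the function name `acsa_cm2` and the calibration handling are our assumptions, not DeepACSA's actual API.

```python
import numpy as np

def acsa_cm2(binary_mask: np.ndarray, px_per_cm: float) -> float:
    """Area in cm^2 from a binary mask, given the scanner
    calibration in pixels per centimetre."""
    return binary_mask.astype(bool).sum() / px_per_cm ** 2

# Toy mask standing in for a post-processed model prediction
mask = np.zeros((100, 100), dtype=np.uint8)
mask[10:60, 10:90] = 1                 # 50 x 80 = 4000 foreground pixels
print(acsa_cm2(mask, px_per_cm=20.0))  # 4000 / 20^2 = 10.0 cm^2
```

The calibration factor (pixels per centimetre) is read from the ultrasound image's scale bar or device metadata; squaring it converts a pixel count into a physical area.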



