
Bridging AI and Ultrasound: My First Machine Learning Project in Injury Recognition

Introduction


As a board-certified RMSK clinician, I’ve spent years refining my skills in diagnostic ultrasound. But as I worked with young clinicians learning ultrasound interpretation, I saw a gap: what if we could leverage AI to assist in identifying musculoskeletal injuries more efficiently?


This project marks a major milestone for me: my first machine learning project. Combining my expertise in sports medicine with AI-driven image analysis, I built a deep learning model capable of automated ultrasound segmentation to support clinicians in recognizing injuries faster and, possibly, with greater accuracy.


The Inspiration: A Research-Driven Approach


My project was heavily inspired by research in deep learning segmentation of musculoskeletal ultrasound images. In particular, I built upon the work published by Francesco Marzola, Nens van Alfen, Jonne Doorduin, and Kristen M. Meiburger in their study:


Deep learning segmentation of transverse musculoskeletal ultrasound images for neuromuscular disease assessment.

(Computers in Biology and Medicine, 2021, https://doi.org/10.1016/j.compbiomed.2021.104623)


Their dataset provided valuable ultrasound images and segmentation masks, which I used to train a custom U-Net deep learning model; for this project I used only the biceps brachii subset. My goal was to take this research and build a functional, deployable AI tool that could support real-world clinical decision-making.


Data Exploration and Machine Learning Approach


Before building the AI model, I conducted a thorough data analysis to understand the characteristics of the dataset. Below are some key visualizations of the dataset:


Figure: Histograms of Age, Length, and Weight with density curves, showing the distribution of key dataset attributes.
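
For readers who want to reproduce plots like these, here is a minimal sketch using pandas, seaborn, and matplotlib. The CSV file name and column names are assumptions, since the exact metadata format in the dataset may differ.

```python
# Minimal sketch of the exploratory plots; file name and column names are
# assumptions standing in for the dataset's actual metadata file.
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

meta = pd.read_csv("subject_metadata.csv")  # assumed columns: Age, Length, Weight

fig, axes = plt.subplots(1, 3, figsize=(15, 4))
for ax, col in zip(axes, ["Age", "Length", "Weight"]):
    sns.histplot(meta[col], kde=True, ax=ax)  # histogram with a density curve
    ax.set_title(f"Distribution of {col}")
plt.tight_layout()
plt.show()
```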

Key Observations:

• The dataset includes a diverse age range, capturing both young and older individuals.

• Body length distribution is skewed, likely representing variations in anatomical structures.

• Weight distribution follows a near-normal pattern but includes a significant range of values.


Model Training Pipeline

1. Data Preprocessing:

• Images were normalized and resized to ensure consistent input.

• Segmentation masks were binarized to create clear delineation between structures.

• Augmentation techniques such as flipping, rotation, and brightness adjustments were applied (a pipeline sketch follows below).
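
Below is a minimal sketch of what such a preprocessing and augmentation pipeline can look like with Albumentations (the same library referenced later via A.Resize and ToTensorV2). The specific probabilities, rotation limit, and normalization values are illustrative assumptions, not the exact training settings.

```python
# Minimal sketch of an Albumentations preprocessing/augmentation pipeline;
# parameter values are illustrative assumptions.
import albumentations as A
from albumentations.pytorch import ToTensorV2

train_transform = A.Compose([
    A.Resize(256, 256),                    # consistent input size
    A.HorizontalFlip(p=0.5),               # flipping
    A.Rotate(limit=15, p=0.5),             # small rotations
    A.RandomBrightnessContrast(p=0.3),     # brightness adjustments
    A.Normalize(mean=(0.0,), std=(1.0,)),  # scale grayscale pixels to [0, 1]
    ToTensorV2(),                          # HWC NumPy array -> CHW torch tensor
])

# Masks are passed alongside the image so they receive the same geometric
# transforms, then binarized (e.g. mask > 0) before training:
# augmented = train_transform(image=image, mask=mask)
```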


2. Deep Learning Model:

• I implemented a U-Net architecture, a widely used segmentation model for medical imaging.

• The model was trained using a combination of cross-entropy loss and the Dice coefficient to optimize segmentation accuracy (a loss sketch follows below).

• Validation and testing were performed to fine-tune hyperparameters and avoid overfitting.
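
As a concrete illustration, here is a minimal PyTorch sketch of a combined cross-entropy (BCE) + Dice loss of the kind described above. The smoothing constant and the equal weighting of the two terms are assumptions.

```python
# Minimal sketch of a combined BCE + Dice loss for binary segmentation;
# smoothing constant and term weighting are assumptions.
import torch
import torch.nn as nn

class BCEDiceLoss(nn.Module):
    def __init__(self, smooth: float = 1.0):
        super().__init__()
        self.bce = nn.BCEWithLogitsLoss()
        self.smooth = smooth

    def forward(self, logits, targets):
        bce = self.bce(logits, targets)
        probs = torch.sigmoid(logits)
        intersection = (probs * targets).sum()
        dice = (2.0 * intersection + self.smooth) / (
            probs.sum() + targets.sum() + self.smooth
        )
        return bce + (1.0 - dice)  # both terms decrease as segmentation improves
```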


Model Performance: How Well Does It Work?


To assess the accuracy of my AI-powered ultrasound segmentation model, I evaluated Intersection over Union (IoU), Precision, Recall, and Dice Score. These metrics give us a quantitative measure of how well the model identifies key anatomical structures in ultrasound images.

• IoU (Intersection over Union): 0.7906. Measures the overlap between the predicted and actual segmentation; higher values indicate more accurate region identification.

• Precision: 0.8765. Indicates how many of the segmented areas were correctly identified as the target structure, reducing false positives.

• Recall: 0.8816. Shows how well the model captures all relevant regions in the ultrasound, minimizing false negatives.

• Dice Score (F1-Score): 0.8699. A balance between Precision and Recall, representing overall segmentation accuracy.
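
For reference, all four metrics can be computed directly from the predicted and expert-labeled binary masks. The helper below is a minimal NumPy sketch, not the exact evaluation code from the project.

```python
# Minimal sketch of IoU, Precision, Recall, and Dice computed from binary masks
# (arrays of 0s and 1s with the same shape).
import numpy as np

def segmentation_metrics(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    tp = np.logical_and(pred == 1, target == 1).sum()  # true positives
    fp = np.logical_and(pred == 1, target == 0).sum()  # false positives
    fn = np.logical_and(pred == 0, target == 1).sum()  # false negatives

    iou = tp / (tp + fp + fn + eps)
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    return {"IoU": iou, "Precision": precision, "Recall": recall, "Dice": dice}
```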

Clinical Implications


These results demonstrate that the model segments ultrasound images with high accuracy, making it a valuable tool for clinicians, especially those new to diagnostic ultrasound. A Dice Score of 0.8699 indicates that the model's segmentations closely match the expert-labeled masks used for evaluation.


By leveraging AI-assisted segmentation, we can reduce interpretation time, improve diagnostic consistency, and help young clinicians gain confidence in their ultrasound evaluations.


Clinical Use Case Example: AI-Assisted Injury Recognition


Imagine a young clinician or a less experienced physical therapist evaluating ultrasound images for musculoskeletal injuries. With an IoU of 0.79 and a Dice Score of 0.86, the AI model can:


• Assist in detecting soft tissue injuries by accurately segmenting tendons, muscles, or ligaments with high overlap to expert-labeled regions.

• Reduce the risk of misidentification, helping clinicians avoid mistaking normal anatomical variation for pathology (e.g., differentiating tendinopathy from healthy tendon tissue).

• Enhance decision-making by providing an AI-assisted second opinion, which can be especially helpful when recognizing subtle changes in tissue integrity.


This model serves as a powerful training tool for young clinicians, helping them build confidence in interpreting ultrasound findings while improving diagnostic accuracy in real-world settings.


🚀 Code & Repository: Try It Yourself!


I have open-sourced this project so that others can explore, improve, and contribute. If you’re interested in AI-powered ultrasound segmentation, check out the full code here:


🔗 GitHub Repository: us-segmentation-api


The walkthrough below breaks the process into three steps, with a caption describing each screenshot:


How to Use the AI-Powered Ultrasound Segmentation Model


Step 1: Upload an Ultrasound Image


A FastAPI interface displaying the /predict/ endpoint for uploading and processing ultrasound images for segmentation

Step 2: AI Processes the Image


A pop-up window showing a selected ultrasound image labeled “BICEPS BRAC LI,” indicating the biceps brachii muscle

Step 3: Review & Interpret the AI Output


Segmentation output, including the original ultrasound image, raw model output, and thresholded segmentation mask, with a high confidence score of 99.88%
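
If you prefer to call the API from a script rather than the interactive docs page, here is a minimal sketch using the requests library. The host/port, the form-field name ("file"), and the JSON response shape are assumptions based on a typical FastAPI file-upload setup, so check the repository for the exact contract.

```python
# Minimal sketch of calling the /predict/ endpoint; host, field name, and
# response format are assumptions.
import requests

url = "http://127.0.0.1:8000/predict/"
with open("biceps_brachii.png", "rb") as f:          # any transverse ultrasound image
    response = requests.post(url, files={"file": f})

print(response.status_code)
print(response.json())  # e.g. segmentation result and a confidence score
```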




How Did We Get These Results?


We achieved high-confidence AI segmentation by following a structured approach in both model training and post-processing. Below is a step-by-step breakdown of how the model’s segmentation improved:


1️⃣ Model Training Improvements

• Dataset & Augmentation: We used 2,734 ultrasound images (2,207 healthy, 527 pathological), ensuring a diverse and well-represented training set.

• Loss Reduction: Over 50 epochs, the loss decreased from 0.5241 → 0.0901, meaning the model became better at learning segmentation boundaries.

• Final Dice Score: 0.9013 on the test set, indicating strong segmentation accuracy.

• Using mps (Metal Performance Shaders): Improved training speed on Mac (a training-loop sketch follows below).
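
The sketch below shows how the mps device selection and the 50-epoch loop fit together. The UNet and UltrasoundDataset classes, the BCEDiceLoss from the earlier sketch, the optimizer, and the learning rate are placeholders standing in for the actual code in the repository.

```python
# Minimal sketch of device selection and the training loop; UNet,
# UltrasoundDataset, and BCEDiceLoss are assumed project-specific classes.
import torch
from torch.utils.data import DataLoader

device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

model = UNet().to(device)                  # assumed custom U-Net class
criterion = BCEDiceLoss()                  # combined BCE + Dice loss (sketched earlier)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # learning rate is an assumption
train_loader = DataLoader(UltrasoundDataset("train"), batch_size=8, shuffle=True)

for epoch in range(50):                    # 50 epochs, as reported above
    model.train()
    running_loss = 0.0
    for images, masks in train_loader:
        images, masks = images.to(device), masks.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), masks)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    print(f"Epoch {epoch + 1:02d}: loss {running_loss / len(train_loader):.4f}")
```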


2️⃣ Model Prediction & Confidence Calculation

Step 1: AI Processes the Image

• Image is preprocessed (A.Resize(256,256) → ToTensorV2()).

• It is fed through the U-Net model, which generates a grayscale probability map (raw segmentation output).

Step 2: Thresholding the Output

• The AI predicts pixel values between 0 and 1.

• We apply a threshold (0.3 or 0.5) to convert this into a binary mask (segmentation result).

Step 3: Confidence Calculation

• Previously: Used np.mean(pred_mask), which diluted confidence.

• Now: Uses np.max(pred_mask) * 100, which correctly reflects segmentation certainty.

• Higher max activation = higher confidence. (A minimal inference sketch tying these steps together follows below.)
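
Putting these three steps together, here is a minimal inference sketch. The trained model, the transform, and the device are assumed to be loaded as described above, and the function name predict is just illustrative.

```python
# Minimal sketch of preprocessing, thresholding, and confidence calculation;
# model, transform, and device are assumed to be loaded elsewhere.
import numpy as np
import torch

def predict(image, model, transform, device, threshold=0.5):
    tensor = transform(image=image)["image"].unsqueeze(0).to(device)  # (1, C, 256, 256)
    with torch.no_grad():
        probs = torch.sigmoid(model(tensor))          # probability map in [0, 1]
    pred_mask = probs.squeeze().cpu().numpy()
    binary_mask = (pred_mask > threshold).astype(np.uint8)  # thresholded segmentation
    confidence = float(np.max(pred_mask)) * 100       # max activation as confidence
    return binary_mask, confidence
```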


3️⃣ Clinical Application & Benefits



1. Automates Ultrasound Analysis

• AI quickly identifies structures in ultrasound images.

• Reduces manual segmentation time for clinicians.


2. Provides Decision Support

• Clinicians can validate AI segmentation before making a diagnosis.

• Helps standardize segmentation across different users.


3. Identifies Pathologies Earlier

• AI can highlight abnormal muscle, tendon, or joint structures.

• Useful for detecting muscle tears, bursitis, or ligament damage.


4. Improves Research & Data Collection

• AI-generated segmentations allow for better tracking of patient progress.

• Facilitates large-scale data analysis for sports medicine studies.


5. Enables Remote Diagnostics

• AI-powered ultrasound can assist in telemedicine.

• Could allow for early triage of injuries before in-person visits.



🔎 Clinical Example: Biceps Brachii Ultrasound


If a sports medicine clinician is evaluating a suspected biceps tendon injury, AI segmentation can:

1. Confirm anatomical structures → Ensure the biceps tendon is clearly visualized.

2. Detect abnormalities → If the segmented region appears irregular, it may indicate tendinopathy or partial tears.

3. Guide interventions → AI insights can assist in dry needling, rehab planning, or surgical referral.
