Overview

Amputees walking on a powered prosthesis can navigate more difficult terrain (e.g., ramps, stairs) and can expend less metabolic energy than when walking on passive devices. However, existing methods of initiating transitions between ambulation modes (level-ground walking, slopes, stairs) are unintuitive and impose a cognitive burden on the user.

Ideally, a safe, automatic, seamless system would transition the prosthesis between modes by recognizing the user's intent in real time.

My high-level goal for this project was to develop a robust, modular, user-friendly, and adaptive intent recognition system flexible enough to be used with any prosthetic leg in the lab.

For this project, I learned about and utilized the following:

Machine Learning

  • Supervised Linear Classifiers
  • Principal Component Analysis (PCA)
  • Uncorrelated Linear Discriminant Analysis (ULDA)

Data Analysis

  • Creating custom Python packages to clean, organize, and analyze experimental data and to train classifier models
  • Data Visualization
  • Data Analysis Pipelines

Professional Skills

  • Project Planning
  • Interdisciplinary Collaboration
  • Creative Problem-Solving

Data Collection

To acquire the data used to train the lower-limb intent recognition classifiers, I wrote a real-time Python script that extracted features from a sliding window of incoming sensor data (see the sketch below). This script ran for the duration of each experiment while a subject ambulated on the prosthetic leg through multiple trials of standing, shuffling, level-ground walking, stair ascent/descent, and ramp ascent/descent, with the leg manually transitioned between modes via a mobile app. The raw sensor data, extracted features, and other metadata were saved throughout the session.
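Below is a minimal sketch of that sliding-window extraction, assuming a generic per-sample iterator; the window length, increment, and the specific statistics (per-channel mean, standard deviation, min, and max) are illustrative stand-ins, not the lab's exact feature set.

    import numpy as np
    from collections import deque

    WINDOW_SIZE = 250  # samples per sliding window (illustrative value)
    INCREMENT = 50     # samples between consecutive extractions (illustrative)

    def extract_features(window: np.ndarray) -> np.ndarray:
        """Compute per-channel statistics over one window.

        window: (samples, channels) array of raw sensor readings.
        Returns a flat feature vector (mean, std, min, max per channel).
        """
        return np.concatenate([
            window.mean(axis=0),
            window.std(axis=0),
            window.min(axis=0),
            window.max(axis=0),
        ])

    def stream_features(sample_source):
        """Yield a feature vector every INCREMENT samples from an iterator."""
        buffer = deque(maxlen=WINDOW_SIZE)
        for i, sample in enumerate(sample_source):
            buffer.append(sample)
            if len(buffer) == WINDOW_SIZE and i % INCREMENT == 0:
                yield extract_features(np.asarray(buffer))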

Data Cleaning/Organization

The collected features and metadata were saved to binary files that were unreadable without custom MATLAB MEX files or Python packages. I wrote my own Python functions to extract the experiment data without needing to satisfy all of those packages' dependencies. Once the information was extracted from the experiment files, I ran it through a parsing pipeline I developed in Python, which arranged the metadata, features, and labels into CSV files, giving other lab members readable, portable, and organized data.
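As an illustration of the extraction step, here is a hedged sketch that assumes a simple fixed-size binary record layout; RECORD_FMT and the field names are hypothetical, and the lab's actual files required more involved parsing.

    import csv
    import struct

    RECORD_FMT = "<d i 8f"  # hypothetical layout: timestamp, mode label, 8 features
    RECORD_SIZE = struct.calcsize(RECORD_FMT)

    def parse_experiment_file(path: str):
        """Iterate fixed-size records from a raw experiment file."""
        with open(path, "rb") as f:
            while chunk := f.read(RECORD_SIZE):
                if len(chunk) < RECORD_SIZE:
                    break  # drop a truncated trailing record
                yield struct.unpack(RECORD_FMT, chunk)

    def to_csv(bin_path: str, csv_path: str) -> None:
        """Flatten one binary experiment file into a readable CSV."""
        header = ["timestamp", "mode"] + [f"feat_{i}" for i in range(8)]
        with open(csv_path, "w", newline="") as out:
            writer = csv.writer(out)
            writer.writerow(header)
            writer.writerows(parse_experiment_file(bin_path))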

Classifier Model Training

Depending on the classifier being trained, I performed PCA or ULDA to reduce the dimensionality of the input features. The manual mode transitions from the mobile app described in the Data Collection section served to label the data. A training script then calculated the means, covariances, weights, and offsets for each classifier model, and each trained model was saved to a corresponding CSV file.
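The sketch below shows one standard way to compute those quantities for a linear discriminant classifier with a pooled covariance; the regularization term and the treatment of priors are my assumptions rather than the project's exact formulation.

    import numpy as np

    def train_lda(X: np.ndarray, y: np.ndarray):
        """Train a linear discriminant classifier.

        X: (n_samples, n_features) feature matrix after PCA/ULDA reduction.
        y: (n_samples,) integer mode labels.
        Returns class statistics plus weights W and offsets b such that the
        predicted class is argmax_k (W[k] @ x + b[k]).
        """
        classes = np.unique(y)
        means = np.array([X[y == k].mean(axis=0) for k in classes])
        # Pooled within-class covariance, lightly regularized for stability.
        cov = sum((np.sum(y == k) - 1) * np.cov(X[y == k], rowvar=False)
                  for k in classes) / (len(y) - len(classes))
        cov += 1e-6 * np.eye(cov.shape[0])
        priors = np.array([np.mean(y == k) for k in classes])
        inv_cov = np.linalg.inv(cov)
        W = means @ inv_cov  # (n_classes, n_features)
        b = -0.5 * np.einsum("kf,kf->k", W, means) + np.log(priors)
        return classes, means, cov, W, b

    # Usage (after dimensionality reduction):
    #   classes, means, cov, W, b = train_lda(X_reduced, y)
    #   np.savetxt("lda_weights.csv", W, delimiter=",")
    #   np.savetxt("lda_offsets.csv", b, delimiter=",")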

Real-Time Classification

I added the trained model CSVs for the current user to the Linux-based embedded controller housed on the prosthetic leg. The same script that extracted features in real time then used the trained models and incoming data to predict the upcoming mode of the prosthesis.
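Prediction with the trained linear models then reduces to scoring each mode's discriminant and taking the argmax, roughly as below; the file names and mode list are hypothetical placeholders.

    import numpy as np

    # Hypothetical per-user model files copied onto the embedded controller.
    W = np.loadtxt("lda_weights.csv", delimiter=",")
    b = np.loadtxt("lda_offsets.csv", delimiter=",")
    MODES = ["stand", "walk", "stair_up", "stair_down", "ramp_up", "ramp_down"]

    def predict_mode(features: np.ndarray) -> str:
        """Score each mode's linear discriminant and return the most likely."""
        scores = W @ features + b
        return MODES[int(np.argmax(scores))]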

Adaptation

In addition to the predictive algorithm running in real time, a second algorithm looked back at the previous stride to determine the activity that had actually occurred. It then used this estimate of the correct activity to update the corresponding classifier models point-wise. Through this method, the predictive models could adapt to the way the current subject was ambulating: I could start with a general model trained across multiple subjects and, as more data was added, the classifiers would tailor themselves to the current subject.
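One common way to realize this kind of point-wise adaptation is an exponentially weighted update of the class statistics, followed by recomputing the discriminant parameters. The sketch below assumes that scheme with an illustrative adaptation rate; the project's exact update rule may have differed.

    import numpy as np

    ALPHA = 0.05  # illustrative adaptation rate

    def adapt_model(means, cov, priors, x, k, alpha=ALPHA):
        """Nudge class k's statistics toward a newly labeled feature vector x.

        The back-estimation algorithm supplies (x, k) after each stride, so
        the model gradually drifts toward the current subject's gait.
        """
        diff = x - means[k]
        means[k] = means[k] + alpha * diff
        cov = (1 - alpha) * cov + alpha * np.outer(diff, diff)
        # Recompute the linear discriminant parameters from updated stats.
        inv_cov = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
        W = means @ inv_cov
        b = -0.5 * np.einsum("kf,kf->k", W, means) + np.log(priors)
        return means, cov, W, b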
