Gait analysis
This tutorial showcases the high-level functions composing the gait pipeline. Before following along, make sure you have completed the steps in the data preparation tutorial.
In this tutorial, we use two days of data from a participant of the Personalized Parkinson Project to demonstrate the functionalities. Since ParaDigMa
expects contiguous time series, the collected data was stored in two segments each with contiguous timestamps. Per segment, we load the data and perform the following steps:
Data preprocessing
Gait feature extraction
Gait detection
Arm activity feature extraction
Filtering gait
Arm swing quantification
We then combine the output of the different raw data segments for the final step:
Aggregation
Running the complete gait pipeline requires both accelerometer and gyroscope data, although the first three steps can be completed using accelerometer data alone.
[!WARNING] The gait pipeline has been developed on data of the Gait Up Physilog 4, and is currently being validated on the Verily Study Watch. Different sensors and positions on the wrist may affect outcomes.
Load data
Here, we start by loading a single contiguous time series (segment), for which we continue running steps 1-6. Below we show how to run these steps for multiple raw data segments.
We use the internally developed TSDF (documentation) to load and store data [1]. Depending on the file extension of your time series data, examples of other Python functions for loading the data into memory include:
.csv: pandas.read_csv() (documentation)
.json: json.load() (documentation)
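For example, a CSV file with one row per sample could be loaded as follows (the file name below is hypothetical):
import pandas as pd
# Hypothetical file with columns such as time, accelerometer_x, ..., gyroscope_z
df_imu = pd.read_csv('path/to/your/imu_data.csv')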
from pathlib import Path
from paradigma.util import load_tsdf_dataframe
# Set the path to where the prepared data is saved and load the data.
# Note: the test data is stored in TSDF, but you can load your data in your own way
path_to_data = Path('../../example_data')
path_to_prepared_data = path_to_data / 'imu'
raw_data_segment_nr = '0001'
# Load the data from the file
df_imu, metadata_time, metadata_values = load_tsdf_dataframe(path_to_prepared_data, prefix=f'IMU_segment{raw_data_segment_nr}')
df_imu
Step 1: Preprocess data
The single function preprocess_imu_data in the cell below runs all necessary preprocessing steps. It requires the loaded dataframe, a configuration object config specifying parameters used for preprocessing, and a selection of sensors. For the sensors, options include 'accelerometer', 'gyroscope', or 'both'.
The function preprocess_imu_data processes the data as follows:
Resample the data to ensure a uniform sampling rate
Apply a filter to separate the gravitational component from the accelerometer signal (a standalone sketch of this separation follows below)
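The following is a minimal, self-contained sketch of such a gravity separation; it is an illustration under assumed parameters (a 0.3 Hz low-pass cutoff), not ParaDigMa's internal implementation.
import numpy as np
from scipy.signal import butter, sosfiltfilt
fs = 100  # Hz, the target sampling rate used in this tutorial
t = np.arange(0, 10, 1 / fs)
acc_y = 0.98 + 0.10 * np.sin(2 * np.pi * 2 * t)  # static gravity plus a 2 Hz arm movement
# Low-pass filter to isolate the slowly varying gravitational component
sos = butter(4, 0.3, btype='lowpass', fs=fs, output='sos')
acc_y_grav = sosfiltfilt(sos, acc_y)
acc_y_dyn = acc_y - acc_y_grav  # movement component after removing gravity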
from paradigma.config import IMUConfig
from paradigma.preprocessing import preprocess_imu_data
config = IMUConfig()
df_preprocessed = preprocess_imu_data(
df=df_imu,
config=config,
sensor='both',
watch_side='left',
)
print(f"The dataset of {df_preprocessed.shape[0] / config.sampling_frequency} seconds is automatically resampled to {config.sampling_frequency} Hz.")
df_preprocessed.head()
The dataset of 34339.61 seconds is automatically resampled to 100 Hz.
| | time | accelerometer_x | accelerometer_y | accelerometer_z | gyroscope_x | gyroscope_y | gyroscope_z | accelerometer_x_grav | accelerometer_y_grav | accelerometer_z_grav |
---|---|---|---|---|---|---|---|---|---|---|
0 | 0.00 | -0.002324 | -0.001442 | -0.002116 | 0.000000 | 1.402439 | 0.243902 | -0.472317 | -0.377984 | 0.772451 |
1 | 0.01 | -0.000390 | -0.000914 | -0.007396 | 0.432231 | 0.665526 | -0.123434 | -0.472326 | -0.378012 | 0.772464 |
2 | 0.02 | 0.000567 | 0.002474 | -0.005445 | 1.164277 | -0.069584 | -0.307536 | -0.472336 | -0.378040 | 0.772476 |
3 | 0.03 | -0.000425 | 0.002414 | -0.002099 | 1.151432 | -0.554928 | -0.554223 | -0.472346 | -0.378068 | 0.772489 |
4 | 0.04 | -0.002807 | -0.001408 | -0.000218 | 0.657189 | -0.603207 | -0.731570 | -0.472355 | -0.378096 | 0.772502 |
The resulting dataframe shown above contains uniformly distributed timestamps with corresponding accelerometer and gyroscope values. Note that for the accelerometer values, the following notation is used:
accelerometer_x: the accelerometer signal after filtering out the gravitational component
accelerometer_x_grav: the gravitational component of the accelerometer signal
The gravitational component is retained and used to compute gravity-related features for the classification tasks, since gravity is informative of the position of the arm.
Step 2: Extract gait features
With the data uniformly resampled and the gravitational component separated from the accelerometer signal, features can be extracted from the time series data. This step does not require gyroscope data. To extract the features, the pipeline executes the following steps:
Use overlapping windows to group timestamps
Extract temporal features
Use the Fast Fourier Transform to transform the windowed data into the spectral domain
Extract spectral features
Combine both temporal and spectral features into a final dataframe
These steps are encapsulated in extract_gait_features (documentation can be found here).
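As a toy illustration of the spectral step (a standalone sketch, not the pipeline's internal code), a single 6-second window can be transformed with the FFT to inspect its frequency content:
import numpy as np
fs = 100  # Hz
window = np.sin(2 * np.pi * 1.0 * np.arange(0, 6, 1 / fs))  # toy 1 Hz arm swing signal
freqs = np.fft.rfftfreq(window.size, d=1 / fs)
power = np.abs(np.fft.rfft(window)) ** 2
print(f"Dominant frequency: {freqs[np.argmax(power)]} Hz")  # 1.0 Hz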
from paradigma.config import GaitConfig
from paradigma.pipelines.gait_pipeline import extract_gait_features
config = GaitConfig(step='gait')
df_gait = extract_gait_features(
df=df_preprocessed,
config=config
)
print(f"A total of {df_gait.shape[1]-1} features have been extracted from {df_gait.shape[0]} {config.window_length_s}-second windows with {config.window_length_s-config.window_step_length_s} seconds overlap.")
df_gait.head()
A total of 34 features have been extracted from 34334 6-second windows with 5 seconds overlap.
| | time | accelerometer_x_grav_mean | accelerometer_y_grav_mean | accelerometer_z_grav_mean | accelerometer_x_grav_std | accelerometer_y_grav_std | accelerometer_z_grav_std | accelerometer_std_norm | accelerometer_x_power_below_gait | accelerometer_y_power_below_gait | ... | accelerometer_mfcc_3 | accelerometer_mfcc_4 | accelerometer_mfcc_5 | accelerometer_mfcc_6 | accelerometer_mfcc_7 | accelerometer_mfcc_8 | accelerometer_mfcc_9 | accelerometer_mfcc_10 | accelerometer_mfcc_11 | accelerometer_mfcc_12 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 0.0 | -0.472967 | -0.380588 | 0.774287 | 0.000270 | 0.000818 | 0.000574 | 0.003377 | 0.000003 | 1.188086e-06 | ... | -1.101486 | 0.524288 | 0.215990 | 0.429154 | 0.900923 | 1.135918 | 0.673404 | -0.128276 | -0.335655 | -0.060155 |
1 | 1.0 | -0.473001 | -0.380704 | 0.774541 | 0.000235 | 0.000588 | 0.000220 | 0.003194 | 0.000003 | 1.210176e-06 | ... | -0.997314 | 0.633275 | 0.327645 | 0.451613 | 0.972729 | 1.120786 | 0.770134 | -0.115916 | -0.395856 | -0.011206 |
2 | 2.0 | -0.473036 | -0.380563 | 0.774578 | 0.000233 | 0.000619 | 0.000195 | 0.003188 | 0.000002 | 6.693551e-07 | ... | -1.040592 | 0.404720 | 0.268514 | 0.507473 | 0.944706 | 1.016282 | 0.785686 | -0.071433 | -0.414269 | 0.020690 |
3 | 3.0 | -0.472952 | -0.380310 | 0.774660 | 0.000301 | 0.000526 | 0.000326 | 0.003020 | 0.000002 | 6.835856e-07 | ... | -1.075637 | 0.258352 | 0.257234 | 0.506739 | 0.892823 | 0.900388 | 0.706368 | -0.080562 | -0.302595 | 0.054805 |
4 | 4.0 | -0.472692 | -0.380024 | 0.774889 | 0.000468 | 0.000355 | 0.000470 | 0.002869 | 0.000002 | 1.097557e-06 | ... | -1.079496 | 0.264418 | 0.237172 | 0.587941 | 0.936835 | 0.763372 | 0.607845 | -0.159721 | -0.184856 | 0.128150 |
5 rows × 35 columns
Each row in this dataframe corresponds to a single window, with the window length and overlap set in the config object. Note that the time column has a 1-second interval instead of the 10-millisecond interval before, as it now represents the starting time of the window.
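As a quick sanity check (a sketch using the numbers printed above, not part of the pipeline), the number of overlapping windows follows directly from the segment duration, window length, and step size:
import math
total_s = 34339.61  # duration of this raw data segment (from step 1)
window_length_s, window_step_length_s = 6, 1
n_windows = math.floor((total_s - window_length_s) / window_step_length_s) + 1
print(n_windows)  # 34334, matching the number of windows above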
Step 3: Gait detection
For classification, ParaDigMa uses so-called Classifier Packages which contain a classifier, a classification threshold, and a feature scaler as attributes. The classifier is a random forest trained on a dataset of people with PD performing a wide range of activities in free-living conditions: the Parkinson@Home Validation Study. The classification threshold was set to limit the number of false-positive predictions in the original study, i.e., to limit non-gait being predicted as gait. The classification threshold can be changed by setting clf_package.threshold to a different float value. The feature scaler was similarly fitted on the original dataset, ensuring the features are scaled to the ranges the classifier expects, enabling reliable predictions.
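For example, once the classifier package has been loaded (see the next cell), the pre-fitted threshold could be overridden; the value 0.6 below is arbitrary and purely illustrative.
# Hypothetical adjustment: a higher threshold yields fewer false-positive gait windows
clf_package_detection.threshold = 0.6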
from importlib.resources import files
from paradigma.classification import ClassifierPackage
from paradigma.pipelines.gait_pipeline import detect_gait
# Set the path to the classifier package
classifier_package_filename = 'gait_detection_clf_package.pkl'
full_path_to_classifier_package = files('paradigma') / 'assets' / classifier_package_filename
# Load the classifier package
clf_package_detection = ClassifierPackage.load(full_path_to_classifier_package)
# Detecting gait returns the probability of gait for each window, which is concatenated to
# the original dataframe
df_gait['pred_gait_proba'] = detect_gait(
df=df_gait,
clf_package=clf_package_detection
)
n_windows = df_gait.shape[0]
n_predictions_gait = df_gait.loc[df_gait['pred_gait_proba'] >= clf_package_detection.threshold].shape[0]
perc_predictions_gait = round(100 * n_predictions_gait / n_windows, 1)
n_predictions_non_gait = df_gait.loc[df_gait['pred_gait_proba'] < clf_package_detection.threshold].shape[0]
perc_predictions_non_gait = round(100 * n_predictions_non_gait / n_windows, 1)
print(f"Out of {n_windows} windows, {n_predictions_gait} ({perc_predictions_gait}%) were predicted as gait, and {n_predictions_non_gait} ({perc_predictions_non_gait}%) as non-gait.")
# Only the time and the predicted gait probability are shown, but the dataframe also contains
# the extracted features
df_gait[['time', 'pred_gait_proba']].head()
Out of 34334 windows, 2753 (8.0%) were predicted as gait, and 31581 (92.0%) as non-gait.
| | time | pred_gait_proba |
---|---|---|
0 | 0.0 | 0.000023 |
1 | 1.0 | 0.000024 |
2 | 2.0 | 0.000023 |
3 | 3.0 | 0.000023 |
4 | 4.0 | 0.000023 |
Store as TSDF
The predicted probabilities (and optionally other features) can be stored and loaded in TSDF as demonstrated below.
import tsdf
from paradigma.util import write_df_data
# Set 'path_to_data' to the directory where you want to save the data
metadata_time_store = tsdf.TSDFMetadata(metadata_time.get_plain_tsdf_dict_copy(), path_to_data)
metadata_values_store = tsdf.TSDFMetadata(metadata_values.get_plain_tsdf_dict_copy(), path_to_data)
# Select the columns to be saved
metadata_time_store.channels = ['time']
metadata_values_store.channels = ['pred_gait_proba']
# Set the units
metadata_time_store.units = ['Relative seconds']
metadata_values_store.units = ['Unitless']
metadata_time_store.data_type = float
metadata_values_store.data_type = float
# Set the filenames
meta_store_filename = f'segment{raw_data_segment_nr}_meta.json'
values_store_filename = meta_store_filename.replace('_meta.json', '_values.bin')
time_store_filename = meta_store_filename.replace('_meta.json', '_time.bin')
metadata_values_store.file_name = values_store_filename
metadata_time_store.file_name = time_store_filename
write_df_data(metadata_time_store, metadata_values_store, path_to_data, meta_store_filename, df_gait)
df_gait, _, _ = load_tsdf_dataframe(path_to_data, prefix=f'segment{raw_data_segment_nr}')
df_gait.head()
| | time | pred_gait_proba |
---|---|---|
0 | 0.0 | 0.000023 |
1 | 1.0 | 0.000024 |
2 | 2.0 | 0.000023 |
3 | 3.0 | 0.000023 |
4 | 4.0 | 0.000023 |
Once again, the time column indicates the start time of the window. It can therefore be observed that probabilities are predicted for overlapping windows, not for individual timestamps. The function merge_predictions_with_timestamps can be used to retrieve predicted probabilities per timestamp by aggregating the predicted probabilities of overlapping windows. This function is included in the next step.
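Conceptually, each timestamp is covered by up to window_length_s / window_step_length_s overlapping windows. The sketch below (an illustration, not the library's implementation) averages the probabilities of all windows covering each sample:
import numpy as np
fs, window_length_s, window_step_s = 100, 6, 1
window_probas = np.array([0.1, 0.2, 0.9, 0.8])  # toy probabilities of four consecutive windows
n_samples = int(((len(window_probas) - 1) * window_step_s + window_length_s) * fs)
sums, counts = np.zeros(n_samples), np.zeros(n_samples)
for i, p in enumerate(window_probas):
    start = int(i * window_step_s * fs)
    sums[start:start + window_length_s * fs] += p
    counts[start:start + window_length_s * fs] += 1
proba_per_timestamp = sums / counts  # mean over all windows covering each sample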
Step 4: Arm activity feature extraction
The extraction of arm swing features is similar to the extraction of gait features, but we use a different window length and step length (config.window_length_s, config.window_step_length_s) to distinguish between gait segments with and without other arm activities. The following steps are conducted sequentially by extract_arm_activity_features:
Start with the preprocessed data of step 1
Merge the gait predictions into the preprocessed data
Discard predicted non-gait activities
Create windows of the time series data and extract features
First, however, the gait predictions should be merged with the preprocessed time series data, such that individual timestamps have a corresponding probability of gait. The function extract_arm_activity_features expects a time series dataframe of predicted gait.
from paradigma.constants import DataColumns
from paradigma.util import merge_predictions_with_timestamps
# Merge gait predictions into timeseries data
if not any(df_gait[DataColumns.PRED_GAIT_PROBA] >= clf_package_detection.threshold):
raise ValueError("No gait detected in the input data.")
gait_preprocessing_config = GaitConfig(step='gait')
df = merge_predictions_with_timestamps(
df_ts=df_preprocessed,
df_predictions=df_gait,
pred_proba_colname=DataColumns.PRED_GAIT_PROBA,
window_length_s=gait_preprocessing_config.window_length_s,
fs=gait_preprocessing_config.sampling_frequency
)
# Add a column for predicted gait based on a fitted threshold
df[DataColumns.PRED_GAIT] = (df[DataColumns.PRED_GAIT_PROBA] >= clf_package_detection.threshold).astype(int)
# Filter the DataFrame to only include predicted gait (1)
df = df.loc[df[DataColumns.PRED_GAIT]==1].reset_index(drop=True)
from paradigma.pipelines.gait_pipeline import extract_arm_activity_features
config = GaitConfig(step='arm_activity')
df_arm_activity = extract_arm_activity_features(
df=df,
config=config,
)
print(f"A total of {df_arm_activity.shape[1] - 1} features have been extracted from {df_arm_activity.shape[0]} {config.window_length_s} - second windows with {config.window_length_s - config.window_step_length_s} seconds overlap.")
df_arm_activity.head()
A total of 61 features have been extracted from 2749 3-second windows with 2.25 seconds overlap.
| | time | accelerometer_x_grav_mean | accelerometer_y_grav_mean | accelerometer_z_grav_mean | accelerometer_x_grav_std | accelerometer_y_grav_std | accelerometer_z_grav_std | accelerometer_std_norm | accelerometer_x_power_below_gait | accelerometer_y_power_below_gait | ... | gyroscope_mfcc_3 | gyroscope_mfcc_4 | gyroscope_mfcc_5 | gyroscope_mfcc_6 | gyroscope_mfcc_7 | gyroscope_mfcc_8 | gyroscope_mfcc_9 | gyroscope_mfcc_10 | gyroscope_mfcc_11 | gyroscope_mfcc_12 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 1463.00 | -0.941812 | -0.216149 | -0.129170 | 0.031409 | 0.089397 | 0.060771 | 0.166084 | 0.000596 | 0.007746 | ... | -0.555190 | 0.735644 | 0.180382 | 0.044897 | -0.645257 | -0.255383 | 0.121998 | 0.297776 | 0.326170 | 0.348648 |
1 | 1463.75 | -0.933787 | -0.198807 | -0.092710 | 0.045961 | 0.066987 | 0.038606 | 0.363777 | 0.001216 | 0.002593 | ... | -0.722972 | 0.686450 | -0.254451 | -0.282469 | -0.798232 | -0.100043 | 0.028278 | 0.114591 | 0.160311 | 0.372009 |
2 | 1464.50 | -0.882285 | -0.265160 | -0.080937 | 0.094924 | 0.146720 | 0.021218 | 0.362434 | 0.002429 | 0.001315 | ... | -1.134321 | 0.773245 | -0.218279 | -0.430585 | -0.437373 | -0.065236 | 0.014411 | 0.083823 | 0.181666 | 0.079949 |
3 | 1465.25 | -0.794800 | -0.405043 | -0.094178 | 0.126863 | 0.212621 | 0.034948 | 0.363425 | 0.004974 | 0.008407 | ... | -1.154252 | 1.024267 | -0.161531 | -0.217479 | -0.153630 | -0.016550 | 0.119570 | 0.095287 | 0.231406 | 0.015294 |
4 | 1466.00 | -0.691081 | -0.578715 | -0.118220 | 0.127414 | 0.219660 | 0.035758 | 0.360352 | 0.003998 | 0.004305 | ... | -0.763188 | 0.763812 | -0.158849 | -0.023935 | -0.006564 | -0.185257 | -0.120585 | 0.090823 | 0.171506 | -0.038381 |
5 rows × 62 columns
The extracted features are similar to those used for gait detection, but the gyroscope signal has been added to extract additional MFCCs from this sensor. The gyroscope (measuring angular velocity) is relevant for distinguishing between arm activities. Also note that the time column no longer starts at 0, since the first timestamps were predicted as non-gait and therefore discarded.
Step 5: Filtering gait
This classification task is similar to gait detection, although it uses a different classifier package. The trained classifier is a logistic regression, similarly trained on the dataset of the Parkinson@Home Validation Study. Filtering gait is the process of detecting and removing gait segments containing other arm activities. This is an important process, since individuals perform a wide array of arm activities during gait: having their hands in their pockets, holding a dog leash, or carrying a plate to the kitchen. We trained a classifier to detect these other arm activities during gait, enabling accurate estimation of the arm swing.
from paradigma.classification import ClassifierPackage
from paradigma.pipelines.gait_pipeline import filter_gait
# Set the path to the classifier package
classifier_package_filename = 'gait_filtering_clf_package.pkl'
full_path_to_classifier_package = files('paradigma') / 'assets' / classifier_package_filename
# Load the classifier package
clf_package_filtering = ClassifierPackage.load(full_path_to_classifier_package)
# Detecting no_other_arm_activity returns the probability of no_other_arm_activity for each window, which is concatenated to
# the original dataframe
df_arm_activity['pred_no_other_arm_activity_proba'] = filter_gait(
df=df_arm_activity,
clf_package=clf_package_filtering
)
n_windows = df_arm_activity.shape[0]
n_predictions_no_other_arm_activity = df_arm_activity.loc[df_arm_activity['pred_no_other_arm_activity_proba']>=clf_package_filtering.threshold].shape[0]
perc_predictions_no_other_arm_activity = round(100 * n_predictions_no_other_arm_activity / n_windows, 1)
n_predictions_other_arm_activity = df_arm_activity.loc[df_arm_activity['pred_no_other_arm_activity_proba']<clf_package_filtering.threshold].shape[0]
perc_predictions_other_arm_activity = round(100 * n_predictions_other_arm_activity / n_windows, 1)
print(f"Out of {n_windows} windows, {n_predictions_no_other_arm_activity} ({perc_predictions_no_other_arm_activity}%) were predicted as no_other_arm_activity, and {n_predictions_other_arm_activity} ({perc_predictions_other_arm_activity}%) as other_arm_activity.")
# Only the time and predicted probabilities are shown, but the dataframe also contains
# the extracted features
df_arm_activity[['time', 'pred_no_other_arm_activity_proba']].head()
Out of 2749 windows, 916 (33.3%) were predicted as no_other_arm_activity, and 1833 (66.7%) as other_arm_activity.
| | time | pred_no_other_arm_activity_proba |
---|---|---|
0 | 1463.00 | 0.199764 |
1 | 1463.75 | 0.107982 |
2 | 1464.50 | 0.138796 |
3 | 1465.25 | 0.168050 |
4 | 1466.00 | 0.033986 |
Step 6: Arm swing quantification
The next step is to extract arm swing estimates from the predicted gait segments without other arm activities. Arm swing estimates can be calculated for both filtered and unfiltered gait, the latter being predicted gait including all arm activities. Specifically, the range of motion ('range_of_motion') and the peak angular velocity ('peak_velocity') are extracted.
This step creates gait segments based on consecutively predicted gait windows. A new gait segment is created if the gap between consecutive gait predictions exceeds config.max_segment_gap_s. Furthermore, a gait segment is considered valid only if it is at least config.min_segment_length_s long (a sketch of this rule follows below).
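A minimal sketch of this segmentation rule on toy timestamps (an illustration, not the ParaDigMa implementation; the default values of 1.5 seconds match the output further below):
import numpy as np
def sketch_segments(times, max_segment_gap_s=1.5, min_segment_length_s=1.5):
    times = np.asarray(times)
    # Start a new segment wherever the gap between consecutive timestamps is too large
    breaks = np.where(np.diff(times) > max_segment_gap_s)[0] + 1
    segments = np.split(times, breaks)
    # Keep only segments spanning at least the minimum length
    return [s for s in segments if s[-1] - s[0] >= min_segment_length_s]
# Two segments separated by a 3.5-second gap; both are long enough to be kept
print(sketch_segments([0.0, 0.5, 1.0, 1.5, 5.0, 5.5, 6.0, 6.5, 7.0]))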
First, however, similar to the arm activity feature extraction step, the predictions of the previous step should be merged with the preprocessed time series data.
# Merge arm activity predictions into timeseries data
if not any(df_arm_activity[DataColumns.PRED_NO_OTHER_ARM_ACTIVITY_PROBA] >= clf_package_filtering.threshold):
raise ValueError("No gait without other arm activities detected in the input data.")
config = GaitConfig(step='arm_activity')
df = merge_predictions_with_timestamps(
df_ts=df_preprocessed,
df_predictions=df_arm_activity,
pred_proba_colname=DataColumns.PRED_NO_OTHER_ARM_ACTIVITY_PROBA,
window_length_s=config.window_length_s,
fs=config.sampling_frequency
)
# Add a column for predicted gait based on a fitted threshold
df[DataColumns.PRED_NO_OTHER_ARM_ACTIVITY] = (df[DataColumns.PRED_NO_OTHER_ARM_ACTIVITY_PROBA] >= clf_package_filtering.threshold).astype(int)
# Filter the DataFrame to only include predicted gait (1)
df = df.loc[df[DataColumns.PRED_NO_OTHER_ARM_ACTIVITY]==1].reset_index(drop=True)
from paradigma.pipelines.gait_pipeline import quantify_arm_swing
from pprint import pprint
# Set to True to quantify arm swing based on the filtered gait segments, and False
# to quantify arm swing based on all gait segments
filtered = True
if filtered:
dataset_used = 'filtered'
print(f"The arm swing quantification is based on the filtered gait segments.\n")
else:
dataset_used = 'unfiltered'
print(f"The arm swing quantification is based on all gait segments.\n")
quantified_arm_swing, gait_segment_meta = quantify_arm_swing(
df=df,
fs=config.sampling_frequency,
filtered=filtered,
max_segment_gap_s=config.max_segment_gap_s,
min_segment_length_s=config.min_segment_length_s,
)
print(f"Gait segments are created of minimum {config.min_segment_length_s} seconds and maximum {config.max_segment_gap_s} seconds gap between segments.\n")
print(f"A total of {quantified_arm_swing['segment_nr'].nunique()} {dataset_used} gait segments have been quantified.")
print(f"\nMetadata of the first gait segment:")
pprint(gait_segment_meta['per_segment'][1])
print(f"\nIndividual arm swings of the first gait segment of the {dataset_used} dataset:")
quantified_arm_swing.loc[quantified_arm_swing['segment_nr']==1]
The arm swing quantification is based on the filtered gait segments.
Gait segments are created with a minimum length of 1.5 seconds and a maximum within-segment gap of 1.5 seconds.
A total of 84 filtered gait segments have been quantified.
Metadata of the first gait segment:
{'duration_s': 9.0,
'end_time_s': 2230.74,
'segment_category': 'moderately_long',
'start_time_s': 2221.75}
Individual arm swings of the first gait segment of the filtered dataset:
| | segment_nr | segment_category | range_of_motion | peak_velocity |
---|---|---|---|---|
0 | 1 | moderately_long | 19.218491 | 90.807689 |
1 | 1 | moderately_long | 21.267287 | 105.781357 |
2 | 1 | moderately_long | 23.582098 | 103.932332 |
3 | 1 | moderately_long | 23.757712 | 114.846304 |
4 | 1 | moderately_long | 17.430734 | 63.297391 |
5 | 1 | moderately_long | 12.139037 | 59.740258 |
6 | 1 | moderately_long | 6.681346 | 36.802784 |
7 | 1 | moderately_long | 6.293493 | 30.793498 |
8 | 1 | moderately_long | 7.892546 | 42.481470 |
9 | 1 | moderately_long | 9.633521 | 43.837249 |
10 | 1 | moderately_long | 9.679263 | 38.867993 |
11 | 1 | moderately_long | 9.437900 | 34.112233 |
12 | 1 | moderately_long | 9.272199 | 33.344802 |
The gait segment categories are defined as follows:
short: < 5 seconds
moderately_long: 5-10 seconds
long: 10-20 seconds
very_long: > 20 seconds
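Expressed as a hypothetical helper (the thresholds are taken directly from the list above):
def segment_category(duration_s: float) -> str:
    # Map a gait segment duration to the categories defined above
    if duration_s < 5:
        return 'short'
    if duration_s < 10:
        return 'moderately_long'
    if duration_s < 20:
        return 'long'
    return 'very_long'
print(segment_category(9.0))  # 'moderately_long', as for the first segment shown above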
As noted before, the gait segments (and their categories) are determined based on all predicted gait (unfiltered gait). Therefore, for the arm swing of filtered gait, a gait segment may be shorter because parts of it were predicted to contain other arm activities, yet the segment category remains unchanged.
Run steps 1-6 for all raw data segments
If your data is also stored in multiple raw data segments, you can modify raw_data_segments in the cell below to a list of the filenames of your respective segmented data.
import pandas as pd
from pathlib import Path
from importlib.resources import files
from pprint import pprint
from paradigma.util import load_tsdf_dataframe, merge_predictions_with_timestamps
from paradigma.config import IMUConfig, GaitConfig
from paradigma.preprocessing import preprocess_imu_data
from paradigma.pipelines.gait_pipeline import extract_gait_features, detect_gait, extract_arm_activity_features, filter_gait, quantify_arm_swing
from paradigma.constants import DataColumns
from paradigma.classification import ClassifierPackage
# Set the path to where the prepared data is saved
path_to_data = Path('../../example_data')
path_to_prepared_data = path_to_data / 'imu'
# Load the gait detection classifier package
classifier_package_filename = 'gait_detection_clf_package.pkl'
full_path_to_classifier_package = files('paradigma') / 'assets' / classifier_package_filename
clf_package_detection = ClassifierPackage.load(full_path_to_classifier_package)
# Load the gait filtering classifier package
classifier_package_filename = 'gait_filtering_clf_package.pkl'
full_path_to_classifier_package = files('paradigma') / 'assets' / classifier_package_filename
clf_package_filtering = ClassifierPackage.load(full_path_to_classifier_package)
# Set to True to quantify arm swing based on the filtered gait segments, and False
# to quantify arm swing based on all gait segments
filtered = True
# Create a list to store all quantified arm swing segments
list_quantified_arm_swing = []
raw_data_segments = ['0001','0002'] # list with all available raw data segments
for raw_data_segment_nr in raw_data_segments:
# Load the data
df_imu, _, _ = load_tsdf_dataframe(path_to_prepared_data, prefix=f'IMU_segment{raw_data_segment_nr}')
# 1: Preprocess the data
config = IMUConfig()
df_preprocessed = preprocess_imu_data(
df=df_imu,
config=config,
sensor='both',
watch_side='left',
)
# 2: Extract gait features
config = GaitConfig(step='gait')
df_gait = extract_gait_features(
df=df_preprocessed,
config=config
)
# 3: Detect gait
df_gait['pred_gait_proba'] = detect_gait(
df=df_gait,
clf_package=clf_package_detection
)
# Merge gait predictions into timeseries data
if not any(df_gait[DataColumns.PRED_GAIT_PROBA] >= clf_package_detection.threshold):
raise ValueError("No gait detected in the input data.")
df = merge_predictions_with_timestamps(
df_ts=df_preprocessed,
df_predictions=df_gait,
pred_proba_colname=DataColumns.PRED_GAIT_PROBA,
window_length_s=config.window_length_s,
fs=config.sampling_frequency
)
df[DataColumns.PRED_GAIT] = (df[DataColumns.PRED_GAIT_PROBA] >= clf_package_detection.threshold).astype(int)
df = df.loc[df[DataColumns.PRED_GAIT]==1].reset_index(drop=True)
# 4: Extract arm activity features
config = GaitConfig(step='arm_activity')
df_arm_activity = extract_arm_activity_features(
df=df,
config=config,
)
# 5: Filter gait
df_arm_activity['pred_no_other_arm_activity_proba'] = filter_gait(
df=df_arm_activity,
clf_package=clf_package_filtering
)
# Merge arm activity predictions into timeseries data
if not any(df_arm_activity[DataColumns.PRED_NO_OTHER_ARM_ACTIVITY_PROBA] >= clf_package_filtering.threshold):
raise ValueError("No gait without other arm activities detected in the input data.")
df = merge_predictions_with_timestamps(
df_ts=df_preprocessed,
df_predictions=df_arm_activity,
pred_proba_colname=DataColumns.PRED_NO_OTHER_ARM_ACTIVITY_PROBA,
window_length_s=config.window_length_s,
fs=config.sampling_frequency
)
df[DataColumns.PRED_NO_OTHER_ARM_ACTIVITY] = (df[DataColumns.PRED_NO_OTHER_ARM_ACTIVITY_PROBA] >= clf_package_filtering.threshold).astype(int)
df = df.loc[df[DataColumns.PRED_NO_OTHER_ARM_ACTIVITY]==1].reset_index(drop=True)
# 6: Quantify arm swing
quantified_arm_swing, gait_segment_meta = quantify_arm_swing(
df=df,
fs=config.sampling_frequency,
filtered=filtered,
max_segment_gap_s=config.max_segment_gap_s,
min_segment_length_s=config.min_segment_length_s,
)
# Add the predictions of the current raw data segment to the list
quantified_arm_swing['raw_data_segment_nr'] = raw_data_segment_nr
list_quantified_arm_swing.append(quantified_arm_swing)
quantified_arm_swing = pd.concat(list_quantified_arm_swing, ignore_index=True)
Step 7: Aggregation
Finally, the arm swing estimates can be aggregated across all gait segments.
from paradigma.pipelines.gait_pipeline import aggregate_arm_swing_params
arm_swing_aggregations = aggregate_arm_swing_params(
df_arm_swing_params=quantified_arm_swing,
segment_meta=gait_segment_meta['per_segment'],
aggregates=['median', '95p']
)
pprint(arm_swing_aggregations, sort_dicts=False)
{'long': {'duration_s': 60.75,
'median_range_of_motion': 15.78108745792784,
'95p_range_of_motion': 45.16540046751929,
'median_peak_velocity': 86.83257977334745,
'95p_peak_velocity': 219.97254034894718},
'short': {'duration_s': 153.75,
'median_range_of_motion': 14.225382307390944,
'95p_range_of_motion': 40.53847370093226,
'median_peak_velocity': 71.56035976932178,
'95p_peak_velocity': 197.13328716416063},
'very_long': {'duration_s': 1905.75,
'median_range_of_motion': 25.2896510096605,
'95p_range_of_motion': 43.74907398039543,
'median_peak_velocity': 125.9443142903539,
'95p_peak_velocity': 217.80854223601992},
'moderately_long': {'duration_s': 187.5,
'median_range_of_motion': 15.73004566220565,
'95p_range_of_motion': 54.55881567144294,
'median_peak_velocity': 77.94780939826387,
'95p_peak_velocity': 256.9799773546029},
'all_segment_categories': {'duration_s': 2307.75,
'median_range_of_motion': 23.100608971051315,
'95p_range_of_motion': 45.92600123148869,
'median_peak_velocity': 116.50364930684765,
'95p_peak_velocity': 219.2008357820751}}
The output of the aggregation step contains the aggregated arm swing parameters per gait segment category. Additionally, the total duration in seconds (duration_s) is added to indicate how much data each aggregation is based on.
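For instance, a single aggregate can be read from the nested dictionary shown above:
# Accessing the median range of motion of the very long gait segments
median_rom = arm_swing_aggregations['very_long']['median_range_of_motion']
print(f"Median range of motion (very long segments): {median_rom:.1f}")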