# Tremor analysis

This tutorial shows how to run the tremor pipeline to obtain aggregated tremor measures from gyroscope sensor data. Before following along, make sure all data preparation steps have been followed in the data preparation tutorial.

In this tutorial, we use two days of data from a participant of the Personalized Parkinson Project to demonstrate the functionalities. Since `ParaDigMa` expects contiguous time series, the collected data was stored in two segments, each with contiguous timestamps. Per segment, we load the data and perform the following steps:

1. Preprocess the time series data
2. Extract tremor features
3. Detect tremor
4. Quantify tremor

We then combine the output of the different segments for the final step:

5. Compute aggregated tremor measures

## Load example data

Here, we start by loading a single contiguous time series (segment), for which we continue running steps 1-4. [Below](#multiple_segments_cell) we show how to run these steps for multiple segments.

We use the internally developed `TSDF` ([documentation](https://biomarkersparkinson.github.io/tsdf/)) to load and store data [[1](https://arxiv.org/abs/2211.11294)]. Depending on the file extension of your time series data, examples of other Python functions for loading the data into memory include:

- _.csv_: `pandas.read_csv()` ([documentation](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html)); a minimal CSV sketch is shown below the loaded data
- _.json_: `json.load()` ([documentation](https://docs.python.org/3/library/json.html#json.load))

```python
from pathlib import Path
from paradigma.util import load_tsdf_dataframe

# Set the path to where the prepared data is saved and load the data.
# Note: the test data is stored in TSDF, but you can load your data in your own way
path_to_data = Path('../../example_data')
path_to_prepared_data = path_to_data / 'imu'

segment_nr = '0001'

df_data, metadata_time, metadata_values = load_tsdf_dataframe(path_to_prepared_data, prefix=f'IMU_segment{segment_nr}')

df_data
```

|   | time | accelerometer_x | accelerometer_y | accelerometer_z | gyroscope_x | gyroscope_y | gyroscope_z |
|---|------|-----------------|-----------------|-----------------|-------------|-------------|-------------|
| 0 | 0.000000 | -0.474641 | -0.379426 | 0.770335 | 0.000000 | 1.402439 | 0.243902 |
| 1 | 0.009933 | -0.472727 | -0.378947 | 0.765072 | 0.426829 | 0.670732 | -0.121951 |
| 2 | 0.019867 | -0.471770 | -0.375598 | 0.766986 | 1.158537 | -0.060976 | -0.304878 |
| 3 | 0.029800 | -0.472727 | -0.375598 | 0.770335 | 1.158537 | -0.548780 | -0.548780 |
| 4 | 0.039733 | -0.475120 | -0.379426 | 0.772249 | 0.670732 | -0.609756 | -0.731707 |
| ... | ... | ... | ... | ... | ... | ... | ... |
| 3455326 | 34339.561333 | -0.257895 | -0.319139 | -0.761244 | 159.329269 | 14.634146 | -28.658537 |
| 3455327 | 34339.571267 | -0.555502 | -0.153110 | -0.671292 | 125.060976 | -213.902440 | -19.329268 |
| 3455328 | 34339.581200 | -0.286124 | -0.263636 | -0.981340 | 158.658537 | -328.170733 | -3.170732 |
| 3455329 | 34339.591133 | -0.232536 | -0.161722 | -0.832536 | 288.841465 | -281.707318 | 17.073171 |
| 3455330 | 34339.601067 | 0.180383 | -0.368421 | -1.525837 | 376.219514 | -140.853659 | 37.256098 |

3455331 rows × 7 columns
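
If your prepared data is not stored in TSDF, an equivalent dataframe can be built from another format instead. Below is a minimal sketch for a CSV file using `pandas.read_csv()`; the filename is hypothetical and the column names are assumed to match the prepared data shown above.

```python
import pandas as pd

# Hypothetical CSV file containing the same columns as the prepared TSDF data
df_data = pd.read_csv(
    path_to_prepared_data / 'IMU_segment0001.csv',  # hypothetical filename
    usecols=['time', 'accelerometer_x', 'accelerometer_y', 'accelerometer_z',
             'gyroscope_x', 'gyroscope_y', 'gyroscope_z'],
)
```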

## Step 1: Preprocess data

IMU sensors are set to collect data at a fixed sampling frequency, but in practice the sampling rate is not perfectly uniform, causing variation in the time differences between timestamps. The [`preprocess_imu_data`](https://github.com/biomarkersParkinson/paradigma/blob/main/src/paradigma/preprocessing.py#:~:text=preprocess_imu_data) function therefore resamples the timestamps to be uniformly distributed, and then interpolates the IMU values at these new timestamps using the original timestamps and corresponding IMU values. By setting `sensor` to 'gyroscope', only the gyroscope data is preprocessed and the accelerometer data is removed from the dataframe. A `watch_side` should also be provided, although for the tremor analysis it does not matter whether this is the correct side, since the tremor features are not influenced by the orientation of the gyroscope axes.

```python
from paradigma.config import IMUConfig
from paradigma.preprocessing import preprocess_imu_data

config = IMUConfig()
print(f'The data is resampled to {config.sampling_frequency} Hz.')

df_preprocessed_data = preprocess_imu_data(df_data, config, sensor='gyroscope', watch_side='left')
df_preprocessed_data
```

The data is resampled to 100 Hz.

|   | time | gyroscope_x | gyroscope_y | gyroscope_z |
|---|------|-------------|-------------|-------------|
| 0 | 0.00 | 0.000000 | 1.402439 | 0.243902 |
| 1 | 0.01 | 0.432231 | 0.665526 | -0.123434 |
| 2 | 0.02 | 1.164277 | -0.069584 | -0.307536 |
| 3 | 0.03 | 1.151432 | -0.554928 | -0.554223 |
| 4 | 0.04 | 0.657189 | -0.603207 | -0.731570 |
| ... | ... | ... | ... | ... |
| 3433956 | 34339.56 | 130.392434 | 29.491627 | -26.868202 |
| 3433957 | 34339.57 | 135.771133 | -184.515525 | -21.544211 |
| 3433958 | 34339.58 | 146.364103 | -324.248909 | -5.248641 |
| 3433959 | 34339.59 | 273.675024 | -293.011330 | 14.618256 |
| 3433960 | 34339.60 | 372.878731 | -158.516265 | 35.330770 |

3433961 rows × 4 columns
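
To build intuition for what this preprocessing step does, the sketch below resamples a signal onto a uniform 100 Hz grid with linear interpolation using `numpy.interp`. This is only a conceptual illustration under the assumption of linear interpolation, not ParaDigMa's actual implementation; in practice, use `preprocess_imu_data` as shown above.

```python
import numpy as np
import pandas as pd

def resample_uniform(df: pd.DataFrame, fs: float = 100.0) -> pd.DataFrame:
    """Interpolate sensor columns onto a uniform time grid (conceptual sketch only)."""
    t_old = df['time'].to_numpy()
    t_new = np.arange(t_old[0], t_old[-1], 1 / fs)  # uniformly spaced timestamps at fs Hz
    resampled = {'time': t_new}
    for col in df.columns.drop('time'):
        # Linear interpolation of the original samples at the new timestamps
        resampled[col] = np.interp(t_new, t_old, df[col].to_numpy())
    return pd.DataFrame(resampled)

# Example: resample the gyroscope channels of the loaded segment
df_uniform = resample_uniform(df_data[['time', 'gyroscope_x', 'gyroscope_y', 'gyroscope_z']])
```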

## Step 2: Extract tremor features

The function [`extract_tremor_features`](https://github.com/biomarkersParkinson/paradigma/blob/main/src/paradigma/pipelines/tremor_pipeline.py#:~:text=extract_tremor_features) divides the preprocessed gyroscope data into non-overlapping windows of length `config.window_length_s`. Next, the tremor features are extracted from these windows: 12 mel-frequency cepstral coefficients (MFCCs), the frequency of the peak in the power spectral density, the power below tremor (0.5-3 Hz), and the power around the tremor peak. The latter is not used for tremor detection, but is stored for tremor quantification in Step 4.

```python
from paradigma.config import TremorConfig
from paradigma.pipelines.tremor_pipeline import extract_tremor_features

config = TremorConfig(step='features')
print(f'The window length is {config.window_length_s} seconds')

df_features = extract_tremor_features(df_preprocessed_data, config)
df_features
```

The window length is 4 seconds

|   | time | mfcc_1 | mfcc_2 | mfcc_3 | mfcc_4 | mfcc_5 | mfcc_6 | mfcc_7 | mfcc_8 | mfcc_9 | mfcc_10 | mfcc_11 | mfcc_12 | freq_peak | below_tremor_power | tremor_power |
|---|------|--------|--------|--------|--------|--------|--------|--------|--------|--------|---------|---------|---------|-----------|--------------------|--------------|
| 0 | 0.0 | 5.323582 | 1.179579 | -0.498552 | -0.149152 | -0.063535 | -0.132090 | -0.112380 | -0.044326 | -0.025917 | 0.116045 | 0.169869 | 0.213884 | 3.75 | 0.082219 | 0.471588 |
| 1 | 4.0 | 5.333162 | 1.205712 | -0.607844 | -0.138371 | -0.039518 | -0.137703 | -0.069552 | -0.008029 | -0.087711 | 0.089844 | 0.152380 | 0.195165 | 3.75 | 0.071260 | 0.327252 |
| 2 | 8.0 | 5.180974 | 1.039548 | -0.627100 | -0.054816 | -0.016767 | -0.044817 | 0.079859 | -0.023155 | 0.024729 | 0.104989 | 0.126502 | 0.192319 | 7.75 | 0.097961 | 0.114138 |
| 3 | 12.0 | 5.290298 | 1.183957 | -0.627651 | -0.027235 | 0.095184 | -0.050455 | -0.024654 | 0.029754 | -0.007459 | 0.125700 | 0.146895 | 0.220589 | 7.75 | 0.193237 | 0.180988 |
| 4 | 16.0 | 5.128074 | 1.066869 | -0.622282 | 0.038557 | -0.034719 | 0.045109 | 0.076679 | 0.057267 | -0.024619 | 0.131755 | 0.177849 | 0.149686 | 7.75 | 0.156469 | 0.090009 |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 8579 | 34316.0 | 7.071408 | -0.376556 | 0.272322 | 0.068750 | 0.051588 | 0.102012 | 0.055017 | 0.115942 | 0.012746 | 0.117970 | 0.073279 | 0.057367 | 13.50 | 48.930380 | 91.971686 |
| 8580 | 34320.0 | 1.917642 | 0.307927 | 0.142330 | 0.265357 | 0.285635 | 0.143886 | 0.259636 | 0.195724 | 0.176947 | 0.162205 | 0.147897 | 0.170488 | 11.00 | 0.012123 | 0.000316 |
| 8581 | 34324.0 | 2.383806 | 0.268580 | 0.151254 | 0.414430 | 0.241540 | 0.244071 | 0.201109 | 0.209611 | 0.097146 | 0.048798 | 0.013239 | 0.035379 | 2.00 | 0.013077 | 0.000615 |
| 8582 | 34328.0 | 1.883626 | 0.089983 | 0.196880 | 0.300523 | 0.239185 | 0.259342 | 0.277586 | 0.206517 | 0.178499 | 0.215561 | 0.067234 | 0.123958 | 13.75 | 0.011466 | 0.000211 |
| 8583 | 34332.0 | 2.599103 | 0.286252 | -0.014529 | 0.475488 | 0.229446 | 0.188200 | 0.173689 | 0.033262 | 0.138957 | 0.106176 | 0.036859 | 0.082178 | 12.50 | 0.015068 | 0.000891 |

8584 rows × 16 columns
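
To make the spectral features more concrete, the sketch below estimates the dominant frequency of the first 4-second window of one gyroscope axis with `scipy.signal.welch`. This is only a rough analogue of how `freq_peak` is obtained; the exact feature computation (including how the three axes are combined and which frequency range is searched) is handled inside `extract_tremor_features`.

```python
import numpy as np
from scipy import signal

fs = 100               # sampling frequency after preprocessing (Hz)
window_length_s = 4    # same window length as config.window_length_s

# First non-overlapping window of a single gyroscope axis
window = df_preprocessed_data['gyroscope_x'].to_numpy()[: fs * window_length_s]

# Power spectral density of the window; its maximum indicates the dominant frequency
freqs, psd = signal.welch(window, fs=fs, nperseg=len(window))
print(f'Dominant frequency in this window: {freqs[np.argmax(psd)]:.2f} Hz')
```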

## Step 3: Detect tremor

The function [`detect_tremor`](https://github.com/biomarkersParkinson/paradigma/blob/main/src/paradigma/pipelines/tremor_pipeline.py#:~:text=detect_tremor) uses a pretrained logistic regression classifier to predict the tremor probability (`pred_tremor_proba`) for each window, based on the MFCCs. Using the prespecified threshold, a tremor label of 0 (no tremor) or 1 (tremor) is assigned (`pred_tremor_logreg`). Furthermore, the detected tremor windows are checked for rest tremor in two ways. First, the frequency of the peak should be between 3 and 7 Hz. Second, we want to exclude windows with significant arm movement. We consider a window to have significant arm movement if `below_tremor_power` exceeds `config.movement_threshold`. The final tremor label is saved in `pred_tremor_checked`. A label for predicted arm at rest (`pred_arm_at_rest`, which is 1 when at rest and 0 when not at rest) is also stored, to control for the amount of arm movement during the observed time period when aggregating the amount of tremor time in Step 5 (if a person is moving their arm, they cannot have rest tremor).

```python
from importlib.resources import files
from paradigma.pipelines.tremor_pipeline import detect_tremor

print(f'A threshold of {config.movement_threshold} deg\u00b2/s\u00b2 \
is used to determine whether the arm is at rest or in stable posture.')

# Load the pre-trained logistic regression classifier
tremor_detection_classifier_package_filename = 'tremor_detection_clf_package.pkl'
full_path_to_classifier_package = files('paradigma') / 'assets' / tremor_detection_classifier_package_filename

# Use the logistic regression classifier to detect tremor and check for rest tremor
df_predictions = detect_tremor(df_features, config, full_path_to_classifier_package)

df_predictions[['time', 'pred_tremor_proba', 'pred_tremor_logreg', 'pred_arm_at_rest', 'pred_tremor_checked']]
```

A threshold of 50 deg²/s² is used to determine whether the arm is at rest or in stable posture.

|   | time | pred_tremor_proba | pred_tremor_logreg | pred_arm_at_rest | pred_tremor_checked |
|---|------|-------------------|--------------------|------------------|---------------------|
| 0 | 0.0 | 0.038968 | 1 | 1 | 1 |
| 1 | 4.0 | 0.035365 | 1 | 1 | 1 |
| 2 | 8.0 | 0.031255 | 1 | 1 | 0 |
| 3 | 12.0 | 0.021106 | 0 | 1 | 0 |
| 4 | 16.0 | 0.021078 | 0 | 1 | 0 |
| ... | ... | ... | ... | ... | ... |
| 8579 | 34316.0 | 0.000296 | 0 | 1 | 0 |
| 8580 | 34320.0 | 0.000089 | 0 | 1 | 0 |
| 8581 | 34324.0 | 0.000023 | 0 | 1 | 0 |
| 8582 | 34328.0 | 0.000053 | 0 | 1 | 0 |
| 8583 | 34332.0 | 0.000049 | 0 | 1 | 0 |

8584 rows × 5 columns
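
For intuition, the two rest tremor checks described above can be reproduced by hand from the extracted features. The sketch below assumes an inclusive 3-7 Hz band and a `<=` comparison against `config.movement_threshold`; the authoritative logic lives inside `detect_tremor`, so small differences in counts are possible.

```python
# Reproduce the rest tremor checks on top of the logistic regression output (illustrative only)
freq_in_tremor_range = df_features['freq_peak'].between(3, 7)                  # peak frequency in the rest tremor band
arm_at_rest = df_features['below_tremor_power'] <= config.movement_threshold   # no significant arm movement

manual_check = (df_predictions['pred_tremor_logreg'] == 1) & freq_in_tremor_range & arm_at_rest
print(f"Windows flagged by the manual check: {manual_check.sum()}")
print(f"Windows flagged by the pipeline:     {df_predictions['pred_tremor_checked'].sum()}")
```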

#### Store as TSDF

The predicted probabilities (and optionally other features) can be stored and loaded in TSDF as demonstrated below.

```python
import tsdf
from paradigma.util import write_df_data

# Set 'path_to_data' to the directory where you want to save the data
metadata_time_store = tsdf.TSDFMetadata(metadata_time.get_plain_tsdf_dict_copy(), path_to_data)
metadata_values_store = tsdf.TSDFMetadata(metadata_values.get_plain_tsdf_dict_copy(), path_to_data)

# Select the columns to be saved
metadata_time_store.channels = ['time']
metadata_values_store.channels = ['pred_tremor_proba', 'pred_tremor_logreg', 'pred_arm_at_rest', 'pred_tremor_checked']

# Set the units
metadata_time_store.units = ['Relative seconds']
metadata_values_store.units = ['Unitless', 'Unitless', 'Unitless', 'Unitless']

metadata_time_store.data_type = float
metadata_values_store.data_type = float

# Set the filenames
meta_store_filename = f'segment{segment_nr}_meta.json'
values_store_filename = meta_store_filename.replace('_meta.json', '_values.bin')
time_store_filename = meta_store_filename.replace('_meta.json', '_time.bin')

metadata_values_store.file_name = values_store_filename
metadata_time_store.file_name = time_store_filename

write_df_data(metadata_time_store, metadata_values_store, path_to_data, meta_store_filename, df_predictions)
```

```python
df_predictions, _, _ = load_tsdf_dataframe(path_to_data, prefix=f'segment{segment_nr}')
df_predictions.head()
```

## Step 4: Quantify tremor

The tremor power of all predicted tremor windows (where `pred_tremor_checked` is 1) is used for tremor quantification. A datetime column is also added, providing the information needed to aggregate over specified hours in Step 5.

```python
import pandas as pd
import datetime
import pytz

df_quantification = df_predictions[['time', 'pred_arm_at_rest', 'pred_tremor_checked', 'tremor_power']].copy()
df_quantification.loc[df_predictions['pred_tremor_checked'] == 0, 'tremor_power'] = None  # tremor power of non-tremor windows is set to None

# Create datetime column based on the start time of the segment
start_time = datetime.datetime.strptime(metadata_time.start_iso8601, '%Y-%m-%dT%H:%M:%SZ')
start_time = start_time.replace(tzinfo=pytz.timezone('UTC')).astimezone(pytz.timezone('CET'))  # convert to correct timezone if necessary
df_quantification['time_dt'] = start_time + pd.to_timedelta(df_quantification['time'], unit="s")
df_quantification = df_quantification[['time', 'time_dt', 'pred_arm_at_rest', 'pred_tremor_checked', 'tremor_power']]

df_quantification
```

|   | time | time_dt | pred_arm_at_rest | pred_tremor_checked | tremor_power |
|---|------|---------|------------------|---------------------|--------------|
| 0 | 0.0 | 2019-08-20 12:39:16+02:00 | 1 | 1 | 0.471588 |
| 1 | 4.0 | 2019-08-20 12:39:20+02:00 | 1 | 1 | 0.327252 |
| 2 | 8.0 | 2019-08-20 12:39:24+02:00 | 1 | 0 | NaN |
| 3 | 12.0 | 2019-08-20 12:39:28+02:00 | 1 | 0 | NaN |
| 4 | 16.0 | 2019-08-20 12:39:32+02:00 | 1 | 0 | NaN |
| ... | ... | ... | ... | ... | ... |
| 8579 | 34316.0 | 2019-08-20 22:11:12+02:00 | 1 | 0 | NaN |
| 8580 | 34320.0 | 2019-08-20 22:11:16+02:00 | 1 | 0 | NaN |
| 8581 | 34324.0 | 2019-08-20 22:11:20+02:00 | 1 | 0 | NaN |
| 8582 | 34328.0 | 2019-08-20 22:11:24+02:00 | 1 | 0 | NaN |
| 8583 | 34332.0 | 2019-08-20 22:11:28+02:00 | 1 | 0 | NaN |

8584 rows × 5 columns
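
The quantification dataframe can already be inspected directly before the formal aggregation in Step 5. For example (an illustrative snippet, not part of the pipeline), the tremor power of the detected tremor windows of this segment can be summarized with pandas:

```python
# Summarize tremor power over the detected tremor windows of this segment (illustrative only)
tremor_windows = df_quantification[df_quantification['pred_tremor_checked'] == 1]
print(f'Number of detected tremor windows: {len(tremor_windows)}')
print(tremor_windows['tremor_power'].describe())
```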

### Run steps 1 - 4 for all segments

If your data is also stored in multiple segments, you can modify `segments` in the cell below to a list with the filenames of your respective segmented data.

```python
import datetime
import pandas as pd
import pytz

from pathlib import Path
from importlib.resources import files

from paradigma.util import load_tsdf_dataframe
from paradigma.config import IMUConfig, TremorConfig
from paradigma.preprocessing import preprocess_imu_data
from paradigma.pipelines.tremor_pipeline import extract_tremor_features, detect_tremor

# Set the path to where the prepared data is saved
path_to_data = Path('../../example_data')
path_to_prepared_data = path_to_data / 'imu'

# Load the pre-trained logistic regression classifier
tremor_detection_classifier_package_filename = 'tremor_detection_clf_package.pkl'
full_path_to_classifier_package = files('paradigma') / 'assets' / tremor_detection_classifier_package_filename

# Create a list of dataframes to store the quantifications of all segments
list_df_quantifications = []

segments = ['0001', '0002']  # list with all available segments

for segment_nr in segments:

    # Load the data
    df_data, metadata_time, _ = load_tsdf_dataframe(path_to_prepared_data, prefix='IMU_segment' + segment_nr)

    # 1: Preprocess the data
    config = IMUConfig()
    df_preprocessed_data = preprocess_imu_data(df_data, config, sensor='gyroscope', watch_side='left')

    # 2: Extract features
    config = TremorConfig(step='features')
    df_features = extract_tremor_features(df_preprocessed_data, config)

    # 3: Detect tremor
    df_predictions = detect_tremor(df_features, config, full_path_to_classifier_package)

    # 4: Quantify tremor
    df_quantification = df_predictions[['time', 'pred_arm_at_rest', 'pred_tremor_checked', 'tremor_power']].copy()
    df_quantification.loc[df_predictions['pred_tremor_checked'] == 0, 'tremor_power'] = None

    # Create datetime column based on the start time of the segment
    start_time = datetime.datetime.strptime(metadata_time.start_iso8601, '%Y-%m-%dT%H:%M:%SZ')
    start_time = start_time.replace(tzinfo=pytz.timezone('UTC')).astimezone(pytz.timezone('CET'))  # convert to correct timezone if necessary
    df_quantification['time_dt'] = start_time + pd.to_timedelta(df_quantification['time'], unit="s")
    df_quantification = df_quantification[['time', 'time_dt', 'pred_arm_at_rest', 'pred_tremor_checked', 'tremor_power']]

    # Add the quantifications of the current segment to the list
    df_quantification['segment_nr'] = segment_nr
    list_df_quantifications.append(df_quantification)

df_quantification = pd.concat(list_df_quantifications, ignore_index=True)
```

## Step 5: Compute aggregated tremor measures

The final step is to compute the amount of tremor time and tremor power with the function [`aggregate_tremor`](https://github.com/biomarkersParkinson/paradigma/blob/main/src/paradigma/pipelines/tremor_pipeline.py#:~:text=aggregate_tremor), which aggregates over all windows in the input dataframe. Depending on the size of the input dataframe, you can select the hours and days (both optional) that you want to include in this analysis. In this case we use data collected between 8 am and 10 pm (specified as `select_hours_start` and `select_hours_end`), and days with at least 10 hours of data (`min_hours_per_day`). Based on the selected data, we compute aggregated measures for tremor time and tremor power:

- Tremor time is calculated as the number of detected tremor windows, expressed as a percentage of the number of windows during which the arm is at rest or in stable posture (i.e., when `below_tremor_power` does not exceed `config.movement_threshold`). This way, tremor time is controlled for the amount of time the arm is at rest or in stable posture, which is when rest tremor and re-emergent tremor could occur.
- For tremor power, the following aggregates are derived: the mode, median and 90th percentile of tremor power (specified in `config.aggregates_tremor_power`). The median and modal tremor power reflect the typical tremor severity, whereas the 90th percentile reflects the maximal tremor severity within the observed timeframe.

The aggregated tremor measures and metadata are stored in a JSON file.

```python
import pprint
from paradigma.util import select_hours, select_days
from paradigma.pipelines.tremor_pipeline import aggregate_tremor

select_hours_start = '08:00'  # you can specify the hours and minutes here
select_hours_end = '22:00'
min_hours_per_day = 10

print(f'Before aggregation we select data collected between {select_hours_start} \
and {select_hours_end}. We also select days with at least {min_hours_per_day} hours of data.')
print(f'The following tremor power aggregates are derived: {config.aggregates_tremor_power}.')

# Select the hours that should be included in the analysis
df_quantification = select_hours(df_quantification, select_hours_start, select_hours_end)

# Remove days with less than the specified minimum amount of hours
df_quantification = select_days(df_quantification, min_hours_per_day)

# Compute the aggregated measures
config = TremorConfig()
d_tremor_aggregates = aggregate_tremor(df = df_quantification, config = config)

pprint.pprint(d_tremor_aggregates)
```

Before aggregation we select data collected between 08:00 and 22:00. We also select days with at least 10 hours of data. The following tremor power aggregates are derived: ['mode', 'median', '90p'].

{'aggregated_tremor_measures': {'90p_tremor_power': 1.3259483071516063, 'median_tremor_power': 0.5143985314908104, 'modal_tremor_power': 0.3, 'perc_windows_tremor': 19.386769676484793}, 'metadata': {'nr_valid_days': 1, 'nr_windows_rest': 8284, 'nr_windows_total': 12600}}
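
As a quick sanity check on the output above, the aggregates can be converted back into an approximate amount of tremor time. This back-of-the-envelope sketch assumes that `perc_windows_tremor` is the percentage of 4-second tremor windows among the windows at rest (`nr_windows_rest`), as described above.

```python
# Rough conversion of the aggregated output into tremor time (illustrative only)
window_length_s = 4
nr_windows_rest = d_tremor_aggregates['metadata']['nr_windows_rest']
perc_tremor = d_tremor_aggregates['aggregated_tremor_measures']['perc_windows_tremor']

nr_tremor_windows = nr_windows_rest * perc_tremor / 100    # about 1606 windows
tremor_hours = nr_tremor_windows * window_length_s / 3600  # about 1.8 hours
print(f'Approximate tremor time on the selected day: {tremor_hours:.1f} hours')
```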