
Human activity recognition with Apache Spark

June 16, 2016

In this article I want to show how Apache Spark can be used to classify human activity based on smartphone sensor data. We will build and train two simple multiclass classifiers using decision trees and random forests.

Dataset

We will use the Human Activity Recognition Using Smartphones Data Set provided by the UC Irvine Machine Learning Repository. The dataset is described as follows:
The experiments have been carried out with a group of 30 volunteers within an age bracket of 19-48 years. Each person performed six activities (WALKING, WALKING_UPSTAIRS, WALKING_DOWNSTAIRS, SITTING, STANDING, LAYING) wearing a smartphone (Samsung Galaxy S II) on the waist. Using its embedded accelerometer and gyroscope, we captured 3-axial linear acceleration and 3-axial angular velocity at a constant rate of 50Hz. The experiments have been video-recorded to label the data manually. The obtained dataset has been randomly partitioned into two sets, where 70% of the volunteers was selected for generating the training data and 30% the test data.

The dataset contains 10,299 instances (7,352 training and 2,947 test samples) distributed among six classes: WALKING, WALKING_UPSTAIRS, WALKING_DOWNSTAIRS, SITTING, STANDING, LAYING. Each record has 561 features.

Preparing data

Before we train our classifiers, we need to prepare the training and test sets. We need four files from the dataset:

  • X_train.txt – train data, each row is a separate instance
  • y_train.txt – train data labels
  • X_test.txt – test data, each row is a separate instance
  • y_test.txt – test data labels

The data and the corresponding labels are stored in separate files, so we have to join them. To do so, I used two Apache Spark functions: zipWithIndex and then join. Next, I convert the data to LabeledPoint – simply a feature vector with an associated label. The label values must be adjusted to match the LabeledPoint requirements – according to the documentation, for multiclass classification the label indices should start at 0. Additionally, I decided to extract a validation set from the test data, which will be used for preliminary model evaluation.
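Below is a minimal sketch of this step in Scala, assuming a SparkContext called sc (as in spark-shell) and the dataset files unpacked under a data/ directory. The helper name loadLabeledData, the file paths and the 50/50 validation/test split are illustrative choices and do not necessarily match the code in the repository.

import org.apache.spark.SparkContext
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.rdd.RDD

// Illustrative helper: joins a feature file with its label file by line index
// and shifts the labels from 1..6 down to 0..5, as required for multiclass training.
def loadLabeledData(sc: SparkContext, dataPath: String, labelsPath: String): RDD[LabeledPoint] = {
  // Each row of X_*.txt is a whitespace-separated vector of 561 features
  val features = sc.textFile(dataPath)
    .map(line => Vectors.dense(line.trim.split("\\s+").map(_.toDouble)))
    .zipWithIndex()
    .map { case (vector, index) => (index, vector) }

  // Each row of y_*.txt is a single label in 1..6
  val labels = sc.textFile(labelsPath)
    .map(_.trim.toDouble)
    .zipWithIndex()
    .map { case (label, index) => (index, label) }

  // Join features with labels by line index and build LabeledPoints
  features.join(labels).map { case (_, (vector, label)) =>
    LabeledPoint(label - 1, vector)
  }
}

val training = loadLabeledData(sc, "data/X_train.txt", "data/y_train.txt").cache()

// Split the original test data into a validation set (for model selection)
// and a final test set (for the final evaluation)
val Array(validation, test) =
  loadLabeledData(sc, "data/X_test.txt", "data/y_test.txt").randomSplit(Array(0.5, 0.5))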

Model construction and evaluation

Both decision trees and random forests have tunable parameters, and we will use the validation set to select their optimal values. During this preliminary evaluation, several versions of each classifier are built and then evaluated using metrics calculated for each class: precision, recall, true positive rate, false positive rate and F-measure. The model with the highest score (the highest sum of metrics over all classes) is selected as the final model and evaluated against the test set. If you are interested in detailed information about decision trees, random forests and their parameters, please refer to the Apache Spark documentation: decision trees, random forests.
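Below is a minimal sketch of this evaluation using Spark's MulticlassMetrics. The evaluate and score helpers are illustrative, and summing the per-class F-measures is only one simple way to read the "sum of metrics" selection rule described above.

import org.apache.spark.mllib.evaluation.MulticlassMetrics
import org.apache.spark.mllib.linalg.Vector
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.rdd.RDD

// Illustrative helper: computes per-class metrics for any classifier exposed
// as a Vector => Double prediction function (both DecisionTreeModel and
// RandomForestModel provide such a predict method).
def evaluate(predict: Vector => Double, data: RDD[LabeledPoint]): MulticlassMetrics = {
  val predictionsAndLabels = data.map(p => (predict(p.features), p.label))
  val metrics = new MulticlassMetrics(predictionsAndLabels)

  metrics.labels.foreach { label =>
    println(f"class $label%.0f: precision=${metrics.precision(label)}%.2f " +
      f"recall=${metrics.recall(label)}%.2f " +
      f"TPR=${metrics.truePositiveRate(label)}%.2f " +
      f"FPR=${metrics.falsePositiveRate(label)}%.2f " +
      f"F-measure=${metrics.fMeasure(label)}%.2f")
  }

  // Confusion matrix: rows are actual classes, columns are predicted classes
  println(metrics.confusionMatrix)
  metrics
}

// Assumed aggregate score for ranking candidate models:
// the sum of per-class F-measures (higher is better)
def score(metrics: MulticlassMetrics): Double =
  metrics.labels.map(l => metrics.fMeasure(l)).sum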

Decision trees

Final model parameters: impurity: gini, maxBins: 25.
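Training the final tree with these parameters could look roughly as follows, reusing the training and test RDDs and the evaluate helper sketched above; maxDepth is not reported here, so the value below is an assumption.

import org.apache.spark.mllib.tree.DecisionTree

// Train the final decision tree: 6 activity classes, no categorical features.
// Only impurity and maxBins were tuned above; maxDepth is an assumed value.
val dtModel = DecisionTree.trainClassifier(
  training,
  numClasses = 6,
  categoricalFeaturesInfo = Map[Int, Int](),
  impurity = "gini",
  maxDepth = 10,
  maxBins = 25)

// Final evaluation on the held-out test set
val dtMetrics = evaluate(v => dtModel.predict(v), test)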

Decision trees – confusion matrix (rows: actual class, columns: predicted class; classes ordered as in the table below)

376  13  14   0   0   0
 75 268  31   0   0   0
 19  30 283   0   0   0
  0   0   0 307  84   0
  0   1   0  46 381   0
  0   0   0   0   0 423

Class / Metric      Precision  Recall  True positive rate  False positive rate  F-measure
WALKING             0.80       0.93    0.93                0.05                 0.86
WALKING_UPSTAIRS    0.86       0.72    0.72                0.02                 0.78
WALKING_DOWNSTAIRS  0.86       0.85    0.85                0.02                 0.86
SITTING             0.87       0.79    0.79                0.02                 0.83
STANDING            0.82       0.89    0.89                0.04                 0.85
LAYING              1.00       1.00    1.00                0.00                 1.00

Random forests

Final model parameters: impurity: gini, maxBins: 200, numTrees: 100.
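Training the final forest could look roughly as follows; featureSubsetStrategy, maxDepth and the seed are not reported here, so those values are assumptions.

import org.apache.spark.mllib.tree.RandomForest

// Train the final random forest: 6 activity classes, no categorical features.
// Only impurity, maxBins and numTrees were reported above; the rest are assumed values.
val rfModel = RandomForest.trainClassifier(
  training,
  numClasses = 6,
  categoricalFeaturesInfo = Map[Int, Int](),
  numTrees = 100,
  featureSubsetStrategy = "auto",
  impurity = "gini",
  maxDepth = 10,
  maxBins = 200,
  seed = 42)

// Final evaluation on the held-out test set
val rfMetrics = evaluate(v => rfModel.predict(v), test)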


Random forests – confusion matrix (rows: actual class, columns: predicted class; classes ordered as in the table below)

393   7   7   0   0   0
 36 329   7   0   0   0
 17  40 281   0   0   0
  0   1   0 336  50   0
  0   0   0  32 390   0
  0   0   0   0   0 423

Class / Metric      Precision  Recall  True positive rate  False positive rate  F-measure
WALKING             0.88       0.97    0.97                0.03                 0.92
WALKING_UPSTAIRS    0.87       0.88    0.88                0.02                 0.88
WALKING_DOWNSTAIRS  0.95       0.83    0.83                0.007                0.89
SITTING             0.91       0.87    0.87                0.02                 0.89
STANDING            0.89       0.92    0.92                0.03                 0.90
LAYING              1.00       1.00    1.00                0.00                 1.00

Summary

Comparing the per-class results, random forests achieved equal or better precision and F-measure on every class and, with the exception of WALKING_DOWNSTAIRS, an equal or better true positive rate. This is not surprising, as random forests are more complex and more powerful, but overall both classifiers achieved decent results and both were perfect on the LAYING class.

Code

Full source code is available on our GitHub: https://github.com/Semantive/apache-spark-examples
