Abstract:
In recent times, smartphones have become increasingly popular for
learning. Users exhibit multiple gesture interactions with
smartphones while reading, which can provide valuable implicit
feedback about the content consumed.
Smartphones have many embedded sensors that capture a
plethora of user-interaction data. The on-device gyroscope and
accelerometer can be enabled to capture the variations caused by
gesture interactions such as scrolling, pinch-to-zoom, tap,
orientation change and screen capture. This research work trains
machine learning classifier models on smartphone sensor
readings to identify users' screen gesture interactions. Data
for the classifiers was collected from 44 users in total using
an Android application.
Aggregated time-domain
features were extracted from the preprocessed data. Four groups
of data were used to train the models.
Extensive experiments were conducted to evaluate the proposed
system using Random Forest, Support Vector Machine (SVM),
Extreme Gradient Boosting (XGB), AdaBoost, Naïve Bayes (NB)
and K-Nearest Neighbour (KNN) classifiers. A detailed analysis
of the success rate and accuracy has been performed. The best
identification accuracy, 97.58%, was achieved by the Random
Forest classifier, followed by Extreme Gradient Boosting and
K-Nearest Neighbour at 95.97% and 93.55%, respectively.
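The pipeline summarized above (aggregated time-domain features computed over accelerometer and gyroscope windows, fed to a Random Forest classifier) can be sketched as follows. The window size, the specific statistics, and the synthetic data here are illustrative assumptions, not the paper's actual dataset, feature set, or hyperparameters.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for gesture windows: 500 windows, each with
# 6 raw channels (accelerometer x/y/z, gyroscope x/y/z) over 50
# samples. A per-class offset keeps the 5 classes separable so the
# sketch runs end to end; real sensor data replaces this in practice.
NUM_WINDOWS, NUM_CHANNELS, WINDOW_LEN, NUM_CLASSES = 500, 6, 50, 5
y = np.arange(NUM_WINDOWS) % NUM_CLASSES
windows = rng.normal(size=(NUM_WINDOWS, NUM_CHANNELS, WINDOW_LEN))
windows += y.reshape(-1, 1, 1)

def time_domain_features(window):
    """Aggregate each sensor channel into simple time-domain statistics."""
    return np.concatenate([
        window.mean(axis=1),  # per-channel mean
        window.std(axis=1),   # per-channel standard deviation
        window.min(axis=1),   # per-channel minimum
        window.max(axis=1),   # per-channel maximum
    ])

X = np.array([time_domain_features(w) for w in windows])  # shape (500, 24)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
acc = clf.score(X_test, y_test)
print(f"hold-out accuracy: {acc:.2f}")
```

The same feature matrix can be handed to any of the other classifiers the abstract lists (SVM, XGB, AdaBoost, NB, KNN) by swapping the estimator; the feature-extraction step stays unchanged.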