Human Activity Recognition Using Accelerometer and Gyroscope Sensors

— Mobile phones are pervasive devices whose compact components provide considerable processing power for efficient computation. Among the components that make a mobile phone robust are its sensors: proximity sensors, temperature sensors, accelerometers, gyroscopes and many more. These sensors have opened up new directions in data mining and data analytics, enabling people to use sensor data to perform various tasks. One such task is movement detection, termed activity recognition. In this paper an existing, publicly available dataset is used, consisting of 10 volunteers wearing a pair of accelerometers and gyroscopes close to the right lower arm and another pair close to the left ankle. The subjects were asked to perform 12 exercises: standing still, sitting and relaxing, lying down, walking, climbing stairs, waist bends forward, frontal elevation of arms, knees bending (crouching), cycling, jogging, running, and jumping front and back. Eleven features were extracted from the raw data collected from the sensors. In this paper, a novel automated method for the classification of human activities is developed, using wearable sensors of the kind also found in most modern mobile phones. Features are extracted from the recordings of individual sensors as well as combinations of sensors. The extracted features are classified using six popular classifiers: K-Nearest Neighbors (KNN), Naïve Bayes (NB), Support Vector Machine (SVM), Conditional Inference Tree (C-Tree), J48 and Random Forest (RF). The experimental results are tabulated and analyzed.
Activity recognition is critical for detecting, and rapidly reporting, irregular physical movements of a person's body.


I. INTRODUCTION
Mobile phones are ubiquitous and are among the fastest growing and most technologically advanced devices. The components that make a phone smart are its sensors: GPS sensors, accelerometers, gyroscopes, proximity sensors, light sensors and fingerprint sensors are a few of the sensors built into many modern mobile devices. The presence of these powerful sensors in smartphones enables us to utilize their data to perform various tasks. One such task is recognizing activity by placing a phone in contact with the body and interpreting the data that the sensors produce.
In a world where the wellbeing of a person is a primary concern, it is necessary to maintain a continuous watch on a person's movements. Detection of movement is crucial for patients, because it is important to constantly monitor their daily routine, for example whether the person has sufficient rest or whether the person is active. This research serves the purpose of detecting the activity of patients. In hospitals, patients must be monitored consistently; to be informed of any irregular movements, it is essential to follow the patient's activity at every moment. Instead of the patient's caretaker having to leave the room in search of a nurse or a doctor, leaving the patient alone in a way that could prove fatal, a mechanism can inform the authorities of the patient's activity. Beyond medical and health usage, there are numerous other fields where such an application could be deployed. With the advancement of technology and data mining, activity recognition proves useful in defense, homes for the elderly, prisons, monitoring of children, and several other settings that require keeping a check on body movement.
Mobile phones are used here because they are affordable and have computing power comparable to that of larger equivalent devices. To classify the activity performed by a person, this paper uses a triaxial accelerometer and a triaxial gyroscope, because both are present in most smartphones today. A triaxial accelerometer returns the displacement of a body along its X, Y and Z axes. A triaxial gyroscope returns the rotation of a body about its X axis (move from side to side), Y axis (tilt back and forward) and Z axis (rotate from portrait to landscape and vice versa), as shown in Fig. 1.

II. LITERATURE REVIEW

Mobile phones are robust devices embedded with various sensors while remaining affordable [1]. The availability of these sensors has opened doors for data mining and data analytics. Their presence makes it possible to interpret the collected data in order to recognize various everyday tasks performed by humans, which is labeled activity recognition. The accelerometer is among the most commonly used sensors in these kinds of experiments, since its data delivers high accuracy [3]. Activity recognition is done by first asking the subjects to perform the required exercises; features are then extracted from the recorded sensor data so that the machine learning algorithms can represent the information better, and a classifier is trained on them to detect the same activities later [3]. There have been numerous other implementations in the area of activity detection. Each study uses different sensors at distinct locations, with different features extracted and evaluated against different machine learning algorithms, yielding distinct results for each trial [6]-[8]. It has been found that activities can be recognized using relatively few features [4], [5], [9].
Studies have also examined whether using more features produces better results [10], [11]. Table I displays a comparative analysis of earlier work on detecting activities using phone sensors. The table lists the reference paper ID, the sensors used, the activities to be predicted, the features extracted from the raw sensor data, and the machine learning algorithms applied to classify and detect the activities.

III. DATASET DESCRIPTION
For predicting the activities, a publicly available dataset is utilized [12]. The dataset consists of recordings of the body movements of ten volunteers performing 12 activities, collected using body-wearable sensors: smart electronic devices worn on the body with the help of elastic straps. The data was recorded from sensors placed on each subject's chest, right lower arm and left ankle. In total, nine sensors were fixed at different locations: an accelerometer at the chest (XYZ), left ankle (XYZ) and right lower arm (XYZ); a gyroscope at the left ankle (XYZ) and right lower arm (XYZ); a magnetometer at the left ankle (XYZ) and right lower arm (XYZ); and two electrocardiography sensors to monitor the electrical signals of the heart, yielding a total of 23 values for each record. The information in brackets indicates the axes read from each sensor. The graphs in Fig. 2 and Fig. 3 show the variation of the accelerometer data along the different axes for standing, sitting and lying down, and of the gyroscope data along the different axes for the same activities; the three axis values are drawn on the Y axis against the record count on the X axis. In this work, only the accelerometer and gyroscope sensors placed at the right lower arm and left ankle are considered, because these sensors are embedded in most modern mobile phones and because both static and dynamic activities involve pronounced movements of the hands and legs.
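The sensor layout above can be sketched as a column map over the 23-value records. This is an illustrative sketch only: the exact column order is an assumption, not specified in this paper, so the indices below are hypothetical.

```python
import numpy as np

# Hypothetical column layout for one 24-column record (23 sensor values
# plus an activity label); the ordering is an assumption for illustration.
COLUMNS = {
    "chest_acc":  slice(0, 3),    # chest accelerometer X, Y, Z
    "ecg":        slice(3, 5),    # two ECG leads
    "ankle_acc":  slice(5, 8),    # left-ankle accelerometer X, Y, Z
    "ankle_gyro": slice(8, 11),   # left-ankle gyroscope X, Y, Z
    "ankle_mag":  slice(11, 14),  # left-ankle magnetometer X, Y, Z
    "arm_acc":    slice(14, 17),  # right-lower-arm accelerometer X, Y, Z
    "arm_gyro":   slice(17, 20),  # right-lower-arm gyroscope X, Y, Z
    "arm_mag":    slice(20, 23),  # right-lower-arm magnetometer X, Y, Z
}
LABEL_COL = 23

def select_sensors(records, names=("arm_acc", "arm_gyro",
                                   "ankle_acc", "ankle_gyro")):
    """Keep only the arm/ankle accelerometer and gyroscope columns,
    as used in this work, plus the activity labels."""
    data = np.hstack([records[:, COLUMNS[n]] for n in names])
    labels = records[:, LABEL_COL].astype(int)
    return data, labels

# Tiny synthetic example: 5 records of 24 values each.
demo = np.arange(5 * 24, dtype=float).reshape(5, 24)
X, y = select_sensors(demo)
print(X.shape)  # (5, 12): 4 sensors x 3 axes
```

The same selection function would also serve analyses that keep only one sensor or one location, by passing a shorter `names` tuple.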

IV. FEATURE EXTRACTION AND SEGMENTATION
The raw data collected from the pair of sensors cannot be applied directly to the machine learning algorithms [13]; hence it is necessary to extract features. Features were extracted from the raw sensor values over windows of 50 records each, and each window was labeled with the activity covering more than 50% of it. Fifty records are sufficient to capture the variety of body motions within each activity. The numbers in brackets indicate the number of features, one for each axis of a sensor, for a total of 34 features for every sensor at each location. These features give meaning to the raw time-series data collected from the sensors when fed to a machine learning algorithm.
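The windowing and majority-label rule described above can be sketched as follows. The statistics computed here (mean, standard deviation, minimum, maximum per axis) are a representative subset chosen for illustration, not the paper's full 34-feature set.

```python
import numpy as np

def window_features(data, labels, width=50):
    """Segment the signal into fixed windows of `width` records and
    compute simple per-axis statistics (an illustrative subset of the
    paper's features). A window is labeled with the activity covering
    more than 50% of its records."""
    X, y = [], []
    for start in range(0, len(data) - width + 1, width):
        seg = data[start:start + width]
        win_labels = labels[start:start + width]
        vals, counts = np.unique(win_labels, return_counts=True)
        if counts.max() <= width // 2:
            continue  # no activity covers >50% of the window; skip it
        feats = np.concatenate([
            seg.mean(axis=0),   # mean per axis
            seg.std(axis=0),    # standard deviation per axis
            seg.min(axis=0),    # minimum per axis
            seg.max(axis=0),    # maximum per axis
        ])
        X.append(feats)
        y.append(vals[counts.argmax()])
    return np.array(X), np.array(y)

# 200 records of 3-axis data -> 4 windows of 12 features each.
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 3))
labels = np.repeat([1, 1, 2, 2], 50)
X, y = window_features(data, labels)
print(X.shape, y.tolist())  # (4, 12) [1, 1, 2, 2]
```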

V. IMPLEMENTATION AND RESULTS
The extracted features are fed to the machine learning algorithms to classify the activities. Two settings are compared to determine which delivers the best results. Setting 1: a single subject is used for both training and testing; 70% of that subject's data is used for training and tested against the remaining 30% from the same subject. Setting 2: nine subjects are used to train and the remaining subject is used to test.
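The two settings amount to a within-subject split and a leave-one-subject-out split. A minimal sketch, assuming per-subject feature matrices (the subject data here is synthetic):

```python
import numpy as np

def setting1_split(X, y, train_frac=0.7, seed=0):
    """Setting 1: train and test on the same subject (70/30 random split)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    cut = int(train_frac * len(X))
    tr, te = idx[:cut], idx[cut:]
    return X[tr], y[tr], X[te], y[te]

def setting2_split(subjects, test_subject):
    """Setting 2: train on nine subjects, test on the held-out one.
    `subjects` maps subject id -> (features, labels)."""
    X_tr = np.vstack([X for s, (X, _) in subjects.items()
                      if s != test_subject])
    y_tr = np.concatenate([y for s, (_, y) in subjects.items()
                           if s != test_subject])
    X_te, y_te = subjects[test_subject]
    return X_tr, y_tr, X_te, y_te

# Synthetic features: 10 subjects, 20 windows of 5 features each.
subjects = {s: (np.full((20, 5), s, dtype=float), np.full(20, s % 3))
            for s in range(10)}
X_tr, y_tr, X_te, y_te = setting2_split(subjects, test_subject=9)
print(X_tr.shape, X_te.shape)  # (180, 5) (20, 5)
```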
For both of the settings listed above the studies have been conducted based on three analyses.
1. Analysis of each sensor used individually. As discussed in Section III, the dataset used here covers four sensors placed at two locations.
2. Analysis of the pair of sensors (accelerometer and gyroscope) taken together at each of the two locations.
3. Analysis of the combination of the same type of sensor across locations.

Table III reflects the data for analysis 1, where the activity is predicted using each sensor individually: right lower arm accelerometer, right lower arm gyroscope, left ankle accelerometer and left ankle gyroscope. Experiments were also conducted with combinations of sensors at each location of the body, under the same settings discussed earlier. In analysis 2, the pair of sensors at each location is considered together, to compare whether the individual sensors give the best results or whether combining both sensors can enhance them. Furthermore, in analysis 3 the study combines sensors of the same type across locations: right lower arm accelerometer with left ankle accelerometer, and right lower arm gyroscope with left ankle gyroscope. Table V shows the results for the same.

The bar graph in Fig. 4 shows the accuracy obtained for each activity under each of the machine learning algorithms. Different algorithms give different prediction accuracies; for standing still, the K-nearest neighbors algorithm performs worse than the remaining five. Accuracy cannot reach 100% because of conflict and overlap in body movements between activities: for standing and sitting there is little movement in the legs, since both are static, which makes the two difficult to distinguish. Hence, to achieve higher accuracy rates it is necessary to use the relevant sensors at the correct locations.
For activities related to hand movements, the sensors should be placed near the arms. In a similar way, it is important to place sensors near the legs for activities that involve leg movements, such as running and jogging.
In this study the results are evaluated and displayed in Table III and Table IV using six different machine learning algorithms, each readily available as a package for the RStudio toolkit. The machine learning algorithms tested in this paper are K-Nearest Neighbors, Naïve Bayes, Support Vector Machine, Conditional Inference Tree, J48 and Random Forest. These algorithms can be imported into R through easily installed packages: MASS, caret, e1071, rminer, party, RWeka and randomForest.
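The paper runs these classifiers in R; as a rough scikit-learn analogue of the comparison loop (an illustrative sketch, not the paper's setup: scikit-learn has no conditional inference tree or J48, so a plain decision tree stands in for both, and the Iris dataset stands in for the extracted activity features):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

classifiers = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "NB": GaussianNB(),
    "SVM": SVC(kernel="rbf"),
    "Tree": DecisionTreeClassifier(random_state=0),  # C-Tree/J48 stand-in
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
}

# Placeholder data; in the paper this would be the windowed feature set.
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)

accuracies = {}
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)                    # train on the 70% split
    accuracies[name] = clf.score(X_te, y_te)  # accuracy on the 30% split
    print(f"{name}: {accuracies[name]:.3f}")
```

The same loop, run once per sensor combination and per setting, would reproduce the shape of the comparison tables.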
This work concludes that, when individual sensors are considered, a gyroscope placed at the right lower arm gives the best results for detecting activities, with an average accuracy of 94.47% and NB producing the best result of 98.58%. It is also found that setting 1 results in better activity prediction than setting 2. RF yields the best results for the right lower arm accelerometer and NB for the right lower arm gyroscope, with an accuracy of 98.58% each. Across individual sensors, the RF algorithm gives the highest average accuracy of 95.03%. A single sensor delivers satisfactory but not optimal results, so further experiments were performed combining both sensors. The expectation was met, with NB and RF each yielding 100% for the right lower arm accelerometer; again, setting 1 produces better results than setting 2. For the combination of two sensors, RF provides an average accuracy of 97.15% under setting 1.

VI. CONCLUSION
Mobile phones are compact yet powerful devices able to perform numerous tasks. In this paper, body-worn sensors have been exploited to recognize the activities performed by a person, and two different settings are used to discuss the classification results. This research shows that, for effective activity recognition, it is important to place the sensors at locations that undergo pronounced movement during the activity of interest, and that a combination of sensors provides the best results for a higher rate of accuracy. From analysis 3, the inference is that combining accelerometers at different locations gives the desired results and delivers better outcomes than analyses 1 and 2. Across all the analyses, RF produces the best results, with NB next; for combinations of the same type of sensor, however, J48 gives better results than NB. Hence, the most favorable machine learning algorithms for activity recognition are RF, NB and J48. It is also noticed that the more sensors are combined, the better the results; to obtain better results it is likewise important to place the right sensors at the right locations. The outcome of this research has the potential to support an application that detects a patient's activity.