
Hold & Sign: A Novel Behavioral Biometrics for Smartphone User Authentication

Attaullah Buriro∗, Bruno Crispo†∗, Filippo DelFrari∗ and Konrad Wrona‡

∗Department of Information Engineering and Computer Science (DISI), University of Trento, Via Sommarive, 38123, Italy
Email∗: {attaullah.buriro, filippo.delfrari, bruno.crispo}@unitn.it

†DistrNet, KU Leuven, Belgium
Email†: [email protected]

‡NATO Communications and Information Agency, The Hague, Netherlands
Email‡: [email protected]

Abstract—The search for new authentication methods to replace passwords for modern mobile devices such as smartphones and tablets has attracted a substantial amount of research in recent years. As a result, several new behavioral biometric schemes have been proposed. Most of these schemes, however, are uni-modal. This paper presents a new, bi-modal behavioral biometric solution for user authentication. The proposed mechanism takes into account micro-movements of a phone and movements of the user's finger during writing or signing on the touchscreen. More specifically, it profiles a user based on how he holds the phone and based on the characteristics of the points being pressed on the touchscreen, and not the produced signature image. We have implemented and evaluated our scheme on commercially available smartphones. Using a Multilayer Perceptron (MLP) 1-class verifier, we achieved ≈ 95% True Acceptance Rate (TAR) with 3.1% False Acceptance Rate (FAR) on a dataset of 30 volunteers. Preliminary results on usability show a positive opinion about our system.

Index Terms—Biometrics, Authentication, Human-Computer Interaction

I. INTRODUCTION

Smartphones and tablets are widely used personal devices [1]. They generate and store an increasing amount of sensitive information. Furthermore, smartphones and tablets are used to perform security-critical transactions such as mobile payments, remote access to a company's intranet, etc. Existing authentication methods based on PINs and passwords are not convenient for the type of user interactions that characterize smartphones (very frequent [2] and short [3]); as a result, more and more users simply do not use any security (up to 40.9% according to a recent study [4]). Thus, the focus of security research has shifted towards biometric-based authentication schemes as a possible alternative. In particular, behavioral biometrics looks attractive since it is easy to implement, requiring only the standard hardware provided by most modern smartphones.

A handwritten signature establishes a user's identity based on how he writes his name. This behavioral modality is very popular because it is socially and legally accepted as a means of personal identification in everyday life; however, its implementations require dedicated pads [5]. Modern touchscreens make it feasible to implement handwritten signatures in smartphones and tablets. However, like all other biometric modalities, this behavioral modality faces two basic challenges: intra-class variability and inter-class similarity. Intra-class variability refers to variations in signatures of the same person, while inter-class similarity refers to the similarity of signatures of two or more persons, found incidentally or intentionally as a result of an adversary's targeted attack. In smartphone user authentication scenarios, intra-class variation is challenging due to the comparatively smaller display area and the quality of the touchscreen, which result in large intra-class variations [6] [7]. Intra-class variations and inter-class similarity lead respectively to higher FRR and FAR.

This paper presents a smartphone user authentication system based on how a user holds his phone while signing on its touchscreen. The system profiles the pressed screen points (so-called touch-points) and the micro-movements of the phone during the signing process in order to verify the user's identity. Although typing a PIN is easier than writing something on the touchscreen, a PIN can be forgotten, whereas most users remember their own name. Moreover, launching shoulder surfing and smudge attacks to steal PINs and passwords is relatively easy. In our method, even if an attacker knows what is being written, access is still denied because he cannot mimic the phone movements of the legitimate user.

We registered the phone micro-movements using multiple physical sensors available on most smartphones. These sensors are triggered when a user starts writing (first touch-point) and stop as the user finishes writing (last touch-point). We do not take into account the signature image because it can be copied and mimicked [8]. We tested our mechanism over a dataset collected from 30 users, by applying the anomaly detection (one-class) approach. Results show that, using MLP as the verifier, we achieve ≈ 95% TAR and 3.1% FAR.

The main contributions of this paper are:

• The proposal and implementation of Hold & Sign, a new behavioral biometric user authentication mechanism, based on how the user holds his smartphone in his hand and signs his name on the smartphone touchscreen. It combines two behavioral modalities. Furthermore, it implements dynamic handwritten signature verification using multiple sensors that do not require the use of a dedicated device to capture the signature.

• Experimental validation, considering how different situations in which a user can use the device can affect the robustness and accuracy of the biometrics.

• Performance and power consumption analysis during the acquisition, training and testing phases. A preliminary usability analysis was carried out to assess how end-users reacted to our solution.

The rest of the paper is organized as follows. Section II describes related work. Section III presents our solution. Section IV illustrates the experimental analysis. Section V discusses the prototype implementation and some operational concerns such as power consumption. Section VI presents an analysis of users' experiences with the prototype. Section VII discusses the limitations of our approach. Section VIII concludes the paper.

II. RELATED WORK

Researchers have proposed several biometric-based solutions for smartphone user authentication. In this section, we survey the most relevant approaches.

A. Sensor-Based Authentication

Physical three-dimensional sensors – such as accelerometers, gyroscopes, and orientation sensors – are built into most smartphones. These sensors have been used to identify users based on their walking patterns [9], arm movements [10], arm movement and voiceprints [11], gesture models [12], and free-text typing patterns [13].

Li et al. [14] investigated the role of three sensors, namely the accelerometer, orientation sensor, and compass, in addition to touch gestures, in continuous user authentication. They propose a transparent mechanism which profiles finger movements and interprets the sensed data as different gestures. It then trains a Support Vector Machine (SVM) classifier with those gestures and performs the authentication task. The authors achieved 95.78% gesture recognition accuracy on a database of 75 users.

Zhu et al. [12] propose a mobile framework, SenSec, which makes use of sensory data from the accelerometer, orientation sensor, gyroscope, and magnetometer and constructs a user gesture model of phone usage. Based on this gesture model, SenSec continuously computes a sureness score and authorizes the real user to enable/disable certain features to protect his privacy. Users were asked to follow a script, i.e. a sequence of actions; the sensory data was collected during the entire user interaction. SenSec identified a valid user with 75% accuracy and detected an adversary with an accuracy of 71.3% (with 13.1% FAR) based on 20 recruited users.

Buriro et al. [13] authenticate users using a sensor-enhanced touch-stroke mechanism based on two human behaviors: how a person holds his phone and how he types his 4-digit free-text PIN. Using a Bayesian classifier and a Random Forest (RF) classifier, they achieved a 1% Equal Error Rate (EER).

A recent study [15] makes use of Hand Movement, Orientation, and Grasp (HMOG) to continuously authenticate smartphone users. HMOG transparently collects data from the accelerometer, gyroscope, and magnetometer when a user grasps, holds and taps on the smartphone screen. On a dataset of 100 test subjects (53 male and 47 female), HMOG achieved its lowest EER of 6.92% in the walking state with the SVM verifier.

All the solutions given above use some of the three-dimensional sensors available in most smartphones and confirm the potential of these sensors for user authentication. Our solution uses 3-dimensional built-in sensors in combination with handwritten signatures to achieve high accuracy for authentication.

B. Touch-Based Authentication

User authentication based on touch interaction is a comparatively less explored area. Touch interactions can be used both for one-shot login and for continuous user authentication [16]. Touch-based features may include time, position, the size of touch, pressure and touch velocity, etc. De Luca et al. [17] profile touch data generated during different slide operations for unlocking the smartphone screen. Using the Dynamic Time Warping (DTW) algorithm, they achieve 77% authentication accuracy.

Angulo et al. [18] suggest an improvement to phone lock patterns. Their system authenticates users based on the lock patterns combined with the touch data associated with those lock patterns. They try multiple classifiers and achieve an EER of 10.39% using a Random Forest classifier.

Sae-Bae et al. [19] use specific five-finger touch gestures. They achieve an accuracy of 90% on the Apple iPad. However, the method is not feasible for the small touchscreens of typical smartphones. Shahzad et al. [20] consider customized slide-based gestures to authenticate a smartphone's users. Their study yielded an EER of 0.5% with a combination of just three slide movements. Sun et al. [21] require users to draw an arbitrary pattern with their fingers in a specific region of the screen for unlocking their smartphones. Users were authenticated on the basis of geometric features extracted from their drawn curves along with their behavioral and physiological modalities. The solution of Sae-Bae and Memon [22] is conceptually similar to our work. This uni-modal online signature verification scheme extracts histogram features from the user signature and performs user authentication. The lowest EER achieved was 5.34% across different sessions.

Our solution relies on the screen touch-points being pressed and the velocity of finger movement during the signing; neither the signature image nor its geometry is used. It does not require the user to draw specific patterns for authentication; the user can simply use any pattern that is convenient or well-known to him, e.g. signing his name. This increases the usability of our solution, as the user is not required to perform an initial learning of an unknown pattern in order to memorize it and for his signing features to become stable and reliable.

C. Signature-Based Authentication

Some work has been done regarding signature-based biometric authentication on smartphones. Koreman and Morris [23] propose a continuous authentication method based on multiple modalities, namely the face, voice, and signature on the touchscreen. Their study yielded an EER of 2.3%, 17%, 4.3% and 0.6% for voice, face, signature and fused modalities respectively.

Vahab et al. [24] implement online signature verification using an MLP classifier on a subset of Principal Component Analysis (PCA) features. The validation was performed using 4000 signature samples from the SIGMA database [25] and yielded an FAR of 7.4% and an FRR of 6.4%.

In recent work by Xu et al. [26], users were asked to write different letters of the alphabet on the screen; 42 handwriting features were extracted using a handwriting forensics approach (which focuses on the geometry of writing [27]). Those features were then classified using SVM. The proposed solution achieved an EER of 5.62%. Additionally, the touch slide (touch-points stimulated when writing a letter) yielded an EER of 0.75%.

Images of handwritten signatures have been used by SignEasy as an authentication method in iOS 8 [28], allowing users to transparently add their electronic signatures to important documents. Similarly, a signature recognition system [29] performs user identification based on user signatures captured via a smartphone touchscreen or via a dedicated signature capturing device. It verifies signatures by computing the similarity score between the query signature and the stored signature template. Additionally, this system provides client-server solutions based on signature images. None of these systems uses phone movements and/or touch features for user authentication.

Our solution is different because it is bi-modal and thus intuitively more secure than uni-modal ones; it takes into account phone movements and finger movements during the signing process. Spoofing only one of the two modalities would not suffice to grant access to the phone.

III. PROBLEM FORMULATION AND SOLUTION

In this section, we describe the main building blocks of our solution.

A. Threat Model

We consider an adversarial model in which the attacker is already in possession of the device. The attacker can be a stranger who steals or finds the smartphone. Similarly, the attacker can be a family member, close friend or co-worker (who knows the implemented authentication mechanism). The goal of both types of attacker is the same: gaining access to the device and its contents. This threat model does not include the possibility of opening the phone and stealing a genuine biometric template. We do address this problem by means of cryptography and trusted storage; however, this issue is outside the scope of this paper.

B. Our Solution

Our solution (see Figure 2) exploits the phone movements in hand and the finger movements on the touchscreen, as shown in Figure 1. In particular, we consider all the touch-points pushed for the entire signature and the velocity of the finger movement. All the physical sensors are triggered and kept running during the whole signing process (from first to last touch-point) on the touchscreen. Obtained sensor readings are then preprocessed to extract useful features. As we propose a bi-modal system, we need to combine the extracted features from both the built-in sensors and the touchscreen to profile user behavior. Our model involves feature selection, which entails selecting the subset of productive features to be used for user authentication. A user profile template is formed based on the selected feature subset and is then stored in the main database. These behavioral vectors are later matched with the vector of the test sample in order to authenticate/reject the claimant.

Fig. 1: Different phone positions during the signing process

Fig. 2: Our proposed authentication system

C. Considered Data Sources

1) Sensors: Related work [13] [11] [30] [31] [15] shows that each user has a unique way of holding and/or picking up his smartphone. This movement behavior can be profiled only with three-dimensional sensors.

Our solution relies on three built-in three-dimensional sensors: the accelerometer, the gravity sensor, and the magnetometer. We derived two additional sensor readings from the accelerometer by applying two filters¹ (low pass and high pass) with the parameter α = 0.5, and call the outcomes Low-Pass Filter (LPF) and High-Pass Filter (HPF) accelerometer readings. Thus, in total, we have three variants of accelerometer sensor readings: Raw, LPF, and HPF accelerometer readings. A raw accelerometer reading produces raw values including gravity values. An LPF accelerometer reading measures the apparent transient forces acting on the phone, caused by the user activity, and an HPF reading produces the exact acceleration applied by the user on the phone. The gravity sensor provides the magnitude and direction of the gravity force applied on the phone. The coordinate system and the unit of measurement of the gravity sensor are the same as those of the accelerometer sensor. The magnetometer sensor measures the strength and/or direction of the magnetic field in three dimensions. It differs from the compass in that it does not point north. The magnetometer measures the Earth's magnetic field if the device is placed in an environment absolutely free of magnetic interference. All the above sensors generate continuous streams in the x, y and z directions. We have added a fourth dimension to all of these sensors and name it magnitude. Magnitude has been tested in the context of smartphone user authentication [13] [32] [15] and has proved to be very effective for classification accuracy. The magnitude is mathematically represented as:

S_M = \sqrt{a_x^2 + a_y^2 + a_z^2}    (1)

where S_M is the resultant dimension and a_x, a_y and a_z are the accelerations along the X, Y and Z directions.

¹http://developer.android.com/guide/topics/sensors/sensors_motion.html
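As an illustration of the derived readings and the magnitude dimension, the sketch below applies the complementary filter pattern from the Android sensor guide referenced in footnote 1 with α = 0.5, and evaluates Eq. (1). It is a sketch under those assumptions, not the authors' code; names are illustrative.

```java
// Sketch of the derived accelerometer readings (assumption: filter pattern from
// the Android sensors guide, with alpha = 0.5 as stated in the paper).
public class DerivedReadings {
    private static final float ALPHA = 0.5f;
    private final float[] lpf = new float[3];   // low-pass (gravity-like) component
    private final float[] hpf = new float[3];   // high-pass (user-applied) component

    /** raw = one accelerometer sample {ax, ay, az}. */
    void update(float[] raw) {
        for (int i = 0; i < 3; i++) {
            lpf[i] = ALPHA * lpf[i] + (1 - ALPHA) * raw[i]; // LPF reading
            hpf[i] = raw[i] - lpf[i];                       // HPF reading
        }
    }

    /** Fourth dimension (Eq. 1): magnitude of a 3-axis sample. */
    static double magnitude(float[] v) {
        return Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
    }
}
```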

2) TouchScreen: The touchscreen provides the user interface for the operation of the device. Devices can be categorized as single-touch or multi-touch devices. A finger and/or a pen interacts with the touchscreen. In Android, the library class MotionEvent tracks the motion of different pointers such as a finger, stylus, mouse, trackball, etc. The event triggered as a result of a touch is reported by an object of this class. This object carries a specific action code, the location of the touch in the XY coordinates of the touchscreen, and information about the pressure, size and orientation of the touched area. The action code represents the state of the touch action, e.g. Action_Down stands for the start of a touch action while Action_Up represents the end of a touch action. The Android VelocityTracker class is used to track the motion of the pointer on the touchscreen. The class methods getXVelocity() and getYVelocity() are used to acquire the velocities of the pointer on the touchscreen in the X and Y axes respectively.
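A sketch of how these touch APIs fit together during one signature is shown below. This is not the authors' implementation; the class name and the storage step are illustrative.

```java
// Sketch: collecting touch-points and finger velocity during one signature stroke.
import android.view.MotionEvent;
import android.view.VelocityTracker;

public class SignTouchCollector {
    private VelocityTracker tracker;

    /** Called from a View's onTouchEvent(). Returns true to consume the event. */
    public boolean onTouchEvent(MotionEvent event) {
        switch (event.getActionMasked()) {
            case MotionEvent.ACTION_DOWN:                 // first touch-point: start tracking
                tracker = VelocityTracker.obtain();
                tracker.addMovement(event);
                break;
            case MotionEvent.ACTION_MOVE: {               // intermediate touch-points
                tracker.addMovement(event);
                tracker.computeCurrentVelocity(1000);     // pixels per second
                float vx = tracker.getXVelocity();
                float vy = tracker.getYVelocity();
                float x = event.getX(), y = event.getY();
                // Store (x, y, vx, vy) for feature extraction (e.g. the Table I features).
                break;
            }
            case MotionEvent.ACTION_UP:                   // last touch-point: stop tracking
                tracker.recycle();
                tracker = null;
                break;
        }
        return true;
    }
}
```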

D. Considered Classifiers

Generally, the problem of biometric user authentication is solved in two ways: with binary classification (training with two classes) or with anomaly detection (training with only the target class). Binary classifiers are very powerful in discriminating the true user from a given training set, whereas anomaly detectors check for deviation from the legitimate user's behavior and authenticate/reject on the basis of this deviation. In order to train a binary classifier, the biometric data from both the owner and the non-owner of the smartphone is required, which is an unrealistic assumption in the real world, since the sharing of biometric information between smartphone users may lead to privacy concerns. Hence, we used anomaly detectors (1-class verifiers) for user authentication [15] [33].

We chose four different verifiers, i.e. BayesNET, K-Nearest Neighbor (KNN), Multilayer Perceptron (MLP) and Random Forest (RF), because they were found to be very effective in previous studies. The BayesNET and RF verifiers were used with their default settings. However, the parameters of both MLP and KNN were optimized, because with default parameters they performed quite poorly. We used K = 3 in KNN and similarly used 3 hidden layers in MLP. We used all of our verifiers wrapped into Weka's meta classifier, the OneClassClassifier².
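A minimal sketch of this setup is shown below. It assumes the Weka OneClassClassifier package referenced in footnote 2 is available and that the owner's training data sits in a hypothetical ARFF file with a nominal class attribute; the hidden-layer sizes are illustrative and this is not the authors' code.

```java
// Sketch: wrapping an MLP base verifier in Weka's OneClassClassifier
// (assumption: standard Weka API; file name and layer sizes are hypothetical).
import weka.classifiers.functions.MultilayerPerceptron;
import weka.classifiers.meta.OneClassClassifier;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class HoldAndSignVerifier {
    public static void main(String[] args) throws Exception {
        Instances owner = DataSource.read("owner_features.arff"); // owner-only samples
        owner.setClassIndex(owner.numAttributes() - 1);

        MultilayerPerceptron mlp = new MultilayerPerceptron();
        mlp.setHiddenLayers("3,3,3");            // 3 hidden layers (sizes are an assumption)

        OneClassClassifier verifier = new OneClassClassifier();
        verifier.setClassifier(mlp);             // MLP as the base (target-class) learner
        verifier.buildClassifier(owner);

        // A test sample is accepted if it is predicted as the target class.
        double pred = verifier.classifyInstance(owner.instance(0));
        System.out.println("predicted class index: " + pred);
    }
}
```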

E. Success Metric

True Acceptance Rate (TAR): The proportion of attempts of a legitimate user correctly accepted by the system.
False Acceptance Rate (FAR): The proportion of attempts of an adversary wrongly granted access to the system. It can be computed as FAR = 1 − TRR.
False Rejection Rate (FRR): The proportion of attempts of a legitimate user wrongly rejected by the system. It can be computed as FRR = 1 − TAR.
True Rejection Rate (TRR): The proportion of attempts of an adversary correctly rejected by the system.
Failure to Acquire Rate (FTAR): The proportion of failed recognition attempts (due to system limitations). Reasons for this failure could be the inability of the sensor to capture, insufficient sample size, number of features, etc.
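As a quick illustration of how these rates are computed from raw counts (the numbers below are hypothetical, not the paper's results):

```java
// Worked example: TAR/FRR from genuine attempts, FAR/TRR from impostor attempts.
public class Metrics {
    public static void main(String[] args) {
        int genuineAccepted = 95, genuineTotal = 100;    // hypothetical counts
        int impostorAccepted = 3, impostorTotal = 100;
        double tar = (double) genuineAccepted / genuineTotal;   // 0.95
        double frr = 1.0 - tar;                                 // 0.05
        double far = (double) impostorAccepted / impostorTotal; // 0.03
        double trr = 1.0 - far;                                 // 0.97
        System.out.printf("TAR=%.2f FRR=%.2f FAR=%.2f TRR=%.2f%n", tar, frr, far, trr);
    }
}
```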

IV. EXPERIMENTAL ANALYSIS

A. Data Collection

Android supports data collection in both fixed and customized intervals after registering the sensors with registerListener()³. Such intervals are often termed Sensor Delay Modes. There are four delays: SENSOR_DELAY_FASTEST with a fixed delay of 0s, SENSOR_DELAY_GAME with a fixed delay of 0.02s, SENSOR_DELAY_UI with a fixed delay of 0.06s and SENSOR_DELAY_NORMAL with a fixed delay of 0.2s.

We developed an Android application, Hold & Sign, which can be installed on any Android smartphone starting from version 4.0.4. We used SENSOR_DELAY_GAME, since we observed that SENSOR_DELAY_NORMAL and SENSOR_DELAY_UI were too slow and some of the sensors were not able to sense the user interactions in these two modes. SENSOR_DELAY_FASTEST mode could have been used as well, but it introduces noise in the data collection.
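The registration step looks roughly as follows (a sketch assuming a plain Activity; wiring to the signature view and data buffering are omitted):

```java
// Sketch: registering the sensors used by the paper at SENSOR_DELAY_GAME.
import android.app.Activity;
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.Bundle;

public class SensingActivity extends Activity implements SensorEventListener {
    private SensorManager sensorManager;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        sensorManager = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
    }

    /** Call on the first touch-point; call stopSensing() on the last touch-point. */
    void startSensing() {
        int rate = SensorManager.SENSOR_DELAY_GAME;   // 0.02s, as chosen in the paper
        sensorManager.registerListener(this,
                sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER), rate);
        sensorManager.registerListener(this,
                sensorManager.getDefaultSensor(Sensor.TYPE_GRAVITY), rate);
        sensorManager.registerListener(this,
                sensorManager.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD), rate);
    }

    void stopSensing() {
        sensorManager.unregisterListener(this);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // event.values holds the x, y, z components; buffer them per sensor type.
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```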

We recruited 30 volunteers (22 male and 8 female); the majority of them are either Master's or Ph.D. students but not security experts. In order to have diversity, we recruited users from several nationalities. The purpose of the experiment and the description of our proposed solution were clearly explained to each user individually. The process of data collection and how data are stored were carefully explained. Each volunteer provided explicit consent to participate in the experiment. We collected data in three different activities, sitting, standing and walking, with a Google Nexus 5.

²http://weka.sourceforge.net/doc.packages/oneClassClassifier/weka/classifiers/meta/OneClassClassifier.html

³http://developer.android.com/reference/android/hardware/SensorManager.html

B. Features

We gathered 4 data streams from every 3-dimensional sensor (except the touchscreen), and we extracted 4 statistical features, namely mean, standard deviation, skewness, and kurtosis, from every data stream. Data from every sensor was transformed into a 4-by-4 feature matrix; in total we obtained 16 features from all four dimensions of each sensor. Similarly, we extracted 13 features from the touchscreen data. The extracted features from touchscreen data are shown in Table I.
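The per-stream statistics can be computed as in the sketch below. The paper does not state which exact estimators were used, so the population moments here are an assumption.

```java
// Sketch: mean, standard deviation, skewness and kurtosis of one sensor data stream.
public final class StreamFeatures {
    /** Returns {mean, std, skewness, kurtosis} for one dimension of a sensor stream. */
    static double[] extract(double[] x) {
        int n = x.length;
        double mean = 0;
        for (double v : x) mean += v;
        mean /= n;

        double m2 = 0, m3 = 0, m4 = 0;              // central moments
        for (double v : x) {
            double d = v - mean;
            m2 += d * d;
            m3 += d * d * d;
            m4 += d * d * d * d;
        }
        m2 /= n; m3 /= n; m4 /= n;

        double std = Math.sqrt(m2);
        double skewness = m3 / Math.pow(std, 3);
        double kurtosis = m4 / (m2 * m2);           // non-excess kurtosis (assumption)
        return new double[] {mean, std, skewness, kurtosis};
    }
}
```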

C. Features Fusion

In a study conducted by Jain et al. [34], the authors explain that there are five levels in a biometric system at which the acquired data can be fused: sensor, feature, match score, rank, and decision level. The fusion of data as early as possible may increase the recognition accuracy of the system. However, the fusion of data at the sensor level may not yield better results because of the presence of noise during data acquisition. Thus fusion at the feature level is expected to provide better results, because the feature representation communicates much more relevant information. The feature sets extracted from the data from multiple sources can be combined to form a new feature set. We used fusion at the feature level, in order to provide the maximum amount of relevant information to our recognition system. The fusion of the 16 features from each sensor makes a new feature vector, and we call this feature vector the pattern of the user's hold behavior. The length of this feature vector is 80 features (16 for each of the five used sensors). Similarly, the feature vector of the sign behavior is small (13 features, extracted from the captured touch-points through the touchscreen) and we call it a sign pattern. The length of the fused feature vector for both modalities is therefore 93 features.

D. Feature Subset Selection

To avoid overfitting and to address the curse of dimensionality, we performed feature subset selection. Feature subset selection is the process of choosing the best possible subset, i.e. the set that gives the maximum accuracy, from the original feature set. Note that even if we achieve the same accuracy with reduced features, smaller feature vectors decrease computation time and allow the classifier to decide faster.

We evaluated our feature set (93 features for fused behaviors) with the Recursive Feature Elimination (RFE) feature subset selection method. We relied on scikit-learn⁴, a Python-based tool for data mining and analysis, for RFE feature subset selection.

⁴http://scikit-learn.org/stable/

TABLE I: List of selected features from touchscreen data

No. | Touch Feature
1 | StartX
2 | EndX
3 | StartY
4 | EndY
5 | AvgXVelocity
6 | AvgYVelocity
7 | MaxXVelocity
8 | MaxYVelocity
9 | STDX
10 | STDY
11 | DiffX
12 | DiffY
13 | EUDistance

Fig. 3: RFE feature selection for the sitting, standing and walking states. [Plot: cross-validation score (no. of correct classifications) vs. number of features selected; optimal number of features for fused data: 10 (sitting), 11 (standing), 11 (walking).]

The RFE classifier trains itself on the initial set of features and assigns weights to each of them. The features with the smallest weights are then pruned from the current feature set. The procedure is repeated until the intended number of features is eventually reached⁵. We applied RFE with 10-fold stratified cross-validation using an SVM classifier on the data of all activities for two classes. The plot (see Figure 3) shows the optimal number of features selected from fused data: 11 in the standing and walking states and 10 in the sitting state.

⁵http://scikit-learn.org/stable/modules/feature_selection.html
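The authors used scikit-learn's RFE; as a language-neutral illustration of the elimination loop described above, the sketch below prunes the lowest-weight feature each iteration. The weight function is caller-supplied and purely illustrative, not the SVM ranking used in the paper.

```java
// Sketch of the generic Recursive Feature Elimination loop.
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

public class RfeSketch {
    /** Repeatedly drops the feature with the smallest |weight| until targetSize remain. */
    static List<Integer> rfe(List<Integer> features,
                             Function<List<Integer>, double[]> trainAndWeigh,
                             int targetSize) {
        List<Integer> current = new ArrayList<>(features);
        while (current.size() > targetSize) {
            double[] w = trainAndWeigh.apply(current);   // one weight per surviving feature
            int worst = 0;
            for (int i = 1; i < w.length; i++)
                if (Math.abs(w[i]) < Math.abs(w[worst])) worst = i;
            current.remove(worst);                        // prune the weakest feature
        }
        return current;
    }

    public static void main(String[] args) {
        List<Integer> all = new ArrayList<>();
        for (int i = 0; i < 93; i++) all.add(i);          // 93 fused features
        // Dummy ranker: pretend a feature's weight equals its index.
        List<Integer> kept = rfe(all,
                feats -> feats.stream().mapToDouble(f -> f).toArray(), 11);
        System.out.println("kept: " + kept);
    }
}
```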

TABLE II: List of selected features from fused (bi-modal) data.

Sitting | Standing | Walking | Combined
MgX Mean | HPFMag Kurt | EndY | HPFY Mean
RAWY STD | RAWY STD | RAWY STD | HPFZ Mean
DiffX | DiffX | DiffX | GrZ Skew
StartY | StartX | RAWZ Mean | StartX
MgY Mean | EU Distance | STDY | EndX
StartX | RAWMag STD | HPFZ Mean | StartY
EndY | StartY | StartY | EndY
MgMag Mean | DiffY | HPFX Skew | MaxYVelocity
GrY Mean | HPFX Mean | HPFX Mean | AvgXVelocity
STDX | MgMag Mean | DiffY | STDY
- | EndX | HPFY Mean | DiffX

TABLE III: Results of different classifiers (averaged over all 30 users) in different activities.

Classifier | Sitting TAR | Sitting FAR | Standing TAR | Standing FAR | Walking TAR | Walking FAR
BN | 0.758 | 0.001 | 0.740 | 0.003 | 0.710 | 0.000
MLP | 0.797 | 0.001 | 0.790 | 0.004 | 0.790 | 0.000
IBk | 0.761 | 0.001 | 0.750 | 0.002 | 0.720 | 0.000
RF | 0.767 | 0.001 | 0.750 | 0.002 | 0.710 | 0.000

E. Analysis

We analyzed the data in two settings: (i) a verifying-legitimate-user scenario, and (ii) an attack scenario.

In the verifying-legitimate-user scenario, we train the system with the data from the owner class and then test the system with the patterns belonging to that class. The outcome can be either accept or reject. We used a 10-fold stratified cross-validation method for testing. In cross-validation, the dataset is randomized and then split into k (here k = 10) folds of equal size. In each iteration, one fold is used for testing and the other k − 1 folds are used for training the classifier. The test results are averaged over all folds, which gives the cross-validation estimate of the accuracy. This method is useful in dealing with small datasets. Using cross-validation we tested each available sample in our dataset. We report the results of this setting in terms of TAR and FRR.
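A compact sketch of this evaluation loop with Weka is shown below. It assumes the same OneClassClassifier package as in Section III-D, a hypothetical ARFF file of owner-only samples, and that the target class is at index 0; it is not the authors' code.

```java
// Sketch: 10-fold stratified cross-validation of the one-class verifier.
import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.functions.MultilayerPerceptron;
import weka.classifiers.meta.OneClassClassifier;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class CrossValidationSketch {
    public static void main(String[] args) throws Exception {
        Instances owner = DataSource.read("owner_features.arff"); // hypothetical file
        owner.setClassIndex(owner.numAttributes() - 1);

        OneClassClassifier verifier = new OneClassClassifier();
        verifier.setClassifier(new MultilayerPerceptron());       // MLP base, as in Sec. III-D

        Evaluation eval = new Evaluation(owner);
        eval.crossValidateModel(verifier, owner, 10, new Random(1)); // 10 folds
        // Assumption: target (owner) class is at index 0 of the class attribute.
        System.out.printf("TAR ~ %.3f, FRR ~ %.3f%n",
                eval.truePositiveRate(0), eval.falseNegativeRate(0));
    }
}
```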

In the attack scenario, we train the system with all the data samples from the owner class and then test the system with the patterns belonging to all the remaining classes (29 users). The outcome can be either false accept or true reject. We report the results of this setting in terms of FAR and TRR.

F. Results

We report our results in three ways: intra-activity, inter-activity and activity fusion. By intra-activity, we mean training and testing on each single activity (i.e. training on walking to test walking only). Inter-activity means training with one single activity and using that training for testing all activities; we tested the training for each activity. In activity fusion, we used the combined data of all 3 activities for both training and testing (i.e. training with fused data from walking, sitting and standing) to test all activities. The reason for this is that we want to check whether training on a single activity is sufficient to recognize all the testing samples across activities; otherwise, we would need to train the recognition system with patterns of multiple activities. As the MLP verifier has consistently outperformed all other verifiers in all three activities (see Table III), we will take into account only this verifier in the further analysis.

The results of all settings are presented below.

1) Intra-Activity: The results of all three activities, prior to feature selection (averaged over 30 users), are given in Table III. We achieved ≥ 79% TAR with full features in all the activities using the MLP verifier. We then applied a feature subset selection method (RFE) on our dataset. Figure 4 shows that we improved our authentication results (from ≥ 79% to 85.56% in sitting, 86.75% in standing and 86% in walking) with our chosen RFE feature subsets (see Table II). We obtained 85.5% to 86.7% TAR with the MLP verifier in the three user activities. In related work, [15] reported 93.08% TAR but at the expense of 6.92% FAR using a 1-class SVM verifier, and [11] reported 10.28% FAR and 3.93% FRR with a 1-class RF verifier.

Fig. 4: Comparison of TAR for Full and RFE-based feature subsets in intra-activity. [Bar chart: TAR (%) for Full vs. RFE feature subsets in the sitting, standing and walking activities.]

Fig. 5: Comparison of TAR for Full and RFE-based feature subsets in inter-activity. [Bar chart: TAR (%) for Full vs. RFE feature subsets in the sitting, standing and walking activities.]

2) Inter-Activity: In order to validate the applicability of our mechanism in multiple user positions, we tested its performance across multiple activities. For example, if we train the system with the training patterns of just the sitting activity and test it with the patterns of both the standing and walking activities, and vice versa, we can observe whether or not training with a single activity is sufficient. Figure 5 shows unsatisfactory results (65.82% at best), and thus we conclude that we need to train our system in multiple situations to increase its accuracy.

3) Activity Fusion: Training the system in just one activity and using it in multiple activities does not lead to good results. As a solution, we combined the patterns of multiple activities and applied the RFE feature selection method on the combined data. As done earlier, we picked the 11 highest-ranked features (see the last column of Table II) and proceeded to further analysis. We applied the same methodology (as per Section IV-E) to test our combined dataset from all three activities. The results are summarized in Table IV. The system achieved ≈ 95% TAR at the expense of just 3.1% FAR. We observed that activity fusion could be useful in terms of usability (as it requires one-time training in multiple activities) and accuracy (we obtained ≈ 95% TAR), so we checked its efficacy with the final implementation of Hold & Sign. We trained the system with different sets of training patterns from different activities, used the same set of features (see the last column of Table II), and compared the results.

TABLE IV: Results of MLP (averaged over all 30 users) for combined data of all three activities.

Classifier | TAR | FRR | FAR | TRR
MLP | 0.948 | 0.052 | 0.031 | 0.969

V. HOLD & SIGN IMPLEMENTATION

We developed the final prototype of Hold & Sign taking into consideration all our findings. Hold & Sign uses the MLP classifier based on the feature set extracted using the RFE method. The analysis was performed using this application on a Google Nexus 5 smartphone running Android 4.4.4. Screenshots for training and testing are shown in Figure 6. Hold & Sign requires a minimal configuration, i.e. a user may choose either both modalities or any one of them (as shown in Figure 6b) and needs to train the classifier accordingly. The user can also decide the number of training instances, i.e. how many times to write his own name on the touchscreen to train the classifier (Figure 6c). In all choices, the user is helped by the display of suggested recommended values. The user is later required to write his own name for authentication (see Figure 6d).

A. Performance

We tested the performance of Hold & Sign. We measured three different timings: sample acquisition time, training time and testing time. We computed these times for 3 different settings: with 15, 30 and 45 patterns. We tested each setting on the Google Nexus 5 with 35 tries for each timing. Results are averaged over all 35 runs.

1) Sample Acquisition Time: This is the time used by the user to provide a sample for authentication. It is important to know because users may be annoyed by a long acquisition time, which could even result in removal of the Hold & Sign application. We compare the sample acquisition time of multiple mechanisms in Table V. What makes our acquisition fast is the free-text feature: the user can write any word (e.g. his own name).

2) Training/Testing Time: Training time is the time required to train the classifier. It is usually incurred just once, at installation, when the training samples are provided to the system. In contrast, testing time is the time required by the system to accept/reject an authentication attempt. Our mechanism took 3.497s, 6.193s and 9.310s for classifier training with 15, 30 and 45 patterns, respectively.

TABLE V: Sample acquisition time for different methods, adapted from [35].

Method | Sample Acquisition Time (s)
Our method | 3.5
PIN | 3.7
Password | 7.46
Voice | 5.15
Face | 5.55
Gesture | 8.10
Face + Voice | 7.63
Gesture + Voice | 9.91

Similarly, the testing times with 15, 30 and 45 patterns were 0.200s, 0.213s, and 0.253s, respectively. A comparison with the performance of other recent proposals is shown in Table VI.

B. Power Consumption

Generally, it is quite difficult to determine with high accuracy the power consumption of a single mobile application. Using dedicated hardware allows high accuracy [15]. However, there are software-based approaches that, though less accurate, are extensively used [36]. Since we wanted an initial indication, we used the software-based approach.

In order to check the overhead resulting from the use of the application (in its different steps), we terminated all the running applications and all Google services, and switched off the WiFi, Bluetooth, and cellular radios. The screen was kept on for the entire duration of the experiment with brightness at the lowest level and automatic brightness adjustment disabled. A similar approach is applied in [36]. We used Trepn⁶ and performed the experiments as follows.

In the first step, we computed the reference power consumption by running Hold & Sign with all the steps (sensor data collection, feature extraction, etc.) disabled. In the second stage, we enabled the sensor data collection part only, to compute the overhead resulting from sensory data collection. In the third stage, we also enabled the feature extraction part to compute the power consumption resulting from this process. In the final step, we analyzed the app with all functionalities. We profiled the power consumption for all these settings of Hold & Sign for the entire duration of the experiment (shortest duration 1 minute and 50s, longest 2 minutes and 40s), with 35 attempts each. The reference power consumption is 460mW. We observed a 7.17% overhead (493mW) for sensor data collection, a 27.8% overhead for data collection and feature extraction together (588mW), and ≈ 1000mW with all stages enabled in the final setting. The feature computation incurred just 19.2% overhead relative to data collection alone.

We observed that the average power consumption of our mechanism is very low, which makes it a power-friendly app. This claim can be supported by looking at some common smartphone tasks and their average power consumption [37] [38]:

• A one-minute phone call: 1054mW
• Sending a text message: 302mW
• Sending or receiving an email over WiFi: 432mW
• Sending or receiving an email over a mobile network: 610mW

⁶https://play.google.com/store/apps/details?id=com.quicinc.trepn&hl=en

Fig. 6: Screenshots of Hold & Sign in the training (a to d) and testing (d & e) phases

VI. USABILITY ANALYSIS

We report the usability of our mechanism in two ways: based on how many patterns are enough for training the classifier to achieve significant authentication accuracy, and by applying the standard System Usability Scale (SUS) for collecting users' views about our proposed mechanism.

A. Tradeoffs between Training and Accuracy

As shown in Table V, the average duration of a signature drawn by a user on the touchscreen was 3.5s, with the lowest value being 2s. In our test, we observed that the willingness of users to participate in our testing is strongly related to the amount of time spent for training. We expect a similar dependency also in normal usage. Hence it is important to evaluate the ratio of training time to accuracy. We observed that with just 15 patterns (in which case a user may take less than a minute to train the system), the user could be identified with around 70% TAR. Accuracy can be increased at the cost of training time. It took less than 4 minutes for the slowest of our testers to train the system with 45 patterns (15 in each activity), and the authentication results were ≈ 90%. The TAR percentages are averaged over 35 user attempts. The results are shown in Figure 7.

B. Evaluation

We distributed Hold & Sign along with an 11-question questionnaire adapted from the System Usability Scale⁷ (SUS) to our chosen volunteers (30 users). The SUS assessment tool is widely used for gathering subjective impressions about the usability of a system. It has already been used in the context of smartphone authentication [35]. The response to each question can be given on a five-point scale ranging from 'Strongly Disagree' to 'Strongly Agree'. The SUS score is a value between 0 and 100, where a higher value indicates a more usable mechanism. A raw SUS score can be transformed to a percentile [40] or to a grading scale [41], allowing easier interpretation of results. The average SUS score is 68. Like a previous study [35], we added one question to the questionnaire: What did you like or dislike about the mechanism? This question was optional and subjective; users were asked to write a few lines explaining the reasons for their like or dislike. We wanted to collect early feedback to allow us to improve our solution in the future. We asked the users to use our app for some days (preferably a week) and share their experience with us. We received responses from 18 out of 30 volunteers (60%).

⁷http://www.usability.gov/how-to-and-tools/methods/system-usability-scale.html

Fig. 7: User authentication on the prototype application. This figure verifies the average results obtained from the fusion of activities as described in Section IV-F3. The values above the bars indicate the time spent to provide samples. [Bar chart: TAR (%) vs. number of training patterns (15, 30, 45) for Full Features and RFE Reduced Features; time annotations 60s, 65s, 120s, 130s, 220s, 235s.]
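For reference, a raw SUS questionnaire is scored as sketched below. This is the standard 10-item SUS formula, not the authors' adapted 11-question instrument, and the sample responses are made up.

```java
// Sketch: standard SUS scoring (10 items, each answered on a 1..5 scale).
public class SusScore {
    static double sus(int[] responses) {             // responses[0..9] in 1..5
        double sum = 0;
        for (int i = 0; i < 10; i++)
            sum += (i % 2 == 0) ? responses[i] - 1   // odd-numbered items
                                : 5 - responses[i];  // even-numbered items
        return sum * 2.5;                            // maps the total to 0..100
    }

    public static void main(String[] args) {
        // Made-up responses; prints 75.0.
        System.out.println(sus(new int[]{4, 2, 4, 2, 4, 2, 4, 2, 4, 2}));
    }
}
```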

C. Responses

We received useful feedback on our mechanism. We achieved an average SUS score of 68.33%. Our score is better than the score of well-established voice recognition (66%) and of its fusion with face (46%) and gestures (50%), as reported in the literature [35]. Most of the responses were positive about the use of signing as an authentication credential. Most of the participants were also positive and comfortable using a finger on the smartphone touchscreen (i.e. no complaints about the size of the display). We also got some negative responses, mostly related to the initial setup; it was "too cumbersome" for some, i.e. "a user has to sign multiple times in order to train the system whereas setting up a PIN is easier". We also received some negative responses regarding the system requiring the use of both hands.

TABLE VI: Comparison of our results with the state of the art.

Ref. | Device | Classifier | No. of Users | Training Time | Testing Time
Our method | Nexus 5 | MLP | 30 | 3.5 - 9.3s | 0.215 - 0.250s
Lee et al. [31] | Nexus 5 | SVM | 8 | 6.07s | 20s
Li et al. [14] | Motorola Droid | Sliding patterns | 75 | n.a. | 0.648s
Nickel et al. [39] | Motorola Milestone | KNN | 36 | 90s | 30s

Our mechanism is clearly at an early stage and requires further tuning. We are planning to incorporate these initial suggestions into future versions of Hold & Sign and also to run more extensive usability studies.

VII. LIMITATIONS

Our current solution suffers from two important limitations. Firstly, as also pointed out by a volunteer, users must use both hands: one hand holds the phone and the other hand's fingertip is used for the signature. The user, therefore, may experience some difficulty in using our solution, especially when on the move. Secondly, the system cannot predict the user's ongoing activity in order to extract the best pre-selected features and use them for verifying the user's identity.

VIII. CONCLUSION & FUTURE WORK

We proposed a new bi-modal behavioral biometric authentication mechanism, Hold & Sign, which uses as behaviors how a user holds a phone and how he writes on the touchscreen. We achieved 79% TAR at zero FAR with the 1-class MLP and full features in the walking activity. The reason for this result could be the fact that during walking the sensors gather more data, which makes it possible to build accurate patterns. After applying feature subset selection, TAR improved to 86.7% at the expense of just 0.1% FAR. Lastly, processing the data from combined activities yielded 94.8% TAR at 3.1% FAR.

Hold & Sign requires on average just 3.5s to enter the behavioral pattern. Its ability to authenticate/reject a user within 0.215−0.250s makes it very fast. The closest reported testing time in the literature is 0.648s [14].

Hold & Sign offers two advantages over traditional mechanisms. Firstly, a user can write his own name in an unconstrained way with a finger on the smartphone's touchscreen, which makes memorability and repetition easier. Secondly, there is no need to remember a password/pattern and no need to keep it secret, thus eliminating the problem of shared and stolen passwords. Also, it is easy to integrate and implement in most modern smartphones without the need for additional hardware. Hold & Sign can be used as a stand-alone method or in conjunction with other well-established mechanisms for additional security.

Since signature-based authentication is already deployed for user identification and it is also very common to use finger movements for navigating documents, e.g. web pages, photo albums, messages, etc., we expect our solution to receive positive user acceptance. The results of the preliminary usability analysis, with an SUS score above the average (68.33%), are a positive starting point.

As future work, we plan to investigate the permanency of this biometric modality, extend our work in terms of continuous authentication, and explore its usability with a larger and more heterogeneous sample of testers. We are also going to address the problem of seamless and fast detection of a user's current activity, since this would allow authenticating users based on the best feature subset selected for that particular activity.

ACKNOWLEDGMENT

The authors would like to thank all the participants of the experiment for their time and effort, our colleagues for valuable and insightful input, and the anonymous reviewers for their reviews and comments.

This work was partially supported by the EIT Digital SecurePhone project and the European Training Network for CyberSecurity (NeCS), grant number 675320.

REFERENCES

[1] M. Bohmer, B. Hecht, J. Schoning, A. Kruger, and G. Bauer, "Falling asleep with Angry Birds, Facebook and Kindle: a large scale study on mobile application usage," in Proceedings of the 13th International Conference on Human Computer Interaction with Mobile Devices and Services. ACM, 2011, pp. 47–56.
[2] B. Spencer, "Mobile users can't leave their phone alone for six minutes and check it up to 150 times a day," Daily Mail, 11 Feb. 2013.
[3] H. Falaki, R. Mahajan, S. Kandula, D. Lymberopoulos, R. Govindan, and D. Estrin, "Diversity in smartphone usage," in Proceedings of the 8th International Conference on Mobile Systems, Applications, and Services. ACM, 2010, pp. 179–194.
[4] M. Harbach, E. von Zezschwitz, A. Fichtner, A. De Luca, and M. Smith, "It's a hard lock life: A field study of smartphone (un)locking behavior and risk perception," in Symposium on Usable Privacy and Security (SOUPS 2014), 2014.
[5] M. M. Díaz and U. Ingeniero de Telecomunicacion, "Dynamic signature verification for portable devices," 2008.
[6] N. Houmani, A. Mayoue, S. Garcia-Salicetti, B. Dorizzi, M. I. Khalil, M. N. Moustafa, H. Abbas, D. Muramatsu, B. Yanikoglu, A. Kholmatov et al., "BioSecure signature evaluation campaign (BSEC'2009): Evaluating online signature algorithms depending on the quality of signatures," Pattern Recognition, vol. 45, no. 3, pp. 993–1003, 2012.
[7] M. Martinez-Diaz, J. Fierrez, J. Galbally, and J. Ortega-Garcia, "Towards mobile authentication using dynamic signature verification: useful features and performance evaluation," in Proc. of the 19th Int. Conf. on Pattern Recognition. IEEE, 2008, pp. 1–5.
[8] J. Galbally, Vulnerabilities and Attack Protection in Security Systems Based on Biometric Recognition. Javier Galbally, 2009.
[9] J. Mantyjarvi, M. Lindholm, E. Vildjiounaite, S.-M. Makela, and H. Ailisto, "Identifying users of portable devices from gait pattern with accelerometers," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'05), vol. 2. IEEE, 2005, pp. ii–973.
[10] M. Conti, I. Zachia-Zlatea, and B. Crispo, "Mind how you answer me!: transparently authenticating the user of a smartphone when answering or placing a call," in Proc. of the 6th ACM Symposium on Information, Computer and Communications Security, 2011, pp. 249–259.
[11] A. Buriro, B. Crispo, F. Del Frari, J. Klardie, and K. Wrona, "ITSME: Multi-modal and unobtrusive behavioural user authentication for smartphones," in Proceedings of the 9th Conference on Passwords (PASSWORDS 2015). Springer, 2016, pp. 45–61.
[12] J. Zhu, P. Wu, X. Wang, and J. Zhang, "SenSec: Mobile security through passive sensing," in Int. Conf. on Computing, Networking and Communications (ICNC). IEEE, 2013, pp. 1128–1133.
[13] A. Buriro, B. Crispo, F. DelFrari, and K. Wrona, "Touchstroke: Smartphone user authentication based on touch-typing biometrics," in New Trends in Image Analysis and Processing – ICIAP 2015 Workshops. Springer, 2015, pp. 27–34.
[14] L. Li, X. Zhao, and G. Xue, "Unobservable re-authentication for smartphones," in NDSS, 2013.
[15] Z. Sitova, J. Sedenka, Q. Yang, G. Peng, G. Zhou, P. Gasti, and K. Balagani, "HMOG: A new biometric modality for continuous authentication of smartphone users," arXiv preprint arXiv:1501.01199, 2015.
[16] N. Sae-Bae, N. Memon, K. Isbister, and K. Ahmed, "Multitouch gesture-based authentication," IEEE Transactions on Information Forensics and Security, vol. 9, no. 4, pp. 568–582, 2014.
[17] A. De Luca, A. Hang, F. Brudy, C. Lindner, and H. Hussmann, "Touch me once and I know it's you!: implicit authentication based on touch screen patterns," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2012, pp. 987–996.
[18] J. Angulo and E. Wastlund, "Exploring touch-screen biometrics for user identification on smart phones," in Privacy and Identity Management for Life. Springer, 2012, pp. 130–143.
[19] N. Sae-Bae, K. Ahmed, K. Isbister, and N. Memon, "Biometric-rich gestures: a novel approach to authentication on multi-touch devices," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2012, pp. 977–986.
[20] M. Shahzad, A. X. Liu, and A. Samuel, "Secure unlocking of mobile touch screen devices by simple gestures: you can see it but you can not do it," in Proceedings of the 19th Annual International Conference on Mobile Computing & Networking. ACM, 2013, pp. 39–50.
[21] J. Sun, R. Zhang, J. Zhang, and Y. Zhang, "TouchIn: Sightless two-factor authentication on multi-touch mobile devices," arXiv preprint arXiv:1402.1216, 2014.
[22] N. Sae-Bae and N. Memon, "Online signature verification on mobile devices," IEEE Transactions on Information Forensics and Security, vol. 9, no. 6, pp. 933–947, 2014.
[23] J. Koreman, A. Morris, D. Wu, S. Jassim, H. Sellahewa, J. Ehlers, G. Chollet, G. Aversano, H. Bredin, S. Garcia-Salicetti et al., "Multi-modal biometric authentication on the SecurePhone PDA," 2006.
[24] V. Iranmanesh, S. M. S. Ahmad, W. A. W. Adnan, S. Yussof, O. A. Arigbabu, and F. L. Malallah, "Online handwritten signature verification using neural network classifier based on principal component analysis," The Scientific World Journal, vol. 2014, 2014.
[25] S. M. S. Ahmad, A. Shakil, A. R. Ahmad, M. Agil, M. Balbed, and R. Anwar, "SIGMA - A Malaysian signatures database," in IEEE/ACS International Conference on Computer Systems and Applications (AICCSA). IEEE, 2008, pp. 919–920.
[26] H. Xu, Y. Zhou, and M. R. Lyu, "Towards continuous and passive authentication via touch biometrics: An experimental study on smartphones," in Symposium On Usable Privacy and Security (SOUPS 2014). USENIX Association, 2014.
[27] S. N. Srihari, S.-H. Cha, H. Arora, and S. Lee, "Individuality of handwriting," Journal of Forensic Sciences, vol. 47, no. 4, 2002.
[28] Ananda. (2014) SignEasy announces new security features and enhancements for iOS8 app, aims to streamline the way we access and sign digital paperwork. [Online]. Available: http://blog.getsigneasy.com
[29] SutiSoft. (2014) Signature verification. [Online]. Available: http://www.sutisoft.com/sutidsignature/key-features.htm
[30] W. Shi, J. Yang, Y. Jiang, F. Yang, and Y. Xiong, "SenGuard: Passive user identification on smartphones using multiple sensors," in IEEE 7th International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob), 2011, pp. 141–148.
[31] W.-H. Lee and R. B. Lee, "Multi-sensor authentication to improve smartphone security," in International Conference on Information Systems Security and Privacy, 2015.
[32] N. Zheng, K. Bai, H. Huang, and H. Wang, "You are how you touch: User verification on smartphones via tapping behaviors," in International Conference on Network Protocols (ICNP). IEEE, 2014, pp. 221–232.
[33] D. Buschek, A. De Luca, and F. Alt, "Improving accuracy, applicability and usability of keystroke biometrics on mobile touchscreen devices," in Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. ACM, 2015, pp. 1393–1402.
[34] A. K. Jain, A. A. Ross, and K. Nandakumar, Introduction to Biometrics. Springer, 2011.
[35] S. Trewin, C. Swart, L. Koved, J. Martino, K. Singh, and S. Ben-David, "Biometric authentication on a mobile device: a study of user effort, error and task disruption," in Proceedings of the 28th Annual Computer Security Applications Conference. ACM, 2012, pp. 159–168.
[36] H. Khan, A. Atwater, and U. Hengartner, "Itus: an implicit authentication framework for Android," in Proceedings of the 20th Annual International Conference on Mobile Computing and Networking. ACM, 2014, pp. 507–518.
[37] A. Carroll and G. Heiser, "An analysis of power consumption in a smartphone," in USENIX Annual Technical Conference, vol. 14. Boston, MA, 2010.
[38] W. Lee. (2013) Mobile apps and power consumption basics. [Online]. Available: https://developer.qualcomm.com/blog/mobile-apps-and-power-consumption-basics-part-1
[39] C. Nickel, T. Wirtl, and C. Busch, "Authentication of smartphone users based on the way they walk using k-NN algorithm," in 8th International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP). IEEE, 2012, pp. 16–20.
[40] J. Sauro. (2011) Measuring usability with the System Usability Scale (SUS). [Online]. Available: http://www.measuringu.com/sus.php
[41] A. Bangor, P. T. Kortum, and J. T. Miller, "An empirical evaluation of the System Usability Scale," Intl. Journal of Human–Computer Interaction, vol. 24, no. 6, pp. 574–594, 2008.


