Simulation Modelling Practice and Theory 55 (2015) 95–106


Design and evaluation of characteristic incentive mechanisms in Mobile Crowdsensing Systems

Constantinos Marios Angelopoulos c, Sotiris Nikoletseas a,b, Theofanis P. Raptis a,b,*, José Rolim c

a Department of Computer Engineering and Informatics, University of Patras, Greece
b Computer Technology Institute and Press ''Diophantus'' (CTI), Greece
c Centre Universitaire d'Informatique, Université de Genève, Switzerland


Article history:
Received 28 November 2014
Received in revised form 20 April 2015
Accepted 22 April 2015

Keywords: Mobile crowdsensing; Distributed applications; Smartphones; Performance evaluation

Abstract

In this paper we identify basic design issues of Mobile Crowdsensing Systems (MCS) and investigate some characteristic challenges. We define the basic components of an MCS – the Task, the Server and the Crowd – and investigate the functions describing/governing their interactions. We identify three qualitatively different types of Tasks: (a) those whose added utility is proportional to the size of the Task, (b) those whose added utility is proportional to the progress of the Task and (c) those whose added utility is inversely proportional to the progress of the Task. For a given type of Task, and a finite Budget, the Server makes offers to the agents of the Crowd based on some Incentive Policy. On the other hand, each agent that receives an offer decides whether it will undertake the Task or not, based on the inferred cost (computed via a Cost function) and some Join Policy. In their policies, the Crowd and the Server take into account several aspects, such as the number and quality of participating agents, the progress of execution of the Task and possible network effects present in real-life systems. We evaluate the impact and the performance of selected characteristic policies, for both the Crowd and the Server, in terms of Task execution, Budget efficiency and Workload balance of the Crowd. Experimental findings demonstrate key performance features of the various policies and indicate that some policies are more effective in enabling the Server to efficiently manage its Budget while providing satisfactory incentives to the Crowd and effectively executing the system Tasks. Interestingly, incentive policies that take into account the current crowd participation achieve a better trade-off between Task completion and budget expense.

© 2015 Elsevier B.V. All rights reserved.

A preliminary version of this paper appeared in [1]. Corresponding author: T.P. Raptis, Department of Computer Engineering and Informatics, University of Patras, Greece. Tel.: +30 2610996964. E-mail address: [email protected]. DOI: http://dx.doi.org/10.1016/j.simpat.2015.04.007

1. Introduction

During the last years, high adoption rates of truly portable smart devices, such as smartphones and other wearable devices (e.g. smart watches [2] and glasses [3]), have shaped a new technological reality. Nowadays we are capable of freely moving around carrying in our pockets technological artefacts with significant computational and communication resources, while exchanging large volumes of data with each other. This ubiquitous presence of smart devices, which are capable of being always connected from everywhere, offers an unprecedented ability of augmenting traditional computer networks


and systems with crowdsourced resources, i.e. with smart devices provided by the public. In this context, a new paradigm for distributed sensing systems and applications has recently emerged. Mobile Crowdsensing Systems (MCS), instead of relying on special-purpose distributed systems like Wireless Sensor Networks, exploit the embedded sensory capabilities of modern smartphones (and of other similar devices) in order to collaboratively perform data collection.

Collecting data in a distributed manner from a set of autonomous devices available in an area of interest is not a novel idea. In fact, several aspects of such systems, like efficiency, robustness, scalability and network lifetime, have been extensively studied during the past years. Even the notion of unpredictable and highly diverse mobility in such systems is not novel. However, the envisioned Mobile Crowdsensing Systems demonstrate several characteristic attributes that clearly distinguish them from well-studied sensing systems, like Wireless Sensor Networks.

First, each node of an MCS (a smartphone, a tablet or another device) has significantly more computational capabilities than a corresponding node of a traditional sensing system, such as a sensor mote. Indeed, sensor motes (like the broadly used TelosB mote [4]) are mainly embedded micro-controller units characterized by very limited processing power (e.g. TelosB runs on a 25 MHz MCU) and small memory space (e.g. TelosB has only 10 kB of RAM and 48 kB of flash memory). On the other hand, a mid-range modern smartphone is equipped with a dual-core processor running at 1 GHz (high-end models come with 64-bit quad-core processors) and typically more than 1 GB of available memory. These higher computational capabilities are able to support more sophisticated algorithms, thus allowing us to shift a portion of the computational burden from backbone Servers down to distributed ecosystems of autonomous devices.

Second, typical sensor motes only support one type of wireless interface (e.g. IEEE 802.15.4), thus requiring a gateway to act as a liaison between the network and the rest of the world. On the contrary, modern smartphones (and several tablets) are equipped with three qualitatively different types of wireless communication interfaces: cellular networks (UMTS/GPRS), Wi-Fi and Bluetooth. These interfaces greatly differ in terms of communication range (from a few kilometres for cellular and around 100 m for Wi-Fi, to only a few meters for Bluetooth) and energy consumption [5]. As a result, smart devices enable the set-up of more agile architectures that support various communication topologies, spanning from direct communication with remote Servers to ad hoc mash-ups.

Last but not least, a major difference between traditional sensing systems and the envisioned MCS is the human factor. Traditional sensing systems are special-purpose systems particularly developed to monitor and collect data from typically fixed positions in their immediate environment. This means that the behavior of such systems can be engineered and is therefore (more or less) predictable. In contrast, in an MCS each sensing point is controlled by a person who has to consent in order for their device to participate in the system. This need for consent adds a high degree of unpredictability and unreliability, and raises the need to design incentive mechanisms that engage the owners of the devices, while taking into account their individual preferences and behavior.

Our contribution.
We address some key design issues of a Mobile Crowdsensing System (MCS) and identify its main characterizing challenges. We define the basic components of an MCS – the Task, the Server and the Crowd – and investigate the functions that describe/govern their interactions. This work constitutes a first attempt to lay down a generic model for an MCS that is able to address the major issues of such systems. Our efforts have been focused on identifying the main components and outlining their qualitative attributes. In particular, motivated by real-life applications, we first identify and provide examples of three qualitatively different types of Tasks: (a) those whose added utility is proportional to the size of the Task, (b) those whose added utility is proportional to the progress of the Task and (c) those whose added utility is inversely proportional to the progress of the Task. Then, we define the Crowd as a set of autonomous agents, each of which follows its own Join Policy and is characterized by its own attributes, such as a quality indicator and a personal threshold, based on which the provided incentives are evaluated. The Server abstracts a stakeholder that wishes to utilize the augmented sensory capabilities provided by the Crowd in order to perform a Task. A finite budget B is at the disposal of the Server for providing incentives to the Crowd. The budget abstracts either monetary incentives, access to premium services (such as more Internet bandwidth) or any other kind of incentive. The budget needs to be managed efficiently in order to yield as much pay-off from the Crowd as possible, and the Server does so via a utility function and an Incentive Policy. With respect to the identified types of Tasks, we evaluate and identify the most suitable incentive Policy the Server should follow. Our performance evaluation is conducted via selected metrics, such as the percentage of Task completion, the overall spent budget and the corresponding trade-off between the two, the workload balance of the network and the achieved cumulative quality of the performed Task.

2. Related work and comparison

During the past few years, smartphones and other truly portable devices (such as tablets, smart watches [2] and smart glasses [3]) have evolved into sophisticated multi-sensory computing platforms. In [6] an overview is provided of the current state of applications that are based on MCS systems. The main challenges recognized refer to resource limitations, such as available energy, bandwidth and computational power, to privacy issues that may arise due to the correlation of sensor data with individuals, and to the lack of a unifying architecture that would optimize the cross-application usage of sensors on a particular device or even on a set of correlated devices (e.g. if they are located in the same geographical area). In [7] the authors recognize the opportunity of fusing information from populations of privately-held sensors, as well as the corresponding limitations


due to privacy issues. In this context they describe the principles of community-based sensing and propose corresponding methods that take into consideration the uncertain availability of the sensors, the context-sensitive value of sensor information and the sensor owners' preferences about privacy and resource usage. The authors present efficient and well-characterized approximations of optimal sensing policies in the context of a road traffic monitoring application.

In more recent work, the authors of [8] use the notion of Participatory Sensing (PS) to describe such systems. They consider the problem of efficient data acquisition methods for multiple PS applications, while taking into consideration issues such as resource constraints, user privacy, data reliability and uncontrolled mobility. They evaluate heuristic algorithms that seek to maximize the total social welfare via simulations based on mobility datasets consisting of both real-life and artificial data traces. In [9] the authors propose a utility-driven smartphone middleware for executing community-driven sensing tasks. The proposed middleware framework considers the preferences of the user and the resources available on the phone to tune the sensing strategy, thus enabling the execution of tasks in an opportunistic and passive manner.

In [10] the sensing capabilities of smart devices are classified into three distinct categories: inertial sensors (such as accelerometers and gyroscopes), positioning and proximity sensors (like GPS and information correlated to wireless access points) and ambient environment sensors (e.g. cameras, microphones, magnetometers, etc.). Data coming from such sensors can be used in order to extract several types of features regarding physical activities, social interactions and the environment. The feature extraction is achieved by employing a variety of techniques including, among others, discriminative models, decision trees, fuzzy logic and Bayesian classifiers.

By taking advantage of these capabilities, several proof-of-concept applications have been developed in a variety of topics including transportation, health, environmental monitoring and others. For instance, VTrack [11], developed at MIT, is a system for travel time estimation using smartphone sensor data. By utilizing methods like hidden Markov models and sparse data interpolation to process the sensory data, the system is able to provide accurate location estimates and corresponding delays for delay-aware routing algorithms. In [12] a crowdsourced approach for detecting and localizing events in outdoor environments is presented. Each smartphone user simply has to point his device towards the direction of an event in order for the application to collect and report sensory data including accelerometer, compass, GPS and time readings. By combining data from multiple users, the application is capable of successfully localizing events taking place nearby. What is of great interest is the fact that, although each individual measurement may be inaccurate, the final precision of the application is proportional to the total number of measurements; in other words, a network effect appears. In [13] the authors present a scalable Internet system designed for continuous video collection from crowdsourced devices such as smartphones and Google Glasses [3].
By decentralizing the collection infrastructure using virtual machines, the system achieves scalability while also providing a privacy-preserving mechanism that automatically removes sensitive information from the videos. In [14] the authors present a crowdedness detection scheme for mobile crowdsensing applications; i.e. a duty cycle adaptation scheme that provides an estimation of how dense the neighbourhood of a smartphone is. Finally, [15] refers to more crowdsensing applications, while also providing a survey on mobile phone sensing.

A few programming frameworks have also been introduced in an effort to facilitate the design and development of crowdsensing applications. In [16] the MEDUSA programming framework is introduced, specifically designed to address the particular requirements of crowdsensing applications. By providing high-level abstractions of commonly used sub-Tasks, the description of a crowdsourcing Task is reduced by two orders of magnitude, while at the same time a distributed runtime system coordinates the Task execution between several smartphones and a cluster on the cloud. A second development framework is PRISM [17], which adopts a push model that enables timely and scalable application deployment while ensuring a good degree of privacy. It manages to do so by enabling application developers to package their applications as executable binaries that are then automatically deployed to smartphones based on some specified predicates.

Apart from specific applications and application development frameworks, significant effort has been made to define models for the crowdsensing paradigm. In [18] the authors consider two system models, namely the platform-centric and the user-centric model. By using game-theoretic analysis for the former and auction theory for the latter, corresponding incentive mechanisms are designed for each model. Although the provided incentive mechanisms are well designed (e.g. for the user-centric model the mechanisms are efficient, rational, profitable and truthful), they are based on the assumption that the agents and the Server have full information regarding the Task allocation procedure (e.g. what the total Task to be executed is, what the total available budget is, etc.). Also, in [19], the authors study incentive mechanisms for a mobile crowdsensing scheduling problem, where a mobile crowdsensing application owner announces a set of sensing Tasks, then human users (carrying mobile devices) compete for the Tasks based on their respective sensing costs and available time periods, and finally the owner schedules and pays the users so as to maximize its own sensing revenue under a certain budget. In contrast to the above works, we study on-line scenarios for crowdsensing systems; i.e. the Server does not have complete knowledge of the system and, in some cases, the policies followed by both the Server and the agents are adjusted to the way the Task allocation and execution evolve over time.

In [20] the authors investigate the problem of Task pricing and scheduling in crowdsourcing markets. Trying to maximize the likelihood of a proposed Task being accepted for execution by the Crowd, a survival analysis model is employed to provide an algorithm for determining the optimal reward for a crowdsourced Task. Again, here the Server is assumed to have access to full market information. Finally, in [21] two mechanisms for validating the Tasks performed in crowdsourcing platforms are studied in terms of cost and accuracy.
The first mechanism decides whether the reported results are truthful based on a majority decision, while the second one relies on a control group to perform the validation.


3. The model

We view Crowdsensing as the practice of utilizing the embedded sensory capabilities of smart devices provided by a community in order to perform a Task (the notation used in this paper can be found in Table 1).

Definition 1. We define a Mobile Crowdsensing System as a distributed system consisting of:

1. the Crowd; i.e. a set of devices inside an area of interest that are equipped with embedded sensory capabilities and are carried by people.
2. the Server; i.e. a stakeholder that seeks to utilize the augmented computational, communication and sensory capabilities of the Crowd in order to perform a Task.
3. a set of functions governing the interactions among the Crowd and the Server (i.e. incentive mechanisms, join policies, etc.).

In the following we refer to a Mobile Crowdsensing System (MCS) comprising a Crowd C and a Server S. C consists of agents that abstract people carrying smart portable devices, while S abstracts a stakeholder that seeks to exploit the sensory capabilities offered by the Crowd in order to perform a Task T.

The process of executing T takes place in rounds. At the beginning of each round the Server publishes offers to the Crowd, consisting of Task segments of T along with corresponding incentives. The agents evaluate the offers and either accept to execute the Task segment being offered and receive the corresponding incentive, or reject the offer. If at the end of the round there are any Task segments left unexecuted, the same process is repeated until either all Task segments are executed or the entire budget available to S has been spent. (A code sketch of this loop is given after Table 1 below.)

3.1. The Task

We denote by T the Task of total size $\lambda$ that Server S seeks to execute by exploiting the sensory capabilities of the Crowd C. Depending on the context, the size $\lambda$ of T may refer either to processing effort (e.g. in FLOPs) or to the time interval (e.g. in seconds) needed by the Crowd in order to perform T (for example, consider a target tracking application or an application monitoring an environmental attribute for a given amount of time).

Whether S has one or several Tasks to be performed, without loss of generality we can assume that the Server is able to break a given Task T into several Task segments $T_k$, $k \in \{1, 2, \ldots, K\}$, such that $T = \bigcup_{k=1}^{K} T_k$ and $\lambda \le \sum_k \lambda_k$. This implies that the Task segments could be overlapping over time. However, in this paper we consider the special case where T is partitioned into equally sized, non-overlapping Task segments. This is a simplifying assumption that we make in order to investigate the special case first, leaving for future work the investigation of the impact of non-equal and/or overlapping Task segments on the overall system. Finally, by $\lambda(t)$ we denote the cumulative size of Task segments that have been executed by time t.

The Server S tries to make the most out of the MCS, in terms of Task execution, by efficiently managing the available budget B. In fact, S provides incentives to the agents by first evaluating the expected pay-off gained by the execution of a Task segment and then offering a corresponding fraction of the budget to the agents.

Comment. At this point we would like to note that, as stated before, the individual threshold $thres_i$, based on which each agent $A_i$ evaluates the offers made, is unknown to the Server. Also, in this paper we do not investigate any strategies that the Server could employ in order to infer $thres_i$ for each agent. Therefore, the thresholds are unknown and unpredictable to the Server and as such are considered random. Furthermore, we also consider that each Task T is broken down into K equally sized and non-overlapping Task segments. Therefore, we consider the Server to allocate Task segments and make the corresponding offers to the Crowd by selecting agents uniformly at random.

Table 1
Notation used.

The Crowd
  C              the Crowd
  $A_i$          an individual agent of the Crowd
  N              size of the Crowd (total number of agents)
  N(t)           percentage of the Crowd that has participated in Task execution until time t
  $q_{A_i}$      quality indicator of agent $A_i$
  $m_{A_i}$      number of times $A_i$ has already contributed to Task execution
  $thres_i$      threshold of $A_i$ regarding the evaluation of offers
  $c_{A_i}$      inferred cost to agent $A_i$ for executing a Task segment

The Server
  S              the Server
  B              initially available budget
  B(t)           residual budget at time t
  $u_k$          expected utility gained by S from Task execution
  $I_k$          incentive provided by S for executing Task segment $T_k$

The Task
  T              Task of size $\lambda$
  $T_k$          Task segment of size $\lambda_k$
  K              total number of Task segments
  $\lambda(t)$   total size of Task segments completed by time t
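To make the round-based interaction concrete, the following minimal Python sketch (ours, not the authors' implementation) outlines one possible form of the execution loop described above; the concrete incentive and join functions are passed in as parameters, and the 50-round cap is an assumption matching the time horizon of the figures in Section 4.

```python
import random

# A minimal sketch of the round-based MCS execution loop (our reconstruction).
# incentive_fn and join_fn stand for the policies defined in Sections 3.1-3.3.

def run_rounds(agents, num_segments, budget, incentive_fn, join_fn, max_rounds=50):
    """Offer rounds until all Task segments are executed or the budget runs out."""
    remaining = num_segments                     # unexecuted Task segments
    t = 0                                        # round counter
    while remaining > 0 and budget > 0 and t < max_rounds:
        for _ in range(remaining):               # one offer per remaining segment
            agent = random.choice(agents)        # uniform random allocation (Sec. 3.1)
            offer = incentive_fn(budget, t)      # incentive I_k for one segment
            if join_fn(agent, offer):            # agent evaluates the offer
                budget -= offer                  # accepted: pay the incentive
                agent['m'] += 1                  # update participation counter m_Ai
                remaining -= 1
        t += 1
    return budget, remaining
```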


The Task utility. Depending on the type of the Task T, the Server may have a different assessment of the expected pay-off provided by the execution of a Task segment. For instance, in one scenario it may suffice to receive live-streaming video regarding an event from only one agent; in this case the expected utility gained from each subsequent Task segment executed by a second participating agent would be much less, if not zero. On the other hand, in a localization scenario the more agents participate, the higher the accuracy of the tracking; here, the expected utility is proportional to the number of participating agents. In general, we consider that the expected utility received by S from the execution of Task segment $T_k$ is of the form $u_k = f(\lambda, \lambda_k, q_{A_i})$. In the following, we identify three qualitatively different utility functions and provide corresponding indicative Task examples (a code sketch of all three is given at the end of this subsection).

1. Utility proportional to the Task completion:

$$u_k = \frac{\lambda_k}{\lambda} \qquad (1)$$

i.e. the expected utility gained by S is proportional to the size of the Task segment to be executed. As an example, consider an environmental monitoring application (e.g. monitoring background noise) where $\lambda_k$ corresponds to the amount of time the Crowd will be providing noise measurements. The longer the time interval (corresponding to more Task segments), the more information the Server will collect.

2. Utility proportional to the progress of the Task:

$$u_k = \frac{(\lambda(t) + \lambda_k)^{\delta}}{\lambda} \qquad (2)$$

where $0 < \delta < 1$; i.e. the expected utility gained by S increases with the overall Task progress. As an example, consider a video rendering application in which, if one Task segment is not executed, the entire Task T fails. In that case, as the budget spent on already executed Task segments increases, the expected utility of the remaining Task segments also increases. For instance, consider the case where the very last Task segment fails to be executed; then the Server will have spent almost the entire budget while the entire Task will also have failed.

3. Utility inversely proportional to the progress of the Task:

$$u_k = \frac{\lambda_k}{\lambda(t) + d} \qquad (3)$$

where d is a positive constant and $\lambda(t)$ the percentage of Task completion by time t; i.e. the expected utility for S decreases as the execution of the overall Task T progresses. In other words, as more and more Task segments are executed, the expected utility of the remaining Task segments becomes smaller. As an example, consider a target tracking application: initially, as the first agents join the application, the tracking accuracy is significantly improved, thus the expected utility is high. However, once the number of agents has reached a point that provides the desired tracking accuracy, the utility gained from any additional participating agent decreases, since its participation does not provide additional information.
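As a concrete illustration, the following sketch transcribes Eqs. (1)-(3) into Python; the variable names follow the paper's notation (lam for lambda), and the default delta and d values are our assumptions, not taken from the paper.

```python
# A sketch of the three utility function types, Eqs. (1)-(3).

def u_proportional(lam_k, lam):
    """Eq. (1): utility proportional to the size of the Task segment."""
    return lam_k / lam

def u_increasing(lam_k, lam_t, lam, delta=0.5):
    """Eq. (2): utility grows with the overall Task progress lam_t; 0 < delta < 1."""
    return (lam_t + lam_k) ** delta / lam

def u_decreasing(lam_k, lam_t, d=1.0):
    """Eq. (3): utility shrinks as the Task progress lam_t grows; d > 0
    avoids division by zero at t = 0."""
    return lam_k / (lam_t + d)
```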

3.2. The Server

We define as Server S a stakeholder that seeks to exploit the sensory and computational capabilities provided by the MCS in order to perform a Task T. A Task could be, for example, to monitor the background noise level for a given period of time or to remotely overlook an event by collecting live streaming video. The Server also has at its disposal a finite budget B that it can freely manage in order to provide incentives to the agents of the Crowd so that they participate in the execution of Task T. The nature of B can either be monetary or it may come in the form of a service, such as increased Internet bandwidth allocation. We denote by B(t) the residual budget of the Server at time t.

The incentive policy of the Server. There are several strategies based on which the Server S can manage the available budget B. In general, we consider that the incentive provided by S is of the form $I_k = f(u_k, N(t), B(t))$. In the following we identify four indicative incentive policies (a code sketch of all four is given at the end of this subsection).

1. Proportional incentive policy:

$$I_k = u_k \, B(t) \qquad (4)$$

Following this policy, the incentive allocated by S to each Task segment is proportional to the expected utility and the current residual budget.

2. Participation-aware incentive policy:

$$I_k = \frac{1}{c\,(N(t) + 1)} \, u_k \, B(t) \qquad (5)$$


where c is a positive constant. Following this policy, S initially provides high incentives in order to stimulate the Crowd and achieve a minimum percentage of participating agents. Then S becomes more conservative, trying not to attract new agents but to sustain the already participating ones.

3. Quality-aware incentive policy:

$$I_k = u_k \, \frac{q_{A_i}}{q_{max}} \, B(t) \qquad (6)$$

where $q_{max}$ is the maximum quality that can be provided by a single agent of the Crowd. Following this incentive policy, the incentive allocated by S to each Task segment is proportional to the execution quality of the agent. This policy aims at attracting high-quality agents by offering higher amounts of incentive.

4. Thrifty incentive policy:

$$I_k = u_k \left( \frac{B(t)}{B} \right)^{\epsilon} B(t) \qquad (7)$$

where $\epsilon$ is a positive constant. This incentive policy, although not using additional crowd-based information (like $q_{A_i}$ or N(t)), aims at a more restrained budget expenditure by reinforcing the fraction of the residual over the initial budget in the incentive computation. It is designed for applications where the Server's utility acquisition requires high budget expenses.
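The four policies of Eqs. (4)-(7) translate directly into code; in this sketch (ours) B is the initial budget, B_t the residual budget B(t), N_t the participation percentage N(t) in [0, 1], and the default constants c and eps are illustrative assumptions.

```python
# A sketch of the four incentive policies, Eqs. (4)-(7).

def i_proportional(u_k, B_t):
    """Eq. (4): incentive proportional to utility and residual budget."""
    return u_k * B_t

def i_participation_aware(u_k, B_t, N_t, c=1.0):
    """Eq. (5): high incentives at low participation, conservative later."""
    return u_k * B_t / (c * (N_t + 1))

def i_quality_aware(u_k, B_t, q_agent, q_max=10.0):
    """Eq. (6): incentive scaled by the agent's quality indicator q_Ai."""
    return u_k * (q_agent / q_max) * B_t

def i_thrifty(u_k, B_t, B, eps=1.0):
    """Eq. (7): the (B(t)/B)**eps factor restrains spending as the budget drains."""
    return u_k * (B_t / B) ** eps * B_t
```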

3.3. The Crowd

We define as Crowd C the set of devices $A_i$, $i \in \{1, 2, \ldots, N\}$, carried by agents inside an area of interest. The devices are characterized by some embedded sensory capabilities (e.g. accelerometers, gyroscopes, microphones, cameras, etc.) and are available to potentially undertake the execution of a Task (or Task segment) assigned by the Server S. The evaluation of the received offers is performed by each agent based on a characterizing threshold $thres_i$, which is unknown to the rest of the agents and to S. Each agent $A_i$ is also characterized by a Task execution quality indicator $q_{A_i}$, which, depending on the context of the crowdsensing application, may refer to computational power (e.g. FLOPs per second) or other application-specific attributes (e.g. camera resolution, quality of sound, etc.). Also, each agent is able to keep track of the number of times it has contributed to the execution of a Task by maintaining a counter $m_{A_i}$. Finally, we denote by $N(t) \in [0, 1]$ the percentage of agents that have participated in the execution of a Task by time t; i.e. the number of participating agents over the total number of agents.

Once an agent $A_i$ belonging to the Crowd C has received an offer from the Server S (i.e. a Task segment to be executed along with a corresponding incentive), it first evaluates the cost that the Task execution will infer (this cost may reflect energy dissipation, resource allocation over time, etc.) and then makes a decision on whether it will undertake the Task segment or not.

The cost function of the agents. The execution of a Task segment $T_k$ by agent $A_i$ infers to the agent a cost computed by the agent's cost function. For a given $T_k$ we consider the cost function to be of the form $c_{A_i} = f(\lambda_k)$; i.e. we consider the cost inferred to the agent to be proportional to the size of the allocated Task segment. We identify the following cost function for the agents:

$$c_{A_i} = a_i \, \lambda_k^{b_i} \qquad (8)$$

where $a_i, b_i$ are constants depending on each individual agent.

The join policy of the agents. Depending on the cost inferred by Task segment $T_k$ and the incentive $I_k$ offered, the agent decides whether it will accept or decline the offer based on its join policy; i.e. a boolean function $P_{A_i}(c_{A_i}, I_k, m_{A_i})$. Each agent is also individually characterized by a threshold $thres_i$, an independent variable that constitutes a measure based on which the agent evaluates the offers provided. $thres_i$ captures how willing the agent is to participate in the execution of the Task and it has a varying impact according to the join policy of the agent. For instance, $thres_i$ has a decreasing impact when network effect phenomena are present and an increasing impact when the agent takes into account its past contributions to the Task execution. Below we identify three join policies of indicative qualities (a code sketch of all three is given at the end of this subsection).

1. Simple join policy:

$$P_{A_i} = \begin{cases} 1 & \text{if } \frac{I_k}{c_{A_i}} \ge thres_i \\ 0 & \text{otherwise} \end{cases} \qquad (9)$$

Agents following this policy simply compare the ratio of the incentive being offered over the expected inferred cost to their own threshold $thres_i$. If the ratio is higher than the threshold of the agent, the agent accepts the offer; otherwise the offer is rejected.

2. Join policy with network effect:

$$P_{A_i} = \begin{cases} 1 & \text{if } \frac{I_k}{c_{A_i}} \ge \frac{thres_i}{g\,N(t)} \\ 0 & \text{otherwise} \end{cases} \qquad (10)$$


where g is a constant. This policy captures network effect phenomena present in real-life systems, according to which the more popular an application is, the more willing people are to participate in it; e.g. social applications. In particular, in addition to $thres_i$, the agent also takes into account the percentage of participating agents N(t) when evaluating an offer.

3. Join policy with memory:

$$P_{A_i} = \begin{cases} 1 & \text{if } \frac{I_k}{c_{A_i}} \ge thres_i \cdot m_{A_i}^{c} \\ 0 & \text{otherwise} \end{cases} \qquad (11)$$

where c is a constant. This policy captures the growing unwillingness of agents that are frequently chosen by S. This unwillingness is due to intense usage or a high dissipation rate of their resources. In particular, each agent also takes into account the number of times $m_{A_i}$ it has already participated in the application when evaluating an offer.
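The cost function and the three join policies can be sketched as follows (our transcription of Eqs. (8)-(11)); the default constants a, b, g and c are illustrative, and the N_t == 0 guard in the network-effect policy is our assumption for the boundary case before anyone has joined.

```python
# A sketch of the cost function, Eq. (8), and the three join policies, Eqs. (9)-(11).

def cost(lam_k, a=1.0, b=1.0):
    """Eq. (8): c_Ai = a_i * lam_k ** b_i."""
    return a * lam_k ** b

def join_simple(I_k, c_Ai, thres):
    """Eq. (9): accept iff the incentive/cost ratio reaches the threshold."""
    return I_k / c_Ai >= thres

def join_network_effect(I_k, c_Ai, thres, N_t, g=1.0):
    """Eq. (10): the effective threshold drops as participation N(t) grows."""
    if N_t == 0:
        return join_simple(I_k, c_Ai, thres)   # assumed fallback before any joins
    return I_k / c_Ai >= thres / (g * N_t)

def join_memory(I_k, c_Ai, thres, m, c=1.0):
    """Eq. (11): the effective threshold grows with past contributions m_Ai."""
    return I_k / c_Ai >= thres * m ** c
```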
4. Performance evaluation

4.1. Experimental set-up and metrics

The experiments were conducted in Matlab R2013b with the following set-up; the code we developed is provided at [22]. We consider a crowd instance of N = 100 agents, which are divided into three types according to their threshold values: the willing agents, which have a very low threshold value; the un-willing agents, which have a very high threshold value; and the agents in-between, whose threshold value is moderate. The agents also have varying qualities, with $0 < q_{A_i} < 10$. We consider K = 1000 equal-sized, non-overlapping Task segments, and the initial budget is set to B = 1000 units. In order to achieve statistical smoothness, we generated several deployments of nodes in the network and repeated each experiment 100 times. The statistical analysis of the findings (the median, lower and upper quartiles and outliers of the samples) demonstrates very high concentration around the mean, so in the following figures we only depict average values.

Our evaluation is focused on the following performance metrics:

Percentage of Task T completion over time. In particular, we evaluate the utility functions and incentive policies of the Server in terms of Task completion over several types of crowd (where by the term ''type'' we refer to the join policy the agents follow). In other words, given the budget B, we evaluate the expected number of executed Task segments achieved by each configuration.

Percentage of residual budget B(t) over time. With this metric we evaluate the utility functions and incentive policies of the Server in terms of the expected spending rate of the budget over several types of Crowd.

Expenditure efficiency (Task completion over budget spent). With this metric we investigate the trade-off between the budget spent and the Task segment execution rate; in other words, we measure the efficiency of each utility function and incentive policy.

Workload balance of the crowd. With this metric we investigate how Task segments are distributed over the agents. Although the Task segment allocation is performed uniformly at random by the Server, different policies may favor different agents. For instance, consider a policy according to which the Server offers very small incentives; in this case only the very willing agents would accept the offers made and therefore the Task execution would be imbalanced.

Task quality achieved. In particular, we investigate the expected Task quality achieved by the Server; that is, the accumulated quality $\sum_i q_{A_i} \lambda_{k_i}$ of the executed Task segments.

4.2. Incentive policies' performance

The performance of the four incentive policies under the various utility functions, in terms of Task completion and residual budget over time as well as expenditure efficiency, is depicted in Figs. 1-3 for the simple join policy, in Figs. 4-6 for the join policy with network effect and in Figs. 7-9 for the join policy with memory.

For the simple join policy, as shown in Fig. 1a, all incentive policies persuade the agents to gradually complete the Task segments, at almost the same rate. However, it is clear (Fig. 1b) that, for the participation-aware incentive policy, this is achieved with a much lower budget overhead. This fact also becomes clear after a closer look at Fig. 1c. By examining the corresponding Figs. 4 and 7, we reach the same conclusions for the join policies with network effect and with memory.

The behavior of the incentive policies under the decreasing utility function is shown in Figs. 2, 5, and 8 for each join policy. In this case, the Task execution over time follows almost exactly the same pattern for all incentive policies. However, the rate of the budget expenditure is even steeper for the proportional incentive policy than in the case of the proportional utility function. Also, the budget is expended at a very fast rate at the beginning of the Task assignment process, in contrast to the proportional utility function case. This is explained by the nature of the decreasing utility function: once the number of agents has reached a point that provides the desired accuracy, the utility gained from any additional participating agents decreases, since their participation does not provide additional information. This results in a sharp decrease of the offered amount of incentive.

Regarding the increasing utility function, it is clear that all incentive policies outperform the proportional one, in terms of Task completion over time, budget remaining over time and expenditure efficiency (Figs. 3, 6, and 9).
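As a rough illustration of the experimental loop under the stated set-up (N = 100 agents, K = 1000 unit segments, B = 1000), the following hypothetical Python driver ties the sketches above together for one configuration (proportional utility, proportional incentive, simple join) and records two of the metrics; the threshold groups and all constants are our assumptions, and the authors' actual Matlab code is the one provided at [22].

```python
import random

# Hypothetical driver (ours) for one configuration of the experiment.
N, K, B = 100, 1000, 1000.0
lam, lam_k = float(K), 1.0                             # unit-size, non-overlapping segments

agents = [{'thres': random.choice([0.1, 1.0, 10.0]),   # willing / moderate / un-willing
           'q': random.uniform(0.0, 10.0),             # quality indicator q_Ai
           'm': 0}                                     # participation counter m_Ai
          for _ in range(N)]

budget, done, t = B, 0, 0
completion, residual = [], []                          # metrics of Section 4.1

while done < K and budget > 0 and t < 50:              # 50 rounds, as in the figures
    for _ in range(K - done):                          # one offer per remaining segment
        ag = random.choice(agents)                     # uniform random allocation
        u = lam_k / lam                                # Eq. (1): proportional utility
        offer = u * budget                             # Eq. (4): proportional incentive
        if offer / (1.0 * lam_k ** 1.0) >= ag['thres']:  # Eq. (9) with a_i = b_i = 1
            budget -= offer
            ag['m'] += 1
            done += 1
    completion.append(done / K)                        # Task percentage completed
    residual.append(budget / B)                        # residual budget percentage
    t += 1

print('completed: %.2f, residual budget: %.2f' % (completion[-1], residual[-1]))
```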

[Fig. 1. Task and budget over time. Proportional utility function. Simple join policy. Panels: (a) Task percentage completed over time; (b) residual budget percentage over time; (c) expenditure efficiency (budget vs. Task completed). Curves: Proportional, Participation-aware, Quality-aware and Thrifty incentive policies.]

[Fig. 2. Task and budget over time. Decreasing utility function. Simple join policy. Panels (a)-(c) as in Fig. 1.]

[Fig. 3. Task and budget over time. Increasing utility function. Simple join policy. Panels (a)-(c) as in Fig. 1.]

[Fig. 4. Task and budget over time. Proportional utility function. Join policy with network effect. Panels (a)-(c) as in Fig. 1.]

[Fig. 5. Task and budget over time. Decreasing utility function. Join policy with network effect. Panels (a)-(c) as in Fig. 1.]

[Fig. 6. Task and budget over time. Increasing utility function. Join policy with network effect. Panels (a)-(c) as in Fig. 1.]

[Fig. 7. Task and budget over time. Proportional utility function. Join policy with memory. Panels (a)-(c) as in Fig. 1.]

[Fig. 8. Task and budget over time. Decreasing utility function. Join policy with memory. Panels (a)-(c) as in Fig. 1.]

[Fig. 9. Task and budget over time. Increasing utility function. Join policy with memory. Panels (a)-(c) as in Fig. 1.]
Some notable differences from the previous configurations are the Task completion rate of the proportional incentive policy, which in this case halts due to expenditure inefficiency, and the budget expenditure of the participation-aware incentive policy, which is steeper. In this case the thrifty incentive policy, which succeeds in budget management while completing a high percentage of Task segments, achieves a high overall expenditure efficiency compared to the other incentive policies. Overall, the participation-aware incentive policy, which takes into account the current crowd participation, achieves a very good trade-off between Task completion and budget expense via the network effect it introduces.

Comment. The join policy modelling is validated by the simulations if we carefully observe the three sets of figures that correspond to each join policy (Figs. 1, 4, 7; Figs. 2, 5, 8; Figs. 3, 6, 9). When the agents join the experiment by taking into account the network effect, the Task completion rate is higher than in the simple join case, since the more agents join, the greater the overall eagerness to join becomes. On the contrary, when the agents use memory, the Task completion rate decreases, because of their growing unwillingness to participate frequently in the experiment.

4.3. Workload balance and overall Task quality

The cumulative Task quality for the combinations of incentive policies and utility functions is shown in Table 2; the best Task quality for each combination is marked with an asterisk (a small code sketch of the two metrics of this subsection is given after Table 2). In general, the quality-aware incentive policy achieves the best overall Task quality. This is explained by its incentive distribution strategy, and more specifically by the incentive allocation being proportional to the agent's quality. The lowest quality is achieved by the combination of the proportional incentive policy with the increasing utility function. Note that when the agents follow the join policy with memory, the overall quality achieved is lower compared to the other two join cases. This can be explained by Figs. 7a, 8a and 9a, from which we conclude that in this case the Task completion rate is lower.

The average crowd workload and its standard deviation for the simple join policy and the join policy with memory are shown in Fig. 10a and b respectively. Each point represents a combination of an incentive policy and a utility function. The perfectly balanced workload is marked with a straight line with zero deviation at K/N = 10 Task segments per agent. Numbers 1-3 stand for the corresponding utility functions (proportional, decreasing, increasing). When the crowd applies a simple join policy, the average workload among the agents is more balanced in almost all cases (except the case of proportional incentive and proportional utility).

Table 2
Quality achieved (best value per row marked with *).

Simple join policy
  Utility        Proportional   Participation-aware   Quality-aware   Thrifty
  Proportional   6.279          6.099                 6.937*          6.167
  Decreasing     6.078          6.111                 6.896*          5.898
  Increasing     2.923          6.273                 6.602*          6.155

Join policy with network effect
  Utility        Proportional   Participation-aware   Quality-aware   Thrifty
  Proportional   5.801          5.631                 6.510*          5.649
  Decreasing     5.622          5.633                 6.464*          5.465
  Increasing     2.699          5.775                 6.081*          5.693

Join policy with memory
  Utility        Proportional   Participation-aware   Quality-aware   Thrifty
  Proportional   4.004          3.384                 4.051*          3.707
  Decreasing     3.754          3.636                 4.109*          3.349
  Increasing     2.785          4.470*                4.341           3.842
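As a small illustration (our sketch, assuming per-agent records like those used in the driver of Section 4.2, where 'q' is the quality indicator $q_{A_i}$ and 'm' the number of segments the agent executed), the two metrics of this subsection can be computed as follows.

```python
import statistics

# Sketch: cumulative Task quality and workload balance from per-agent records.

def cumulative_quality(agents, lam_k=1.0):
    """Accumulated quality of executed segments, sum of q_Ai * lam_k
    (the quantity of Table 2, up to normalization)."""
    return sum(ag['q'] * ag['m'] * lam_k for ag in agents)

def workload_balance(agents):
    """Average workload and its standard deviation (Fig. 10); a perfectly
    balanced run gives mean K/N with zero deviation."""
    loads = [ag['m'] for ag in agents]
    return statistics.mean(loads), statistics.pstdev(loads)
```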


[Fig. 10. Workload balance of the crowd: average workload (Task segments per agent) with standard deviation, per incentive policy (Pro., Par., Qua., Thr.) and utility function (1-3). (a) Simple join policy; (b) join policy with memory.]

On the other hand, when the crowd applies a join policy with memory, the average workload is less balanced, with each agent being unwilling to undertake proposed Task segments after having participated a number of times. This can also be observed in Figs. 7a, 8a and 9a, in which the behavior of the crowd is more visible.

4.4. Incentive policy impact on different types of Tasks

Different types of Tasks necessitate different approaches in terms of the incentive policy applied. For Tasks corresponding to the proportional utility function, in which the gains of the Server are proportional to the interval of the Task completed, we observe that the best applied incentive policy is the proportional one (Figs. 1a, 4a and 7a). As an example, consider an environmental monitoring application (e.g. monitoring background noise) where $\lambda_k$ corresponds to the amount of time an agent is providing noise measurements. The longer the time interval (corresponding to more Task segments), the more information the Server will collect. For this reason, when the proportional incentive policy is applied, a steady completion of time-measurement Task segments is maintained.

On the contrary, the proportional incentive policy is not suitable for Tasks corresponding to the increasing utility function. In this type of Tasks, since the value of each additional segment is ever higher, other types of incentive policies should be considered (Figs. 3a, 6a and 9a). As an example, consider a video rendering application in which, if one Task segment is not executed, the entire Task T fails. In that case, as the budget spent on already executed Task segments increases, the expected utility of the remaining Task segments also increases. For instance, consider the case where the very last Task segment fails to be executed; then the Server will have spent almost the entire budget while the entire Task will also have failed. In applications like video rendering, the participation-aware incentive policy performs well, because it maintains a pool of dedicated users that are frequently rewarded and are thus more keen on completing the corresponding Task segments even at the later stages of the Task.

As for Tasks corresponding to the decreasing utility function, the factor of quality dominates the incentivization process, since after some time the percentage of Task completion becomes less important than the quality achieved. Consequently, the quality-aware incentive policy performs well (Figs. 2a, 5a and 8a). As an example, consider a target tracking


application; initially, as the first agents join the application, the tracking accuracy is significantly improved, thus the expected utility is high. However, once the number of agents has reached a point that provides the desired tracking accuracy, the utility gained from any additional participating agents decreases, since their participation does not provide additional information. The quality-aware incentive policy then attracts only a few high-quality agents by offering them higher amounts of incentive.

5. Conclusions and future work

In this work we identified some key design issues of a Mobile Crowdsensing System and investigated some important characterizing challenges. We defined the basic components of an MCS – the Crowd, the Server and the Task – and investigated the functions describing/governing their interactions. We evaluated the impact and the performance of selected characteristic policies, for both the Crowd and the Server, in terms of Task execution, budget efficiency and workload balance of the Crowd. Experimental findings indicate that some policies are more effective in enabling the Server to efficiently manage its budget while providing satisfactory incentives to the Crowd and effectively executing the system Tasks.

For future research, we plan to further fine-tune the proposed model and investigate other cases of MCSs that are also characterized by realistic features, such as overlapping and non-equal Task segments, varying crowd sizes and qualities over time, and the ability of agents to enter or leave a crowd. We also plan to adopt business models in our research, both in the utility function design and in the incentive/join mechanisms. Finally, we plan to use real-world datasets as input to our experiments, in order to validate the soundness of our methods.

Acknowledgments

This work was partially supported by:
- the EU/FIRE IoT Lab project – ICT-610477.
- the European Social Fund (ESF) and Greek national funds through the Operational Program ''Education and Lifelong Learning'' of the National Strategic Reference Framework (NSRF) – Research Funding Program: Thalis-DISFER, Investing in knowledge society through the European Social Fund.

References

[1] C.M. Angelopoulos, S. Nikoletseas, T.P. Raptis, J. Rolim, Characteristic utilities, join policies and efficient incentives in mobile crowdsensing systems, in: IFIP Wireless Days, 2014.
[2] Samsung, Samsung Gear.
[3] Google, Google Glasses.
[4] J. Polastre, R. Szewczyk, D. Culler, Telos: enabling ultra-low power wireless research, in: Proceedings of the 4th International Symposium on Information Processing in Sensor Networks (IPSN), 2005.
[5] G. Kalic, I. Bojic, M. Kusek, Energy consumption in android phones when using wireless communication technologies, in: MIPRO, 2012.
[6] R.K. Ganti, F. Ye, H. Lei, Mobile crowdsensing: current state and future challenges, IEEE Commun. Mag. 49 (2011) 32-39.
[7] A. Krause, E. Horvitz, A. Kansal, F. Zhao, Toward community sensing, in: International Conference on Information Processing in Sensor Networks (IPSN), 2008.
[8] M. Riahi, T.G. Papaioannou, I. Trummer, K. Aberer, Utility-driven data acquisition in participatory sensing, in: EDBT, 2013.
[9] V. Agarwal, N. Banerjee, D. Chakraborty, S. Mittal, USense – a smartphone middleware for community sensing, in: MDM, 2013.
[10] S.A. Hoseini-Tabatabaei, A. Gluhak, R. Tafazolli, A survey on smartphone-based systems for opportunistic user context recognition, ACM Comput. Surv. 45 (2013) 27:1-27:51.
[11] A. Thiagarajan, L. Ravindranath, K. Lacurts, S. Toledo, J. Eriksson, S. Madden, H. Balakrishnan, VTrack: accurate, energy-aware road traffic delay estimation using mobile phones.
[12] R.W. Ouyang, A. Srivastava, P. Prabahar, R. Roy Choudhury, M. Addicott, F.J. McClernon, If you see something, swipe towards it: crowdsourced event localization using smartphones, in: Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp), 2013.
[13] P. Simoens, Y. Xiao, P. Pillai, Z. Chen, K. Ha, M. Satyanarayanan, Scalable crowd-sourcing of video from mobile devices, in: Proceedings of the 11th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys), 2013.
[14] D. Zhang, T. He, F.Y. Raghu, G.D. Zhang, T.H.R. Ganti, H. Lei, Where is the crowd?: crowdedness detection scheme for mobile crowdsensing applications, in: IEEE International Conference on Computer Communications (INFOCOM), 2011.
[15] N.D. Lane, E. Miluzzo, H. Lu, D. Peebles, T. Choudhury, A.T. Campbell, A survey of mobile phone sensing, IEEE Commun. Mag. 48 (2010) 140-150.
[16] M.-R. Ra, B. Liu, T.F. La Porta, R. Govindan, Medusa: a programming framework for crowd-sensing applications, in: Proceedings of the 10th International Conference on Mobile Systems, Applications, and Services (MobiSys), 2012.
[17] T. Das, P. Mohan, V.N. Padmanabhan, R. Ramjee, A. Sharma, PRISM: platform for remote sensing using smartphones, in: Proceedings of the 8th International Conference on Mobile Systems, Applications, and Services (MobiSys), 2010.
[18] D. Yang, G. Xue, X. Fang, J. Tang, Crowdsourcing to smartphones: incentive mechanism design for mobile phone sensing, in: Proceedings of the 18th Annual International Conference on Mobile Computing and Networking (Mobicom), 2012.
[19] K. Han, C. Zhang, J. Luo, Truthful scheduling mechanisms for powering mobile crowdsensing, CoRR abs/1308.4501, 2013.
[20] S. Faridani, B. Hartmann, P. Ipeirotis, What's the right price? Pricing tasks for finishing on time, in: Conference on Artificial Intelligence (AAAI), 2011.
[21] M. Hirth, T. Hossfeld, P. Tran-Gia, Analyzing costs and accuracy of validation mechanisms for crowdsourcing platforms, Math. Comput. Model. (2012).
[22] Simulator.