Autonomous intelligent systems outperform human workers in an expanding range of domains, typically those in which success
is a function of speed, precision and repeatability, but many cognitive tasks remain beyond the reach of automation. In this work, we propose the use of video games to
crowdsource the cognitive versatility and creativity of human players for complex robotics applications. We introduce a theoretical framework in which robotics problems
are embedded into video game environments and gameplay from crowds of players is aggregated to inform robot actions. Such a framework could enable a future of
synergistic human-machine collaboration for industrial automation, in which members of the public not only freely offer the fruits of their intelligent
reasoning for productive use, but have fun whilst doing so. There is also potential for significant negative consequences surrounding safety, accountability
and ethics if great care is not taken in the implementation. Further work is needed to explore these wider implications, as well as to develop the technical
theory and build prototype applications.
More details can be found in the following publication:
Tom Bewley and Minas Liarokapis, "On the Combination of Gamification and Crowd Computation in Industrial Automation and Robotics Applications,"
IEEE International Conference on Robotics and Autonomous Systems (ICRA), 2019 (under review).
Tom Bewley
Department of Computer Science, University of Bristol
e-mail: tom.bewley.2014@bristol.ac.uk

Minas Liarokapis
Lecturer / Research advisor of the New Dexterity research group
Department of Mechanical Engineering, The University of Auckland
e-mail: minas.liarokapis@auckland.ac.nz
Description of the project.
Autonomous intelligent systems outperform human workers in an expanding range of domains, typically those in which success
is a function of speed, precision and repeatability, but many tasks remain beyond the reach of automation. As humans,
we excel at solving problems demanding planning, spatial reasoning, semantic interpretation, social collaboration and creativity.
Each of these skills is in high demand in the robotics industry, and each is tested rigorously during video gameplay.
By way of example: in the Shakespeare Anthology puzzle of the 2003 survival game Silent Hill 3, the player must enter a four-digit
code into a keypad. To obtain the numbers, they must discover a poem scrawled on a nearby scrap of paper, peruse the stanzas to
identify cryptic allusions to a variety of Shakespearean plays, and follow a convoluted series of deductions requiring an intimate
knowledge of the bard's literary output. Such a feat is immeasurably beyond the capacities of today's artificial intelligence.
One could view the several hundred billion hours spent gaming annually by the world's 2.3 billion players [1][2]
as a vast, untapped resource of valuable intelligence. This time is worth upwards of $2 trillion if paid with the median US salary [3],
but the gamer community actually pays to participate in the round-the-clock exhibition of complex, subtle reasoning. The legions are connected
by robust, high-speed networks, and communicate their intentions precisely via the simple mechanisms of gaming controllers, bypassing the need
for expensive, error-prone sensors. Individually, they are incentivised by competition to adapt and learn from mistakes.
Collectively, they provide robust, parallel computation. Yet while developers collect gameplay statistics for their own purposes, this perfect
problem-solving storm is otherwise entirely wasted from an economic perspective.
Could the robotics industry harness the power of gamer intelligence to address its challenges efficiently and securely? More specifically:
is it possible to map the salient aspects of a robotics problem to features of a video game, analyse the relevant in-game actions of many
players, and reinterpret the most effective strategies into control instructions for the robot itself? This would represent a novel take
on gamification, a paradigm in which game mechanics are used for productive ends. Crucially, gamification should not detract from
the fun of gameplay. In fact, it is not necessary for players to be aware they are solving a robotics problem at all.
This proposal lies at the intersection between gamification and the related fields of crowdsourcing, human computation and
multi-operator-single-robot control [4]. The intersection is complex, loosely-defined, and complicated by the historic use of diverse
application-specific terminology to describe similar ideas, as noted in a prior review of the crowdsourcing literature [5].
Such a lack of integration makes meaningful discussion challenging, hence one of the aims of this paper is to introduce a vocabulary of generic
terms, which we then employ to discuss the specific challenge of robotic task gamification.
Numerous experimental results have attested to the efficacy of crowdsourcing for solving complex reasoning problems, with the approach
outperforming experts in domains requiring significant adaptability and semantic understanding [6]. The validity of crowdsourcing
is also underpinned by theoretical findings that collaborative control by a diverse ensemble of imperfect agents can be more robust and
fault-tolerant than using any single agent [7]. However, we recognise a number of weaknesses in past approaches to crowdsourcing
for robotics applications, relating primarily to cost-effectiveness, privacy, scalability and ethics, which may be hampering widespread industrial
application. We hypothesise that the gamification approach outlined in this paper is capable of rectifying these issues, whilst retaining
the strongest aspects of prior work.
The primary aim of our research paper is to provide a high-level description of how gamified crowdsourcing could operate to enhance industrial
automation. This companion web page explains how we developed a standardised terminology for describing crowdsourcing techniques for robotics
and related applications, through analysis and cross-comparison of prior work. It also contains two concrete example applications of gamification
to robotics, to illustrate how the technique may be implemented in practice, and a discussion of the broad commercial and societal implications
of widespread adoption of gamification for industrial applications.
Here you can find the terminology used in this work.
While the details of past implementations of crowd computers have varied greatly, there are significant commonalities in the overall structure and workflow.
Here, we present a generic, application-neutral crowd computer description, which we intend to be malleable enough to describe all prior work in the field.
In doing so, we introduce terminology for a range of standard concepts (bold), which we employ in the paper to describe how a gamified crowd computer could be built to solve robotics problems.
A crowd computer is built to solve a problem in a certain context: some physical, social or digital system. Success in solving the problem can be described in terms of a
performance measure to be maximised. While it is not possible to access the complete state of the context, an observation can be made to summarise it.
Whenever human computation is required, a context state observation is stored in a data structure, and pre-processed by a function called a context-task mapping.
This mapping outputs a set of parameters called a task, which in turn is used to modify features of a software environment to which a number of human participants have access.
Individually, these participants are called players (our adoption of this term may appear to inherently favour the gamification approach
to crowdsourcing - "worker" is more common in the literature - but we believe it draws healthy attention to the importance of the incentives, strategies, biases and subjective experiences
of human participants), and the complete set of players which sees a particular task is a crowd. Depending on the problem to be solved, a different context-task mapping
function may be used for each player, resulting in task differentiation across the crowd.
Tasks within the crowd computer persist for a prescribed interval, during which the role of each player is to produce an appropriate action in response. Players
synthesise actions by interacting with a set of hardware and software tools. During an interval, players may be able to communicate with each other, causing
their actions to correlate. At the end of the interval, player actions are recorded in a prescribed action format, and passed through a validation step to
correct errors and remove invalid results. Validated actions from the entire crowd are aggregated into a single unified action through an averaging, voting,
summation or leader-election operation. The unified action can be viewed as the crowd computer's output, where its input is the context state observation.
The unified action is given to an agent situated in the context, called the controller, who is influenced by its value but may act with a degree of autonomy
when choosing how to behave in the context. When the controller takes an action, the state of the context is modified, which has an effect on the performance measure.
The performance measure may be fed back to players, allowing them to observe whether their collective actions were beneficial. State information may also be immediately
obtained again, and a new task synthesised, causing the crowd computer to operate iteratively and with continuity.
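The workflow above can be condensed into a single iterative loop. The following Python sketch is purely illustrative - the `Context` class, the numeric toy problem and all function names are our own assumptions, not part of any existing system - but it shows how observation, validation, aggregation and controller feedback fit together:

```python
import statistics

class Context:
    """Toy context: the hidden state is a single number, and the crowd's
    problem is to estimate it from a noisy observation."""
    def __init__(self, state):
        self.state = state

    def observe(self):
        # Partial, imperfect observation of the true context state.
        return self.state + 0.5

    def controller_act(self, unified_action):
        # Performance measure: negative distance from the true state.
        return -abs(unified_action - self.state)

def validate(actions, low=-100.0, high=100.0):
    """Validation step: discard out-of-range (invalid) player actions."""
    return [a for a in actions if low <= a <= high]

def aggregate(actions):
    """Aggregation by simple averaging into a single unified action."""
    return statistics.mean(actions)

def crowd_computer_step(context, player_policies):
    """One task interval of the crowd computer."""
    observation = context.observe()
    # Each player responds to a task derived from the observation
    # (here the trivial identity context-task mapping is used).
    actions = [policy(observation) for policy in player_policies]
    unified = aggregate(validate(actions))
    performance = context.controller_act(unified)
    return unified, performance
```

A crowd of three sensible players plus one invalid outlier, for example, yields a unified action equal to the true state once validation filters the outlier out.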
Benefits of gamified crowd computation
Economics: For many tasks, the expected quality of a crowdsourced result increases as the crowd size increases; this is the wisdom of the crowd effect.
However, in traditional approaches, expansion is disincentivised by the need to pay each participant. As a result, and despite recent efforts to maximise the
efficiency of crowd usage, many researchers remain unconvinced by the economic viability of Mechanical Turk for large-scale applications. With gamification,
no such tradeoff between cost and quality must be faced. Video game players participate for free, or even pay to do so, and while
costs would accompany any required expansion of server infrastructure, economies of scale mean this would not scale linearly in the same manner as wages. In the
gamified crowd computer paradigm, incentives would be fully aligned towards expansion, and the upper bound on system performance lifted.
Architecture: Existing console and PC game servers have already solved the problems of real-time, low-latency interaction between large crowds and
rapid rendering of complex visual scenes. Additionally, game controller hardware has been meticulously optimised over decades to enable non-experts to
intuitively control many-degree-of-freedom characters within dynamic 3D spaces. Given that these would be some of the central technical challenges of crowd
computation for robotic control, it would be pragmatic to acknowledge the work already done by game developers and use their system architectures as a
scalable base for crowd computer development.
Downtime: Keeping crowds consistently engaged in gaps between infrequent, irregularly-spaced tasks has been cited as an important and costly
challenge for real-time crowdsourcing [8]. This problem largely disappears if gamification is used. Video game players may not care, or
even be aware, whether or not their gameplay is being actively harnessed for robotic applications, and would happily continue to play during such downtime
as long as their in-game experience remained enjoyable. Downtime may be an excellent opportunity to deploy so-called gold standard data -
tasks for which the correct action is known in advance - to help teach players how to act in a given situation, and identify strong performers whose actions
could be weighted more highly in the aggregation step.
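Gold standard data also suggests a simple weighting scheme for the aggregation step. The sketch below is an illustrative assumption rather than a method drawn from the literature: each player's accuracy on gold-standard tasks becomes the weight of their vote on live tasks.

```python
def gold_accuracy(player_answers, gold_answers):
    """Fraction of gold-standard tasks the player answered correctly."""
    correct = sum(1 for task, answer in gold_answers.items()
                  if player_answers.get(task) == answer)
    return correct / max(1, len(gold_answers))

def weighted_vote(actions_by_player, weights):
    """Aggregate discrete actions, weighting each player's vote by their
    gold-task accuracy, and return the winning action."""
    tally = {}
    for player, action in actions_by_player.items():
        tally[action] = tally.get(action, 0.0) + weights.get(player, 0.0)
    return max(tally, key=tally.get)
```

Under this scheme, a single strong performer can outvote several players with poor gold-task records, which is the intended behaviour when crowd quality is mixed.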
Crowd Quality: Regular video gamers tend to be strongly motivated to improve, compete, face new challenges and find creative solutions. Typically
more highly educated and optimistic than the general population, they are more likely to be willing early adopters of radical new
technologies such as robotic gamification, and to give valuable feedback about their experiences to aid future development. This makes them a high-quality
demographic from which to form crowds.
Ethics: Mechanical Turk has been criticised by Harvard Law Professor Jonathan Zittrain as a "digital sweatshop", in which participants complete
repetitive, uninspiring work for unknown clients while negotiating a ruthlessly competitive marketplace. Viewed as contractors, and thus unprotected
by labour standards, workers receive very low wages (median $2/hour) that fluctuate wildly based on the available work. A newer platform called
Daemo aims to be a more equitable and transparent "self-governed" crowdsourcing marketplace, but we believe that gamification is a fundamentally
more ethical approach. In gamification, human intelligence is not exploited so directly: players engage because they genuinely want to, and only do what they
would have done anyway during leisure time. By taking the role of consumers, players have considerably more power, and game developers must make their games
truly enjoyable and rewarding to attract them.
Here, we present two exemplary applications that combine gamification and crowd computation in robotics.
A robotic harvester sits on a cartesian gantry above a bed of hydroponically-grown microgreens such as coriander and radish. The robot is equipped with a camera, a blade and gripper
for harvesting, and a pesticide spray nozzle. Its task is to harvest crops when they are fully-grown and intervene through targeted spraying to prevent the spread of insect infestations.
Since it would be difficult to autonomously quantify growth success or codify aesthetic qualities of plant appearance, and the visual signs of infestation are extremely subtle, a gamified
crowd computer approach is used to harness humans' superior intuitive judgement.
Every 30 minutes, at time t, the true latent state of the plants' health and growth progression S_t is partially observed by taking a top-down photograph O_t with the camera.
A cartoon filter is applied to O_t, which is then cropped into a grid of square regions. The sub-regions are randomly clustered into batches (each batch forms a task T_t) which
are assigned to different sub-groups of a crowd C of video game players. The game visualisation v consists of a 3D cartoon garden world, in which one member of each sub-crowd
controls a 'gardener' character and the rest take the role of 'insects'. The garden world is populated by plant pots, and the texture map for the plant in each pot is taken from an
image sub-region within the assigned batch. Over a period of 2 minutes, consisting of many in-game frames, the insects must fly around the garden, and attempt to land in fully-grown
pots to 'eat' them. Meanwhile, the gardener must use a net to catch the insects and destroy pots that show signs of infestation.
If enough insects gather on a pot for several seconds, the game marks it as infested, the insects gain points and the gardener loses points. Crucially, insects are not shown each other's
movements, so the best strategy is for them to gather within genuinely fully-grown pots (according to their visual appearance) rather than choose randomly, and deviate from the very best
pots cautiously due to the danger of being caught in flight by the gardener. Validation is provided by augmenting the set of pots with gold examples which have previously been
correctly labelled as fully-grown, infested or neither. Insects and gardeners alike are given points if they act appropriately with respect to these known examples, and since they are
not told which pots are gold examples, they are incentivised to assume that all of them are.
Many details of gameplay g_t - such as the specific flight patterns of the insects - are irrelevant to the external context and serve only to enrich players' enjoyment of the game.
Recorded per-player actions a_t^p may be as simple as the list of visited pots (for insects) or destroyed pots (for the gardener). During the 30 minute task interval, image
sub-regions are recycled in several 2 minute games, then actions from the entire crowd C are aggregated through an agreement-based approach to assign a single 'growth probability'
and 'infestation probability' to each sub-region, which could be represented visually as heat maps across the plant bed. These heat maps form the crowd computer's overall action A_t;
its response to the photographic observation O_t. A_t is taken as advisory information by the robot controller, which may follow a threshold-based decision process to determine
which areas of the bed to harvest and which to spray with pesticide. It synthesises low-level motor commands m_t to execute these tasks, then takes another photograph O_{t+1}
for the next task interval.
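The agreement-based aggregation step can be made concrete with a short sketch. Assuming, purely for illustration, that each insect player's recorded action is the list of pot indices they visited, the fraction of players who landed on each pot estimates its probability of deserving intervention:

```python
def agreement_heatmap(visited_pots_per_player, num_regions):
    """Agreement-based aggregation: the fraction of players who landed
    on each image sub-region estimates its probability of being, e.g.,
    infested. Returns one probability per sub-region (a flat heat map)."""
    counts = [0] * num_regions
    for visited in visited_pots_per_player:
        for region in set(visited):   # count each player at most once per region
            counts[region] += 1
    n = len(visited_pots_per_player)
    return [c / n for c in counts]
```

The robot controller could then apply a simple threshold to this heat map - spraying, say, every sub-region whose probability exceeds 0.5 - as its advisory decision process.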
Occasionally, a human expert can compare the heat maps to actual growth in the plant bed and label the accuracy of the crowd computer's output for each sub-region. This manual labelling
creates new gold examples, and provides a context performance measure R which can be propagated through the system to update parameters of the context-task mapping function, game
visualisation and scoring system, and action aggregation mechanism.
A mobile robotic platform is being developed to navigate rubble-filled sites after natural disasters and find trapped survivors. It is equipped with a 3D LIDAR sensor to map its immediate
environment as a point cloud, and a combined Bluetooth/WiFi/radio module that detects signals from a person's mobile phone to approximate a vector to their location. A fleet of
prototype robots is tested in an artificial rubble environment, where the goal is to navigate to a dummy human target while avoiding all obstacles. A crowd computer is constructed to assist
the robot in navigating the complex, unstructured environment.
At a time t, the robot's position and orientation within the environment, and the relative position of the dummy target, form the context state S_t. Readings from the sensors form an
imperfect context state observation O_t. Given the bandwidth constraints of the robot's communication module, the point cloud from the LIDAR sensor is rarefied before being transmitted
(this constitutes a minor context-task mapping). The task is embedded in a game that bears no ostensible resemblance to the disaster relief context, which involves players embarking on an
intergalactic voyage atop a spacefaring dragon. The pre-processed point cloud is visualised as a field of colourful stars and planets, and the foreground contains a dragon, jockeyed by the
players' controllable character.
The goal of the game is to steer the dragon (robot) through the field of astronomical bodies (rubble) in the direction indicated by a large on-screen arrow (vector to dummy target).
The game and robot operate synchronously, with exactly one in-game frame per set of sensor readings (F=1; task interval ~30ms). At each task interval, a crowd of players, all faced
with an identical task visualisation, each input a steering command a_t^p for the dragon via a controller joystick. Additional enrichment is included by equipping players with
weapons with which to shoot enemy characters overlaid on the moving scene, which adds to their in-game score, but this aspect of gameplay is ignored in action synthesis. The steering
actions for each task interval are aggregated through a leader-based approach [7], in which the action of the single player whose recent commands have agreed most closely with
the average of their peers is used as the unified action A_t.
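Leader-based aggregation of this kind can be sketched in a few lines. This is an illustrative simplification - among other assumptions, we use the whole-crowd average as a stand-in for each player's peer average - but it captures the core mechanism of electing the most consistent player:

```python
def elect_leader(history):
    """history[p] is the list of player p's recent steering commands.
    The leader is the player whose recent commands lie closest (in total
    absolute deviation) to the crowd's mean command at each step."""
    players = list(history)
    n_steps = len(history[players[0]])
    means = [sum(history[p][i] for p in players) / len(players)
             for i in range(n_steps)]
    def disagreement(p):
        return sum(abs(history[p][i] - means[i]) for i in range(n_steps))
    return min(players, key=disagreement)

def unified_action(history, current_actions):
    """The unified steering action is the elected leader's current command."""
    return current_actions[elect_leader(history)]
```

Because the leader is re-elected continuously, a player who drifts away from the crowd consensus loses the role, which supplies exactly the competitive incentive described below.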
The unified action is sent to the robot controller, which translates it into wheel motor commands m_t. This initiates a sequence of changes: the robot moves; the next sensor readings
O_{t+1} differ from the previous ones; the in-game task and visualisation are modified. While each player's dragon remains fixed in the centre of their screen, their respective jockey
is visualised as leaning left or right by an amount corresponding to their deviation from the leader's action in the preceding task interval. If a player deviates too much from the leader's
action, they 'fall off', thereby losing points before regenerating back on the dragon. Given that direct communication between players is not permitted, the best policy for each player is
to attempt a safe and efficient path through the starscape, and trust that the bulk of the crowd does likewise.
Several such games may be ongoing simultaneously with different robots in the fleet. Given that the robots are in a testing environment, crashes are non-catastrophic.
Bumpers around the robots' perimeters detect any collision, which is sent to the crowd computer as context performance feedback R. This information is provided directly
to players via a 'GAME OVER' message and also used to update internal system parameters. The entire system, which serves to map observations O into unified actions A,
is used to create a dataset for training a supervised machine learning model. Once the action-selection of the model matches that of the crowd to a specified degree of accuracy,
autonomous navigation can be trialled in the testing environment. When eventually deployed in the field, the rescue robots may be controlled through a hybrid of autonomous and
crowdsourced control, calling on the increased adaptability of the crowd when atypical situations are encountered.
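The handoff criterion above - training a model on crowd data and switching to autonomy once its agreement with the crowd is high enough - can be sketched with a deliberately simple stand-in model. Here we assume scalar observations and a one-nearest-neighbour lookup; all names and the tolerance value are illustrative:

```python
def fit_nearest_neighbour(dataset):
    """dataset: list of (observation, unified_action) pairs gathered from
    the crowd computer. Returns a trivial 1-NN predictor as a stand-in for
    a full supervised learning model."""
    def predict(obs):
        nearest = min(dataset, key=lambda pair: abs(pair[0] - obs))
        return nearest[1]
    return predict

def agreement(model, held_out, tolerance=0.1):
    """Fraction of held-out crowd actions the model reproduces to within
    the given tolerance; autonomy is trialled once this is high enough."""
    hits = sum(1 for obs, act in held_out if abs(model(obs) - act) <= tolerance)
    return hits / len(held_out)
```

In a hybrid deployment, a low agreement score on recent observations could itself serve as the trigger for handing control back to the crowd when atypical situations arise.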
Wider implications of gamified crowd computation
We envisage two possible routes to the industrial application of gamified crowd computers. The standard approach, adopted in all existing literature and implied by the two example applications
in the preceding section, would be to build a bespoke online video game to solve a well-defined class of problem in a particular context. The wide range of technically-similar prior work provides
strong evidence that this would be achievable with today's technology. However, this approach is not very scalable. Each commercial owner of automation equipment would either require in-house game
development expertise, or need to contract out the task of developing a functional crowd computer to an external studio, at great inconvenience and expense. The use of bespoke games would also be
rather rigid. By incorporating systemwide learning-from-feedback, a gamified crowd computer may adapt to gradual shifts in the context, but bespoke games would be inflexible to more qualitative changes,
such as the addition of wholly new stages to an assembly line.
More interesting, and likely carrying greater long-term potential, would be to take advantage of the power and popularity of existing game consoles, servers, engines and titles. Robotics tasks
would be embedded (after suitable mapping) into mainstream games, which had not been developed exclusively for gamification, but would nonetheless incorporate it as an architectural feature.
Developers could delineate a certain amount of gamification real estate within their games, for which robot owners could bid. Rather than gamifying entire robotics problems, it may be
more efficient to break them into many micro-tasks, which can be woven into a rich game environment alongside tasks from many other robot owners (this mechanism has parallels to distributed
computing projects such as SETI@Home). While the cognitive labour of players would be available for free, game developers could charge a fee on a per-task basis rather than
as a function of players' gaming time, thereby eliminating the costly downtime issue encountered in prior work. With this payment structure, developers would be keen to maximise the amount of
gamification real estate without compromising the enjoyment of play, establishing a trend towards maximal productive use of the available human computation. Since modern open-world games have
many more degrees-of-freedom than traditional arcade titles, thus many more opportunities for parametric modification, they would likely be the most amenable to the strategic insertion of gamified micro-tasks.
This latter approach deserves further exploration, since it seems that every plausible stakeholder would benefit. Robot owners would be provided with a flexible, low-cost resource of human
computation to solve their problems. Machine learning researchers would receive vast new datasets with which to train their models. Game developers would have access to an additional revenue
stream, which they could choose to invest in R&D or use to subsidise reduced game prices. Players would welcome price reductions whilst still being able to play their favourite video games,
perhaps augmented by exciting new game mechanics to surround the gamified tasks. Finally, an entirely new industry would emerge to construct intermediate platforms to connect the robotics and
video game environments in an efficient and user-friendly manner. This industry would employ many talented roboticists, computer scientists and designers to face the significant technical challenges.
Achieving the described outcome would rely on excellent integration of multiple complex systems, and a developed theory of how context-task mappings can be constructed to create interesting and
varied game scenarios without compromising the quality of player actions.
We expect the use of crowdsourcing to bootstrap machine learning to become commonplace, though not for all applications. In domains where response speed and scale are of paramount importance,
it would make sense to hand off control from the crowd to a computer algorithm as soon as performance were comparable. In contrast, robots whose contexts are highly variable and uncertain may
perform best under hybrid or fully-human control. For tasks which are completed very infrequently, it may simply not be worth the effort of training a complex model, or insufficient data may
be available to do so effectively.
If gamified crowd computation were proven feasible for industrial automation, questions would arise around possible negative consequences. For example, given the safety-critical nature of many
industrial applications, how would distributed human-in-the-loop robotic control impact on safety, security and accountability? Most gamers play for fun more than pure achievement, with many
seeking to experiment and explore the boundaries of their game environment rather than rigidly responding to the task presented to them. Gameplay from such individuals could pollute a crowd
computer with irrelevant actions, in turn compromising the quality of the output and the performance of the controlled robotic system. This result may be both costly and dangerous, so great
care would need to be taken to build action validation mechanisms that reliably filter out non-directed gameplay. Additionally, even humans who are fully dedicated to a task are clearly fallible,
and the wisdom of the crowd is not absolute. If something did go wrong in an industrial context, who would be held to account?
Another concern could surround how to manage the vast bodies of player-generated data within a crowd computer. Particularly in more complex video games, gameplay histories could become a valuable
commercial commodity, which could be mined for patterns and perhaps used by advertisers to target campaigns towards individuals with certain skills, habits, biases, storyline preferences and
temperaments. More directly, gameplay would be of great value to the industrial entities which want to use it to control their equipment. Particularly talented players, whose actions would be used
disproportionately frequently under leader-based aggregation approaches, may be in a strong position to demand ownership over their gameplay, and even remuneration for their efforts.
Most dystopian of all is the prospect of unethical applications: is there a possibility that unsuspecting gamers may one day be used to control surveillance bots and killer drones from the
comfort of their sofas, in a scenario reminiscent of the plot of Orson Scott Card's Ender's Game? Regulation may be required to dictate what kinds of tasks could and could not be
ethically gamified, and players may need to be clearly notified whenever they were assisting the execution of a robotic task. However, any such knowledge of the context would naturally
compromise privacy, and may influence players' gameplay styles in unpredictable ways.
Challenges of practical implementation
The method of practical implementation of a crowd computer would naturally influence its degree of success. One consideration is how best to harness task
differentiation for a given robotics problem: in some domains, collaboration between diverse agents may be a central feature, whereas in others, a
non-differentiated task may be necessary to allow centralised planning. If each player's gameplay history were incorporated into the task mapping function,
differentiation could also be harnessed to adapt tasks to suit individual skillsets. Also of great importance is the choice of in-game user interface and
control scheme (collectively forming the tools, using our terminology). Much of the lag generated within a crowd computer comes not while players
are deciding what action to take, but rather as they mechanically communicate their intentions via low-bandwidth controls. For applications with short task
intervals, it becomes crucial that tools are chosen to generate usable actions as efficiently as possible (by avoiding, for example, complex nested menus
or the need to memorise long strings of button presses).
Attention must also be given to developing a system to validate, correct and screen players' actions. In prior crowdsourcing work, validation has taken the
form of applying hard constraints, such as only allowing entry of text strings if they match dictionary words, or using heuristics to correct
potential sloppiness automatically, such as by filling in minuscule gaps in screen area selections. Alternatively, the crowd itself could be
used as a means of validation: actions that deviate too far from the crowd average could be excluded, or specific tasks could be created for some players
that serve to validate the actions of others. Similarly, there are many possible ways of aggregating crowd actions to create a single high-quality
output. We hypothesise that the leader-based aggregation approach would be effective for a wide variety of short-interval, non-differentiated tasks
where multi-step planning is important. It has the favourable property of weighing the actions of consistent players more highly
and introduces an extra level of (potentially enjoyable) competition for the prestige of attaining leader status. However, the approach is unlikely to be
universally suitable. Other previously-explored aggregation methods such as simple continuous averaging [17], voting and clustering [19],
agreement between collaborating pairs [12] and bespoke expectation-maximising algorithms [18] may each need to be called upon.
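As a concrete illustration of crowd-based validation, actions that deviate too far from the crowd mean can be excluded before aggregation. This sketch assumes scalar actions; the threshold parameter k is a tunable assumption, not a value from prior work:

```python
import statistics

def crowd_validate(actions, k=2.0):
    """Crowd-based validation: discard actions lying more than k standard
    deviations from the crowd mean, returning the cleaned list."""
    mu = statistics.mean(actions)
    sigma = statistics.pstdev(actions)
    if sigma == 0:
        return list(actions)   # unanimous crowd: nothing to exclude
    return [a for a in actions if abs(a - mu) <= k * sigma]
```

A smaller k makes the filter more aggressive; in practice it would need tuning against gold-standard tasks so that creative-but-valid play is not discarded along with non-directed gameplay.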
Finally, despite the aforementioned speed of game servers and reliable availability of gamer crowds, a game-based crowd computer may benefit from
incorporating some of the intelligent scheduling mechanisms proposed in prior work. These have varied from the lookahead approach for creating tasks
from predicted future states [9] to the preemptive recruitment of crowds before context observations are available and even the
artificial extension of task intervals by segmenting time series data, dividing it between a crowd and warping the speed to give players more time to act [40].
Planned future directions
In this work, we have sought to provide only a broad introduction to game-based crowd computation for industrial automation, and our understanding of its advantages.
Much more must be done before the technique can be applied in industrial contexts. Future work may begin with example implementations, in which simple games are built
for simulated robotics problems, to add concrete detail to the high-level features presented here. Particularly pressing is the need for robust methods of action
aggregation and validation that work across many application areas.
A theoretical analysis of the task mapping problem would also be valuable. The more ostensibly different a gameplay scenario is from the robotics problem that underlies it,
the better the privacy of the system, and the more flexibility in game design is afforded to developers. However, it is unclear whether extremely non-intuitive mappings
are actually viable. It will be important to understand how much nonlinear transformation can be applied to information from a robotic context, before any hope of
recovering usable actions from gameplay is lost. It will also be important to understand how such a mapping can be practically created, either manually or autonomously.
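A toy example makes the invertibility requirement concrete. Here a one-dimensional robot coordinate is disguised as a game pixel coordinate via a monotone nonlinearity; both functions and all parameter values are hypothetical. Any transform used to disguise the task must remain invertible on the relevant range, or gameplay cannot be mapped back into usable robot commands.

```python
import math

def to_game(x, scale=40.0, offset=320.0):
    """Robot coordinate (metres) -> game pixel coordinate."""
    return offset + scale * math.tanh(x)      # nonlinear but monotone

def to_robot(px, scale=40.0, offset=320.0):
    """Game pixel coordinate -> robot coordinate (inverse of to_game)."""
    return math.atanh((px - offset) / scale)
```

Note that tanh saturates for large |x|: distant robot coordinates map to nearly identical pixels, so the inverse becomes numerically useless there. This is exactly the kind of information loss that bounds how much nonlinear transformation a mapping can tolerate.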
Further theoretical work may investigate ways of representing a crowd computer as a unified learning system, incorporating both the human players and the functions which
combine and transform the data flows. We have suggested framing the context as a POMDP and the crowd computer as a reinforcement learning agent, since this appears to fit
many of the high-level features, but better options may be proposed in future.
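A minimal sketch of this framing is given below. The environment dynamics, observation noise and `crowd_policy` are all stand-in assumptions rather than components of any real system; in practice, `crowd_policy` would embed the observation into a game, collect player actions and aggregate them.

```python
import random

def crowd_policy(observation, crowd_size=5):
    """Stand-in for 'embed in game, collect and aggregate actions'."""
    votes = [observation + random.gauss(0, 0.1) for _ in range(crowd_size)]
    return sum(votes) / len(votes)            # simple continuous averaging

def run_episode(steps=10):
    """Treat the crowd computer as an agent acting in a POMDP."""
    state = 0.0                               # hidden environment state
    total_reward = 0.0
    for _ in range(steps):
        observation = state + random.gauss(0, 0.05)  # partial observability
        action = crowd_policy(observation)
        total_reward += -abs(state - action)  # reward: track the hidden state
        state += random.gauss(0, 0.2)         # stochastic state transition
    return total_reward
```

Under this view, standard reinforcement learning machinery (e.g. for credit assignment or belief tracking) could in principle be applied on top of the aggregation layer.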
Finally, for widespread commercialisation to be realised, a gamification platform equivalent to Mechanical Turk would need to be developed. Such a platform could assist
an owner of automation equipment in describing their problem and connecting sensor feeds, and perhaps automatically generate a representation within a game environment.
It would then need to serve as a secure interface between the robotic system and the game during operation, incorporating variants of prior innovations for efficient
scheduling of tasks and recruitment of crowds.
References in order of appearance.
[1] Limelight Networks. The State of Online Gaming. 2018.
[2] Newzoo. 2018 Global Games Market Report. 2018.
[3] Bureau of Labor Statistics, United States Department of Labor. Usual Weekly Earnings of Wage and Salary Workers First Quarter 2017.
[4] Nak Young Chong, Tetsuo Kotoku, Kohtaro Ohba, Kiyoshi Komoriya, Nobuto Matsuhira, and Kazuo Tanie. Remote coordinated controls in multiple telerobot cooperation. 2000.
[5] Mahmood Hosseini, Alimohammad Shahri, Keith Phalp, Jacqui Taylor, and Raian Ali. Crowdsourcing: A taxonomy and systematic mapping study. Computer Science Review, 17:43-69, Aug 2015.
[6] Rion Snow, Brendan O'Connor, Daniel Jurafsky, and Andrew Y Ng. Cheap and Fast But is it Good? Evaluating Non-Expert Annotations for Natural Language Tasks. 2008.
[7] K. Goldberg and B. Chen. Collaborative control of robot motion: robustness to error. IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 655-660, 2001.
[8] Richard Dawkins. The Blind Watchmaker. Norton & Company, 1986.
[9] Victor F. Araman and Rene Caldentey. Crowdvoting the timing of new product introduction. Jan 2016.
[10] Rick Bonney, Caren B. Cooper, Janis Dickinson, Steve Kelling, Tina Phillips, Kenneth V. Rosenberg, and Jennifer Shirk. Citizen science: A developing tool for expanding science knowledge and scientific literacy. BioScience, 59(11):977-984, 2009.
[11] Karim R. Lakhani, Anne-Laure Fayard, Natalia Levina, and Stephanie Healy Pokrywa. Open IDEO. Harvard Business School Technology & Operations Mgt. Unit, (Case No. 612-066), Feb 2012.
[12] Luis von Ahn and Laura Dabbish. Designing games with a purpose. Communications of the ACM, 51(8), Aug 2008.
[13] Luis von Ahn and Laura Dabbish. Labeling images with a computer game. Proceedings of the 2004 Conference on Human Factors in Computing Systems - CHI '04, pages 319-326, 2004.
[14] Luis von Ahn, Mihir Kedia, and Manuel Blum. Verbosity: a game for collecting common-sense facts. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI '06, page 75, 2006.
[15] Luis von Ahn, Benjamin Maurer, Colin McMillen, David Abraham, and Manuel Blum. reCAPTCHA: Human-Based Character Recognition via Web Security Measures. Science, 321(5895):1465-1468, 2008.
[16] Duolingo. https://www.duolingo.com/. Accessed: 23 Aug 2018.
[17] K. Goldberg, B. Chen, R. Solomon, S. Bui, B. Farzin, J. Heitler, D. Poon, and G. Smith. Collaborative teleoperation via the internet. IEEE International Conference on Robotics and Automation, pages 2019-2024, 2000.
[18] D. Song, A. Pashkevich, and K. Goldberg. ShareCam part II: approximate and distributed algorithms for a collaboratively controlled robotic Webcam. IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 1087-1093, 2003.
[19] K. Goldberg, D. Song, Y. Khor, D. Pescovitz, A. Levandowski, J. Himmelstein, J. Shih, A. Ho, E. Paulos, and J. Donath. Collaborative online teleoperation with spatial dynamic voting and a human tele-actor. IEEE International Conference On Robotics And Automation, pages 1179-1184, 2002.
[20] A Sorokin, D Berenson, S S Srinivasa, and M Hebert. People helping robots helping people: Crowdsourcing for grasping novel objects. IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 2117-2122, 2010.
[21] Sarah Osentoski, Graylin Jay, Christopher Crick, and Odest Chadwicke Jenkins. Crowdsourcing for closed-loop control. 2010.
[22] Sonia Chernova, Jeff Orkin, and Cynthia Breazeal. Crowdsourcing HRI Through Online Multiplayer Games. 2010.
[23] Sonia Chernova, Nick DePalma, and Cynthia Breazeal. Crowd-sourcing real-world human-robot dialogue and teamwork through online multiplayer games. AI Magazine, 32(4):100-111, Dec 2011.
[24] Walter S. Lasecki, Kyle I. Murray, Samuel White, Robert C. Miller, and Jeffrey P. Bigham. Real-time crowd control of existing interfaces. Proceedings of the 24th annual ACM symposium on User interface software and technology, pages 23-32, 2011.
[25] Walter Lasecki, Christopher Miller, Adam Sadilek, Andrew Abu-moussa, Donato Borrello, Raja Kushalnagar, and Jeffrey Bigham. Real-time captioning by groups of non-experts. Proceedings of the 25th annual ACM symposium on User interface software and technology, pages 23-24, 2012.
[26] Gierad Laput, Walter S. Lasecki, Jason Wiese, Robert Xiao, Jeffrey P. Bigham, and Chris Harrison. Zensors: Adaptive, rapidly deployable, human-intelligent sensor feeds. Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pages 1935-1944, 2015.
[27] Walter S. Lasecki, Juho Kim, Nick Rafter, Onkur Sen, Jeffrey P. Bigham, and Michael S. Bernstein. Apparition: Crowdsourced user interfaces that come to life as you sketch them. Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pages 1925-1934, 2015.
[28] Sai R Gouravajhala, Jinyeong Yim, Karthik Desingh, Yanda Huang, Odest Chadwicke Jenkins, and Walter S Lasecki. Eureca: Enhanced understanding of real environments via crowd assistance. 2018.
[29] Wai L. Khoo, Greg Olmschenk, Zhigang Zhu, and Tony Ro. Evaluating crowd sourced navigation for the visually impaired in a virtual environment. IEEE International Conference on Mobile Services, pages 431-437, 2015.
[30] Amazon Mechanical Turk. https://www.mturk.com/. Accessed: 23 Aug 2018.
[31] Akshay Rao, Harmanpreet Kaur, and Walter S Lasecki. Plexiglass: Multiplexing passive and active tasks for more efficient crowdsourcing. 2018.
[32] Alan Lundgard, Yiwei Yang, Maya L. Foster, and Walter S. Lasecki. Bolt: Instantaneous crowdsourcing via just-in-time training. Proceedings of the CHI Conference on Human Factors in Computing Systems,(2), 2018.
[33] Harmanpreet Kaur, Mitchell Gordon, Yiwei Yang, Jeffrey P. Bigham, Jaime Teevan, Ece Kamar, and Walter S. Lasecki. CrowdMask: Using crowds to preserve privacy in crowd-powered systems via progressive filtering. 2017.
[34] John Le, Andy Edmonds, Vaughn Hester, and Lukas Biewald. Ensuring quality in crowdsourced search relevance evaluation: The effects of training question distribution. 2010.
[35] Snehal (Neil) Gaikwad, Jeff Regino, Aditi Mithal, Adam Ginzberg, Aditi Nath, Karolina R. Ziulkoski, Trygve Cossette, Dilrukshi Gamage, Angela Richmond-Fuller, Ryo Suzuki, et al. Daemo: A self-governed crowdsourcing marketplace. pages 101-102. ACM Press, 2015.
[36] Gloria Re Calegari, Gioele Nasi, and Irene Celino. Human computation vs. machine learning: an experimental comparison for image classification. Human Computation, 5(1):13-30, 2018.
[37] M.T.J. Spaan. Partially Observable Markov Decision Processes. in Reinforcement Learning. Adaptation, Learning, and Optimization, vol.12. Springer, 2012.
[38] David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy P. Lillicrap, Karen Simonyan, and Demis Hassabis. Mastering chess and shogi by self-play with a general reinforcement learning algorithm. CoRR, abs/1712.01815, 2017.
[39] OpenAI Five. https://blog.openai.com/openai-five/, 2018. Accessed: 28 Aug 2018.
[40] Walter S. Lasecki, Christopher D. Miller, and Jeffrey P. Bigham. Warping time for more effective real-time crowdsourcing. page 2033. ACM Press, 2013.
List of research papers.
[#1] Tom Bewley and Minas Liarokapis, "On the Combination of Gamification and Crowd Computation in Industrial Automation and Robotics Applications," IEEE International Conference on Robotics and Automation (ICRA), 2019.
Interested in our research? Contact us!