Microtask Platforms

From P2P Foundation


Definition

Ross Dawson:

"Microtask platforms take projects and reduce them to a series of small, welldefined tasks that are distributed to workers all over the globe. The ability to allocate microtasks to many workers is likely to have a major impact, as business processes are increasingly broken into small pieces that are allocated as appropriate to computers or humans, and distributed around the world.

Microtask platforms are suited for a range of tasks including data gathering and checking, content management, and search engine optimization. The first-launched and largest microtask platform is Amazon Mechanical Turk; however, there are now a number of other platforms available." (Getting Results from Crowds)


How it works

"Microtask platforms are generally used for small, well-defined, repetitive tasks that usually do not require significant skills. These are applied within ongoing business processes or in some cases as part of a single project.

The basic model is that large projects are broken down into small constituent tasks, called microtasks, which are distributed to a large crowd of registered workers who work on them simultaneously." (Getting Results from Crowds)
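To make that decompose-distribute-reassemble model concrete, here is a minimal sketch in Python. It is not taken from the report and does not target any particular platform's API; all names, file names, and reward values are illustrative assumptions.

```python
# Illustrative sketch only: a generic decompose/distribute/reassemble loop,
# not the API of any particular platform.
from dataclasses import dataclass


@dataclass
class Microtask:
    task_id: int
    instructions: str
    payload: str        # the single item a worker acts on
    reward_usd: float   # microtask rewards are typically a few cents


def decompose(items, instructions, reward_usd=0.02):
    """Break a large project (a list of items) into one microtask per item."""
    return [
        Microtask(i, instructions, item, reward_usd)
        for i, item in enumerate(items)
    ]


def reassemble(results):
    """Merge per-task answers back into a single project-level result."""
    return {task_id: answer for task_id, answer in sorted(results)}


if __name__ == "__main__":
    dresses = ["dress_001.jpg", "dress_002.jpg", "dress_003.jpg"]
    tasks = decompose(dresses, "Classify the neckline of the pictured dress.")
    # In practice each task is posted to a platform and answered by remote
    # workers; here the answers are faked to show the round trip.
    fake_answers = [(t.task_id, "v-neck") for t in tasks]
    print(reassemble(fake_answers))
```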


Directory

Amazon Mechanical Turk

Mechanical Turk dominates the microtask landscape. It is the longest established platform, draws on a huge labor pool, and has advanced APIs. It describes microtasks as “Human Intelligence Tasks” (HITs). The platform can only be used by project owners with a US-based bank account.
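As a rough illustration of the API side, the hedged sketch below posts a single HIT to Amazon's requester sandbox using the boto3 MTurk client. The title, reward, timings, and question HTML are placeholder assumptions rather than values from the report, and a production HIT would embed a full answer form in the question payload.

```python
# Hedged sketch: posting one HIT ("Human Intelligence Task") with boto3.
# Title, reward, timings, and question HTML are placeholder assumptions,
# and the call requires AWS credentials for an MTurk requester account.
import boto3

# The requester sandbox lets you trial HITs without paying real workers.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# Minimal HTMLQuestion payload; a production HIT embeds a full answer form.
question_xml = """<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <p>What is the neckline of the dress shown at
       https://example.com/dress_001.jpg ?</p>
  ]]></HTMLContent>
  <FrameHeight>450</FrameHeight>
</HTMLQuestion>"""

response = mturk.create_hit(
    Title="Categorize a dress neckline",
    Description="Look at one product photo and pick the matching neckline.",
    Keywords="image, categorization, fashion",
    Reward="0.02",                    # per-assignment payment, in USD
    MaxAssignments=3,                 # ask three workers for redundancy
    LifetimeInSeconds=24 * 60 * 60,   # how long the HIT stays listed
    AssignmentDurationInSeconds=600,  # time a worker has to finish
    Question=question_xml,
)
print("Created HIT:", response["HIT"]["HITId"])
```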


Other microtask platforms

For non-U.S.-based project owners and those looking to tap other worker pools, there are a variety of other platforms, including Clickworker, Microtask, ShortTask, and Samasource.


Service marketplaces

Some employers choose to post what are effectively microtask projects onto the larger service marketplaces, but there you will need to manage individual providers or teams yourself.


Niche platforms

Some niche platforms such as Jana (for researching consumer insights) cover specific types of microtask work.


Aggregators and managed services

Aggregators and value-add services provide interfaces to, and management of, microtask workers. Aggregators typically provide a managed service and platform as a layer on top of Amazon Mechanical Turk.


Crowd Process Providers

They "include both aggregating microtask platforms as well as performing a range of value-add functions. Some companies will project manage all aspects of a microtask-based assignment from task definition to assessing data quality through to providing the technology platform. Particularly for more complex tasks, the chances of successful outcomes are greatly enhanced by using these services.

Crowd process providers include CrowdFlower, Data Discoverers, and Scalable Workforce. To a lesser extent, some microtask platforms such as Clickworker or Microtask provide managed services themselves.

Crowd process firms effectively take the advantages of leveraging the power of the crowd – such as large throughput and lower cost – and combine these with the convenience and guaranteed service levels of a Business Process Outsourcer.

Some of the crowd service firms also provide branded technology platforms, usually as a layer which sits on top of Amazon Mechanical Turk. These can usually be used in a self-service capacity by their clients." (source: Getting Results from Crowds)

Examples

Shiny Orb's use of Amazon's Mechanical Turk

Jennifer Chin and Elizabeth Yin:

"For Shiny Orb (wedding apparel), we ran two price tests. We first paid $0.03 to get the length, neckline, and sleeves classifications for each dress. For the second test, we decided to offer $0.01 for all three. We found no difference in quality, The downside to offering less compensation is that fewer workers do your gigs, making it slower to receive results. Still, we had no problem getting all dresses categorized within half a day.

From our tests, clarity affects quality more than anything else. By that I mean, we found significant improvement in results by clarifying the definitions for our categories and placing those definitions front and center.

In particular, in our first Turk test, one of the choices we had for neckline and sleeves was “Other,” which workers tended to select a lot.


Our success rate of correct categorizations for that test was:

  • 92% for length, 64% for neckline, and 64% for sleeves

In our second test, we made it very clear that “Other” basically shouldn’t be chosen, which increased our success rate in the neckline and sleeves categories to:

  • 90% for length, 86% for neckline, and 87% for sleeves


Lastly, we found that in order to get these fairly high quality numbers, we had to run the same gig with three workers, i.e., have three workers categorize each dress. We took the majority “vote” of the categories and found this to improve our quality significantly.” (Getting Results from Crowds)
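A hedged sketch of that redundancy-plus-majority-vote step could look like the following; the worker answers are invented for illustration and are not Shiny Orb's data.

```python
# Sketch of the three-workers-per-dress, majority-vote scheme described
# above; the worker answers here are invented for illustration.
from collections import Counter

worker_answers = {
    "dress_001": ["v-neck", "v-neck", "scoop"],
    "dress_002": ["strapless", "strapless", "strapless"],
    "dress_003": ["halter", "other", "halter"],
}


def majority_vote(answers):
    """Return the most common answer and the share of workers who gave it."""
    winner, count = Counter(answers).most_common(1)[0]
    return winner, count / len(answers)


for dress, answers in worker_answers.items():
    label, agreement = majority_vote(answers)
    print(f"{dress}: {label} (agreement {agreement:.0%})")
```

With three assignments per item, any two-out-of-three agreement settles the label, which keeps the redundancy cost modest while filtering out most individual worker errors.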