Amazon Mechanical Turk — Crowdsourcing marketplace for human intelligence tasks (HITs) like data labeling, content moderation, and surveys.
What is Amazon Mechanical Turk?
Amazon Mechanical Turk (MTurk) is a crowdsourcing marketplace that connects businesses and developers (Requesters) with a global, on-demand workforce (Workers) to complete tasks that require human intelligence. Think data labeling, content moderation, surveys, transcription, and image annotation.
Key Insight: MTurk provides access to human intelligence at scale — tasks that are difficult for computers but easy for humans, like “Is this image appropriate?” or “Transcribe this handwritten text.”
Key Features
| Feature | Description |
|---|---|
| Global Workforce | 500,000+ workers from 190+ countries |
| Human Intelligence Tasks (HITs) | Discrete tasks posted by Requesters and completed by Workers |
| Flexible Pricing | You set the reward per task (as low as $0.01) |
| Quality Control | Qualifications, Master Workers, and approval workflows |
| Scalability | From 10 tasks to 10 million tasks |
| API Access | Programmatic access via AWS SDK and REST API |
| Requester Dashboard | Web interface for creating and managing HITs |
| Masters Qualification | Access to top-performing Workers for higher quality |
Use Cases
Training Data for ML Models
Label images, annotate text, verify transcriptions — create high-quality training datasets for machine learning models.
Content Moderation
Human review of user-generated content for policy violations, inappropriate material, or spam.
Data Verification
Verify business listings, product information, contact details — human validation at scale.
Surveys & Research
Conduct academic research, market research, user studies — access diverse respondents quickly.
Audio/Video Transcription
Transcribe interviews, podcasts, videos — especially useful for non-standard audio or multiple speakers.
How It Works
1. Create HITs: Define task, instructions, examples, and reward
2. Publish HITs: Make tasks available to Workers
3. Workers Complete Tasks: Workers browse available HITs and submit answers
4. Review & Approve: Review submissions and approve/reject (Workers paid on approval)
5. Collect Results: Aggregate results and integrate into your workflow
```python
# Example: Creating a HIT via boto3
import boto3

# Create an MTurk client (point this at the sandbox endpoint when testing)
mturk = boto3.client('mturk', region_name='us-east-1')

# question_xml is a QuestionForm or ExternalQuestion XML document, defined elsewhere
response = mturk.create_hit(
    Title='Classify this image',
    Description='Determine if the image contains a cat',
    Reward='0.05',                    # USD paid per approved assignment
    MaxAssignments=3,                 # three Workers answer the same HIT
    LifetimeInSeconds=3600,           # HIT stays available for 1 hour
    AssignmentDurationInSeconds=300,  # Workers get 5 minutes per assignment
    Question=question_xml,
)
```
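Steps 4 and 5 run through the assignments API. A minimal sketch continuing from the example above (it assumes the `mturk` client and `response` are still in scope): list the submitted assignments for the HIT and approve them; `reject_assignment` is the counterpart for low-quality work.

```python
# Continuing from the example above: collect submitted work and approve it
hit_id = response['HIT']['HITId']

submitted = mturk.list_assignments_for_hit(
    HITId=hit_id,
    AssignmentStatuses=['Submitted'],
)
for assignment in submitted['Assignments']:
    # Answer is a QuestionFormAnswers XML string; parse it to extract the Worker's input
    print(assignment['WorkerId'], assignment['Answer'])
    mturk.approve_assignment(AssignmentId=assignment['AssignmentId'])  # approval triggers payment
```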
Pricing & Free Tier
| Aspect | Details |
|---|---|
| Free Tier | ❌ No free tier |
| Base Fee | 20% of reward paid to Workers |
| 10+ Assignments | 40% fee (20% base + 20% additional) |
| Masters Qualification | Additional 5% fee |
| Minimum Fee Per Assignment | $0.01 |
Cost Example: If you pay Workers $0.10 per task with 5 assignments per HIT:
- Worker reward: $0.50 (5 × $0.10)
- MTurk fee (20%): $0.10
- Total cost: $0.60
Cost Tip: Keep assignments per HIT under 10 to avoid the 40% fee. Break large batches into smaller HITs.
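The fee schedule above is simple to encode. A minimal sketch for estimating batch cost (`estimate_cost` is an illustrative helper, not an AWS API; the rates come from the table above, so verify them against current pricing):

```python
# Illustrative helper: estimate total MTurk cost for one HIT
def estimate_cost(reward: float, assignments: int, masters: bool = False) -> float:
    fee_rate = 0.20 if assignments < 10 else 0.40  # extra 20% at 10+ assignments
    if masters:
        fee_rate += 0.05  # Masters qualification surcharge
    worker_pay = reward * assignments
    return worker_pay * (1 + fee_rate)

print(estimate_cost(0.10, 5))   # 0.6  -> matches the cost example above
print(estimate_cost(0.10, 10))  # 1.4  -> the 40% fee kicks in at 10 assignments
```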
⚠️ Pricing Disclaimer: AWS pricing is subject to change. Pricing shown is based on information available as of January 2026. Always verify current pricing at the official MTurk pricing page.
When to Use MTurk
| Use | Don’t Use |
|---|---|
| Data labeling for ML | Automated tasks (use APIs) |
| Content moderation | Low-latency real-time tasks (<1 min) |
| Human verification | Tasks requiring specialized expertise |
| Surveys & research | Confidential/sensitive data (use private workforce) |
MTurk vs Amazon A2I
| Aspect | Mechanical Turk | Augmented AI (A2I) |
|---|---|---|
| Purpose | General crowdsourcing marketplace | Human review for ML predictions |
| Integration | Standalone platform | Integrated with Textract, Rekognition, SageMaker |
| Workflow | Manual HIT management | Automated review workflows |
| Best For | Custom tasks, data collection | ML model validation |
Quality Control
- Qualifications: Set requirements Workers must meet (approval rate, location, custom tests); see the sketch after this list
- Master Workers: Top performers (5% additional fee, but higher quality)
- Multiple Assignments: Same HIT completed by multiple Workers for consensus
- Approval Workflow: Review and reject low-quality work before paying
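Qualifications are attached to a HIT at creation time. A minimal sketch of a `QualificationRequirements` list (the QualificationTypeIds shown are MTurk's documented system qualifications for approval rate and locale; verify them against the current developer guide):

```python
# Sketch: gate the HIT to experienced Workers in a given locale
qualification_requirements = [
    {   # Lifetime approval rate of at least 95%
        'QualificationTypeId': '000000000000000000L0',  # PercentAssignmentsApproved
        'Comparator': 'GreaterThanOrEqualTo',
        'IntegerValues': [95],
    },
    {   # Worker located in the United States
        'QualificationTypeId': '00000000000000000071',  # Worker_Locale
        'Comparator': 'EqualTo',
        'LocaleValues': [{'Country': 'US'}],
    },
]

# Pass to the earlier example: create_hit(..., QualificationRequirements=qualification_requirements)
```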
Important Notes
- Workforce Quality: Worker quality varies significantly — use qualifications and Masters
- No Guarantees: Workers can abandon tasks — plan for incomplete HITs
- Ethics: Pay fair wages — median Worker earnings are ~$3-6/hour
- API-First: Programmatic access recommended for large-scale operations
- Not AWS-Managed AI: MTurk is not an AI service — it’s a marketplace for human work
Integration with AWS AI Services
MTurk is commonly used with:
- Amazon A2I — For human review loops in ML pipelines (see the sketch after this list)
- Amazon SageMaker Ground Truth — For creating labeled training datasets
- Amazon Rekognition — For human verification of low-confidence predictions
- Amazon Textract — For human review of document extraction
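As one concrete integration point, A2I selects the public (MTurk) workforce through a workteam ARN in a flow definition. A minimal sketch (the names, role ARN, task-UI ARN, and S3 path are placeholders; the public-workforce workteam ARN shown is the documented us-east-1 value, but verify it and the required fields against the A2I docs):

```python
# Sketch: an A2I flow definition that routes review tasks to the public MTurk workforce
import boto3

sagemaker = boto3.client('sagemaker', region_name='us-east-1')

response = sagemaker.create_flow_definition(
    FlowDefinitionName='image-review-flow',                     # placeholder
    RoleArn='arn:aws:iam::123456789012:role/A2IExecutionRole',  # placeholder
    HumanLoopConfig={
        # Documented public-workforce workteam ARN for us-east-1; verify per region
        'WorkteamArn': 'arn:aws:sagemaker:us-east-1:394669845002:workteam/public-crowd/default',
        'HumanTaskUiArn': 'arn:aws:sagemaker:us-east-1:123456789012:human-task-ui/image-review',  # placeholder
        'TaskTitle': 'Review low-confidence image labels',
        'TaskDescription': 'Confirm or correct the model-predicted label',
        'TaskCount': 1,
        # Per-task price is required when using the public workforce
        'PublicWorkforceTaskPrice': {
            'AmountInUsd': {'Dollars': 0, 'Cents': 6, 'TenthFractionsOfACent': 0}
        },
    },
    OutputConfig={'S3OutputPath': 's3://my-bucket/a2i-output/'},  # placeholder
)
```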
TL;DR
- MTurk = Crowdsourcing marketplace for human intelligence tasks
- Features: 500K+ Workers, global reach, API access, quality controls, flexible pricing
- Pricing: 20% fee (40% for 10+ assignments per HIT) + Worker reward
- Best for: Data labeling, content moderation, surveys, human verification
- Not for: Real-time tasks, automated workflows, confidential data
- Works with: Amazon A2I, SageMaker Ground Truth, Rekognition, Textract
Resources
- Amazon Mechanical Turk: Official marketplace for Requesters and Workers.
- MTurk Developer Guide: Complete API reference and guides.
- MTurk Pricing: Detailed pricing breakdown.