Safety / Red Teaming Data Labeling Analyst III

2026-03-20 · Tundra Technical Solutions · Alameda, CA
Description:

Job Title: Safety / Red Teaming Data Labeling Analyst III (DLA III)

Company: Meta AI (via Tundra Technical Solutions)

Location: Hybrid – 3 days onsite per week

Pay Rate: $30/hr USD

Experience Required: 4+ years

Contract: 3 months to start (Extension likely)

About the Role

Tundra Technical Solutions is hiring on behalf of Meta AI for a Safety / Red Teaming Data Labeling Analyst III (DLA III) to support AI model development and evaluation. This role is focused on improving model safety, quality, and reliability through data annotation, auditing, and adversarial testing.

You'll work closely with cross-functional teams to evaluate model outputs, identify risks, and help strengthen safety systems through structured red-teaming efforts.

Key Responsibilities

  • Execute high-quality data annotation and evaluation across multi-modal datasets
  • Perform QA auditing, including sampling, inter-annotator alignment, and error analysis
  • Design and run red-teaming / jailbreak prompts to test model safety across sensitive domains
  • Analyze model outputs to identify policy violations, risks, and edge cases
  • Apply knowledge of global political systems, events, and actors to inform content evaluation and policy enforcement
  • Collaborate with stakeholders to improve labeling guidelines and model performance

Required Qualifications

  • 4+ years of experience in data annotation, labeling, or evaluation
  • Proven experience with QA auditing methodologies (sampling, alignment, error analysis)
  • Hands-on experience with safety-focused red-teaming or adversarial testing
  • Strong understanding of US and global political landscapes and current events
  • Ability to apply policy frameworks to risk identification and content evaluation

Preferred Qualifications

  • Experience working with LLMs (Large Language Models)
  • Bachelor's degree (preferred, not required)

Why Apply?

  • Work at the forefront of AI safety and model evaluation
  • Opportunity to contribute to large-scale AI systems at Meta AI
  • Collaborative, fast-paced, and impactful environment

How to Apply

If you're interested, please apply directly, or share your resume and availability for a screening call at ...@meta.com.

