Our Team

Distributed experts delivering human-led AI evaluation and data annotation

Who We Are

AnnotaX is a boutique AI services company delivering human-led evaluation and data annotation through a distributed expert team.

We combine experienced technical leadership with trained reviewers to support AI teams building high-quality, reliable systems.

Leadership

Ronak Shakari - Founder & Lead AI Evaluator

Ronak leads AnnotaX's evaluation methodology, reviewer training, and client delivery.

She has professional experience working on large-scale AI systems, including contributions to Meta's LLaMA project, with a focus on:

  • LLM evaluation and quality assurance
  • Multilingual NLP
  • Safety-critical and mental-health AI use cases

As the primary point of contact for all engagements, Ronak ensures consistency, quality standards, and clear communication across every project.

Our Extended Evaluation Team

AnnotaX works with a network of trained evaluators and annotators engaged on a project basis to support scalability and turnaround time.

Our extended team includes specialists with:

  • Multilingual capabilities
  • Domain familiarity (including sensitive content review)
  • Experience following structured annotation and evaluation guidelines

Team members are onboarded, briefed, and quality-checked according to project-specific requirements.

How Our Team Works

Each engagement follows a team-based delivery model:

Technical Lead

One technical lead responsible for quality and oversight

Selected Reviewers

Trained reviewers matched to language and domain needs

Clear Guidelines

Documented guidelines and review processes that keep results consistent across reviewers

Single Point of Contact

Clients work with one point of contact while benefiting from full-team execution

Why This Model Works

  • Scales without sacrificing quality
  • Flexible capacity for pilots and sprints
  • Reduced risk for sensitive AI use cases
  • Clear accountability and communication

This structure allows us to support both early-stage teams and growing AI organizations.

Interested in Working With Us?

If you're looking for a reliable, team-based partner for AI evaluation or data annotation, we'd be happy to discuss your needs.

Let's Talk