The Blog

We write about LLM training, software engineering, and building AI products that work in production.

Why AI Models Need Engineers, Not Annotators, for Code Evaluation
AI & ML
10 min read

Most teams building AI coding assistants are training them on feedback from people who cannot actually read code. Here is what goes wrong, why it is hard to detect, and what the right approach looks like.

Nomos Insights · Apr 1, 2026

What Makes a Good RLHF Rubric for Coding Tasks
AI & ML
11 min read

A rubric sounds simple: a list of criteria, a scoring scale, and some instructions. But most rubrics for AI code evaluation are quietly broken in ways that poison the training data. Here is what good rubric design actually looks like.

Nomos Insights · Mar 28, 2026

5 Rubric Quality Issues That Silently Degrade Your AI Training Data
AI & ML
11 min read

Your annotation pipeline is running and the data looks clean, yet your model is behaving in ways you cannot explain. The problem may be hiding in your rubric. Here are five specific issues that silently corrupt AI training data, and how to fix each one.

Nomos Insights

How India's Competitive Programming Community Is Powering AI Training Data
AI & ML
10 min read

Millions of engineers in India have spent years training their minds on the exact skills that AI code evaluation demands. This is the story of how a programming culture built around contests became one of the most important talent pools in the AI training data industry.

Nomos Insights