Today’s Picks
Automate LLM Evaluation at Scale with MLflow make_judge()
Problem
When you ship LLM features without evaluating them, models might hallucinate, violate safety guidelines, or return incorrectly formatted responses.
Manual review doesn’t scale. Reviewers might miss subtle issues when evaluating thousands of outputs, and scoring standards often vary between people.
Solution
MLflow make_judge() applies the same evaluation standards to every output, whether you’re checking 10 or 10,000 responses.
Key capabilities (usage sketch after this list):
- Define evaluation criteria once, reuse everywhere
- Automatic rationale explaining each judgment
- Built-in judges for safety, toxicity, and hallucination detection
- Typed outputs, so results never come back in an unexpected format
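Here’s a minimal sketch of what that looks like, assuming MLflow 3.4+ where make_judge lives under mlflow.genai.judges; the criteria text, template fields, and model URI below are illustrative placeholders, not library defaults.

```python
# Requires: pip install "mlflow>=3.4"
from mlflow.genai.judges import make_judge

# Define the evaluation criteria once; {{ inputs }} and {{ outputs }}
# are filled in for every response you score.
quality_judge = make_judge(
    name="response_quality",
    instructions=(
        "Evaluate whether {{ outputs }} answers {{ inputs }} accurately, "
        "follows safety guidelines, and is valid JSON. "
        "Answer 'pass' or 'fail' and explain your reasoning."
    ),
    model="openai:/gpt-4o-mini",  # assumption: any supported judge-model URI
)

# Reuse the same judge for 10 or 10,000 responses.
feedback = quality_judge(
    inputs={"question": "Summarize the refund policy."},
    outputs='{"summary": "Refunds are issued within 14 days."}',
)
print(feedback.value, feedback.rationale)  # judgment plus the automatic rationale
```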
Worth Revisiting
LangChain v1.0: Auto-Protect Sensitive Data with PIIMiddleware
Problem
User messages often contain sensitive information like emails and phone numbers.
Logging or storing this data without protection creates compliance and security risks.
Solution
LangChain v1.0 introduces PIIMiddleware to automatically protect sensitive data before model processing.
PIIMiddleware supports multiple protection modes (sketch after this list):
- 5 built-in detectors (email, credit card, IP, MAC, URL)
- Custom regex for any PII pattern
- Replace with [REDACTED], mask as ****1234, or block entirely
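A hedged sketch of wiring this into a LangChain v1.0 agent: it assumes create_agent from langchain.agents and PIIMiddleware from langchain.agents.middleware, and the custom employee-ID regex is a hypothetical example of the custom-pattern support.

```python
# Requires: pip install "langchain>=1.0" langchain-openai
from langchain.agents import create_agent
from langchain.agents.middleware import PIIMiddleware

agent = create_agent(
    model="openai:gpt-4o-mini",
    tools=[],
    middleware=[
        # Built-in detector: replace emails with [REDACTED] before the model sees them
        PIIMiddleware("email", strategy="redact"),
        # Built-in detector: keep only the trailing digits, e.g. ****1234
        PIIMiddleware("credit_card", strategy="mask"),
        # Hypothetical custom pattern: block the request entirely if it matches
        PIIMiddleware("employee_id", detector=r"EMP-\d{6}", strategy="block"),
    ],
)

result = agent.invoke(
    {"messages": [{"role": "user", "content": "My email is jane@example.com"}]}
)
```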
Weekly Finds
litellm [LLM] – Python SDK and Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI format with cost tracking, guardrails, and logging (quick example after this list).
parlant [LLM] – LLM agents built for control with behavioral guidelines, ensuring predictable and consistent agent behavior.
GLiNER2 [ML] – Unified schema-based information extraction for NER, text classification, and structured data parsing in one pass.
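For litellm, a quick sketch of the OpenAI-format call it advertises; the model strings are just examples and swap freely across providers.

```python
# Requires: pip install litellm
from litellm import completion

# Same OpenAI-style call shape regardless of provider; change the model
# string (e.g. to a Claude or Gemini model) to switch backends.
response = completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Give me one tip for faster pandas joins."}],
)
print(response.choices[0].message.content)
```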
Looking for a specific tool? Explore 70+ Python tools →
Stay Current with CodeCut
Actionable Python tips, curated for busy data pros. Skim in under 2 minutes, three times a week.


