Lessons from Inside a Chinese AI Lab
An Analysis of the Chinese LLM Development System That Prioritizes Model Optimization over Individual Reputation
Fine-tuning CLIP on a Niche Domain: How I Got +26pp Accuracy on Architectural Styles and What You Can Apply to Your Own Domain
Three small models for healthcare intake — and what shipping all three taught me
How We Built a Sub-200ms Multilingual Chat System Translating 100+ Languages with Our Own LLM
I fine-tuned a bias judge for $30. The training was the easy part.
CyberSecQwen-4B: Why Defensive Cyber Needs Small, Specialized, Locally-Runnable Models
MedQA: Fine-Tuning a Clinical AI on AMD ROCm — No CUDA Required
The model isn’t the hard part: the data pipeline I built to teach Gemma 4 E2B to read Indian GST invoices.
GemmaAir: Real-Time Aircraft Engine Safety Monitor using Gemma 4 and IoT
Claude Code Integration, Token Burn Analysis & Qwen2-VL Fine-tuning Insights
Voice AI in Construction Site Management: Lessons Learned After 50 Construction (BTP) Projects
Open-source AI I'm watching: DeepSeek V4, VibeVoice, and the n8n effect
Desktop app to generate LLM fine-tuning datasets — got +16pp on HumanEval
A Unified View of AI Evolution: From Machine Learning to LLMs, RAG, and Fine-Tuning
I Built an AI-Powered Link-in-Bio Tool — Here's the Tech Behind It and Why You Can Use It Free
Fine-tuning vs. RAG: A Cost-Benefit Framework
Are We Using AI at the Wrong Scale?
AI Code Editing Gone Too Far: Stop Over-Editing Now
Part 3: The Science - Hyperparameter Tuning & Getting to 100% Precision with Warp/Oz
I Fine-Tuned Gemma 4 for LaTeX OCR. The Success Was the Problem.