Research
DeepMind publishes new results on efficient sparse training for large models
New techniques aim to reduce training cost while preserving downstream quality across reasoning and coding evaluations.
Research Desk — 15 days ago
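The piece does not detail the method itself, but as a rough illustration of what sparse training typically means in practice, the sketch below trains a small layer under a fixed top-k magnitude mask, so only a fraction of the weights ever receive gradient updates. This is a generic, minimal example under stated assumptions; the `topk_mask` helper and the 10% density are illustrative choices, not DeepMind's published technique.

```python
# Minimal sketch of static sparse training: fix a top-k magnitude
# mask up front, then restrict gradient updates to the surviving
# weights. Generic illustration only; not DeepMind's actual method.
import torch

def topk_mask(w: torch.Tensor, density: float) -> torch.Tensor:
    """Boolean mask keeping the largest `density` fraction of |w|."""
    k = max(1, int(density * w.numel()))
    kth_largest = w.abs().flatten().kthvalue(w.numel() - k + 1).values
    return w.abs() >= kth_largest

torch.manual_seed(0)
model = torch.nn.Linear(64, 8)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

# Fix the sparsity pattern once, before training (weight matrix only).
mask = topk_mask(model.weight.data, density=0.1)
model.weight.data *= mask

x, y = torch.randn(32, 64), torch.randn(32, 8)
for _ in range(100):
    loss = torch.nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    model.weight.grad *= mask  # pruned weights never get updates
    opt.step()
```

With the mask applied to both the weights and their gradients, pruned entries stay exactly zero throughout training, which is where the potential compute and memory savings come from when kernels can exploit the sparsity.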