🔥 allenai/OLMo-core
PyTorch building blocks for the OLMo ecosystem.
Building blocks for OLMo modeling and training
Installation
First install PyTorch according to the instructions specific to your operating system and hardware.
For development, we recommend installing from source:
```
git clone https://github.com/allenai/OLMo-core.git
cd OLMo-core
pip install -e .[all]
```

Or you can install from PyPI with:
```
pip install ai2-olmo-core
```
A number of optional dependencies must be installed to use certain functionality, including:
- flash-attn, ring-flash-attn, and TransformerEngine for the corresponding attention backends.
- Liger-Kernel for a low-memory "fused-linear" loss implementation.
- torchao for float8 training.
- grouped_gemm for dropless mixture-of-experts (MoE) models. You may need to compile from source until PR #21 is released (post v0.1.6).
- QuACK for some CuTe-based kernels.
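Since each of these dependencies is optional, a quick way to see which ones your environment actually provides is to probe for their import names. This is a minimal sketch, not part of OLMo-core itself, and the module names mapped below are assumptions based on each project's usual import name:

```python
import importlib.util

# Hypothetical mapping from project name to its importable module name.
OPTIONAL_DEPS = {
    "flash-attn": "flash_attn",
    "ring-flash-attn": "ring_flash_attn",
    "TransformerEngine": "transformer_engine",
    "Liger-Kernel": "liger_kernel",
    "torchao": "torchao",
    "grouped_gemm": "grouped_gemm",
}

def available_optional_deps():
    """Return a dict mapping each optional dependency to whether it is importable."""
    return {
        name: importlib.util.find_spec(module) is not None
        for name, module in OPTIONAL_DEPS.items()
    }

if __name__ == "__main__":
    for name, ok in available_optional_deps().items():
        print(f"{name}: {'installed' if ok else 'missing'}")
```

`find_spec` checks whether a module can be imported without actually importing it, so this runs quickly even when heavy CUDA extensions are installed.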
The published Docker images contain all core and optional dependencies, and are regularly tested on our in-house H100 clusters. But there are several things to keep in mind if you intend to use these images:
- They do not come with the OLMo-core package installed, only its dependencies, to accommodate regular code changes.
- They may not work on your own cluster if you have different hardware or driver/CUDA versions.
If the published images do not work for your use-case for any of the above reasons, you could adapt our Dockerfile to build your own images.
Official training scripts
Official training scripts for released models can be found in src/scripts/official/.
These scripts are meant to be launched with torchrun, or with OLMo-core's Beaker launch CLI if you have access to Beaker.
For example:
```
torchrun --nproc-per-node=8 src/scripts/official/OLMo2/OLMo-2-0325-32B-train.py \
  --save-folder=/path/to/save/checkpoints
```

You can override most configuration options from the command line. For example, to override the learning rate you could launch the script like this:
```
torchrun --nproc-per-node=8 src/scripts/official/OLMo2/OLMo-2-0325-32B-train.py \
  --save-folder=/path/to/save/checkpoints \
  --train_module.optim.lr=6e-3
```

To continue annealing from a checkpoint, we use a separate script which can be launched like this:
```
torchrun --nproc-per-node=8 src/scripts/official/OLMo2/OLMo-2-0325-32B-anneal.py \
  --save-folder=/path/to/save/checkpoints \
  --checkpoint=https://olmo-checkpoints.org/ai2-llm/peteish32/step721901
```

Available Training Scripts
| Model Family | Directory | Description |
|---|---|---|
| OLMo-2 | src/scripts/official/OLMo2/ | Training scripts and model card for OLMo-2 32B models |
| OLMo-3 | src/scripts/official/OLMo3/ | Training scripts and model cards for OLMo-3 7B and 32B models |
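The dotted command-line overrides shown above (e.g. --train_module.optim.lr=6e-3) map a dotted key path onto a nested configuration. The following sketch illustrates that mechanism on a plain dict; it mirrors the observable behavior, not OLMo-core's actual implementation, and apply_override is a hypothetical helper:

```python
# Illustrative sketch: apply a dotted override like
# "train_module.optim.lr=6e-3" to a nested config dict.

def apply_override(config: dict, dotted_key: str, raw_value: str) -> None:
    """Walk the dotted path, creating nested dicts as needed, and set the leaf.

    Numeric-looking strings are coerced to int or float; everything else is
    kept as a string.
    """
    *path, leaf = dotted_key.split(".")
    node = config
    for key in path:
        node = node.setdefault(key, {})
    try:
        if "." in raw_value or "e" in raw_value.lower():
            value = float(raw_value)
        else:
            value = int(raw_value)
    except ValueError:
        value = raw_value  # non-numeric values stay as strings
    node[leaf] = value

config = {"train_module": {"optim": {"lr": 3e-4}}}
apply_override(config, "train_module.optim.lr", "6e-3")
print(config["train_module"]["optim"]["lr"])  # 0.006
```

The real config system is typed and validates keys against the script's config classes, so unknown or mistyped override paths fail loudly rather than silently creating new entries as this sketch does.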
Inference
With Hugging Face Transformers
You can use our Hugging Face transformers integration to run inference on the OLMo checkpoints:
```
pip install 'transformers>=4.57.0'
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

olmo = AutoModelForCausalLM.from_pretrained("allenai/Olmo-3-1125-32B")
tokenizer = AutoTokenizer.from_pretrained("allenai/Olmo-3-1125-32B")
message = ["Language modeling is "]
inputs = tokenizer(message, return_tensors="pt", return_token_type_ids=False)

# Optional: move the model and inputs to the GPU.
inputs = {k: v.to("cuda") for k, v in inputs.items()}
olmo = olmo.to("cuda")

response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=1.0, top_p=0.7)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
```
Alternatively, with the Hugging Face pipeline abstraction:
```python
from transformers import pipeline

olmo_pipe = pipeline("text-generation", model="allenai/Olmo-3-1125-32B")
print(olmo_pipe("Language modeling is"))
```

With vLLM
vLLM provides high-throughput inference for OLMo models. You can use it for offline batched inference:
```
pip install 'vllm>=0.11.0'
```
```python
from vllm import LLM, SamplingParams

llm = LLM(model="allenai/Olmo-3-1125-32B")
sampling_params = SamplingParams(temperature=1.0, top_p=0.7)
prompts = ["Language modeling is"]
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```

For more details, see the vLLM documentation.
With OLMo-core (beta)
Autoregressive generation is supported directly in OLMo-core. Building on this capability, we provide a chat-loop demo for interacting with models in an interactive session:
```
python -m olmo_core.generate.chat https://olmo-checkpoints.org/ai2-llm/Olmo-3-1025-7B/stage3/step11921/ --max-new-tokens 512
```
Evaluation
Additional tools for evaluating OLMo models are available at the OLMo Eval and olmes repositories.
Development
The Python library source code is located in src/olmo_core. The corresponding tests are located in src/test. The library docs are located in docs. You can build the docs locally with make docs.
Code checks:
- We use pytest to run tests. You can run all tests with pytest -v src/test. You can also point pytest at a specific test file to run it individually.
- We use isort and black for code formatting. Ideally you should integrate these into your editor, but you can also run them manually or configure them with a pre-commit hook. To validate that all files are formatted correctly, run make style-check.
- We use ruff as our primary linter. You can run it with make lint-check.
- We use mypy as our type checker. You can run it with make type-check.
Citing
```bibtex
@misc{olmo20242olmo2furious,
  title={{2 OLMo 2 Furious}},
  author={{Team OLMo} and Pete Walsh and Luca Soldaini and Dirk Groeneveld and Kyle Lo and Shane Arora and Akshita Bhagia and Yuling Gu and Shengyi Huang and Matt Jordan and Nathan Lambert and Dustin Schwenk and Oyvind Tafjord and Taira Anderson and David Atkinson and Faeze Brahman and Christopher Clark and Pradeep Dasigi and Nouha Dziri and Michal Guerquin and Hamish Ivison and Pang Wei Koh and Jiacheng Liu and Saumya Malik and William Merrill and Lester James V. Miranda and Jacob Morrison and Tyler Murray and Crystal Nam and Valentina Pyatkin and Aman Rangapur and Michael Schmitz and Sam Skjonsberg and David Wadden and Christopher Wilhelm and Michael Wilson and Luke Zettlemoyer and Ali Farhadi and Noah A. Smith and Hannaneh Hajishirzi},
  year={2024},
  eprint={2501.00656},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2501.00656},
}
```