ml-explore/mlx-lm
Run LLMs with MLX.
MLX LM is a Python package for generating text and fine-tuning large language models on Apple silicon with MLX.
Some key features include:
- Integration with the Hugging Face Hub to easily use thousands of LLMs with a single command.
- Support for quantizing and uploading models to the Hugging Face Hub.
- Low-rank and full model fine-tuning with support for quantized models.
- Distributed inference and fine-tuning with mx.distributed.
The easiest way to get started is to install the mlx-lm package:
With pip:
pip install mlx-lm
With conda:
conda install -c conda-forge mlx-lm
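To verify the installation, you can import the package from Python and print its version (the __version__ attribute is an assumption about the package layout):

python -c "import mlx_lm; print(mlx_lm.__version__)"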
Quick Start
To generate text with an LLM use:
mlx_lm.generate --prompt "How tall is Mt Everest?"
To chat with an LLM use:
mlx_lm.chat
This will give you a chat REPL that you can use to interact with the LLM. The chat context is preserved during the lifetime of the REPL.
Commands in mlx-lm typically take command line options which let you specify the model, sampling parameters, and more. Use -h to see a list of available options for a command, e.g.:
mlx_lm.generate -h
The default model for generation and chat is mlx-community/Llama-3.2-3B-Instruct-4bit. You can specify any MLX-compatible model with the --model flag. Thousands are available in the MLX Community Hugging Face organization.
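For example, a run that selects a specific model and adjusts the sampling settings might look like the following (the --temp, --top-p, and --max-tokens flag names are assumptions; check mlx_lm.generate -h for the exact options):

mlx_lm.generate \
    --model mlx-community/Mistral-7B-Instruct-v0.3-4bit \
    --prompt "How tall is Mt Everest?" \
    --temp 0.7 \
    --top-p 0.9 \
    --max-tokens 256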
Python API
You can use mlx-lm as a module:
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")
prompt = "Write a story about Einstein"
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True
)

text = generate(model, tokenizer, prompt=prompt, verbose=True)
To see a description of all the arguments you can do:
>>> help(generate)
Check out the generation example to see how to use the API in more detail. Check out the batch generation example to see how to efficiently generate continuations for a batch of prompts.
The mlx-lm package also comes with functionality to quantize and optionally upload models to the Hugging Face Hub.
You can convert models using the Python API:
from mlx_lm import convert
repo = "mistralai/Mistral-7B-Instruct-v0.3" upload_repo = "mlx-community/My-Mistral-7B-Instruct-v0.3-4bit"
convert(repo, quantize=True, upload_repo=upload_repo)`
This will generate a 4-bit quantized Mistral 7B and upload it to the repo mlx-community/My-Mistral-7B-Instruct-v0.3-4bit. It will also save the converted model in the path mlx_model by default.
To see a description of all the arguments you can do:
>>> help(convert)
Streaming
For streaming generation, use the stream_generate function, which yields generation response objects as the text is produced.
For example,
from mlx_lm import load, stream_generate
repo = "mlx-community/Mistral-7B-Instruct-v0.3-4bit" model, tokenizer = load(repo)
prompt = "Write a story about Einstein"
messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True, )
for response in stream_generate(model, tokenizer, prompt, max_tokens=512): print(response.text, end="", flush=True) print()`
Sampling
The generate and stream_generate functions accept sampler and logits_processors keyword arguments. A sampler is any callable which accepts a possibly batched logits array and returns an array of sampled tokens. The logits_processors must be a list of callables which take the token history and current logits as input and return the processed logits. The logits processors are applied in order.
Some standard sampling functions and logits processors are provided in mlx_lm.sample_utils.
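As a rough sketch of how these hooks fit together (the make_sampler argument names are assumptions; see mlx_lm.sample_utils for the exact helpers), a custom greedy sampler and a prebuilt temperature/top-p sampler can both be passed to generate:

import mlx.core as mx

from mlx_lm import load, generate
from mlx_lm.sample_utils import make_sampler

model, tokenizer = load("mlx-community/Llama-3.2-3B-Instruct-4bit")

# A sampler is any callable from (possibly batched) logits to token ids,
# so a greedy sampler can simply take the argmax over the vocabulary axis.
def greedy_sampler(logits):
    return mx.argmax(logits, axis=-1)

text = generate(model, tokenizer, prompt="How tall is Mt Everest?", sampler=greedy_sampler)

# Or use one of the provided sampler factories (argument names are an assumption).
sampler = make_sampler(temp=0.7, top_p=0.9)
text = generate(model, tokenizer, prompt="How tall is Mt Everest?", sampler=sampler)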
Command Line
You can also use mlx-lm from the command line with:
mlx_lm.generate --model mistralai/Mistral-7B-Instruct-v0.3 --prompt "hello"
This will download a Mistral 7B model from the Hugging Face Hub and generate text using the given prompt.
For a full list of options run:
mlx_lm.generate --help
To quantize a model from the command line run:
mlx_lm.convert --model mistralai/Mistral-7B-Instruct-v0.3 -q
For more options run:
mlx_lm.convert --help
You can upload new models to Hugging Face by specifying --upload-repo to convert. For example, to upload a quantized Mistral-7B model to the MLX Hugging Face community you can do:
mlx_lm.convert \
    --model mistralai/Mistral-7B-Instruct-v0.3 \
    -q \
    --upload-repo mlx-community/my-4bit-mistral

Models can also be converted and quantized directly in the mlx-my-repo Hugging Face Space.
Long Prompts and Generations
mlx-lm has some tools to scale efficiently to long prompts and generations:
- A rotating fixed-size key-value cache.
- Prompt caching.
To use the rotating key-value cache, pass the argument --max-kv-size n, where n can be any integer. Smaller values like 512 use very little RAM but result in worse quality. Larger values like 4096 or higher use more RAM but give better quality.
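For example, to limit the key-value cache to 1024 tokens when generating with the default model (the prompt here is only a placeholder):

mlx_lm.generate \
    --max-kv-size 1024 \
    --prompt "Summarize the following text: ..."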
Caching prompts can substantially speed up reusing the same long context with different queries. To cache a prompt use mlx_lm.cache_prompt. For example:
cat prompt.txt | mlx_lm.cache_prompt \
    --model mistralai/Mistral-7B-Instruct-v0.3 \
    --prompt - \
    --prompt-cache-file mistral_prompt.safetensors

Then use the cached prompt with mlx_lm.generate:
mlx_lm.generate \
    --prompt-cache-file mistral_prompt.safetensors \
    --prompt "\nSummarize the above text."

The cached prompt is treated as a prefix to the supplied prompt. Note also that when using a cached prompt, the model is read from the cache and need not be supplied explicitly.
Prompt caching can also be used in the Python API in order to avoid recomputing the prompt. This is useful in multi-turn dialogues or across requests that use the same context. See the example for more usage details.
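As a minimal sketch of this pattern (the make_prompt_cache helper and the prompt_cache keyword are assumptions based on the repository's chat example), a single cache object can be reused across turns:

from mlx_lm import load, generate
from mlx_lm.models.cache import make_prompt_cache

model, tokenizer = load("mlx-community/Llama-3.2-3B-Instruct-4bit")

# One cache shared across turns, so earlier context is not recomputed.
prompt_cache = make_prompt_cache(model)

messages = [{"role": "user", "content": "Summarize the plot of Hamlet."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
first = generate(model, tokenizer, prompt=prompt, prompt_cache=prompt_cache)

# The follow-up turn only encodes the new message; the cached keys and
# values from the first turn are reused.
messages = [{"role": "user", "content": "Now compare it to Macbeth."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
second = generate(model, tokenizer, prompt=prompt, prompt_cache=prompt_cache)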
Supported Models
mlx-lm supports thousands of LLMs available on the Hugging Face Hub. If the model you want to run is not supported, file an issue or better yet, submit a pull request. Many supported models are available in various quantization formats in the MLX Community Hugging Face organization.
For some models the tokenizer may require you to enable the trust_remote_code option. You can do this by passing --trust-remote-code in the command line. If you don't specify the flag explicitly, you will be prompted to trust remote code in the terminal when running the model.
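For example (using the same model as the Python snippet below):

mlx_lm.generate --model qwen/Qwen-7B --trust-remote-code --prompt "hello"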
Tokenizer options can also be set in the Python API. For example:
model, tokenizer = load(
    "qwen/Qwen-7B",
    tokenizer_config={"eos_token": "<|endoftext|>", "trust_remote_code": True},
)

Large Models
Note
This requires macOS 15.0 or higher to work.
Models which are large relative to the total RAM available on the machine can be slow. mlx-lm will attempt to make them faster by wiring the memory occupied by the model and cache.
If you see the following warning message:
[WARNING] Generating with a model that requires ...
then the model will likely be slow on the given machine. If the model fits in RAM then it can often be sped up by increasing the system wired memory limit. To increase the limit, set the following sysctl:
sudo sysctl iogpu.wired_limit_mb=N
The value N should be larger than the size of the model in megabytes but smaller than the memory size of the machine.
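For example, for a model that occupies roughly 18 GB on a 32 GB machine, any value between those two sizes works; the number below is only an illustration:

sudo sysctl iogpu.wired_limit_mb=24000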