Wedeo – a Rust Rewrite of FFmpeg
Source: https://github.com/sharifhsn/wedeo (discussed at https://news.ycombinator.com/item?id=47601272)
Rust rewrite of FFmpeg, verified against FFmpeg's output bit-for-bit.
This codebase is AI-generated. Written by Claude (Anthropic) via Claude Code, directed and reviewed by a human. The AI reads FFmpeg's C source and reimplements it in Rust. Every conformance claim below is verified by automated CI on every commit, comparing to FFmpeg's output.
The intention of this project is to push the boundaries of what is possible with AI rewriting codebases in Rust. It provides no additional features compared to FFmpeg, and despite incorporating FFmpeg's assembly code, is significantly slower.
Status
| Component | Conformance | Notes |
| --- | --- | --- |
| H.264 decode | 79/79 bitexact | CAVLC + CABAC, Baseline through High profile |
| H.264 FRext | 23/55 bitexact | Progressive 4:2:0 8-bit done; MBAFF/PAFF/10-bit remaining |
| H.264 NEON (aarch64) | 1.75x speedup | MC, IDCT, deblock — FFmpeg's vendored assembly via cc |
| WAV demuxer + PCM | bitexact | RIFF/RIFX/RF64/BW64, 17 PCM formats, 13/13 FATE files |
| WAV muxer | bitexact | Roundtrip verified |
| FLAC, WavPack | bitexact | Via symphonia adapter |
| Vorbis, AAC, MP3 | ~120-140 dB SNR | Lossy codecs, float precision only |
| AV1 | bitexact | Via rav1d adapter |
| MP4 demuxer | working | H.264, AV1, AAC tracks |
| Video player | 24fps, 0 drops | GPU (wgpu), ffplay-style A/V sync, pause, volume |
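For the lossy audio codecs, conformance is measured as SNR against FFmpeg's float output rather than bit-exactness. A minimal sketch of that metric (a hypothetical helper, not wedeo's actual harness): ~120-140 dB corresponds to per-sample error near the limit of f32 precision.

```rust
/// Signal-to-noise ratio in dB of `decoded` against `reference`.
fn snr_db(reference: &[f64], decoded: &[f64]) -> f64 {
    assert_eq!(reference.len(), decoded.len());
    // Signal power: sum of squared reference samples.
    let signal: f64 = reference.iter().map(|s| s * s).sum();
    // Noise power: sum of squared per-sample differences.
    let noise: f64 = reference
        .iter()
        .zip(decoded.iter())
        .map(|(r, d)| (r - d) * (r - d))
        .sum();
    10.0 * (signal / noise).log10()
}

fn main() {
    // A synthetic "reference" signal and a decode with ~1e-7 error per
    // sample, roughly the rounding noise of f32 arithmetic.
    let reference: Vec<f64> = (0..48_000).map(|i| (i as f64 * 0.01).sin()).collect();
    let decoded: Vec<f64> = reference.iter().map(|s| s + 1e-7).collect();
    println!("SNR: {:.1} dB", snr_db(&reference, &decoded));
}
```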
H.264 decoder
The H.264 decoder is ~30K lines of Rust across 25 modules. It implements:
- CAVLC and CABAC entropy coding
- All intra prediction modes (4x4, 8x8, 16x16, chroma)
- Quarter-pel motion compensation (6-tap FIR luma, bilinear chroma)
- 4x4 and 8x8 IDCT with Hadamard DC transforms
- In-loop deblocking filter
- MMCO and sliding-window reference management
- Weighted prediction (uni/bi)
- B-frames with direct prediction (spatial + temporal)
- High profile: 8x8 transforms, custom scaling matrices
- MBAFF interlaced (partial — field/frame MB switching, CABAC context adaptation)
- Frame-level threading with wavefront deblocking
- aarch64 NEON assembly for MC, IDCT, and deblocking (feature-gated)
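The quarter-pel motion compensation above builds on the half-pel step defined by the H.264 spec: a 6-tap FIR with taps (1, -5, 20, 20, -5, 1), rounding, and a clip to the 8-bit range. A minimal sketch of that core operation (scalar; the real decoder vectorizes it):

```rust
/// H.264 6-tap half-pel luma filter over six adjacent full-pel samples,
/// with the spec's +16 rounding, >>5 shift, and clip to 0..=255.
fn halfpel_luma(s: [i32; 6]) -> u8 {
    let acc = s[0] - 5 * s[1] + 20 * s[2] + 20 * s[3] - 5 * s[4] + s[5];
    ((acc + 16) >> 5).clamp(0, 255) as u8
}

fn main() {
    // On a flat area the filter is a no-op (taps sum to 32)...
    println!("{}", halfpel_luma([100; 6]));
    // ...and across a hard edge it interpolates between the two sides.
    println!("{}", halfpel_luma([0, 0, 0, 255, 255, 255]));
}
```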
Architecture details: H264.md. Known FFmpeg behavioral differences: DIVERGENCES.md.
Not yet implemented
FFmpeg has hundreds of codecs and formats. wedeo currently covers a small subset. Major gaps for parity:
- Video codecs — VP9, HEVC/H.265, MPEG-2, MPEG-4 Part 2, VP8, Theora. H.264 is missing interlaced (MBAFF/PAFF), 10-bit, and 4:2:2/4:4:4.
- Video encoding — no encoders exist yet (H.264, H.265, AV1 via rav1e)
- Muxers — only WAV. No MP4/MOV, MKV/WebM, or MPEG-TS muxer.
- Demuxers — no MKV/WebM, MPEG-TS, FLV, or AVI demuxer (MP4 and WAV only, plus symphonia-backed formats)
- Filters — trait skeleton exists but no functional filter graph (no scale, crop, overlay, fps, etc.)
- Player — no seek, no subtitle rendering, no hardware-accelerated decode
- Infrastructure — no interruptible I/O (network streams), no chapter/program support, no avformat_find_stream_info equivalent
Quick start
Install the CLI
```shell
# From source (includes AV1 support via rav1d)
cargo install --git https://github.com/sharifhsn/wedeo wedeo-cli
```

Play a video

```shell
# Also from source — AV1, H.264, and audio all work
cargo install --git https://github.com/sharifhsn/wedeo wedeo-play
wedeo-play video.mp4
```

Use as a library

```toml
# Cargo.toml — core crates are on crates.io
[dependencies]
wedeo = "0.1.2"
wedeo-codec-h264 = "0.1.2"  # H.264 decoder
wedeo-format-mp4 = "0.1.2"  # MP4 demuxer
wedeo-symphonia = "0.1.2"   # audio codecs (AAC, MP3, FLAC, etc.)
```

AV1 requires a git dependency (rav1d is not yet on crates.io):

```toml
wedeo-rav1d = { git = "https://github.com/sharifhsn/wedeo" }
```
Build and test locally
```shell
cargo build
cargo nextest run   # or cargo test
cargo clippy
```

Decode a file and compare against FFmpeg:

```shell
cargo run --release --bin wedeo-framecrc -- input.264
ffmpeg -bitexact -i input.264 -f framecrc -
```

FATE testing
FFmpeg's native test suite.
```shell
./scripts/fetch-fate-suite.sh   # downloads full suite (~1.2 GB)
FATE_SUITE=./fate-suite cargo nextest run -p wedeo-fate
```

JVT conformance (ITU test vectors)
204 test vectors from the ITU JVT conformance suite, with MD5 ground truth from the Fluster project. No FFmpeg required — comparison is against ITU-provided checksums.
```shell
python3 scripts/fetch_jvt.py    # download vectors (~50 MB)
python3 scripts/suite_runner.py --suite jvt-avc-v1,jvt-fr-ext --format yuv420p
```

The unified suite runner also wraps the FATE suites:
```shell
python3 scripts/suite_runner.py --suite all --format yuv420p
python3 scripts/suite_runner.py --suite fate-cavlc --save-snapshot
python3 scripts/suite_runner.py --suite fate-cavlc --check-snapshot   # regression check
```

Architecture
```text
wedeo/
  crates/
    wedeo-core/          libavutil — Rational, Buffer, Frame, Packet, errors
    wedeo-codec/         libavcodec — Decoder/Encoder traits, codec registry
    wedeo-format/        libavformat — Demuxer/Muxer traits, I/O, InputContext
    wedeo-filter/        libavfilter (stub)
    wedeo-resample/      libswresample (rubato)
    wedeo-scale/         libswscale (dcv-color-primitives)
  codecs/
    wedeo-codec-h264/    H.264 decoder — 30K lines, NEON assembly, 55 benchmarks
    wedeo-codec-pcm/     PCM codec — 17 formats
  formats/
    wedeo-format-h264/   H.264 Annex B demuxer
    wedeo-format-wav/    WAV demuxer + muxer
    wedeo-format-mp4/    MP4/MOV demuxer
  adapters/
    wedeo-symphonia/     Wraps symphonia (FLAC, Vorbis, MP3, AAC, WavPack)
    wedeo-rav1d/         Wraps rav1d (AV1)
  bins/
    wedeo-cli/           CLI tool
    wedeo-play/          Video player — wgpu+winit, ffplay-style A/V sync
  tests/
    fate/                FATE cross-validation harness
  scripts/
    suite_runner.py      Unified conformance runner (FATE + JVT)
    fetch_jvt.py         JVT vector downloader
    conformance_full.py  FATE conformance report
    regression_check.py  Quick regression check
  test_suites/
    h264/                JVT manifest JSONs (tracked)
```

Each FFmpeg library maps to one Rust crate. Codecs and formats register themselves via inventory at link time — no central enum.
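The registry idea can be sketched without dependencies as lookup over a table of constructors, so no central enum of codecs needs editing when one is added. wedeo itself gathers these entries at link time with the `inventory` crate; the trait and names below are illustrative only, not wedeo's actual API.

```rust
// Minimal stand-in for a decoder trait; wedeo's real trait lives in wedeo-codec.
trait Decoder {
    fn codec_name(&self) -> &'static str;
}

struct H264Decoder;
impl Decoder for H264Decoder {
    fn codec_name(&self) -> &'static str {
        "h264"
    }
}

// A registry entry: a name plus a constructor function pointer.
struct CodecEntry {
    name: &'static str,
    new_decoder: fn() -> Box<dyn Decoder>,
}

fn new_h264() -> Box<dyn Decoder> {
    Box::new(H264Decoder)
}

// With `inventory`, each codec crate would submit its own entry at link
// time; here the table is static for illustration.
static REGISTRY: &[CodecEntry] = &[CodecEntry { name: "h264", new_decoder: new_h264 }];

fn find_decoder(name: &str) -> Option<Box<dyn Decoder>> {
    REGISTRY.iter().find(|e| e.name == name).map(|e| (e.new_decoder)())
}

fn main() {
    match find_decoder("h264") {
        Some(d) => println!("found decoder: {}", d.codec_name()),
        None => println!("no decoder"),
    }
}
```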
CI
Five parallel jobs run on every PR (all required to merge):
- Lint — clippy + rustfmt
- Test — 462 unit and integration tests via nextest
- FATE Regression — no previously-passing FATE test may regress (framecrc vs FFmpeg)
- JVT Regression — no previously-passing JVT test may regress (MD5 vs ITU checksums)
- Deny — license allow-list and advisory audit via cargo-deny
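The framecrc regression check boils down to a line-by-line comparison of two dumps in the format `ffmpeg -f framecrc -` emits (one line per frame). A minimal sketch of that comparison, not the CI harness itself:

```rust
/// Compare two framecrc dumps and report the first diverging frame as
/// (frame index, our line, reference line). Frames past the end of the
/// shorter dump are ignored in this sketch.
fn first_divergence<'a>(
    ours: &'a str,
    reference: &'a str,
) -> Option<(usize, &'a str, &'a str)> {
    ours.lines()
        .zip(reference.lines())
        .enumerate()
        .find(|(_, (a, b))| a.trim() != b.trim())
        .map(|(i, (a, b))| (i, a, b))
}

fn main() {
    // Hypothetical dumps: frame 0 matches, frame 1 has a different CRC.
    let ours = "0, 0, 0, 1, 152064, 0x6d0a39eb\n0, 1, 1, 1, 152064, 0xdeadbeef";
    let reference = "0, 0, 0, 1, 152064, 0x6d0a39eb\n0, 1, 1, 1, 152064, 0x01020304";
    match first_divergence(ours, reference) {
        Some((frame, a, b)) => println!("frame {frame} diverges: {a} vs {b}"),
        None => println!("bitexact"),
    }
}
```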
Conformance baselines are committed in test_suites/baselines/. To update after expanding coverage:
```shell
python3 scripts/suite_runner.py --suite fate-cavlc,fate-cabac \
    --save-snapshot --snapshot-dir test_suites/baselines
```

A pre-commit hook is available: `pip install pre-commit && pre-commit install`
FFmpeg reference
The FFmpeg/ submodule is pinned to n8.1. Optional — only needed to read the C source or build a debug FFmpeg for development:
```shell
git submodule update --init
cd FFmpeg && ./configure --disable-optimizations --enable-debug=3 \
    --disable-stripping --disable-asm && make -j$(nproc) ffmpeg
```

Contributing
See CONTRIBUTING.md.
For AI agents
This repo is designed for AI-assisted development. CLAUDE.md contains the full project context: architecture, conventions, debugging procedures, and technical requirements. It is the canonical reference for any AI agent working on this codebase — read it before writing code.
The llms.txt file provides a machine-readable project summary following the llms.txt convention.
License
LGPL-2.1-or-later. See LICENSE and COPYING.LGPLv2.1.