Meta-Programming and Macro capabilities of various languages
Meta-programming = the broad idea of “programs that manipulate or generate programs”. It can happen at runtime (reflection) or compile-time (macros).
Macros = one specific style of meta-programming, usually tied to transforming syntax at compile time (in a pre-processor or AST transformer). A macro takes a piece of code as input and replaces it with another piece of code as output, often based on patterns or parameters. Macros involve:
- Rule‑based transformation: A macro is specified as a pattern (e.g., a template, an AST pattern, or a token pattern) plus a replacement that is generated when that pattern is matched.
- Expansion, not a function call: A macro use is not a runtime call; the macro is expanded before execution, so the final code is the result of replacing the macro invocation with its generated code.
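To make the two points above concrete, here is a minimal sketch using Rust's `macro_rules!` (Rust is one of the languages surveyed below); the macro name `square!` is just an illustrative example:

```rust
// Rule-based transformation: `square!` is specified as a token pattern
// (`$x:expr`) plus a replacement template. Expansion, not a function call:
// the compiler replaces every invocation with the generated code before
// the program runs; no runtime function named `square` exists.
macro_rules! square {
    ($x:expr) => {{
        let v = $x; // bind once, so the argument expression is evaluated only once
        v * v
    }};
}

fn main() {
    // `square!(3 + 1)` expands at compile time to `{ let v = 3 + 1; v * v }`.
    println!("{}", square!(3 + 1)); // prints 16
}
```

The double braces in the replacement make the expansion a block expression, so the macro can be used anywhere an expression is expected.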
Here are some programming languages and their meta-programming and macro capabilities.
NB! Take with a grain of salt. The results come from working with perplexity.ai, and I have not had a chance to personally verify every cell. They look generally correct to me, though. Corrections are welcome!
Metaprogramming + macro features
Here are the programming languages ranked by their scores (out of 15):
- Racket: 15
- Common Lisp (CL): 13
- Scheme (R7RS‑small): 12
- Rust: 11
- Nim: 10
- Clojure: 10
- Carp: 9
- Jai: 5
- C++: 5
- Zig: 4
- Ruby: 4
Scores are out of 15 = 4 (metaprogramming) + 3 (compile‑time facilities) + 8 (macro features).
Each cell is either ✅ (yes) or – (no / limited).
| Feature / language | Racket | CL | Scheme | Rust | Nim | Clojure | Carp | Jai | C++ | Zig | Ruby |
|---|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| **Metaprogramming features** |  |  |  |  |  |  |  |  |  |  |  |
| Runtime metaprogramming (e.g., open classes, `define_method`, method hooks) | ✅ | ✅ | – | – | – | ✅ | – | – | – | – | ✅ |
| Runtime reflection / introspection | ✅ | ✅ | ✅ | – | – | ✅ | – | – | ✅ | – | ✅ |
| Runtime eval / dynamic code loading | ✅ | ✅ | ✅ | – | – | ✅ | – | – | – | – | ✅ |
| Build‑ or tooling‑level code generation supported | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| **Metaprogramming score (out of 4)** | 4 | 4 | 3 | 1 | 1 | 4 | 1 | 1 | 2 | 1 | 4 |
| **Compile‑time facilities (not strictly macros)** |  |  |  |  |  |  |  |  |  |  |  |
| Run arbitrary code at compile time | ✅ | ✅ | ✅ | ✅ | ✅ | – | ✅ | ✅ | ✅ (constexpr) | ✅ | – |
| Types as values at compile time | ✅ (in Typed Racket) | – | – | ✅ | ✅ | – | – | ✅ | ✅ (constexpr + templates) | ✅ | – |
| constexpr‑style type‑level / compile‑time computation | ✅ | – | – | ✅ (const‑eval) | ✅ | – | ✅ | ✅ | ✅ (via constexpr) | ✅ | – |
| **Macro features** |  |  |  |  |  |  |  |  |  |  |  |
| Hygienic identifier binding | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ (gensym, manual) | ✅ | – | – | – |
| Operate on AST / syntax tree | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | – | – | – | – |
| Pattern‑based transformations | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | – | – | – | – |
| Define new syntactic forms | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | – | – | – | – |
| Define new keywords / syntax | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | – | – | – | – |
| Override core language forms | ✅ | ✅ | ✅ | – | – | – | – | – | – | – | – |
| Multi‑phase / macros of macros | ✅ | ✅ | ✅ | ✅ | – | – | – | – | – | – | – |
| Full‑fledged DSL / language building (via macros) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | – | – | – | – |
| **Macro & compile‑time features score (out of 11)** | 11 | 9 | 9 | 10 | 9 | 6 | 8 | 4 | 3 | 3 | 0 |
| **Total score (out of 15)** | 15 | 13 | 12 | 11 | 10 | 10 | 9 | 5 | 5 | 4 | 4 |

(CL = Common Lisp; Scheme = R7RS‑small; Zig = via comptime.)
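As an illustration of the "Hygienic identifier binding" row, here is a small Rust sketch (the macro name `double!` is hypothetical): a variable the macro introduces cannot collide with an identically named variable at the call site.

```rust
// Hygiene demo: the `tmp` bound inside the macro expansion lives in the
// macro's own scope, so it can neither capture nor shadow the caller's `tmp`.
macro_rules! double {
    ($e:expr) => {{
        let tmp = $e;
        tmp + tmp
    }};
}

fn main() {
    let tmp = 10;
    // `$e` refers to the caller's `tmp` (10), while the sum uses the
    // macro's internal `tmp` (11); the two bindings never collide.
    assert_eq!(double!(tmp + 1), 22);
    assert_eq!(tmp, 10); // the caller's binding is untouched
    println!("hygiene holds");
}
```

In an unhygienic macro system (e.g., Common Lisp's defmacro), the same expansion would silently capture the caller's `tmp`, which is why such systems rely on manually calling gensym.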
The score counts one point per row where the language can reasonably do what the feature describes (DSL‑building is counted as a full feature, even if “limited” in some languages).
The feature score is not an ultimate measure of meta-programming power: a language like C++ may score higher than a language like Ruby, yet generally be considered less tailored for meta-programming (Ruby is renowned for its powerful meta-programming abilities).
Macro features are many and varied, so they gain undue weight in the total score, even though runtime meta-programming may be just as powerful, or more so.
Lisp-style languages (with their homoiconic S-expressions) make up 5 of the 11 languages in our list: Racket, CL, Scheme, Clojure, Carp.
For further reading: https://github.com/oils-for-unix/oils/wiki/Metaprogramming
DEV Community
https://dev.to/redbar0n/meta-programming-and-macro-capabilities-of-various-languages-1hgd