🦀 Rust Foundations — The Stuff That Finally Made Things Click
"Rust compiler and Clippy are the biggest tsunderes — they'll shout at you for every small mistake, but in the end… they just want your code to be perfect." Why I Even Started Rust I didn't pick Rust out of curiosity or hype. I had to. I'm working as a Rust dev at Garden Finance , where I built part of a Wallet-as-a-Service infrastructure. Along with an Axum backend, we had this core Rust crate ( standard-rs ) handling signing and broadcasting transactions across: Bitcoin EVM chains Sui Solana Starknet And suddenly… memory safety wasn't "nice to have" anymore. It was everything. Rust wasn't just a language — it was a guarantee . But yeah… in the beginning? It felt like the compiler hated me :( So I'm writing this to explain Rust foundations in the simplest way possible — from my personal n
"Rust compiler and Clippy are the biggest tsunderes — they'll shout at you for every small mistake, but in the end… they just want your code to be perfect."
Why I Even Started Rust
I didn't pick Rust out of curiosity or hype.
I had to.
I'm working as a Rust dev at Garden Finance, where I built part of a Wallet-as-a-Service infrastructure. Along with an Axum backend, we had this core Rust crate (standard-rs) handling signing and broadcasting transactions across:
- Bitcoin
- EVM chains
- Sui
- Solana
- Starknet
And suddenly… memory safety wasn't "nice to have" anymore.
It was everything.
Rust wasn't just a language — it was a guarantee.
But yeah… in the beginning?
It felt like the compiler hated me :(
So I'm writing this to explain Rust foundations in the simplest way possible — from my personal notes while reading "Rust for Rustaceans".
What is a "value" in Rust?
A value in Rust is:
Type + actual data from that type's domain
```rust
let x = 42;
```
This isn't just 42. It's:
-
Type: i32
-
Value: 42
Rust always tracks both. That's why it feels stricter than dynamically typed languages like Python or JavaScript — there's no "figure it out at runtime." Rust knows everything at compile time.
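A tiny sketch of what "type + value domain" means in practice — the same digits are a different value under a different type, and data outside the domain is rejected at compile time:

```rust
fn main() {
    let x = 42;     // inferred as i32
    let y: u8 = 42; // same digits, different type — a different value domain
    // let z: u8 = 300; // ❌ won't compile: 300 is outside u8's domain (0..=255)
    println!("{} {}", x, y);
}
```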
Borrowing — The part that scares everyone
Let's look at this:
```rust
fn main() {
    let mut x = 42;
    let y = &x;      // immutable borrow — never used after this line
    let z = &mut x;  // mutable borrow
    println!("{}", z);
}
```
Your brain immediately says:
"You can't have a mutable and immutable borrow at the same time. This should fail."
And classically, you'd be right.
But here's the thing that changed everything for me:
Non-Lexical Lifetimes (NLL)
Before NLL, Rust kept a borrow alive until the end of the scope block {}. That made the code above illegal — y would still be "alive" when z tried to take a mutable borrow.
After NLL (which is now the default), Rust is smarter. It keeps a borrow alive only until its last actual use.
Let's trace what the compiler sees:
1. `y` is created (immutable borrow of `x`)
2. `y` is never used again → its borrow ends immediately
3. `z` is created (mutable borrow of `x`) → no active immutable borrows exist → ✅ allowed
Rust isn't being lenient. It's being precise.
NLL made Rust go from "technically correct but infuriating" to "actually ergonomic."
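To see the "last use" rule from the other direction, here is a minimal sketch where the immutable borrow *is* used — its borrow ends at that final use, so a mutable borrow right after is still fine:

```rust
fn main() {
    let mut x = 42;
    let y = &x;
    println!("{}", y); // last use of y — the immutable borrow ends HERE, not at }
    let z = &mut x;    // ✅ allowed under NLL: no live immutable borrows remain
    *z += 1;
    println!("{}", x); // 43
}
```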
Memory — Stack vs Heap vs Static
This is the part where things start clicking.
```
┌─────────────────┐
│  Static/Data    │ ← Binary code + static variables + string literals
│  Segment        │   (baked into your executable)
├─────────────────┤
│      Heap       │ ← Box, Vec, String (grows upward →)
│        ↕        │
│        ↕        │
│      Stack      │ ← Local variables, function calls (grows downward ←)
└─────────────────┘
```
Stack
- Fastest read/write memory
- Each function call creates a stack frame — a contiguous chunk holding all local variables and arguments for that call
- Stack frames are the physical basis of lifetimes — when a function returns, its frame is popped, and everything in it is gone
Heap
- A pool of memory not tied to the call stack
- Values here can live past the function that created them
- Access via pointers — you allocate, get back a pointer, pass it around
- Easiest way to heap-allocate: Box::new(value)
- Want something to live for the entire program? Use Box::leak():
```rust
// This gives you a 'static reference — it lives forever
let config: &'static Config = Box::leak(Box::new(load_config()));
```
Static Memory
- Part of your actual binary on disk
- static variables and string literals ("hello") are baked directly into the executable
- Loaded into memory when the OS runs your program
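This is also why string literals get the `'static` lifetime for free — a quick sketch:

```rust
// Both of these point into the binary's static memory; no allocation happens:
static GREETING: &str = "hello";

fn main() {
    let s: &'static str = "world"; // the literal is baked into the executable
    println!("{} {}", GREETING, s);
}
```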
&str vs String — The Confusion Killer
These two look similar but live in totally different places.
&str — a borrowed view into existing bytes
```
Stack:
┌─────────────┐
│ s: &str     │ ← pointer + length (that's it)
│ ptr: 0x1000 │ ───┐
│ len: 5      │    │
└─────────────┘    │
                   │
Static Memory:     │
┌─────────────┐    │
│ 0x1000:     │ ←──┘
│ "hello"     │ ← baked into your binary
└─────────────┘
```
The reference (ptr + len) lives on the stack. The actual bytes live in static memory, embedded in your binary at compile time.
&str is immutable and borrowed — you don't own the data.
String — owned, heap-allocated bytes
```
Stack:
┌─────────────┐
│ s: String   │
│ ptr: 0x2000 │ ───┐
│ len: 5      │    │
│ cap: 5      │    │ ← also tracks capacity for growing
└─────────────┘    │
                   │
Heap:              │
┌─────────────┐    │
│ 0x2000:     │ ←──┘
│ "hello"     │ ← allocated at runtime, can grow
└─────────────┘
```
String owns its data. It can grow, shrink, be mutated.
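A small sketch of that growth in action — pushing past the current capacity makes `String` reallocate its heap buffer behind the scenes:

```rust
fn main() {
    let mut s = String::from("hello");
    let before = s.capacity();
    s.push_str(", world!"); // grows the heap buffer, reallocating if needed
    println!("{} (len {}, cap {} -> {})", s, s.len(), before, s.capacity());
}
```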
When to use which:

| Use &str when... | Use String when... |
|---|---|
| You just need to read/pass text | You need to own, build, or modify text |
| Taking function arguments (prefer &str) | Returning text from a function |
| Working with string literals | Reading user input or building dynamic strings |
Quick rule of thumb: Function parameters? Use &str. Owned data you return or store in a struct? Use String.
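The rule of thumb in one hypothetical helper (`shout` is just an illustration, not from the original): borrow `&str` in, return an owned `String` out:

```rust
fn shout(input: &str) -> String {    // borrow in, owned out
    let mut s = String::from(input); // allocate an owned copy
    s.make_ascii_uppercase();        // mutate the owned buffer
    s
}

fn main() {
    let owned = String::from("hello");
    println!("{}", shout(&owned));  // &String coerces to &str
    println!("{}", shout("world")); // literals work too
}
```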
const vs static
These look similar. They're not.
const — compile-time copy-paste
```rust
const MAX_RETRIES: u32 = 3;
```
- No memory address. The compiler literally copy-pastes the value everywhere it's used
- Every usage creates a fresh copy
- Must be a constant expression, fully evaluated at compile time
- Think of it like a find-and-replace the compiler does before running anything
static — single memory location, lives forever
```rust
static MAX_POINTS: u32 = 100;
```
- Has a real memory address — the same one every time
- Lives for the entire program ('static lifetime)
- All references point to the same place
- Exists in your binary's static memory section
You can actually see the difference:
```rust
const X: u32 = 5;
static Y: u32 = 5;

fn main() {
    println!("{:p}", &X); // copy-pasted, so the address may differ per use
    println!("{:p}", &Y); // always the same address
}
```
Quick rule: Use const for magic numbers and fixed values. Use static when you need a single global memory location (like a config that multiple places reference).
Ownership — Rust's Core Rule
A value has exactly one owner. Always.
When the owner goes out of scope, the value is dropped. Memory is freed. No garbage collector needed.
```rust
let s1 = String::from("hello");
let s2 = s1;        // ownership MOVES to s2
println!("{}", s1); // ❌ compile error: s1 was moved
```
Two important behaviors:
- Primitive types (i32, f64, bool, etc.) implement Copy — they're duplicated on assignment, not moved
- Heap data (String, Vec, Box) is moved — ownership transfers and the original becomes invalid
This is enforced by the borrow checker at compile time. Zero runtime cost.
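The two behaviors side by side, as a minimal sketch:

```rust
fn main() {
    // Copy types: duplicated bitwise on assignment — the original stays valid
    let a = 5;
    let b = a;
    println!("{} {}", a, b); // both usable

    // Owned heap types: moved — the original is invalidated
    let s1 = String::from("hello");
    let s2 = s1;
    println!("{}", s2); // using s1 here would be a compile error
}
```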
Drop Order — Underrated but Powerful
Rust drops variables in reverse declaration order.
Why? Because a variable declared later might reference one declared earlier. Dropping in forward order could leave dangling references.
```rust
struct HasDrop(&'static str);

impl Drop for HasDrop {
    fn drop(&mut self) {
        println!("Dropping: {}", self.0);
    }
}

fn main() {
    let var1 = HasDrop("Variable 1");
    let var2 = HasDrop("Variable 2");

    let tuple = (
        HasDrop("Tuple elem 0"),
        HasDrop("Tuple elem 1"),
        HasDrop("Tuple elem 2"),
    );

    let var3 = HasDrop("Variable 3");

    println!("End of scope!");
}
```
Output:
```
End of scope!
Dropping: Variable 3   ← last declared, drops first
Dropping: Tuple elem 0 ← tuple drops, elements in SOURCE order
Dropping: Tuple elem 1
Dropping: Tuple elem 2
Dropping: Variable 2   ← variables drop in reverse
Dropping: Variable 1
```
Notice the split behavior:
| What | Drop order | Why |
|---|---|---|
| Variables | Reverse | Later vars may reference earlier ones |
| Nested (tuple/struct fields) | Source order | Rust doesn't allow self-references within a single value |
The &mut move problem
Here's a footgun related to ownership + mutable references:
```rust
// This is ILLEGAL — and for good reason:
fn broken(s: &mut String) {
    let stolen = *s; // ❌ trying to move OUT of a mutable reference
    // s would now point to... nothing 💀
}
```
If you move a value out of a &mut reference, the original owner still exists and will try to drop it — causing a double free. Undefined behavior. Chaos.
The fix: use mem::replace() or mem::take() to safely swap in a new value:
```rust
use std::mem;

fn take_it(s: &mut String) -> String {
    mem::take(s) // moves out the String, leaves an empty String in its place
}
```
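And when you want to swap in a specific replacement rather than a default, `mem::replace` does the same trick — `swap_in` here is a hypothetical helper for illustration:

```rust
use std::mem;

// Swap a new value in, get the old one out — the owner is never left empty-handed
fn swap_in(slot: &mut String, new: String) -> String {
    mem::replace(slot, new) // old value moves out, `new` moves in — no double free
}

fn main() {
    let mut s = String::from("old");
    let old = swap_in(&mut s, String::from("new"));
    println!("{} -> {}", old, s);
}
```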
Interior Mutability — Rust Bending Its Own Rules
Normally Rust's rule is:
Either many immutable references OR one mutable reference. Never both.
But sometimes you genuinely need to mutate something through a shared (&T) reference. That's where interior mutability comes in — it moves the borrow check from compile time to runtime.
For single-threaded code:
Cell — for types that implement Copy:
```rust
use std::cell::Cell;

let x = Cell::new(42);
x.set(100);              // mutate through a shared reference
println!("{}", x.get()); // 100
```
RefCell — for types that don't implement Copy:
```rust
use std::cell::RefCell;

let v = RefCell::new(vec![1, 2, 3]);
v.borrow_mut().push(4); // runtime borrow check
```
⚠️ The footgun with RefCell: The borrow check happens at runtime. If you violate the rules (two mutable borrows), it doesn't fail at compile time — it panics at runtime:
```rust
let v = RefCell::new(vec![1, 2, 3]);
let b1 = v.borrow_mut();
let b2 = v.borrow_mut(); // 💥 panics: already mutably borrowed
```
Use RefCell sparingly — it trades compile-time safety for runtime flexibility. If it panics in production, that's on you.
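Where RefCell genuinely earns its keep is the classic single-threaded shared-mutation pattern, Rc<RefCell<T>> — a minimal sketch:

```rust
use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    // Two owners of the same data, both able to mutate it
    let shared = Rc::new(RefCell::new(vec![1, 2, 3]));
    let alias = Rc::clone(&shared); // second owner, same allocation

    alias.borrow_mut().push(4);        // mutate through one handle...
    println!("{:?}", shared.borrow()); // ...and see it through the other
}
```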
For multi-threaded code:
```rust
// Share AND mutate across threads:
let x = Arc::new(Mutex::new(vec![1, 2, 3]));

// Just need a fast counter across threads:
let x = Arc::new(AtomicU32::new(0));

// Read-only sharing across threads:
let x = Arc::new(vec![1, 2, 3]);
```
Note: Arc alone only gives you shared ownership — it does not give you mutability. You need Mutex (or RwLock) for that.
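Putting the two together — a minimal sketch of the Arc<Mutex<T>> pattern, where Arc shares ownership across threads and Mutex supplies the mutability:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let counter = Arc::new(Mutex::new(0u32));
    let mut handles = Vec::new();

    for _ in 0..4 {
        let counter = Arc::clone(&counter); // shared ownership per thread
        handles.push(thread::spawn(move || {
            *counter.lock().unwrap() += 1;  // Mutex grants exclusive access
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    println!("{}", counter.lock().unwrap()); // 4
}
```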
Quick cheat sheet:

| Scenario | Use |
|---|---|
| Single thread, Copy type | `Cell<T>` |
| Single thread, non-Copy type | `RefCell<T>` |
| Multi-thread, any type | `Arc<Mutex<T>>` |
| Multi-thread, fast counter | `Arc<AtomicU32>` |
| Multi-thread, read-mostly | `Arc<RwLock<T>>` |
Lifetimes — Don't Overcomplicate This
A lifetime ensures a reference doesn't outlive the data it points to.
Lifetimes are compile-time only — they're erased after compilation and have zero runtime cost.
The compiler infers most lifetimes automatically. You only write them explicitly when the compiler can't figure it out on its own.
```rust
// The compiler needs help here — which input does the output reference?
fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() > y.len() { x } else { y }
}
```
The 'a annotation says: "the output reference is valid only while both x and y are — so it lives no longer than the shorter-lived of the two."
Without it, the compiler has no idea whether the returned reference points to x or y — and therefore can't verify it's safe.
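A quick usage sketch of that constraint (repeating the function so the snippet stands alone) — the result must stay inside the scope where both inputs are alive:

```rust
fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() > y.len() { x } else { y }
}

fn main() {
    let outer = String::from("a long string");
    {
        let inner = String::from("short");
        // result may borrow from either input, so it is only valid
        // while BOTH are alive — fine within this inner block:
        let result = longest(&outer, &inner);
        println!("{}", result);
    } // using `result` past this point would not compile
}
```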
The dangling reference problem lifetimes solve:
```rust
fn dangle() -> &String { // ❌ what lifetime does this have?
    let s = String::from("hello");
    &s // s drops here, the reference would dangle
}
```
The compiler rejects this because the reference would outlive the data it points to. Lifetimes make this impossible to sneak through.
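The usual fix is to hand ownership to the caller instead of a reference — a minimal sketch:

```rust
// Return an owned String: ownership moves out, so nothing dangles
fn no_dangle() -> String {
    let s = String::from("hello");
    s // moved to the caller; the caller now owns (and later drops) it
}

fn main() {
    println!("{}", no_dangle()); // hello
}
```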
Final Thought
Rust isn't hard.
It just forces you to think clearly about:
- who owns what
- who borrows what
- how long things live
The compiler isn't your enemy. It's a tsundere — it yells at you precisely because it refuses to let your code betray you in production.
Once that mental model clicks?
You stop fighting Rust… and start trusting it.
All notes from "Rust for Rustaceans" by Jon Gjengset. Highly recommend.