Course overview
This course is targeted at developers experienced in other procedural or object-oriented programming languages.
- Day 1: Rust foundations and the concept of ownership
- Day 2: Type system and error handling
- Day 3: Systems programming & concurrency
- Transfer day: other languages to Rust
Each day is a mix of theory and exercises. Days 1 and 2 feature exercises in a std environment (building CLI applications on desktop). Day 3 and the transfer day feature no_std development, building embedded applications on an ESP32-C3 microcontroller.
This repository
Contains the course slides/script as an mdBook, plus solutions to the exercises in the solutions directory. It will be updated before and during the course.
Installation Instructions Day 1 and 2
Please ensure the following software is installed on the device you bring to the course.
If there are any questions or difficulties during the installation please don’t hesitate to contact the instructor (rolandbrand11@gmail.com).
Rust
Install Rust using rustup (Rust’s official installer)
- Visit rust-lang.org and follow the installation instructions for your operating system.
- Verify installation with:
rustc --version
cargo --version
Git
Git for version control: git-scm.com
- Make sure you can access it through the command line:
git --version
Zed Editor
Download from zed.dev
During the course the trainer will use Zed - participants are recommended to use the same editor, but are free to choose any other editor or IDE. The trainer will not be able to provide setup or configuration support for other editors or IDEs during the course.
Create a Test Project
Create a new Rust project and build it:
cargo new hello-rust
cd hello-rust
cargo build
Run the Project
Execute the project to verify your Rust installation:
cargo run
You should see “Hello, world!” printed to your terminal.
Troubleshooting
If you encounter any issues:
Rust Installation Issues
- On Unix-like systems, you might need to install build essentials:
sudo apt install build-essential (Ubuntu/Debian)
- On Windows, you might need to install Visual Studio C++ Build Tools
Cargo Issues
- Try clearing the cargo cache:
cargo clean
- Update Rust:
rustup update
Cleanup
To remove the test project:
cd ..
rm -rf hello-rust
If you can complete all these steps successfully, your environment is ready for the first two days of the Rust course!
Installation Instructions Day 3 and 4 - ESP32-C3 Embedded Development
For Thursday, we will be using ESP32-C3 boards. Please install the following tooling in advance:
Required ESP32-C3 Tooling
1. Rust Source Code
This downloads the Rust source code, which is needed to build the core/std library for targets that ship without a precompiled standard library:
rustup component add rust-src
2. ESP32-C3 Target Architecture
The toolchain for the ESP32-C3 (RISC-V architecture):
rustup target add riscv32imc-unknown-none-elf
3. cargo-espflash for Flashing
cargo-espflash is the recommended tool for flashing ESP32-C3 boards across all platforms.
Installation:
# Install cargo-espflash
cargo install cargo-espflash
4. probe-rs for Debugging (Optional - Linux/macOS)
probe-rs provides debugging capabilities and works best on Linux and macOS.
Installation (Optional):
# Install probe-rs (optional, primarily for debugging)
cargo install probe-rs --features cli
5. esp-generate for Project Scaffolding
Tool for creating no_std projects targeting ESP32 chips:
cargo install esp-generate
Verification Steps
Test ESP32-C3 Setup
- Connect your ESP32-C3 board via USB cable
- Generate a test project:
esp-generate --chip esp32c3 test-esp32c3
cd test-esp32c3
- Build the project:
cargo build --release
- Flash to the board:
cargo run --release
Zed Editor ESP32 Debugging Setup
If using Zed editor:
- Install probe-rs extension in Zed: https://zed.dev/extensions/probe-rs
- probe-rs integrates seamlessly with Zed for debugging ESP32-C3 projects
Platform-Specific Instructions
Windows
- Use PowerShell or Command Prompt
- Consider adding Windows Defender exclusions for Cargo directories
- Ensure you have the latest USB drivers
macOS/Linux
- Installation should work out of the box
- Use Terminal for all commands
- May need to add user to dialout group on Linux:
sudo usermod -a -G dialout $USER
Troubleshooting ESP32-C3 Setup
Common Issues and Solutions
Flashing Issues:
- If cargo-espflash fails to detect the board, ensure the ESP32-C3 is connected via USB and the correct port is being used
Port Detection Issues:
- On Windows: Check Device Manager for COM port assignments
- On Linux: Ensure user is in dialout group (see below)
- On macOS: Look for /dev/cu.usbserial-* or /dev/cu.usbmodem* devices
ESP32-C3 Chip Revision:
- Most ESP32-C3 boards work with cargo-espflash regardless of revision
- Check revision during flashing: Look for “Chip is ESP32-C3 (revision 3)” message
Permission Issues (Linux):
- Add user to dialout group:
sudo usermod -a -G dialout $USER
- Log out and back in for changes to take effect
Alternative Debugging Tools
For advanced debugging beyond cargo-espflash:
- probe-rs: Best on Linux/macOS for hardware debugging
- ESP-IDF monitor: Traditional ESP toolchain option
- Serial monitor: Use any serial terminal for basic output monitoring
Resources
→ Regularly pull updates to the repo. There will also be additional setup instructions for days 3 and 4.
Chapter 1: Course Introduction & Setup
Development Environment Setup
Let’s get your Rust development environment ready. Rust’s tooling is excellent - more unified than the typical C++ toolchain, and because Rust compiles ahead of time to native code with no runtime, programs typically outperform equivalent .NET code.
Installing Rust
The recommended way to install Rust is through rustup, Rust’s official toolchain manager.
On Unix-like systems (Linux/macOS):
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
On Windows:
Download and run the installer from rustup.rs
After installation, verify:
rustc --version
cargo --version
Understanding the Rust Toolchain
| Tool | Purpose | C++ Equivalent | .NET Equivalent |
|---|---|---|---|
| rustc | Compiler | g++, clang++ | csc, dotnet build |
| cargo | Build system & package manager | cmake + conan/vcpkg | dotnet CLI + NuGet |
| rustup | Toolchain manager | - | .NET SDK manager |
| clippy | Linter | clang-tidy | Code analyzers |
| rustfmt | Formatter | clang-format | dotnet format |
Your First Rust Project
Let’s create a Hello World project to verify everything works:
cargo new hello_rust
cd hello_rust
This creates:
hello_rust/
├── Cargo.toml # Like CMakeLists.txt or .csproj
└── src/
└── main.rs # Entry point
Look at src/main.rs:
fn main() {
    println!("Hello, world!");
}
Run it:
cargo run
Understanding Cargo
Cargo is Rust’s build system and package manager. Coming from C++ or .NET, you’ll love its simplicity.
Key Cargo Commands
| Command | Purpose | Similar to |
|---|---|---|
| cargo new | Create new project | dotnet new, cmake init |
| cargo build | Compile project | make, dotnet build |
| cargo run | Build & run | ./a.out, dotnet run |
| cargo test | Run tests | ctest, dotnet test |
| cargo doc | Generate documentation | doxygen |
| cargo check | Fast syntax/type check | Incremental compilation |
Debug vs Release Builds
cargo build # Debug build (./target/debug/)
cargo build --release # Optimized build (./target/release/)
The performance difference is significant! Compared to release builds, debug builds:
- Enable integer overflow checks
- Include debug symbols
- Apply no optimizations
Project Structure Best Practices
A typical Rust project structure:
my_project/
├── Cargo.toml # Project manifest
├── Cargo.lock # Dependency lock file (like package-lock.json)
├── src/
│ ├── main.rs # Binary entry point
│ ├── lib.rs # Library entry point
│ └── module.rs # Additional modules
├── tests/ # Integration tests
│ └── integration_test.rs
├── benches/ # Benchmarks
│ └── benchmark.rs
├── examples/ # Example programs
│ └── example.rs
└── target/ # Build artifacts (gitignored)
Comparing with C++/.NET
C++ Developers
- No header files! Modules are automatically resolved
- No makefiles to write - Cargo handles everything
- Dependencies are downloaded automatically (like vcpkg/conan)
- No undefined behavior in safe Rust
.NET Developers
- Similar project structure to .NET Core
- Cargo.toml is like .csproj; crates.io is like NuGet
- No garbage collector - deterministic destruction
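The last point is worth seeing in code. A minimal sketch (the `Connection` type is made up for illustration): the `Drop` trait runs at a statically known point, when the value goes out of scope, much like a C++ destructor and unlike a GC finalizer.

```rust
struct Connection {
    id: u32,
}

// Drop runs at the end of the owning scope -- deterministic,
// like a C++ destructor, not like a GC finalizer.
impl Drop for Connection {
    fn drop(&mut self) {
        println!("closing connection {}", self.id);
    }
}

fn main() {
    let _a = Connection { id: 1 };
    {
        let _b = Connection { id: 2 };
    } // prints "closing connection 2" here, immediately
    println!("end of main");
} // prints "closing connection 1" here
```

The cleanup order is the reverse of declaration order within a scope, which is exactly the RAII discipline C++ developers already rely on.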
Quick Wins: Why You’ll Love Rust’s Tooling
- Unified tooling: Everything works together seamlessly
- Excellent error messages: The compiler teaches you Rust
- Fast incremental compilation: cargo check is lightning fast
- Built-in testing: No need for external test frameworks
- Documentation generation: Automatic API docs from comments
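The documentation point deserves a quick illustration. Doc comments written with `///` become the rendered API docs produced by `cargo doc`; fenced code blocks inside doc comments of library crates are additionally compiled and run by `cargo test` as doctests. A minimal sketch (an indented example block is used here to keep the snippet self-contained):

```rust
/// Adds two numbers.
///
/// # Examples
///
///     assert_eq!(add(2, 3), 5);
pub fn add(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    println!("{}", add(2, 3)); // prints 5
}
```

Running `cargo doc --open` on a crate containing this function renders the comment as HTML documentation for `add`.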
Setting Up for Success
Enable Useful Rustup Components
rustup component add clippy # Linter
rustup component add rustfmt # Formatter
rustup component add rust-src # Source code for std library
Create a Learning Workspace
Let’s set up a workspace for this course:
mkdir rust-course-workspace
cd rust-course-workspace
cargo new --bin day1_exercises
cargo new --lib day1_library
Common Setup Issues and Solutions
| Issue | Solution |
|---|---|
| “rustc not found” | Restart terminal after installation |
| Slow compilation | Enable sccache: cargo install sccache |
| Can’t debug | Zed has built-in debugging support |
| Windows linker errors | Install Visual Studio Build Tools |
Exercises
Exercise 1.1: Toolchain Exploration
Create a new project and explore these cargo commands:
- cargo tree - View dependency tree
- cargo doc --open - Generate and view documentation
- cargo clippy - Run the linter
Exercise 1.2: Build Configurations
- Create a simple program that prints the numbers 1 to 1_000_000
- Time the difference between debug and release builds
- Compare binary sizes
Exercise 1.3: First Debugging Session
- Create a program with an intentional panic
- Set a breakpoint in Zed
- Step through the code with the debugger
Key Takeaways
✅ Rust’s tooling is unified and modern - no need for complex build systems
✅ Cargo handles dependencies, building, testing, and documentation
✅ Debug vs Release builds have significant performance differences
✅ The development experience is similar to modern .NET, better than typical C++
✅ Zed with built-in rust-analyzer provides excellent IDE support
Next up: Chapter 2: Rust Fundamentals - Let’s write some Rust!
Chapter 2: Rust Fundamentals
Type System, Variables, Functions, and Basic Collections
Learning Objectives
By the end of this chapter, you’ll be able to:
- Understand Rust’s type system and its relationship to C++/.NET
- Work with variables, mutability, and type inference
- Write and call functions with proper parameter passing
- Handle strings effectively (String vs &str)
- Use basic collections (Vec, HashMap, etc.)
- Apply pattern matching with match expressions
Rust’s Type System: Safety First
Rust’s type system is designed around two core principles:
- Memory Safety: Prevent segfaults, buffer overflows, and memory leaks
- Thread Safety: Eliminate data races at compile time
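A small taste of what "at compile time" means in practice. This sketch shows the kind of aliasing bug the borrow checker rejects before the program ever runs:

```rust
fn main() {
    let mut v = vec![1, 2, 3];
    let first = &v[0]; // immutable borrow of v

    // v.push(4); // ❌ compile error: cannot borrow `v` as mutable
    //            // while it is also borrowed as immutable.
    //            // push may reallocate and invalidate `first` --
    //            // a dangling pointer in C++, a caught error here.

    println!("{}", first); // prints 1
}
```

In C++ the equivalent code compiles and invokes undefined behavior if the vector reallocates; in Rust it simply does not compile.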
Comparison with Familiar Languages
| Concept | C++ | C#/.NET | Rust |
|---|---|---|---|
| Null checking | Runtime (segfaults) | Runtime (NullReferenceException) | Compile-time (Option&lt;T&gt;) |
| Memory management | Manual (new/delete) | GC | Compile-time (ownership) |
| Thread safety | Runtime (mutexes) | Runtime (locks) | Compile-time (Send/Sync) |
| Type inference | auto (C++11+) | var | Extensive |
Variables and Mutability
The Default: Immutable
In Rust, variables are immutable by default - a key philosophical difference:
// Immutable by default
let x = 5;
x = 6; // ❌ Compile error!

// Must explicitly opt into mutability
let mut y = 5;
y = 6; // ✅ This works
Why This Matters:
- Prevents accidental modifications
- Enables compiler optimizations
- Makes concurrent code safer
- Forces you to think about what should change
Comparison to C++/.NET
// C++: Mutable by default
int x = 5; // Mutable
const int y = 5; // Immutable
// C#: Mutable by default
int x = 5; // Mutable
readonly int y = 5; // Immutable (field-level)
// Rust: Immutable by default
let x = 5;     // Immutable
let mut y = 5; // Mutable
Type Annotations and Inference
Rust has excellent type inference, but you can be explicit when needed:
// Type inference (preferred when obvious)
let x = 42;                  // inferred as i32
let name = "Alice";          // inferred as &str
let numbers = vec![1, 2, 3]; // inferred as Vec<i32>

// Explicit types (when needed for clarity or disambiguation)
let x: i64 = 42;
let pi: f64 = 3.14159;
let is_ready: bool = true;
Variable Shadowing
Rust allows “shadowing” - reusing variable names with different types:
let x = 5;       // x is i32
let x = "hello"; // x is now &str (different variable!)
let x = x.len(); // x is now usize
This is different from mutation and is often used for transformations.
Basic Types
Integer Types
Rust is explicit about integer sizes to prevent overflow issues:
// Signed integers
let a: i8 = -128;                       // 8-bit signed (-128 to 127)
let b: i16 = 32_000;                    // 16-bit signed
let c: i32 = 2_000_000_000;             // 32-bit signed (default)
let d: i64 = 9_223_372_036_854_775_807; // 64-bit signed
let e: i128 = 1;                        // 128-bit signed

// Unsigned integers
let f: u8 = 255;                         // 8-bit unsigned (0 to 255)
let g: u32 = 4_000_000_000;              // 32-bit unsigned
let h: u64 = 18_446_744_073_709_551_615; // 64-bit unsigned

// Architecture-dependent
let size: usize = 64; // Pointer-sized (32 or 64 bit)
let diff: isize = -32; // Signed pointer-sized
Note: Underscores in numbers are just for readability (like 1'000'000 in C++14+).
Floating Point Types
let pi: f32 = 3.14159;    // Single precision
let e: f64 = 2.718281828; // Double precision (default)
Boolean and Character Types
let is_rust_awesome: bool = true;
let emoji: char = '🦀'; // 4-byte Unicode scalar value

// Note: char is different from u8!
let byte_value: u8 = b'A';    // ASCII byte
let unicode_char: char = 'A'; // Unicode character
Tuples: Fixed-Size Heterogeneous Collections
Tuples group values of different types into a compound type. They have a fixed size once declared:
// Creating tuples
let tup: (i32, f64, u8) = (500, 6.4, 1);
let tup = (500, 6.4, 1); // Type inference works too

// Destructuring
let (x, y, z) = tup;
println!("The value of y is: {}", y);

// Direct access using dot notation
let five_hundred = tup.0;
let six_point_four = tup.1;
let one = tup.2;

// Empty tuple (unit type)
let unit = (); // Type () - represents no meaningful value

// Common use: returning multiple values from functions
fn get_coordinates() -> (f64, f64) {
    (37.7749, -122.4194) // San Francisco coordinates
}
let (lat, lon) = get_coordinates();
Comparison with C++/C#:
- C++: std::tuple<int, double, char> or std::pair<T1, T2>
- C#: (int, double, byte) value tuples or Tuple<int, double, byte>
- Rust: (i32, f64, u8) - simpler syntax, built into the language
Arrays: Fixed-Size Homogeneous Collections
Arrays in Rust have a fixed size known at compile time and store elements of the same type:
// Creating arrays
let months = ["January", "February", "March", "April", "May", "June",
              "July", "August", "September", "October", "November", "December"];
let a: [i32; 5] = [1, 2, 3, 4, 5]; // Type annotation: [type; length]
let a = [1, 2, 3, 4, 5];           // Type inference

// Initialize with same value
let zeros = [0; 100]; // Creates array with 100 zeros

// Accessing elements
let first = months[0];  // "January"
let second = months[1]; // "February"

// Array slicing
let slice = &months[0..3]; // ["January", "February", "March"]

// Iterating over arrays
for month in &months {
    println!("{}", month);
}

// Arrays vs Vectors comparison
let arr = [1, 2, 3];     // Stack-allocated, fixed size
let vec = vec![1, 2, 3]; // Heap-allocated, growable
Key Differences from Vectors:
| Feature | Array [T; N] | Vector Vec<T> |
|---|---|---|
| Size | Fixed at compile time | Growable at runtime |
| Memory | Stack-allocated | Heap-allocated |
| Performance | Faster for small, fixed data | Better for dynamic data |
| Use case | Known size, performance critical | Unknown or changing size |
Comparison with C++/C#:
- C++: int arr[5] or std::array<int, 5>
- C#: int[] arr = new int[5] (heap) or Span<int> (stack)
- Rust: let arr: [i32; 5] - size is part of the type
Functions: The Building Blocks
Function Syntax
// Basic function
fn greet() {
    println!("Hello, world!");
}

// Function with parameters
fn add(x: i32, y: i32) -> i32 {
    x + y // No semicolon = return value
}

// Alternative explicit return
fn subtract(x: i32, y: i32) -> i32 {
    return x - y; // Explicit return with semicolon
}
Key Differences from C++/.NET
| Aspect | C++ | C#/.NET | Rust |
|---|---|---|---|
| Return syntax | return x; | return x; | x (no semicolon) |
| Parameter types | int x | int x | x: i32 |
| Return type | int func() | int Func() | fn func() -> i32 |
Parameters: By Value vs By Reference
// By value (default) - ownership transferred
fn take_ownership(s: String) {
    println!("{}", s);
} // s is dropped here

// By immutable reference - borrowing
fn borrow_immutable(s: &String) {
    println!("{}", s);
} // the reference goes away; the original String is still valid

// By mutable reference - mutable borrowing
fn borrow_mutable(s: &mut String) {
    s.push_str(" world");
}

// Example usage
fn main() {
    let mut message = String::from("Hello");
    borrow_immutable(&message);   // ✅ Can borrow immutably
    borrow_mutable(&mut message); // ✅ Can borrow mutably
    take_ownership(message);      // ✅ Transfers ownership
    // println!("{}", message);   // ❌ Error: value moved
}
Control Flow: Making Decisions and Repeating
Rust provides familiar control flow constructs with some unique features that enhance safety and expressiveness.
if Expressions
In Rust, if is an expression, not just a statement - it returns a value:
// Basic if/else
let number = 7;
if number < 5 {
    println!("Less than 5");
} else if number == 5 {
    println!("Equal to 5");
} else {
    println!("Greater than 5");
}

// if as an expression returning values
let condition = true;
let number = if condition { 5 } else { 10 }; // number = 5

// Both branches must have the same type
// let value = if condition { 5 } else { "ten" }; // ❌ Type mismatch!
Loops: Three Flavors
Rust offers three loop constructs, each with specific use cases:
loop - Infinite Loop with Break
// Infinite loop - must break explicitly
let mut counter = 0;
let result = loop {
    counter += 1;
    if counter == 10 {
        break counter * 2; // loop can return a value!
    }
};
println!("Result: {}", result); // Prints: Result: 20

// Loop labels for nested loops
'outer: loop {
    println!("Entered outer loop");
    'inner: loop {
        println!("Entered inner loop");
        break 'outer; // Break the outer loop
    }
    println!("This won't execute");
}
while - Conditional Loop
// Standard while loop
let mut number = 3;
while number != 0 {
    println!("{}!", number);
    number -= 1;
}
println!("LIFTOFF!!!");

// Common pattern: checking conditions
let mut stack = vec![1, 2, 3];
while !stack.is_empty() {
    let value = stack.pop();
    println!("Popped: {:?}", value);
}
for - Iterator Loop
The for loop is the most idiomatic way to iterate in Rust:
// Iterate over a collection
let numbers = vec![1, 2, 3, 4, 5];
for num in &numbers {
    println!("{}", num);
}

// Range syntax (exclusive end)
for i in 0..5 {
    println!("{}", i); // Prints 0, 1, 2, 3, 4
}

// Inclusive range
for i in 1..=5 {
    println!("{}", i); // Prints 1, 2, 3, 4, 5
}

// Enumerate for index and value
let items = vec!["a", "b", "c"];
for (index, value) in items.iter().enumerate() {
    println!("{}: {}", index, value);
}

// Reverse iteration
for i in (1..=3).rev() {
    println!("{}", i); // Prints 3, 2, 1
}
Comparison with C++/.NET
| Feature | C++ | C#/.NET | Rust |
|---|---|---|---|
| for-each | for (auto& x : vec) | foreach (var x in list) | for x in &vec |
| Index loop | for (int i = 0; i < n; i++) | for (int i = 0; i < n; i++) | for i in 0..n |
| Infinite | while (true) | while (true) | loop |
| Break with value | Not supported | Not supported | break value |
Control Flow Best Practices
// Prefer iterators over index loops
// ❌ Not idiomatic
let vec = vec![1, 2, 3];
let mut i = 0;
while i < vec.len() {
    println!("{}", vec[i]);
    i += 1;
}

// ✅ Idiomatic
for item in &vec {
    println!("{}", item);
}

// Use if-let for simple pattern matching
let optional = Some(5);

// Verbose match
match optional {
    Some(value) => println!("Got: {}", value),
    None => {}
}

// Cleaner if-let
if let Some(value) = optional {
    println!("Got: {}", value);
}

// while-let for repeated pattern matching
let mut stack = vec![1, 2, 3];
while let Some(top) = stack.pop() {
    println!("Popped: {}", top);
}
Strings: The Complex Topic
Strings in Rust are more complex than C++/.NET due to UTF-8 handling and ownership.
String vs &str: The Key Distinction
// String: Owned, growable, heap-allocated
let mut owned_string = String::from("Hello");
owned_string.push_str(" world");

// &str: String slice - a borrowed view into string data
let string_slice: &str = "Hello world";
let slice_of_string: &str = &owned_string;
Comparison Table
| Type | C++ Equivalent | C#/.NET Equivalent | Rust |
|---|---|---|---|
| Owned | std::string | string | String |
| View/Slice | std::string_view | ReadOnlySpan<char> | &str |
Common String Operations
// Creation
let s1 = String::from("Hello");
let s2 = "World".to_string();
let s3 = String::new();

// Concatenation
let combined = format!("{} {}", s1, s2); // Like printf/String.Format
let mut s4 = String::from("Hello");
s4.push_str(" world"); // Append string
s4.push('!');          // Append character

// Length and iteration
println!("Length: {}", s4.len());          // Byte length!
println!("Chars: {}", s4.chars().count()); // Character count

// Iterating over characters (proper Unicode handling)
for c in s4.chars() {
    println!("{}", c);
}

// Iterating over bytes
for byte in s4.bytes() {
    println!("{}", byte);
}
String Slicing
let s = String::from("hello world");
let hello = &s[0..5];  // "hello" - byte indices!
let world = &s[6..11]; // "world"
let full = &s[..];     // Entire string

// ⚠️ Warning: Slicing can panic with Unicode!
let unicode = "🦀🔥";
// let bad = &unicode[0..1]; // ❌ Panics! Cuts through emoji
let good = &unicode[0..4];   // ✅ One emoji (4 bytes)
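For slicing that cannot panic, `str::get` accepts a range and returns `Option<&str>`, yielding `None` instead of panicking when the range does not land on character boundaries. A short sketch:

```rust
fn main() {
    let unicode = "🦀🔥";

    // get() returns None instead of panicking on a bad boundary
    assert_eq!(unicode.get(0..1), None);       // mid-emoji: no panic
    assert_eq!(unicode.get(0..4), Some("🦀")); // valid 4-byte boundary

    // char_indices() yields each char with its byte offset,
    // handy for computing valid slice boundaries
    for (offset, ch) in unicode.char_indices() {
        println!("byte {}: {}", offset, ch);
    }
}
```

This is the string analogue of preferring `Vec::get` over direct indexing, covered in the collections section below.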
Collections: Vectors and Hash Maps
Vec: The Workhorse Collection
Vectors are Rust’s equivalent to std::vector or List<T>:
// Creation
let mut numbers = Vec::new();           // Empty vector
let mut numbers: Vec<i32> = Vec::new(); // With type annotation
let numbers = vec![1, 2, 3, 4, 5];      // vec! macro

// Adding elements
let mut v = Vec::new();
v.push(1);
v.push(2);
v.push(3);

// Accessing elements
let first = &v[0];         // Panics if out of bounds
let first_safe = v.get(0); // Returns Option<&T>

match v.get(0) {
    Some(value) => println!("First: {}", value),
    None => println!("Vector is empty"),
}

// Iteration
for item in &v { // Borrow each element
    println!("{}", item);
}
for item in &mut v { // Mutable borrow
    *item *= 2;
}
for item in v { // Take ownership (consumes v)
    println!("{}", item);
}
HashMap<K, V>: Key-Value Storage
use std::collections::HashMap;

// Creation
let mut scores = HashMap::new();
scores.insert("Alice".to_string(), 100);
scores.insert("Bob".to_string(), 85);

// Or build a map from parallel collections with zip + collect
let teams = vec!["Blue", "Yellow"];
let initial_scores = vec![10, 50];
let team_scores: HashMap<_, _> = teams
    .iter()
    .zip(initial_scores.iter())
    .collect();

// Accessing values
let alice_score = scores.get("Alice");
match alice_score {
    Some(score) => println!("Alice: {}", score),
    None => println!("Alice not found"),
}

// Iteration
for (key, value) in &scores {
    println!("{}: {}", key, value);
}

// Entry API for complex operations
scores.entry("Charlie".to_string()).or_insert(0);
*scores.entry("Alice".to_string()).or_insert(0) += 10;
Pattern Matching with match
The match expression is Rust’s powerful control flow construct:
Basic Matching
let number = 7;
match number {
    1 => println!("One"),
    2 | 3 => println!("Two or three"),
    4..=6 => println!("Four to six"),
    _ => println!("Something else"), // Default case
}
Matching with Option
let maybe_number: Option<i32> = Some(5);
match maybe_number {
    Some(value) => println!("Got: {}", value),
    None => println!("Nothing here"),
}

// Or use if let for simple cases
if let Some(value) = maybe_number {
    println!("Got: {}", value);
}
Destructuring
let point = (3, 4);
match point {
    (0, 0) => println!("Origin"),
    (x, 0) => println!("On x-axis at {}", x),
    (0, y) => println!("On y-axis at {}", y),
    (x, y) => println!("Point at ({}, {})", x, y),
}
Common Pitfalls and Solutions
Pitfall 1: String vs &str Confusion
// ❌ Common mistake
fn greet(name: String) { // Takes ownership
    println!("Hello, {}", name);
}

let name = String::from("Alice");
greet(name);
// greet(name); // ❌ Error: value moved

// ✅ Better approach (renamed here so both versions can coexist)
fn greet_borrowed(name: &str) { // Borrows
    println!("Hello, {}", name);
}

let name = String::from("Alice");
greet_borrowed(&name);
greet_borrowed(&name); // ✅ Still works
Pitfall 2: Integer Overflow in Debug Mode
let mut x: u8 = 255;
// x += 1; // ❌ Panics in debug mode, wraps in release mode

// Use checked arithmetic for explicit handling
match x.checked_add(1) {
    Some(result) => x = result,
    None => println!("Overflow detected!"),
}
Pitfall 3: Vec Index Out of Bounds
let v = vec![1, 2, 3];
// let x = v[10]; // ❌ Panics!

// ✅ Safe alternatives
let x = v.get(10);         // Returns Option<&T>
let x = v.get(0).unwrap(); // Explicit, visible panic point
Key Takeaways
- Immutability by default encourages safer, more predictable code
- Type inference is powerful but explicit types help with clarity
- String handling is more complex but prevents many Unicode bugs
- Collections are memory-safe with compile-time bounds checking
- Pattern matching is exhaustive and catches errors at compile time
Memory Insight: Unlike C++ or .NET, Rust tracks ownership at compile time, preventing entire classes of bugs without runtime overhead.
Exercises
Exercise 1: Basic Types and Functions
Create a program that:
- Defines a function
calculate_bmi(height: f64, weight: f64) -> f64 - Uses the function to calculate BMI for several people
- Returns a string description (“Underweight”, “Normal”, “Overweight”, “Obese”)
// Starter code
fn calculate_bmi(height: f64, weight: f64) -> f64 {
    // Your implementation here
    todo!()
}

fn bmi_category(bmi: f64) -> &'static str {
    // Your implementation here
    todo!()
}

fn main() {
    let height = 1.75; // meters
    let weight = 70.0; // kg

    let bmi = calculate_bmi(height, weight);
    let category = bmi_category(bmi);
    println!("BMI: {:.1}, Category: {}", bmi, category);
}
Exercise 2: String Manipulation
Write a function that:
- Takes a sentence as input
- Returns the longest word in the sentence
- Handle the case where multiple words have the same length
fn find_longest_word(sentence: &str) -> Option<&str> {
    // Your implementation here
    // Hint: Use split_whitespace() and max_by_key()
    todo!()
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_longest_word() {
        assert_eq!(find_longest_word("Hello world rust"), Some("Hello"));
        assert_eq!(find_longest_word(""), None);
        assert_eq!(find_longest_word("a bb ccc"), Some("ccc"));
    }
}
Exercise 3: Collections and Pattern Matching
Build a simple inventory system:
- Use HashMap to store item names and quantities
- Implement functions to add, remove, and check items
- Use pattern matching to handle different scenarios
use std::collections::HashMap;

struct Inventory {
    items: HashMap<String, u32>,
}

impl Inventory {
    fn new() -> Self {
        Inventory {
            items: HashMap::new(),
        }
    }

    fn add_item(&mut self, name: String, quantity: u32) {
        // Your implementation here
        todo!()
    }

    fn remove_item(&mut self, name: &str, quantity: u32) -> Result<(), String> {
        // Your implementation here
        // Return error if not enough items
        todo!()
    }

    fn check_stock(&self, name: &str) -> Option<u32> {
        // Your implementation here
        todo!()
    }
}

fn main() {
    let mut inventory = Inventory::new();
    inventory.add_item("Apples".to_string(), 10);
    inventory.add_item("Bananas".to_string(), 5);

    match inventory.remove_item("Apples", 3) {
        Ok(()) => println!("Removed 3 apples"),
        Err(e) => println!("Error: {}", e),
    }

    match inventory.check_stock("Apples") {
        Some(quantity) => println!("Apples in stock: {}", quantity),
        None => println!("Apples not found"),
    }
}
Additional Resources
Next Up: In Chapter 3, we’ll explore structs and enums - Rust’s powerful data modeling tools that go far beyond what you might expect from C++/.NET experience.
Chapter 3: Structs and Enums
Data Modeling and Methods in Rust
Learning Objectives
By the end of this chapter, you’ll be able to:
- Define and use structs effectively for data modeling
- Understand when and how to implement methods and associated functions
- Master enums for type-safe state representation
- Apply pattern matching with complex data structures
- Choose between structs and enums for different scenarios
- Implement common patterns from OOP languages in Rust
Structs: Structured Data
Structs in Rust are similar to structs in C++ or classes in C#, but with some key differences around memory layout and method definition.
Basic Struct Definition
// Similar to C++ struct or C# class
struct Person {
    name: String,
    age: u32,
    email: String,
}

// Creating instances
let person = Person {
    name: String::from("Alice"),
    age: 30,
    email: String::from("alice@example.com"),
};

// Accessing fields
println!("Name: {}", person.name);
println!("Age: {}", person.age);
Comparison with C++/.NET
| Feature | C++ | C#/.NET | Rust |
|---|---|---|---|
| Definition | struct Person { std::string name; }; | class Person { public string Name; } | struct Person { name: String } |
| Instantiation | Person p{"Alice"}; | var p = new Person { Name = "Alice" }; | Person { name: "Alice".to_string() } |
| Field Access | p.name | p.Name | p.name |
| Methods | Inside struct | Inside class | Separate impl block |
Struct Update Syntax
let person1 = Person {
    name: String::from("Alice"),
    age: 30,
    email: String::from("alice@example.com"),
};

// Create a new instance based on existing one
let person2 = Person {
    name: String::from("Bob"),
    ..person1 // Use remaining fields from person1
};

// Note: person1 is no longer usable if any non-Copy fields were moved!
Tuple Structs
When you don’t need named fields:
// Tuple struct - like std::pair in C++ or Tuple in C#
struct Point(f64, f64);
struct Color(u8, u8, u8);

let origin = Point(0.0, 0.0);
let red = Color(255, 0, 0);

// Access by index
println!("X: {}, Y: {}", origin.0, origin.1);
Unit Structs
Structs with no data - useful for type safety:
// Unit struct - zero size
struct Marker;

// Useful for phantom types and markers
let marker = Marker;
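One hedged sketch of the "phantom types" idea mentioned above: zero-sized marker structs can encode state in the type system, so invalid operations fail to compile rather than at runtime. The `Door` type here is made up for illustration:

```rust
use std::marker::PhantomData;

// Zero-sized marker states
struct Locked;
struct Unlocked;

// The state lives only in the type parameter; no runtime cost
struct Door<State> {
    _state: PhantomData<State>,
}

impl Door<Locked> {
    fn new() -> Self {
        Door { _state: PhantomData }
    }
    fn unlock(self) -> Door<Unlocked> {
        Door { _state: PhantomData }
    }
}

impl Door<Unlocked> {
    fn open(&self) {
        println!("door opened");
    }
}

fn main() {
    let door = Door::<Locked>::new();
    // door.open(); // ❌ compile error: open() exists only on Door<Unlocked>
    let door = door.unlock();
    door.open(); // prints "door opened"
}
```

Because `unlock` takes `self` by value, the locked door is consumed; there is no way to hold both states at once.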
Methods and Associated Functions
In Rust, methods are defined separately from the struct definition in impl blocks.
Instance Methods
struct Rectangle {
    width: f64,
    height: f64,
}

impl Rectangle {
    // Method that takes &self (immutable borrow)
    fn area(&self) -> f64 {
        self.width * self.height
    }

    // Method that takes &mut self (mutable borrow)
    fn scale(&mut self, factor: f64) {
        self.width *= factor;
        self.height *= factor;
    }

    // Method that takes self (takes ownership)
    fn into_square(self) -> Rectangle {
        let size = (self.width + self.height) / 2.0;
        Rectangle {
            width: size,
            height: size,
        }
    }
}

// Usage
let mut rect = Rectangle { width: 10.0, height: 5.0 };
println!("Area: {}", rect.area()); // Borrows immutably
rect.scale(2.0);                   // Borrows mutably
let square = rect.into_square();   // Takes ownership
// rect is no longer usable here!
Associated Functions (Static Methods)
impl Rectangle {
    // Associated function (like a static method in C#)
    fn new(width: f64, height: f64) -> Rectangle {
        Rectangle { width, height }
    }

    // Constructor-like function
    fn square(size: f64) -> Rectangle {
        Rectangle {
            width: size,
            height: size,
        }
    }
}

// Usage - called on the type, not an instance
let rect = Rectangle::new(10.0, 5.0);
let square = Rectangle::square(7.0);
Multiple impl Blocks
You can have multiple impl blocks for organization:
impl Rectangle {
    // Construction methods
    fn new(width: f64, height: f64) -> Self {
        Self { width, height }
    }
}

impl Rectangle {
    // Calculation methods
    fn area(&self) -> f64 {
        self.width * self.height
    }

    fn perimeter(&self) -> f64 {
        2.0 * (self.width + self.height)
    }
}
Enums: More Powerful Than You Think
Rust enums are much more powerful than C++ enums or C# enums. They’re similar to discriminated unions or algebraic data types.
Basic Enums
// Simple enum - like a C++ enum class
#[derive(Debug)] // Allows printing with {:?}
enum Direction {
    North,
    South,
    East,
    West,
}

let dir = Direction::North;
println!("{:?}", dir); // Prints: North
Enums with Data
This is where Rust enums shine - each variant can hold different types of data:
enum IpAddr {
    V4(u8, u8, u8, u8), // IPv4 with 4 bytes
    V6(String),         // IPv6 as string
}

let home = IpAddr::V4(127, 0, 0, 1);
let loopback = IpAddr::V6(String::from("::1"));

// More complex example
enum Message {
    Quit,                       // No data
    Move { x: i32, y: i32 },    // Anonymous struct
    Write(String),              // Single value
    ChangeColor(i32, i32, i32), // Tuple
}
Pattern Matching with Enums
fn process_message(msg: Message) {
    match msg {
        Message::Quit => {
            println!("Quit received");
        }
        Message::Move { x, y } => {
            println!("Move to ({}, {})", x, y);
        }
        Message::Write(text) => {
            println!("Write: {}", text);
        }
        Message::ChangeColor(r, g, b) => {
            println!("Change color to RGB({}, {}, {})", r, g, b);
        }
    }
}
Methods on Enums
Enums can have methods too:
impl Message {
    fn is_quit(&self) -> bool {
        matches!(self, Message::Quit)
    }

    fn process(&self) {
        match self {
            Message::Quit => std::process::exit(0),
            Message::Write(text) => println!("{}", text),
            _ => println!("Processing other message"),
        }
    }
}
Option: Null Safety
The most important enum in Rust is Option<T> - Rust’s way of handling nullable values:
enum Option<T> {
    Some(T),
    None,
}
Comparison with Null Handling
| Language | Null Representation | Safety |
|---|---|---|
| C++ | nullptr, raw pointers | Runtime crashes |
| C#/.NET | null, Nullable<T> | Runtime exceptions |
| Rust | Option<T> | Compile-time safety |
Working with Option
fn find_user(id: u32) -> Option<String> {
    if id == 1 {
        Some(String::from("Alice"))
    } else {
        None
    }
}

// Pattern matching
match find_user(1) {
    Some(name) => println!("Found user: {}", name),
    None => println!("User not found"),
}

// Using if let for simple cases
if let Some(name) = find_user(1) {
    println!("Hello, {}", name);
}

// Chaining operations
let user_name_length = find_user(1)
    .map(|name| name.len()) // Transform if Some
    .unwrap_or(0);          // Default value if None
Common Option Methods
fn compute_default() -> i32 {
    0 // Fallback used by unwrap_or_else below
}

let maybe_number: Option<i32> = Some(5);

// Unwrapping (use carefully!)
let number = maybe_number.unwrap();                        // Panics if None
let number = maybe_number.unwrap_or(0);                    // Default value
let number = maybe_number.unwrap_or_else(compute_default); // Lazily computed default

// Safe checking
if maybe_number.is_some() {
    println!("Has value: {}", maybe_number.unwrap());
}

// Transformation
let doubled = maybe_number.map(|x| x * 2); // Some(10) or None
let as_string = maybe_number.map(|x| x.to_string());

// Filtering
let even = maybe_number.filter(|&x| x % 2 == 0);
Result<T, E>: Error Handling
Another crucial enum is Result<T, E> for error handling:
enum Result<T, E> {
    Ok(T),
    Err(E),
}
Basic Usage
use std::fs::File;
use std::io::ErrorKind;

fn open_file(filename: &str) -> Result<File, std::io::Error> {
    File::open(filename)
}

// Pattern matching
match open_file("config.txt") {
    Ok(_file) => println!("File opened successfully"),
    Err(error) => match error.kind() {
        ErrorKind::NotFound => println!("File not found"),
        ErrorKind::PermissionDenied => println!("Permission denied"),
        other_error => println!("Other error: {:?}", other_error),
    },
}
When to Use Structs vs Enums
Use Structs When:
- You need to group related data together
- All fields are always present and meaningful
- You’re modeling “entities” or “things”
// Good use of a struct - user profile
struct UserProfile {
    username: String,
    email: String,
    created_at: std::time::SystemTime,
    is_active: bool,
}
Use Enums When:
- You have mutually exclusive states or variants
- You need type-safe state machines
- You’re modeling “choices” or “alternatives”
// Good use of an enum - connection state
enum ConnectionState {
    Disconnected,
    Connecting { attempt: u32 },
    Connected { since: std::time::SystemTime },
    Error { message: String, retry_count: u32 },
}
Combining Structs and Enums
struct GamePlayer {
    name: String,
    health: u32,
    state: PlayerState,
}

enum PlayerState {
    Idle,
    Moving { destination: Point },
    Fighting { target: String },
    Dead { respawn_time: u64 },
}

struct Point {
    x: f64,
    y: f64,
}
Advanced Patterns
Generic Structs
struct Pair<T> {
    first: T,
    second: T,
}

impl<T> Pair<T> {
    fn new(first: T, second: T) -> Self {
        Pair { first, second }
    }

    fn get_first(&self) -> &T {
        &self.first
    }
}

// Usage
let int_pair = Pair::new(1, 2);
let string_pair = Pair::new("hello".to_string(), "world".to_string());
Deriving Common Traits
#[derive(Debug, Clone, PartialEq)] // Auto-implement common traits
struct Point {
    x: f64,
    y: f64,
}

let p1 = Point { x: 1.0, y: 2.0 };
let p2 = p1.clone();             // Clone trait
println!("{:?}", p1);            // Debug trait
println!("Equal: {}", p1 == p2); // PartialEq trait
Common Pitfalls and Solutions
Pitfall 1: Forgetting to Handle All Enum Variants
enum Status {
    Active,
    Inactive,
    Pending,
}

fn handle_status(status: Status) {
    match status {
        Status::Active => println!("Active"),
        Status::Inactive => println!("Inactive"),
        // ❌ Missing Status::Pending - won't compile!
    }
}

// ✅ Solution: handle all variants (or add a catch-all arm)
fn handle_status_fixed(status: Status) {
    match status {
        Status::Active => println!("Active"),
        Status::Inactive => println!("Inactive"),
        Status::Pending => println!("Pending"), // Handle all variants
    }
}
Pitfall 2: Moving Out of Borrowed Content
struct Container {
    value: String,
}

fn bad_example(container: &Container) -> String {
    container.value // ❌ Cannot move out of borrowed content
}

// ✅ Solutions:
fn return_reference(container: &Container) -> &str {
    &container.value // Return a reference
}

fn return_clone(container: &Container) -> String {
    container.value.clone() // Clone the value
}
Pitfall 3: Unwrapping Options/Results in Production
// ❌ Dangerous in production code
fn bad_parse(input: &str) -> i32 {
    input.parse::<i32>().unwrap() // Can panic!
}

// ✅ Better approaches
fn safe_parse(input: &str) -> Option<i32> {
    input.parse().ok()
}

fn parse_with_default(input: &str, default: i32) -> i32 {
    input.parse().unwrap_or(default)
}
Key Takeaways
- Structs group related data - similar to classes but with explicit memory layout
- Methods are separate from data definition in impl blocks
- Enums are powerful - they can hold data and represent complex state
- Pattern matching is exhaustive - compiler ensures all cases are handled
- Option and Result eliminate null pointer exceptions and improve error handling
- Choose the right tool: structs for entities, enums for choices
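These takeaways can be sketched in one short program. The `Server` and `Status` types below are illustrative inventions, not part of the course exercises:

```rust
// A struct for the "entity", an enum for its mutually exclusive states.
#[derive(Debug)]
enum Status {
    Online,
    Offline { reason: String },
}

struct Server {
    name: String,
    status: Status,
}

impl Server {
    // Associated function acting as a constructor
    fn new(name: &str) -> Self {
        Server {
            name: name.to_string(),
            status: Status::Online,
        }
    }

    // Exhaustive pattern match: adding a Status variant later
    // forces this match to be updated at compile time.
    fn describe(&self) -> String {
        match &self.status {
            Status::Online => format!("{} is online", self.name),
            Status::Offline { reason } => {
                format!("{} is offline: {}", self.name, reason)
            }
        }
    }
}

fn main() {
    let mut s = Server::new("api-1");
    println!("{}", s.describe());

    s.status = Status::Offline {
        reason: "maintenance".to_string(),
    };
    println!("{}", s.describe());
}
```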
Exercises
Exercise 1: Building a Library System
Create a library management system using structs and enums:
// Define the data structures
struct Book {
    title: String,
    author: String,
    isbn: String,
    status: BookStatus,
}

enum BookStatus {
    Available,
    CheckedOut { borrower: String, due_date: String },
    Reserved { reserver: String },
}

impl Book {
    fn new(title: String, author: String, isbn: String) -> Self {
        todo!() // Your implementation
    }

    fn checkout(&mut self, borrower: String, due_date: String) -> Result<(), String> {
        todo!() // Your implementation - return an error if not available
    }

    fn return_book(&mut self) -> Result<(), String> {
        todo!() // Your implementation
    }

    fn is_available(&self) -> bool {
        todo!() // Your implementation
    }
}

fn main() {
    let mut book = Book::new(
        "The Rust Programming Language".to_string(),
        "Steve Klabnik".to_string(),
        "978-1718500440".to_string(),
    );

    // Test the implementation
    println!("Available: {}", book.is_available());

    match book.checkout("Alice".to_string(), "2023-12-01".to_string()) {
        Ok(()) => println!("Book checked out successfully"),
        Err(e) => println!("Checkout failed: {}", e),
    }
}
Exercise 2: Calculator with Different Number Types
Build a calculator that can handle different number types:
#[derive(Debug, Clone)]
enum Number {
    Integer(i64),
    Float(f64),
    Fraction { numerator: i64, denominator: i64 },
}

impl Number {
    fn add(self, other: Number) -> Number {
        // Your implementation
        // Convert everything to float for simplicity, or implement proper fraction math
        todo!()
    }

    fn to_float(&self) -> f64 {
        todo!() // Your implementation
    }

    fn display(&self) -> String {
        todo!() // Your implementation
    }
}

fn main() {
    let a = Number::Integer(5);
    let b = Number::Float(3.14);
    let c = Number::Fraction { numerator: 1, denominator: 2 };

    let result = a.add(b);
    println!("5 + 3.14 = {}", result.display());
}
Exercise 3: State Machine for a Traffic Light
Implement a traffic light state machine:
struct TrafficLight {
    current_state: LightState,
    timer: u32,
}

enum LightState {
    Red { duration: u32 },
    Yellow { duration: u32 },
    Green { duration: u32 },
}

impl TrafficLight {
    fn new() -> Self {
        todo!() // Start with Red for 30 seconds
    }

    fn tick(&mut self) {
        // Decrease the timer and change state when it reaches 0
        // Red(30) -> Green(25) -> Yellow(5) -> Red(30) -> ...
        todo!()
    }

    fn current_color(&self) -> &str {
        todo!() // Return the current color as a string
    }

    fn time_remaining(&self) -> u32 {
        todo!() // Return remaining time in the current state
    }
}

fn main() {
    let mut light = TrafficLight::new();

    for _ in 0..100 {
        println!(
            "Light: {}, Time remaining: {}",
            light.current_color(),
            light.time_remaining()
        );
        light.tick();
        // Simulate a 1-second delay
        std::thread::sleep(std::time::Duration::from_millis(100));
    }
}
Next Up: In Chapter 4, we’ll dive deep into ownership - Rust’s unique approach to memory management that eliminates entire classes of bugs without garbage collection.
Chapter 4: Ownership - THE MOST IMPORTANT CONCEPT
Understanding Rust’s Unique Memory Management
Learning Objectives
By the end of this chapter, you’ll be able to:
- Understand ownership rules and how they differ from C++/.NET memory management
- Work confidently with borrowing and references
- Navigate lifetime annotations and understand when they’re needed
- Transfer ownership safely between functions and data structures
- Debug common ownership errors with confidence
- Apply ownership principles to write memory-safe, performant code
Why Ownership Matters: The Problem It Solves
Memory Management Comparison
| Language | Memory Management | Common Issues | Performance | Safety |
|---|---|---|---|---|
| C++ | Manual (new/delete, RAII) | Memory leaks, double-free, dangling pointers | High | Runtime crashes |
| C#/.NET | Garbage Collector | GC pauses, memory pressure | Medium | Runtime exceptions |
| Rust | Compile-time ownership | Compiler errors (not runtime!) | High | Compile-time safety |
The Core Problem
// C++ - Dangerous code that compiles
std::string* dangerous() {
std::string local = "Hello";
return &local; // ❌ Returning reference to local variable!
}
// This compiles but crashes at runtime
// C# - Memory managed but can still have issues
class Manager {
private List<string> items;
public IEnumerable<string> GetItems() {
items = null; // Oops!
return items; // ❌ NullReferenceException at runtime
}
}
// Rust - Won't compile, saving you from runtime crashes
fn safe_rust() -> &'static str {
    let local = String::from("Hello");
    &local // ❌ Compile error: `local` does not live long enough
}
// Error caught at compile time!
The Three Rules of Ownership
Rule 1: Each Value Has a Single Owner
let s1 = String::from("Hello"); // s1 owns the string
let s2 = s1;                    // Ownership moves to s2
// println!("{}", s1);          // ❌ Error: value borrowed after move

// Compare to C++:
// std::string s1 = "Hello"; // s1 owns the string
// std::string s2 = s1;      // s2 gets a COPY (expensive!)
// std::cout << s1;          // ✅ Still works, s1 unchanged
Rule 2: There Can Only Be One Owner at a Time
fn take_ownership(s: String) { // s comes into scope
    println!("{}", s);
} // s goes out of scope, `drop` is called, memory freed

fn main() {
    let s = String::from("Hello");
    take_ownership(s); // s's value moves into the function
    // println!("{}", s); // ❌ Error: value borrowed after move
}
Rule 3: When the Owner Goes Out of Scope, the Value is Dropped
{
    let s = String::from("Hello"); // s comes into scope
    // do stuff with s
} // s goes out of scope, memory freed automatically
Move Semantics: Ownership Transfer
Understanding Moves
// Primitive types implement the Copy trait
let x = 5;
let y = x; // x is copied, both x and y are valid
println!("x: {}, y: {}", x, y); // ✅ Works fine

// Complex types move by default
let s1 = String::from("Hello");
let s2 = s1; // s1 is moved to s2
// println!("{}", s1); // ❌ Error: value borrowed after move
println!("{}", s2); // ✅ Only s2 is valid

// Clone when you need a copy
let s3 = String::from("World");
let s4 = s3.clone(); // Explicit copy
println!("s3: {}, s4: {}", s3, s4); // ✅ Both valid
Copy vs Move Types
// Types that implement Copy (stored on the stack)
let a = 5;      // i32
let b = true;   // bool
let c = 'a';    // char
let d = (1, 2); // Tuple of Copy types

// Types that don't implement Copy (may use the heap)
let e = String::from("Hello"); // String
let f = vec![1, 2, 3];         // Vec<i32>
let g = Box::new(42);          // Box<i32>

// Copy types can be used after assignment
let x = a; // a is copied
println!("a: {}, x: {}", a, x); // ✅ Both work

// Move types transfer ownership
let y = e; // e is moved
// println!("{}", e); // ❌ Error: moved
References and Borrowing
Immutable References (Shared Borrowing)
fn calculate_length(s: &String) -> usize { // s is a reference
    s.len()
} // s goes out of scope, but doesn't own the data, so nothing happens

fn main() {
    let s1 = String::from("Hello");
    let len = calculate_length(&s1); // Pass a reference
    println!("Length of '{}' is {}.", s1, len); // ✅ s1 still usable
}
Mutable References (Exclusive Borrowing)
fn change(s: &mut String) {
    s.push_str(", world");
}

fn main() {
    let mut s = String::from("Hello");
    change(&mut s); // Pass a mutable reference
    println!("{}", s); // Prints: Hello, world
}
The Borrowing Rules
Rule 1: Either one mutable reference OR any number of immutable references
let mut s = String::from("Hello");

// ✅ Multiple immutable references
let r1 = &s;
let r2 = &s;
println!("{} and {}", r1, r2); // OK

// ❌ Cannot create a mutable reference while immutable ones are still in use
let r3 = &s;
let r4 = &mut s; // Error: cannot borrow `s` as mutable
println!("{}", r3); // The immutable borrow is still live here
Rule 2: References must always be valid (no dangling references)
fn dangle() -> &String { // Returns a reference to a String
    let s = String::from("hello");
    &s // ❌ Error: `s` does not live long enough
} // s is dropped, the reference would be invalid

// ✅ Solution: return an owned value
fn no_dangle() -> String {
    let s = String::from("hello");
    s // Move s out, no reference needed
}
Reference Patterns in Practice
// Good: take references when you don't need ownership
fn print_length(s: &str) { // &str works with String and &str
    println!("Length: {}", s.len());
}

// Good: take a mutable reference when you need to modify
fn append_exclamation(s: &mut String) {
    s.push('!');
}

// Sometimes you need ownership
fn take_and_process(s: String) -> String {
    // Do expensive processing that consumes s
    format!("Processed: {}", s.to_uppercase())
}

fn main() {
    let mut text = String::from("Hello");

    print_length(&text);           // Borrow immutably
    append_exclamation(&mut text); // Borrow mutably

    let result = take_and_process(text); // Transfer ownership
    // text is no longer valid here
    println!("{}", result);
}
Lifetimes: Ensuring Reference Validity
Why Lifetimes Exist
// The compiler needs to ensure this is safe:
fn longest(x: &str, y: &str) -> &str {
    if x.len() > y.len() { x } else { y }
}
// Question: how long should the returned reference live?
Lifetime Annotation Syntax
// Explicit lifetime annotations
fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() > y.len() { x } else { y }
}

// The lifetime 'a means:
// - x and y must both live at least as long as 'a
// - The returned reference will live as long as 'a
// - 'a is the shorter of the two input lifetimes
Lifetime Elision Rules (When You Don’t Need Annotations)
Rule 1: Each reference parameter gets its own lifetime
// This:
fn first_word(s: &str) -> &str { /* ... */ }

// Is actually this:
fn first_word<'a>(s: &'a str) -> &'a str { /* ... */ }
Rule 2: If there’s exactly one input lifetime, it’s assigned to all outputs
// These are equivalent:
fn get_first(list: &Vec<String>) -> &String {
    &list[0]
}

fn get_first<'a>(list: &'a Vec<String>) -> &'a String {
    &list[0]
}
Rule 3: Methods with &self give output the same lifetime as self
// Given a struct that holds a reference:
struct Person<'a> {
    name: &'a str,
}

impl<'a> Person<'a> {
    fn get_name(&self) -> &str { // Implicitly &'a str
        self.name
    }
}
Complex Lifetime Examples
// Multiple lifetimes
fn compare_and_return<'a, 'b>(
    x: &'a str,
    y: &'b str,
    return_first: bool,
) -> &'a str { // Always returns something with lifetime 'a
    if return_first { x } else { y } // ❌ Error: y has the wrong lifetime
}

// Fixed version - both inputs must share the same lifetime
fn compare_and_return_fixed<'a>(
    x: &'a str,
    y: &'a str,
    return_first: bool,
) -> &'a str {
    if return_first { x } else { y } // ✅ OK
}
Structs with Lifetimes
// A struct holding references needs lifetime annotations
struct ImportantExcerpt<'a> {
    part: &'a str, // This reference must live at least as long as the struct
}

impl<'a> ImportantExcerpt<'a> {
    fn level(&self) -> i32 {
        3
    }

    fn announce_and_return_part(&self, announcement: &str) -> &str {
        println!("Attention please: {}", announcement);
        self.part // Returns a reference with the same lifetime as &self
    }
}

fn main() {
    let novel = String::from("Call me Ishmael. Some years ago...");
    let first_sentence = novel.split('.').next().expect("Could not find a '.'");

    let i = ImportantExcerpt {
        part: first_sentence,
    };
    // i is valid as long as novel is valid
}
Static Lifetime
// 'static means the reference lives for the entire program duration
let s: &'static str = "I have a static lifetime."; // String literals

// Static variables
static GLOBAL_COUNT: i32 = 0;
let count_ref: &'static i32 = &GLOBAL_COUNT;

// Sometimes you need to store static references
struct Config {
    name: &'static str, // Must be a string literal or a static
}
Advanced Ownership Patterns
Returning References from Functions
// ❌ Cannot return a reference to a local variable
fn create_and_return() -> &str {
    let s = String::from("hello");
    &s // Error: does not live long enough
}

// ✅ Return an owned value instead
fn create_and_return_owned() -> String {
    String::from("hello")
}

// ✅ Return a reference to the input (with a lifetime)
fn get_first_word(text: &str) -> &str {
    text.split_whitespace().next().unwrap_or("")
}
Ownership with Collections
fn main() {
    let mut vec = Vec::new();

    // Adding owned values
    vec.push(String::from("hello"));
    vec.push(String::from("world"));

    // ❌ Cannot move out of a vector by index
    // let first = vec[0]; // Error: cannot move

    // ✅ Borrowing is fine
    let first_ref = &vec[0];
    println!("First: {}", first_ref);

    // ✅ Clone if you need ownership
    let first_owned = vec[0].clone();

    // ✅ Or iterate by value to transfer ownership
    for item in vec { // vec is moved here
        println!("Owned item: {}", item);
    }
    // vec is no longer usable
}
Splitting Borrows
// Sometimes you need to borrow different parts of a struct
struct Point {
    x: f64,
    y: f64,
}

impl Point {
    // ✅ The borrow checker tracks disjoint fields, so returning
    // mutable references to two *different* fields is allowed:
    fn get_coords_mut(&mut self) -> (&mut f64, &mut f64) {
        (&mut self.x, &mut self.y)
    }

    // ✅ Likewise, different fields can be mutated in the same method
    fn update_coords(&mut self, new_x: f64, new_y: f64) {
        self.x = new_x; // Borrow x mutably
        self.y = new_y; // Borrow y mutably (different field)
    }
}

// ❌ What does NOT work: calling two &mut self methods and holding
// both results at once - that would be two overlapping &mut self borrows.
Common Ownership Patterns and Solutions
Pattern 1: Function Parameters
// ❌ Don't take ownership unless you need it
fn process_text_owned(text: String) -> usize {
    text.len() // We don't need to own text for this
}

// ✅ Better: take a reference
fn process_text(text: &str) -> usize {
    text.len()
}

// ✅ When you do need ownership:
fn store_text(text: String) -> Box<String> {
    Box::new(text) // We're storing it, so ownership makes sense
}
Pattern 2: Return Values
// ✅ Return owned values when creating new data
fn create_greeting(name: &str) -> String {
    format!("Hello, {}!", name)
}

// ✅ Return references when extracting from the input
fn get_file_extension(filename: &str) -> Option<&str> {
    filename.split('.').last()
}
Pattern 3: Structs Holding Data
// ✅ Own data when the struct should control its lifetime
#[derive(Debug)]
struct User {
    name: String,  // Owned
    email: String, // Owned
}

// ✅ Borrow when the data lives elsewhere
#[derive(Debug)]
struct UserRef<'a> {
    name: &'a str,  // Borrowed
    email: &'a str, // Borrowed
}

// Usage
fn main() {
    // Owned version - can outlive the source data
    let user = User {
        name: String::from("Alice"),
        email: String::from("alice@example.com"),
    };

    // Borrowed version - tied to the source data's lifetime
    let name = "Bob";
    let email = "bob@example.com";
    let user_ref = UserRef { name, email };
}
Debugging Ownership Errors
Common Error Messages and Solutions
1. “Value borrowed after move”
// ❌ Problem
let s = String::from("hello");
let s2 = s;        // s moved here
println!("{}", s); // Error: value borrowed after move

// ✅ Solutions
// Option 1: use references
let s = String::from("hello");
let s2 = &s; // Borrow instead
println!("{} {}", s, s2);

// Option 2: clone when you need copies
let s = String::from("hello");
let s2 = s.clone(); // Explicit copy
println!("{} {}", s, s2);
2. “Cannot borrow as mutable”
// ❌ Problem
let s = String::from("hello"); // Immutable
s.push_str(" world");          // Error: cannot borrow as mutable

// ✅ Solution: make it mutable
let mut s = String::from("hello");
s.push_str(" world");
3. “Borrowed value does not live long enough”
// ❌ Problem
fn get_string() -> &str {
    let s = String::from("hello");
    &s // Error: does not live long enough
}

// ✅ Solutions
// Option 1: return an owned value
fn get_owned_string() -> String {
    String::from("hello")
}

// Option 2: use a string literal (static lifetime)
fn get_static_string() -> &'static str {
    "hello"
}
Tools for Understanding Ownership
fn debug_ownership() {
    let s1 = String::from("hello");
    println!("s1 created");

    let s2 = s1; // Move occurs here
    println!("s1 moved to s2");
    // println!("{}", s1); // This would error

    let s3 = &s2; // Borrow s2
    println!("s2 borrowed as s3: {}", s3);

    drop(s2); // Explicit drop
    println!("s2 dropped");
    // println!("{}", s3); // This would error - s2 was dropped
}
Performance Implications
Zero-Cost Abstractions
// All of these have the same runtime performance:

// Direct access
let vec = vec![1, 2, 3, 4, 5];
let sum1 = vec[0] + vec[1] + vec[2] + vec[3] + vec[4];

// Iterator (zero-cost abstraction)
let sum2: i32 = vec.iter().sum();

// Reference passing (no copying)
fn sum_vec(v: &Vec<i32>) -> i32 {
    v.iter().sum()
}
let sum3 = sum_vec(&vec);

// All compile to similar assembly code!
Memory Layout Guarantees
// Rust guarantees memory layout
#[repr(C)] // Compatible with C struct layout
struct Point {
    x: f64, // Guaranteed to be first
    y: f64, // Guaranteed to be second
}

// No hidden vtables, no GC headers
// What you see is what you get in memory
Key Takeaways
- Ownership prevents entire classes of bugs at compile time
- Move semantics are default - be explicit when you want copies
- Borrowing allows safe sharing without ownership transfer
- Lifetimes ensure references are always valid but often inferred
- The compiler is your friend - ownership errors are caught early
- Zero runtime cost - all ownership checks happen at compile time
Mental Model Summary
// Think of ownership like the keys to a house:
let house_keys = String::from("keys"); // You own the keys

let friend = house_keys; // You give the keys to a friend
// house_keys is no longer valid - you no longer have keys

let borrowed_keys = &friend; // Friend lets you borrow the keys
// friend still owns them

drop(friend); // Friend moves away
// borrowed_keys would no longer be valid
Exercises
Exercise 1: Ownership Transfer Chain
Create a program that demonstrates ownership transfer through a chain of functions:
// Implement these functions following the ownership rules
fn create_message() -> String {
    todo!() // Create and return a String
}

fn add_greeting(message: String) -> String {
    todo!() // Take ownership, add a "Hello, " prefix, return a new String
}

fn add_punctuation(message: String) -> String {
    todo!() // Take ownership, add a "!" suffix, return a new String
}

fn print_and_consume(message: String) {
    // Take ownership, print the message, let it be dropped
}

fn main() {
    // Chain the functions together:
    // create -> add_greeting -> add_punctuation -> print_and_consume
    // Try to use the message after each step - what happens?
}
Exercise 2: Reference vs Ownership
Fix the ownership issues in this code:
fn analyze_text(text: String) -> (usize, String) {
    let word_count = text.split_whitespace().count();
    let uppercase = text.to_uppercase();
    (word_count, uppercase)
}

fn main() {
    let article = String::from("Rust is a systems programming language");

    let (count, upper) = analyze_text(article);

    println!("Original: {}", article); // ❌ This should work but doesn't
    println!("Word count: {}", count);
    println!("Uppercase: {}", upper);

    // Also make this work:
    let count2 = analyze_text(article).0; // ❌ This should also work
}
Exercise 3: Lifetime Annotations
Implement a function that finds the longest common prefix of two strings:
// Fix the lifetime annotations
fn longest_common_prefix(s1: &str, s2: &str) -> &str {
    let mut i = 0;
    let s1_chars: Vec<char> = s1.chars().collect();
    let s2_chars: Vec<char> = s2.chars().collect();

    while i < s1_chars.len() && i < s2_chars.len() && s1_chars[i] == s2_chars[i] {
        i += 1;
    }

    &s1[..i] // Return a slice of the first string
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_common_prefix() {
        assert_eq!(longest_common_prefix("hello", "help"), "hel");
        assert_eq!(longest_common_prefix("rust", "ruby"), "ru");
        assert_eq!(longest_common_prefix("abc", "xyz"), "");
    }
}

fn main() {
    let word1 = String::from("programming");
    let word2 = "program";

    let prefix = longest_common_prefix(&word1, word2);
    println!("Common prefix: '{}'", prefix);

    // Both word1 and word2 should still be usable here
    println!("Word1: {}, Word2: {}", word1, word2);
}
Next Up: In Chapter 5, we’ll explore smart pointers - Rust’s tools for more complex memory management scenarios when simple ownership isn’t enough.
Chapter 5: Smart Pointers
Advanced Memory Management Beyond Basic Ownership
Learning Objectives
By the end of this chapter, you’ll be able to:
- Use Box<T> for heap allocation and recursive data structures
- Share ownership safely with Rc<T> and Arc<T>
- Implement interior mutability with RefCell<T> and Mutex<T>
- Prevent memory leaks with Weak<T> references
- Choose the right smart pointer for different scenarios
- Understand the performance implications of each smart pointer type
What Are Smart Pointers?
Smart pointers are data structures that act like pointers but have additional metadata and capabilities. Unlike regular references, smart pointers own the data they point to.
Smart Pointers vs Regular References
| Feature | Regular Reference | Smart Pointer |
|---|---|---|
| Ownership | Borrows data | Owns data |
| Memory location | Stack or heap | Usually heap |
| Deallocation | Automatic (owner drops) | Automatic (smart pointer drops) |
| Runtime overhead | None | Some (depends on type) |
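The ownership row of this table can be sketched in a few lines (a minimal illustration, not from the course exercises):

```rust
fn main() {
    // A regular reference borrows: `s` keeps owning the data.
    let s = String::from("hello");
    let r: &String = &s;
    println!("borrowed: {} (owner still usable: {})", r, s);

    // A Box owns its heap data: dropping the Box frees the allocation.
    let b: Box<String> = Box::new(String::from("world"));
    println!("boxed length: {}", b.len()); // Deref lets Box act like a reference
    drop(b); // b's heap data is freed here; b is no longer usable
}
```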
Comparison with C++/.NET
| Rust | C++ Equivalent | C#/.NET Equivalent |
|---|---|---|
| Box<T> | std::unique_ptr<T> | No direct equivalent |
| Rc<T> | std::shared_ptr<T> | Reference counting (handled by the GC) |
| Arc<T> | std::shared_ptr<T> (thread-safe) | Thread-safe references |
| RefCell<T> | No direct equivalent | Not needed (objects are mutable via references) |
| Weak<T> | std::weak_ptr<T> | WeakReference<T> |
Box: Single Ownership on the Heap
Box<T> is the simplest smart pointer - it provides heap allocation with single ownership.
When to Use Box
- Large data: Move large structs to heap to avoid stack overflow
- Recursive types: Enable recursive data structures
- Trait objects: Store different types behind a common trait
- Unsized types: Store dynamically sized types
Basic Usage
fn main() {
    // Heap allocation
    let b = Box::new(5);
    println!("b = {}", b); // Box implements Deref, so this works

    // Large struct - better on the heap
    struct LargeStruct {
        data: [u8; 1024 * 1024], // 1MB
    }

    let large = Box::new(LargeStruct {
        data: [0; 1024 * 1024],
    });
    // Only the pointer lives on the stack, the data lives on the heap
}
Recursive Data Structures
// ❌ This won't compile - infinite size
// enum List {
//     Cons(i32, List),
//     Nil,
// }

// ✅ This works - Box has a known size
#[derive(Debug)]
enum List {
    Cons(i32, Box<List>),
    Nil,
}

impl List {
    fn new() -> List {
        List::Nil
    }

    fn prepend(self, elem: i32) -> List {
        List::Cons(elem, Box::new(self))
    }

    fn len(&self) -> usize {
        match self {
            List::Cons(_, tail) => 1 + tail.len(),
            List::Nil => 0,
        }
    }
}

fn main() {
    let list = List::new()
        .prepend(1)
        .prepend(2)
        .prepend(3);

    println!("List: {:?}", list);
    println!("Length: {}", list.len());
}
Box with Trait Objects
trait Draw {
    fn draw(&self);
}

struct Circle {
    radius: f64,
}

struct Rectangle {
    width: f64,
    height: f64,
}

impl Draw for Circle {
    fn draw(&self) {
        println!("Drawing circle with radius {}", self.radius);
    }
}

impl Draw for Rectangle {
    fn draw(&self) {
        println!("Drawing rectangle {}x{}", self.width, self.height);
    }
}

fn main() {
    let shapes: Vec<Box<dyn Draw>> = vec![
        Box::new(Circle { radius: 5.0 }),
        Box::new(Rectangle { width: 10.0, height: 5.0 }),
    ];

    for shape in shapes {
        shape.draw();
    }
}
Rc: Reference Counted Single-Threaded Sharing
Rc<T> (Reference Counted) enables multiple ownership of the same data in single-threaded scenarios.
When to Use Rc
- Multiple owners need to read the same data
- Data lifetime is determined by multiple owners
- Single-threaded environment only
- Shared immutable data structures (graphs, trees)
Basic Usage
use std::rc::Rc;

fn main() {
    let a = Rc::new(5);
    println!("Reference count: {}", Rc::strong_count(&a)); // 1

    let b = Rc::clone(&a); // Cheap pointer clone, increases the ref count
    println!("Reference count: {}", Rc::strong_count(&a)); // 2

    {
        let c = Rc::clone(&a);
        println!("Reference count: {}", Rc::strong_count(&a)); // 3
    } // c dropped here

    println!("Reference count: {}", Rc::strong_count(&a)); // 2
} // a and b dropped here, memory freed when the count reaches 0
Sharing Lists
```rust
use std::rc::Rc;

#[derive(Debug)]
enum List {
    Cons(i32, Rc<List>),
    Nil,
}

fn main() {
    let a = Rc::new(List::Cons(5, Rc::new(List::Cons(10, Rc::new(List::Nil)))));
    let b = List::Cons(3, Rc::clone(&a));
    let c = List::Cons(4, Rc::clone(&a));

    println!("List a: {:?}", a);
    println!("List b: {:?}", b);
    println!("List c: {:?}", c);
    println!("Reference count for a: {}", Rc::strong_count(&a)); // 3
}
```
Tree with Shared Subtrees
```rust
use std::rc::Rc;

#[derive(Debug)]
struct TreeNode {
    value: i32,
    left: Option<Rc<TreeNode>>,
    right: Option<Rc<TreeNode>>,
}

impl TreeNode {
    fn new(value: i32) -> Rc<Self> {
        Rc::new(TreeNode { value, left: None, right: None })
    }

    fn with_children(
        value: i32,
        left: Option<Rc<TreeNode>>,
        right: Option<Rc<TreeNode>>,
    ) -> Rc<Self> {
        Rc::new(TreeNode { value, left, right })
    }
}

fn main() {
    // Shared subtree
    let shared_subtree = TreeNode::with_children(
        10,
        Some(TreeNode::new(5)),
        Some(TreeNode::new(15)),
    );

    // Two different trees sharing the same subtree
    let tree1 = TreeNode::with_children(1, Some(Rc::clone(&shared_subtree)), None);
    let tree2 = TreeNode::with_children(2, Some(Rc::clone(&shared_subtree)), None);

    println!("Tree 1: {:?}", tree1);
    println!("Tree 2: {:?}", tree2);
    println!("Shared subtree references: {}", Rc::strong_count(&shared_subtree)); // 3
}
```
RefCell: Interior Mutability
RefCell<T> provides “interior mutability” - the ability to mutate data even when there are immutable references to it. The borrowing rules are enforced at runtime instead of compile time.
When to Use RefCell
- You need to mutate data behind shared references
- You’re certain the borrowing rules are followed, but the compiler can’t verify it
- Implementing patterns that require mutation through shared references
- Building mock objects for testing
Basic Usage
```rust
use std::cell::RefCell;

fn main() {
    let data = RefCell::new(5);

    // Borrow immutably
    {
        let r1 = data.borrow();
        let r2 = data.borrow();
        println!("r1: {}, r2: {}", r1, r2); // Multiple immutable borrows OK
    } // Borrows dropped here

    // Borrow mutably
    {
        let mut r3 = data.borrow_mut();
        *r3 = 10;
    } // Mutable borrow dropped here

    println!("Final value: {}", data.borrow());
}
```
Runtime Borrow Checking
```rust
use std::cell::RefCell;

fn main() {
    let data = RefCell::new(5);

    let r1 = data.borrow();
    // let r2 = data.borrow_mut(); // ❌ Panic! Already borrowed immutably
    drop(r1); // Drop immutable borrow

    let r2 = data.borrow_mut(); // ✅ OK now
    println!("Mutably borrowed: {}", r2);
}
```
Combining Rc and RefCell
This is a common pattern for shared mutable data:
```rust
use std::rc::Rc;
use std::cell::RefCell;

#[derive(Debug)]
struct Node {
    value: i32,
    children: Vec<Rc<RefCell<Node>>>,
}

impl Node {
    fn new(value: i32) -> Rc<RefCell<Self>> {
        Rc::new(RefCell::new(Node {
            value,
            children: Vec::new(),
        }))
    }

    fn add_child(parent: &Rc<RefCell<Node>>, child: Rc<RefCell<Node>>) {
        parent.borrow_mut().children.push(child);
    }
}

fn main() {
    let root = Node::new(1);
    let child1 = Node::new(2);
    let child2 = Node::new(3);

    Node::add_child(&root, child1);
    Node::add_child(&root, child2);

    println!("Root: {:?}", root);

    // Modify child through shared reference
    root.borrow().children[0].borrow_mut().value = 20;
    println!("Modified root: {:?}", root);
}
```
Arc: Atomic Reference Counting for Concurrency
Arc<T> (Atomically Reference Counted) is the thread-safe version of Rc<T>.
When to Use Arc
- Multiple threads need to share ownership of data
- Thread-safe reference counting is needed
- Sharing immutable data across thread boundaries
Basic Usage
```rust
use std::sync::Arc;
use std::thread;

fn main() {
    let data = Arc::new(vec![1, 2, 3, 4, 5]);
    let mut handles = vec![];

    for i in 0..3 {
        let data_clone = Arc::clone(&data);
        let handle = thread::spawn(move || {
            println!("Thread {}: {:?}", i, data_clone);
        });
        handles.push(handle);
    }

    for handle in handles {
        handle.join().unwrap();
    }

    println!("Reference count: {}", Arc::strong_count(&data)); // Back to 1
}
```
Arc<Mutex>: Shared Mutable State
For mutable shared data across threads, combine Arc<T> with Mutex<T>:
```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let counter = Arc::new(Mutex::new(0));
    let mut handles = vec![];

    for _ in 0..10 {
        let counter_clone = Arc::clone(&counter);
        let handle = thread::spawn(move || {
            let mut num = counter_clone.lock().unwrap();
            *num += 1;
        });
        handles.push(handle);
    }

    for handle in handles {
        handle.join().unwrap();
    }

    println!("Final count: {}", *counter.lock().unwrap()); // Should be 10
}
```
Weak: Breaking Reference Cycles
Weak<T> provides a non-owning reference that doesn’t affect reference counting. It’s used to break reference cycles that would cause memory leaks.
The Reference Cycle Problem and Its Fix
```rust
use std::rc::{Rc, Weak};
use std::cell::RefCell;

#[derive(Debug)]
struct Node {
    value: i32,
    parent: RefCell<Weak<Node>>,      // Weak reference to parent
    children: RefCell<Vec<Rc<Node>>>, // Strong references to children
}

impl Node {
    fn new(value: i32) -> Rc<Self> {
        Rc::new(Node {
            value,
            parent: RefCell::new(Weak::new()),
            children: RefCell::new(Vec::new()),
        })
    }

    fn add_child(parent: &Rc<Node>, child: Rc<Node>) {
        // Set parent weak reference
        *child.parent.borrow_mut() = Rc::downgrade(parent);
        // Add child strong reference
        parent.children.borrow_mut().push(child);
    }
}

fn main() {
    let parent = Node::new(1);
    let child = Node::new(2);

    Node::add_child(&parent, child);

    // Access parent from child
    let parent_from_child = parent.children.borrow()[0]
        .parent
        .borrow()
        .upgrade(); // Convert weak to strong reference

    if let Some(parent_ref) = parent_from_child {
        println!("Child's parent value: {}", parent_ref.value);
    }

    println!("Parent strong count: {}", Rc::strong_count(&parent)); // 1
    println!("Parent weak count: {}", Rc::weak_count(&parent));     // 1
}
```
Observer Pattern with Weak References
```rust
use std::rc::{Rc, Weak};
use std::cell::RefCell;

trait Observer {
    fn notify(&self, message: &str);
}

struct Subject {
    observers: RefCell<Vec<Weak<dyn Observer>>>,
}

impl Subject {
    fn new() -> Self {
        Subject {
            observers: RefCell::new(Vec::new()),
        }
    }

    fn subscribe(&self, observer: Weak<dyn Observer>) {
        self.observers.borrow_mut().push(observer);
    }

    fn notify_all(&self, message: &str) {
        let mut observers = self.observers.borrow_mut();
        observers.retain(|weak_observer| {
            if let Some(observer) = weak_observer.upgrade() {
                observer.notify(message);
                true // Keep this observer
            } else {
                false // Remove dead observer
            }
        });
    }
}

struct ConcreteObserver {
    id: String,
}

impl Observer for ConcreteObserver {
    fn notify(&self, message: &str) {
        println!("Observer {} received: {}", self.id, message);
    }
}

fn main() {
    let subject = Subject::new();

    {
        let observer1 = Rc::new(ConcreteObserver { id: "1".to_string() });
        let observer2 = Rc::new(ConcreteObserver { id: "2".to_string() });

        subject.subscribe(Rc::downgrade(&observer1));
        subject.subscribe(Rc::downgrade(&observer2));

        subject.notify_all("Hello observers!");
    } // Observers dropped here

    subject.notify_all("Anyone still listening?"); // Dead observers cleaned up
}
```
Choosing the Right Smart Pointer
Decision Tree
```text
Do you need shared ownership?
├─ No → Use Box<T>
└─ Yes
   ├─ Single threaded?
   │  ├─ Yes
   │  │  ├─ Need interior mutability? → Rc<RefCell<T>>
   │  │  └─ Just sharing? → Rc<T>
   │  └─ No (multi-threaded)
   │     ├─ Need interior mutability? → Arc<Mutex<T>>
   │     └─ Just sharing? → Arc<T>
   └─ Breaking cycles? → Use Weak<T> in combination
```
Performance Characteristics
| Smart Pointer | Allocation | Reference Counting | Thread Safety | Interior Mutability |
|---|---|---|---|---|
| Box<T> | Heap | No | No | No |
| Rc<T> | Heap | Yes (non-atomic) | No | No |
| Arc<T> | Heap | Yes (atomic) | Yes | No |
| RefCell<T> | None (wraps its value in place) | No | No | Yes (runtime) |
| Weak<T> | No new allocation | Yes (weak count) | Matches its source (rc::Weak: no, sync::Weak: yes) | No |
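The thread-safety column has a practical consequence worth seeing in code: Rc is not Send, so the compiler rejects moving it into a thread, while Arc pays for atomic counting and crosses threads freely. A minimal sketch (the sum_in_worker helper is ours, not from the course):

```rust
use std::rc::Rc;
use std::sync::Arc;
use std::thread;

// Sums a shared vector on a worker thread. Only Arc works here:
// swapping in Rc is a compile error, because Rc's non-atomic count
// is not safe to touch from two threads (Rc is not Send).
fn sum_in_worker(data: &Arc<Vec<i32>>) -> i32 {
    let clone = Arc::clone(data); // atomic increment of the strong count
    thread::spawn(move || clone.iter().sum()).join().unwrap()
}

fn main() {
    let data = Arc::new(vec![1, 2, 3, 4, 5]);
    assert_eq!(sum_in_worker(&data), 15);
    assert_eq!(Arc::strong_count(&data), 1); // worker's clone already dropped

    let _local = Rc::new(vec![1, 2, 3]);
    // thread::spawn(move || println!("{:?}", _local)); // ❌ Rc<Vec<i32>> cannot be sent between threads
}
```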
Common Patterns
```rust
#![allow(unused)]
fn main() {
    use std::rc::{Rc, Weak};
    use std::cell::RefCell;
    use std::sync::{Arc, Mutex};

    // Pattern 1: Immutable shared data (single-threaded)
    fn pattern1() {
        let shared_data = Rc::new(vec![1, 2, 3, 4, 5]);
        let clone1 = Rc::clone(&shared_data);
        let clone2 = Rc::clone(&shared_data);
        // Multiple readers, no writers
    }

    // Pattern 2: Mutable shared data (single-threaded)
    fn pattern2() {
        let shared_data = Rc::new(RefCell::new(vec![1, 2, 3]));
        shared_data.borrow_mut().push(4);
        let len = shared_data.borrow().len();
    }

    // Pattern 3: Immutable shared data (multi-threaded)
    fn pattern3() {
        let shared_data = Arc::new(vec![1, 2, 3, 4, 5]);
        let clone = Arc::clone(&shared_data);
        std::thread::spawn(move || {
            println!("{:?}", clone);
        });
    }

    // Pattern 4: Mutable shared data (multi-threaded)
    fn pattern4() {
        let shared_data = Arc::new(Mutex::new(vec![1, 2, 3]));
        let clone = Arc::clone(&shared_data);
        std::thread::spawn(move || {
            clone.lock().unwrap().push(4);
        });
    }
}
```
Common Pitfalls and Solutions
Pitfall 1: Reference Cycles with Rc
```rust
#![allow(unused)]
fn main() {
    use std::rc::Rc;
    use std::cell::RefCell;

    // ❌ This creates a reference cycle and memory leak
    #[derive(Debug)]
    struct BadNode {
        children: RefCell<Vec<Rc<BadNode>>>,
        parent: RefCell<Option<Rc<BadNode>>>, // Strong reference = cycle!
    }

    // ✅ Use Weak for parent references
    #[derive(Debug)]
    struct GoodNode {
        children: RefCell<Vec<Rc<GoodNode>>>,
        parent: RefCell<Option<std::rc::Weak<GoodNode>>>, // Weak reference
    }
}
```
Pitfall 2: RefCell Runtime Panics
```rust
#![allow(unused)]
fn main() {
    use std::cell::RefCell;

    fn dangerous_refcell() {
        let data = RefCell::new(5);
        let _r1 = data.borrow();
        let _r2 = data.borrow_mut(); // ❌ Panics at runtime!
    }

    // ✅ Safe RefCell usage
    fn safe_refcell() {
        let data = RefCell::new(5);
        {
            let r1 = data.borrow();
            println!("Value: {}", r1);
        } // r1 dropped
        {
            let mut r2 = data.borrow_mut();
            *r2 = 10;
        } // r2 dropped
    }
}
```
Pitfall 3: Unnecessary Arc for Single-Threaded Code
```rust
#![allow(unused)]
fn main() {
    // ❌ Unnecessary atomic operations
    use std::sync::Arc;
    fn single_threaded_sharing() {
        let data = Arc::new(vec![1, 2, 3]); // Atomic ref counting overhead
        // ... single-threaded code only
    }

    // ✅ Use Rc for single-threaded sharing
    use std::rc::Rc;
    fn single_threaded_sharing_optimized() {
        let data = Rc::new(vec![1, 2, 3]); // Faster non-atomic ref counting
        // ... single-threaded code only
    }
}
```
Key Takeaways
- Box<T> for single-ownership heap allocation and recursive types
- Rc<T> for shared ownership in single-threaded contexts
- RefCell<T> for interior mutability with runtime borrow checking
- Arc<T> for shared ownership across threads
- Weak<T> to break reference cycles and avoid memory leaks
- Combine smart pointers for complex sharing patterns (e.g., Rc<RefCell<T>>)
- Choose based on threading and mutability needs
Exercises
Exercise 1: Binary Tree with Parent References
Implement a binary tree where nodes can access both children and parents without creating reference cycles:
```rust
use std::rc::{Rc, Weak};
use std::cell::RefCell;

#[derive(Debug)]
struct TreeNode {
    value: i32,
    left: Option<Rc<RefCell<TreeNode>>>,
    right: Option<Rc<RefCell<TreeNode>>>,
    parent: RefCell<Weak<RefCell<TreeNode>>>,
}

impl TreeNode {
    fn new(value: i32) -> Rc<RefCell<Self>> {
        // Implement
        todo!()
    }

    fn add_left_child(node: &Rc<RefCell<TreeNode>>, value: i32) {
        // Implement: Add left child and set its parent reference
        todo!()
    }

    fn add_right_child(node: &Rc<RefCell<TreeNode>>, value: i32) {
        // Implement: Add right child and set its parent reference
        todo!()
    }

    fn get_parent_value(&self) -> Option<i32> {
        // Implement: Get parent's value if it exists
        todo!()
    }

    fn find_root(&self) -> Option<Rc<RefCell<TreeNode>>> {
        // Implement: Traverse up to find root node
        todo!()
    }
}

fn main() {
    let root = TreeNode::new(1);
    TreeNode::add_left_child(&root, 2);
    TreeNode::add_right_child(&root, 3);

    let left_child = root.borrow().left.as_ref().unwrap().clone();
    TreeNode::add_left_child(&left_child, 4);

    // Test parent access
    let grandchild = left_child.borrow().left.as_ref().unwrap().clone();
    println!("Grandchild's parent: {:?}", grandchild.borrow().get_parent_value());

    // Test root finding
    if let Some(found_root) = grandchild.borrow().find_root() {
        println!("Root value: {}", found_root.borrow().value);
    }
}
```
Exercise 2: Thread-Safe Cache
Implement a thread-safe cache using Arc and Mutex:
```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;

struct Cache<K, V> {
    data: Arc<Mutex<HashMap<K, V>>>,
}

impl<K, V> Cache<K, V>
where
    K: Clone + Eq + std::hash::Hash + Send + 'static,
    V: Clone + Send + 'static,
{
    fn new() -> Self {
        // Implement
        todo!()
    }

    fn get(&self, key: &K) -> Option<V> {
        // Implement: Get value from cache
        todo!()
    }

    fn set(&self, key: K, value: V) {
        // Implement: Set value in cache
        todo!()
    }

    fn size(&self) -> usize {
        // Implement: Get cache size
        todo!()
    }
}

impl<K, V> Clone for Cache<K, V> {
    fn clone(&self) -> Self {
        // Implement: Clone should share the same underlying data
        Cache {
            data: Arc::clone(&self.data),
        }
    }
}

fn main() {
    let cache = Cache::new();
    let mut handles = vec![];

    // Spawn multiple threads that use the cache
    for i in 0..5 {
        let cache_clone = cache.clone();
        let handle = thread::spawn(move || {
            // Set some values
            cache_clone.set(format!("key{}", i), i * 10);

            // Get some values
            if let Some(value) = cache_clone.get(&format!("key{}", i)) {
                println!("Thread {}: got value {}", i, value);
            }
        });
        handles.push(handle);
    }

    for handle in handles {
        handle.join().unwrap();
    }

    println!("Final cache size: {}", cache.size());
}
```
Exercise 3: Observer Pattern with Automatic Cleanup
Extend the observer pattern to automatically clean up observers and provide subscription management:
```rust
use std::rc::{Rc, Weak};
use std::cell::RefCell;

trait Observer {
    fn update(&self, data: &str);
    fn id(&self) -> &str;
}

struct Subject {
    observers: RefCell<Vec<Weak<dyn Observer>>>,
}

impl Subject {
    fn new() -> Self {
        // Implement
        todo!()
    }

    fn subscribe(&self, observer: Weak<dyn Observer>) {
        // Implement: Add observer
        todo!()
    }

    fn unsubscribe(&self, observer_id: &str) {
        // Implement: Remove observer by ID
        todo!()
    }

    fn notify(&self, data: &str) {
        // Implement: Notify all observers, cleaning up dead ones
        todo!()
    }

    fn observer_count(&self) -> usize {
        // Implement: Count living observers
        todo!()
    }
}

struct ConcreteObserver {
    id: String,
}

impl ConcreteObserver {
    fn new(id: String) -> Rc<Self> {
        Rc::new(ConcreteObserver { id })
    }
}

impl Observer for ConcreteObserver {
    fn update(&self, data: &str) {
        println!("Observer {} received: {}", self.id, data);
    }

    fn id(&self) -> &str {
        &self.id
    }
}

fn main() {
    let subject = Subject::new();

    let observer1 = ConcreteObserver::new("obs1".to_string());
    let observer2 = ConcreteObserver::new("obs2".to_string());

    subject.subscribe(Rc::downgrade(&observer1));
    subject.subscribe(Rc::downgrade(&observer2));

    subject.notify("First message");
    println!("Observer count: {}", subject.observer_count());

    // Drop one observer
    drop(observer1);

    subject.notify("Second message");
    println!("Observer count after cleanup: {}", subject.observer_count());

    subject.unsubscribe("obs2");
    subject.notify("Third message");
    println!("Final observer count: {}", subject.observer_count());
}
```
Additional Resources
- Rust Container Cheat Sheet by Raph Levien - An excellent visual reference for Rust containers and smart pointers, including Vec, String, Box, Rc, Arc, RefCell, and more. Perfect for quick lookups and comparisons.
- The Rust Book - Smart Pointers
- Rust by Example - Smart Pointers
- RefCell and Interior Mutability
Next Up: In Day 2, we’ll explore collections, traits, and generics - the tools that make Rust code both safe and expressive.
Chapter 6: Collections Beyond Vec
HashMap and HashSet for Real-World Applications
Learning Objectives
By the end of this chapter, you’ll be able to:
- Use HashMap<K, V> efficiently for key-value storage
- Apply HashSet<T> for unique value collections
- Master the Entry API for efficient map operations
- Choose between HashMap, BTreeMap, and other collections
- Work with custom types as keys
Quick Collection Reference
| Collection | Use When You Need | Performance |
|---|---|---|
| Vec<T> | Ordered sequence, index access | O(1) index, O(n) search |
| HashMap<K,V> | Fast key-value lookups | O(1) average (all operations) |
| HashSet<T> | Unique values, fast membership test | O(1) average (all operations) |
| BTreeMap<K,V> | Sorted keys, range queries | O(log n) (all operations) |
HashMap<K, V>: The Swiss Army Knife
Basic Operations
```rust
#![allow(unused)]
fn main() {
    use std::collections::HashMap;

    fn hashmap_basics() {
        // Creation
        let mut scores = HashMap::new();
        scores.insert("Alice", 100);
        scores.insert("Bob", 85);

        // From iterator
        let teams = vec!["Blue", "Red"];
        let points = vec![10, 50];
        let team_scores: HashMap<_, _> = teams.into_iter()
            .zip(points.into_iter())
            .collect();

        // Accessing values
        if let Some(score) = scores.get("Alice") {
            println!("Alice's score: {}", score);
        }

        // Check existence
        if scores.contains_key("Alice") {
            println!("Alice is in the map");
        }
    }
}
```
The Entry API: Powerful and Efficient
```rust
#![allow(unused)]
fn main() {
    use std::collections::HashMap;

    fn entry_api_examples() {
        let mut word_count = HashMap::new();
        let text = "the quick brown fox jumps over the lazy dog the";

        // Count words efficiently
        for word in text.split_whitespace() {
            *word_count.entry(word).or_insert(0) += 1;
        }

        // Insert if absent
        let mut cache = HashMap::new();
        cache.entry("key").or_insert_with(|| {
            // Expensive computation only runs if key doesn't exist
            expensive_calculation()
        });

        // Modify or insert
        let mut scores = HashMap::new();
        scores.entry("Alice")
            .and_modify(|score| *score += 10)
            .or_insert(100);
    }

    fn expensive_calculation() -> String {
        "computed_value".to_string()
    }
}
```
HashMap with Custom Keys
```rust
#![allow(unused)]
fn main() {
    use std::collections::HashMap;

    #[derive(Debug, Eq, PartialEq, Hash)]
    struct UserId(u64);

    #[derive(Debug, Eq, PartialEq, Hash)]
    struct CompositeKey {
        category: String,
        id: u32,
    }

    fn custom_keys() {
        let mut user_data = HashMap::new();
        user_data.insert(UserId(1001), "Alice");
        user_data.insert(UserId(1002), "Bob");

        let mut composite_map = HashMap::new();
        composite_map.insert(
            CompositeKey { category: "user".to_string(), id: 1 },
            "User One",
        );

        // Access with custom key
        if let Some(name) = user_data.get(&UserId(1001)) {
            println!("Found user: {}", name);
        }
    }
}
```
HashSet: Unique Value Collections
Basic Operations and Set Theory
```rust
#![allow(unused)]
fn main() {
    use std::collections::HashSet;

    fn hashset_operations() {
        // Create and populate
        let mut set1: HashSet<i32> = vec![1, 2, 3, 2, 4].into_iter().collect();
        let set2: HashSet<i32> = vec![3, 4, 5, 6].into_iter().collect();

        // Set operations
        let union: HashSet<_> = set1.union(&set2).cloned().collect();
        let intersection: HashSet<_> = set1.intersection(&set2).cloned().collect();
        let difference: HashSet<_> = set1.difference(&set2).cloned().collect();

        println!("Union: {:?}", union);               // {1, 2, 3, 4, 5, 6}
        println!("Intersection: {:?}", intersection); // {3, 4}
        println!("Difference: {:?}", difference);     // {1, 2}

        // Check membership
        if set1.contains(&3) {
            println!("Set contains 3");
        }

        // Insert returns bool indicating if value was new
        if set1.insert(10) {
            println!("10 was added (wasn't present before)");
        }
    }

    fn practical_hashset_use() {
        // Track visited items
        let mut visited = HashSet::new();
        let items = vec!["home", "about", "home", "contact", "about"];

        for item in items {
            if visited.insert(item) {
                println!("First visit to: {}", item);
            } else {
                println!("Already visited: {}", item);
            }
        }
    }
}
```
When to Use BTreeMap/BTreeSet
Use BTreeMap/BTreeSet when you need:
- Keys/values in sorted order
- Range queries (map.range("a".."c"))
- Consistent iteration order
- No hash function available for keys
```rust
#![allow(unused)]
fn main() {
    use std::collections::BTreeMap;

    // Example: Leaderboard that needs sorted scores
    let mut leaderboard = BTreeMap::new();
    leaderboard.insert(95, "Alice");
    leaderboard.insert(87, "Bob");
    leaderboard.insert(92, "Charlie");

    // Iterate in score order (ascending)
    for (score, name) in &leaderboard {
        println!("{}: {}", name, score);
    }

    // Get top 3 scores
    let top_scores: Vec<_> = leaderboard
        .iter()
        .rev() // Reverse for descending order
        .take(3)
        .collect();
}
```
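The range queries listed above deserve a concrete look, since they are the main thing a HashMap cannot do. A small sketch (the scores_between helper is ours, not from the course material):

```rust
use std::collections::BTreeMap;

// Returns all scores within [lo, hi], in ascending order.
// range(lo..=hi) walks only the keys inside the bounds.
fn scores_between(map: &BTreeMap<i32, &str>, lo: i32, hi: i32) -> Vec<i32> {
    map.range(lo..=hi).map(|(score, _)| *score).collect()
}

fn main() {
    let mut leaderboard = BTreeMap::new();
    leaderboard.insert(95, "Alice");
    leaderboard.insert(87, "Bob");
    leaderboard.insert(92, "Charlie");

    // Only the scores in 90..=100, already sorted
    assert_eq!(scores_between(&leaderboard, 90, 100), vec![92, 95]);
    println!("{:?}", scores_between(&leaderboard, 90, 100));
}
```

The same call on a HashMap would require scanning every entry; the B-tree finds the lower bound in O(log n) and then iterates only the matching keys.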
Common Pitfalls
HashMap Key Requirements
```rust
#![allow(unused)]
fn main() {
    use std::collections::HashMap;

    // ❌ f64 doesn't implement Eq (NaN issues)
    // let mut map: HashMap<f64, String> = HashMap::new();

    // ✅ Use an ordered wrapper or integer representation
    #[derive(Debug, PartialEq, Eq, Hash)]
    struct OrderedFloat(i64); // Store as integer bit representation

    impl From<f64> for OrderedFloat {
        fn from(f: f64) -> Self {
            OrderedFloat(f.to_bits() as i64)
        }
    }
}
```
Borrowing During Iteration
```rust
#![allow(unused)]
fn main() {
    use std::collections::HashMap;

    let mut map: HashMap<String, i32> = HashMap::new();
    map.insert("a".to_string(), 15);
    map.insert("b".to_string(), 5);
    let threshold = 10;

    // ❌ Can't modify while iterating
    // for (key, value) in &map {
    //     map.insert(new_key, new_value); // Error!
    // }

    // ✅ Collect changes first, apply after
    let changes: Vec<_> = map.iter()
        .filter(|(_, &v)| v > threshold)
        .map(|(k, v)| (format!("new_{}", k), v * 2))
        .collect();

    for (key, value) in changes {
        map.insert(key, value);
    }
}
```
Exercise: Student Grade Management System
Create a system that manages student grades using HashMap and HashSet to practice collections operations and the Entry API:
```rust
use std::collections::{HashMap, HashSet};

#[derive(Debug)]
struct GradeBook {
    // Student name -> HashMap of (subject -> grade)
    grades: HashMap<String, HashMap<String, f64>>,
    // Set of all subjects offered
    subjects: HashSet<String>,
}

impl GradeBook {
    fn new() -> Self {
        GradeBook {
            grades: HashMap::new(),
            subjects: HashSet::new(),
        }
    }

    fn add_subject(&mut self, subject: String) {
        // TODO: Add subject to the subjects set
        todo!()
    }

    fn add_grade(&mut self, student: String, subject: String, grade: f64) {
        // TODO: Add a grade for a student in a subject
        // Hints:
        // 1. Add subject to subjects set
        // 2. Use entry() API to get or create the student's grade map
        // 3. Insert the grade for the subject
        todo!()
    }

    fn get_student_average(&self, student: &str) -> Option<f64> {
        // TODO: Calculate average grade for a student across all their subjects
        // Return None if student doesn't exist
        // Hint: Use .values() and iterator methods
        todo!()
    }

    fn get_subject_average(&self, subject: &str) -> Option<f64> {
        // TODO: Calculate average grade for a subject across all students
        // Return None if no students have grades in this subject
        todo!()
    }

    fn get_students_in_subject(&self, subject: &str) -> Vec<&String> {
        // TODO: Return list of students who have a grade in the given subject
        // Hint: Filter students who have this subject in their grade map
        todo!()
    }

    fn get_top_students(&self, n: usize) -> Vec<(String, f64)> {
        // TODO: Return top N students by average grade
        // Format: Vec<(student_name, average_grade)>
        // Hint: Calculate averages, collect into Vec, sort, and take top N
        todo!()
    }

    fn remove_student(&mut self, student: &str) -> bool {
        // TODO: Remove a student and all their grades
        // Return true if student existed, false otherwise
        todo!()
    }

    fn list_subjects(&self) -> Vec<&String> {
        // TODO: Return all subjects as a sorted vector
        todo!()
    }
}

fn main() {
    let mut gradebook = GradeBook::new();

    // Add subjects
    gradebook.add_subject("Math".to_string());
    gradebook.add_subject("English".to_string());
    gradebook.add_subject("Science".to_string());

    // Add grades for students
    gradebook.add_grade("Alice".to_string(), "Math".to_string(), 95.0);
    gradebook.add_grade("Alice".to_string(), "English".to_string(), 87.0);
    gradebook.add_grade("Bob".to_string(), "Math".to_string(), 82.0);
    gradebook.add_grade("Bob".to_string(), "Science".to_string(), 91.0);
    gradebook.add_grade("Charlie".to_string(), "English".to_string(), 78.0);
    gradebook.add_grade("Charlie".to_string(), "Science".to_string(), 85.0);

    // Test the methods
    if let Some(avg) = gradebook.get_student_average("Alice") {
        println!("Alice's average: {:.2}", avg);
    }

    if let Some(avg) = gradebook.get_subject_average("Math") {
        println!("Math class average: {:.2}", avg);
    }

    let math_students = gradebook.get_students_in_subject("Math");
    println!("Students in Math: {:?}", math_students);

    let top_students = gradebook.get_top_students(2);
    println!("Top 2 students: {:?}", top_students);

    println!("All subjects: {:?}", gradebook.list_subjects());
}
```
Implementation Hints:
1. add_grade() method:
   - Use self.grades.entry(student).or_insert_with(HashMap::new)
   - Then insert the grade: .insert(subject, grade)
2. get_student_average():
   - Use self.grades.get(student)? to get the student's grades
   - Use .values().sum::<f64>() / values.len() as f64
3. get_subject_average():
   - Iterate through all students: self.grades.iter()
   - Filter students who have this subject: filter_map(|(_, grades)| grades.get(subject))
   - Calculate average from the filtered grades
4. get_top_students():
   - Use map() to convert students to (name, average) pairs
   - Use collect::<Vec<_>>() and sort_by() with float comparison
   - Use take(n) to get top N
What you’ll learn:
- HashMap’s Entry API for efficient insertions
- HashSet for tracking unique values
- Nested HashMap structures
- Iterator methods for data processing
- Working with Option types from HashMap lookups
Key Takeaways
- HashMap<K,V> for fast key-value lookups with the Entry API for efficiency
- HashSet<T> for unique values and set operations
- BTreeMap/BTreeSet when you need sorted data or range queries
- Custom keys must implement Hash + Eq (or Ord for BTree*)
- Can’t modify while iterating - collect changes first
- Entry API prevents redundant lookups and improves performance
Next Up: In Chapter 7, we’ll explore traits - Rust’s powerful system for defining shared behavior and enabling polymorphism without inheritance.
Chapter 7: Traits - Shared Behavior and Polymorphism
Defining, Implementing, and Using Traits in Rust
Learning Objectives
By the end of this chapter, you’ll be able to:
- Define custom traits and implement them for various types
- Use trait bounds to constrain generic types
- Work with trait objects for dynamic dispatch
- Understand the difference between static and dynamic dispatch
- Apply common standard library traits effectively
- Use associated types and default implementations
- Handle trait coherence and orphan rules
What Are Traits?
Traits define shared behavior that types can implement. They’re similar to interfaces in C#/Java or concepts in C++20, but with some unique features.
Traits vs Other Languages
| Concept | C++ | C#/Java | Rust |
|---|---|---|---|
| Interface | Pure virtual class | Interface | Trait |
| Multiple inheritance | Yes (complex) | No (interfaces only) | Yes (traits) |
| Default implementations | No | Yes (C# 8+, Java 8+) | Yes |
| Associated types | No | No | Yes |
| Static dispatch | Templates | Generics | Generics |
| Dynamic dispatch | Virtual functions | Virtual methods | Trait objects |
Basic Trait Definition
```rust
#![allow(unused)]
fn main() {
    // Define a trait
    trait Drawable {
        fn draw(&self);
        fn area(&self) -> f64;

        // Default implementation
        fn description(&self) -> String {
            format!("A drawable shape with area {}", self.area())
        }
    }

    // Implement the trait for different types
    struct Circle {
        radius: f64,
    }

    struct Rectangle {
        width: f64,
        height: f64,
    }

    impl Drawable for Circle {
        fn draw(&self) {
            println!("Drawing a circle with radius {}", self.radius);
        }

        fn area(&self) -> f64 {
            std::f64::consts::PI * self.radius * self.radius
        }
    }

    impl Drawable for Rectangle {
        fn draw(&self) {
            println!("Drawing a rectangle {}x{}", self.width, self.height);
        }

        fn area(&self) -> f64 {
            self.width * self.height
        }

        // Override default implementation
        fn description(&self) -> String {
            format!("A rectangle with dimensions {}x{}", self.width, self.height)
        }
    }
}
```
Standard Library Traits You Need to Know
Debug and Display
```rust
use std::fmt;

#[derive(Debug)] // Automatic Debug implementation
struct Point {
    x: f64,
    y: f64,
}

// Manual Display implementation
impl fmt::Display for Point {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "({}, {})", self.x, self.y)
    }
}

fn main() {
    let p = Point { x: 1.0, y: 2.0 };
    println!("{:?}", p); // Debug: Point { x: 1.0, y: 2.0 }
    println!("{}", p);   // Display: (1, 2) - Display for f64 drops the trailing .0
}
```
Clone and Copy
```rust
#[derive(Clone, Copy, Debug)]
struct SmallData {
    value: i32,
}

#[derive(Clone, Debug)]
struct LargeData {
    data: Vec<i32>,
}

fn main() {
    let small = SmallData { value: 42 };
    let small_copy = small;  // Copy happens automatically
    println!("{:?}", small); // Still usable after copy

    let large = LargeData { data: vec![1, 2, 3] };
    let large_clone = large.clone(); // Explicit clone needed
    // large is still usable here; large_clone owns an independent copy
}
```
Generic Functions with Trait Bounds
Basic Trait Bounds
```rust
#![allow(unused)]
fn main() {
    use std::fmt::Display;

    // Function that works with any type implementing Display
    fn print_info<T: Display>(item: T) {
        println!("Info: {}", item);
    }

    // Multiple trait bounds
    fn print_and_compare<T: Display + PartialEq>(item1: T, item2: T) {
        println!("Item 1: {}", item1);
        println!("Item 2: {}", item2);
        println!("Are equal: {}", item1 == item2);
    }

    // Where clause for complex bounds
    fn complex_function<T, U>(t: T, u: U) -> String
    where
        T: Display + Clone,
        U: std::fmt::Debug + Default,
    {
        format!("{} and {:?}", t, u)
    }
}
```
Trait Objects and Dynamic Dispatch
Creating Trait Objects
```rust
trait Animal {
    fn make_sound(&self);
    fn name(&self) -> &str;
}

struct Dog { name: String }
struct Cat { name: String }

impl Animal for Dog {
    fn make_sound(&self) { println!("Woof!"); }
    fn name(&self) -> &str { &self.name }
}

impl Animal for Cat {
    fn make_sound(&self) { println!("Meow!"); }
    fn name(&self) -> &str { &self.name }
}

// Using trait objects
fn main() {
    // Vec of trait objects
    let animals: Vec<Box<dyn Animal>> = vec![
        Box::new(Dog { name: "Buddy".to_string() }),
        Box::new(Cat { name: "Whiskers".to_string() }),
    ];

    for animal in &animals {
        println!("{} says:", animal.name());
        animal.make_sound();
    }

    // Function parameter as trait object
    pet_animal(&Dog { name: "Rex".to_string() });
}

fn pet_animal(animal: &dyn Animal) {
    println!("Petting {}", animal.name());
    animal.make_sound();
}
```
Associated Types
Basic Associated Types
```rust
#![allow(unused)]
fn main() {
    trait Iterator {
        type Item; // Associated type

        fn next(&mut self) -> Option<Self::Item>;
    }

    struct Counter {
        current: u32,
        max: u32,
    }

    impl Counter {
        fn new(max: u32) -> Counter {
            Counter { current: 0, max }
        }
    }

    impl Iterator for Counter {
        type Item = u32; // Specify the associated type

        fn next(&mut self) -> Option<Self::Item> {
            if self.current < self.max {
                let current = self.current;
                self.current += 1;
                Some(current)
            } else {
                None
            }
        }
    }
}
```
Operator Overloading with Traits
Implementing Standard Operators
```rust
use std::ops::{Add, Mul};

#[derive(Debug, Clone, Copy)]
struct Point {
    x: f64,
    y: f64,
}

// Implement addition for Point
impl Add for Point {
    type Output = Point;

    fn add(self, other: Point) -> Point {
        Point {
            x: self.x + other.x,
            y: self.y + other.y,
        }
    }
}

// Implement scalar multiplication
impl Mul<f64> for Point {
    type Output = Point;

    fn mul(self, scalar: f64) -> Point {
        Point {
            x: self.x * scalar,
            y: self.y * scalar,
        }
    }
}

fn main() {
    let p1 = Point { x: 1.0, y: 2.0 };
    let p2 = Point { x: 3.0, y: 4.0 };

    let p3 = p1 + p2;  // Uses Add trait
    let p4 = p1 * 2.5; // Uses Mul trait

    println!("p1 + p2 = {:?}", p3);
    println!("p1 * 2.5 = {:?}", p4);
}
```
Supertraits and Trait Inheritance
```rust
#![allow(unused)]
fn main() {
    use std::fmt::Debug;

    // Supertrait example
    trait Person {
        fn name(&self) -> &str;
    }

    // Student requires Person
    trait Student: Person {
        fn university(&self) -> &str;
    }

    // Must implement both traits
    #[derive(Debug)]
    struct GradStudent {
        name: String,
        uni: String,
    }

    impl Person for GradStudent {
        fn name(&self) -> &str { &self.name }
    }

    impl Student for GradStudent {
        fn university(&self) -> &str { &self.uni }
    }

    // Function requiring multiple traits
    fn print_student_info<T: Student + Debug>(student: &T) {
        println!("Name: {}", student.name());
        println!("University: {}", student.university());
        println!("Debug: {:?}", student);
    }
}
```
Common Trait Patterns
The From and Into Traits
```rust
use std::convert::From;

#[derive(Debug)]
struct Millimeters(u32);

#[derive(Debug)]
struct Meters(f64);

impl From<Meters> for Millimeters {
    fn from(m: Meters) -> Self {
        Millimeters((m.0 * 1000.0) as u32)
    }
}

// Into is automatically implemented!
fn main() {
    let m = Meters(1.5);
    let mm: Millimeters = m.into(); // Uses Into (automatic from From)
    println!("{:?}", mm); // Millimeters(1500)

    let m2 = Meters(2.0);
    let mm2 = Millimeters::from(m2); // Direct From usage
    println!("{:?}", mm2); // Millimeters(2000)
}
```
Exercise: Trait Objects with Multiple Behaviors
Build a plugin system using trait objects:
trait Plugin { fn name(&self) -> &str; fn execute(&self); } trait Configurable { fn configure(&mut self, config: &str); } // Create different plugin types struct LogPlugin { name: String, level: String, } struct MetricsPlugin { name: String, interval: u32, } // TODO: Implement Plugin and Configurable for both types struct PluginManager { plugins: Vec<Box<dyn Plugin>>, } impl PluginManager { fn new() -> Self { PluginManager { plugins: Vec::new() } } fn register(&mut self, plugin: Box<dyn Plugin>) { // TODO: Add plugin to the list } fn run_all(&self) { // TODO: Execute all plugins } } fn main() { let mut manager = PluginManager::new(); // TODO: Create and register plugins // manager.register(Box::new(...)); manager.run_all(); }
Key Takeaways
- Traits define shared behavior across different types
- Static dispatch (generics) is faster but increases code size
- Dynamic dispatch (trait objects) enables runtime polymorphism
- Associated types provide cleaner APIs than generic parameters
- Operator overloading is done through standard traits
- Supertraits allow building trait hierarchies
- From/Into traits enable type conversions
- Default implementations reduce boilerplate code
Next Up: In Chapter 8, we’ll explore generics - Rust’s powerful system for writing flexible, reusable code with type parameters.
Chapter 8: Generics & Type Safety
Learning Objectives
- Master generic functions, structs, and methods
- Understand trait bounds and where clauses
- Learn const generics for compile-time parameters
- Apply type-driven design patterns
- Compare with C++ templates and .NET generics
Introduction
Generics allow you to write flexible, reusable code that works with multiple types while maintaining type safety. Coming from C++ or .NET, you’ll find Rust’s generics familiar but more constrained—in a good way.
Generic Functions
Basic Generic Functions
// Generic function that works with any type T fn swap<T>(a: &mut T, b: &mut T) { std::mem::swap(a, b); } // Multiple generic parameters fn pair<T, U>(first: T, second: U) -> (T, U) { (first, second) } // Usage fn main() { let mut x = 5; let mut y = 10; swap(&mut x, &mut y); println!("x: {}, y: {}", x, y); // x: 10, y: 5 let p = pair("hello", 42); println!("{:?}", p); // ("hello", 42) }
Comparison with C++ and .NET
| Feature | Rust | C++ Templates | .NET Generics |
|---|---|---|---|
| Compilation | Monomorphization | Template instantiation | Runtime generics |
| Type checking | At definition | At instantiation | At definition |
| Constraints | Trait bounds | Concepts (C++20) | Where clauses |
| Code bloat | Yes (like C++) | Yes | No |
| Performance | Zero-cost | Zero-cost | Small overhead |
Generic Structs
// Generic struct struct Point<T> { x: T, y: T, } // Different types for each field struct Pair<T, U> { first: T, second: U, } // Implementation for generic struct impl<T> Point<T> { fn new(x: T, y: T) -> Self { Point { x, y } } } // Implementation for specific type impl Point<f64> { fn distance_from_origin(&self) -> f64 { (self.x.powi(2) + self.y.powi(2)).sqrt() } } fn main() { let integer_point = Point::new(5, 10); let float_point = Point::new(1.0, 4.0); // Only available for Point<f64> println!("Distance: {}", float_point.distance_from_origin()); }
Trait Bounds
Trait bounds specify what functionality a generic type must have.
#![allow(unused)]
fn main() {
use std::fmt::{Debug, Display};

// T must implement Display
fn print_it<T: Display>(value: T) {
    println!("{}", value);
}

// Multiple bounds with +
fn print_and_clone<T: Display + Clone>(value: T) -> T {
    println!("{}", value);
    value.clone()
}

// Trait bounds on structs
struct Wrapper<T: Display> {
    value: T,
}

// Complex bounds
fn complex_function<T, U>(t: T, u: U) -> String
where
    T: Display + Clone,
    U: Display + Debug,
{
    format!("{} and {:?}", t.clone(), u)
}
}
Where Clauses
Where clauses make complex bounds more readable:
#![allow(unused)]
fn main() {
use std::fmt::{Debug, Display};

// Instead of this...
fn ugly<T: Display + Clone, U: Debug + Display>(t: T, u: U) {
    // ...
}

// Write this...
fn pretty<T, U>(t: T, u: U)
where
    T: Display + Clone,
    U: Debug + Display,
{
    // Much cleaner!
}

// Particularly useful with associated types
fn process<I>(iter: I)
where
    I: Iterator,
    I::Item: Display,
{
    for item in iter {
        println!("{}", item);
    }
}
}
Generic Enums
The most common generic enums you’ll use:
#![allow(unused)] fn main() { // Option<T> - Rust's null replacement enum Option<T> { Some(T), None, } // Result<T, E> - For error handling enum Result<T, E> { Ok(T), Err(E), } // Custom generic enum enum BinaryTree<T> { Empty, Node { value: T, left: Box<BinaryTree<T>>, right: Box<BinaryTree<T>>, }, } impl<T> BinaryTree<T> { fn new() -> Self { BinaryTree::Empty } fn insert(&mut self, value: T) where T: Ord, { // Implementation here } } }
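The elided `insert` above could be filled in along these lines. This is a minimal sketch of an unbalanced binary search tree (smaller values go left), with a hypothetical `contains` method added so the behavior is observable:

```rust
enum BinaryTree<T> {
    Empty,
    Node {
        value: T,
        left: Box<BinaryTree<T>>,
        right: Box<BinaryTree<T>>,
    },
}

impl<T: Ord> BinaryTree<T> {
    fn new() -> Self {
        BinaryTree::Empty
    }

    fn insert(&mut self, value: T) {
        match self {
            // Replace the empty slot with a new leaf node
            BinaryTree::Empty => {
                *self = BinaryTree::Node {
                    value,
                    left: Box::new(BinaryTree::Empty),
                    right: Box::new(BinaryTree::Empty),
                };
            }
            // Recurse into the correct subtree
            BinaryTree::Node { value: v, left, right } => {
                if value < *v {
                    left.insert(value);
                } else {
                    right.insert(value);
                }
            }
        }
    }

    fn contains(&self, value: &T) -> bool {
        match self {
            BinaryTree::Empty => false,
            BinaryTree::Node { value: v, left, right } => {
                if value == v {
                    true
                } else if value < v {
                    left.contains(value)
                } else {
                    right.contains(value)
                }
            }
        }
    }
}

fn main() {
    let mut tree = BinaryTree::new();
    for n in [5, 2, 8] {
        tree.insert(n);
    }
    assert!(tree.contains(&2));
    assert!(!tree.contains(&7));
}
```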
Const Generics
Const generics allow you to parameterize types with constant values:
// Array wrapper with compile-time size struct ArrayWrapper<T, const N: usize> { data: [T; N], } impl<T, const N: usize> ArrayWrapper<T, N> { fn new(value: T) -> Self where T: Copy, { ArrayWrapper { data: [value; N], } } } // Matrix type with compile-time dimensions struct Matrix<T, const ROWS: usize, const COLS: usize> { data: [[T; COLS]; ROWS], } fn main() { let arr: ArrayWrapper<i32, 5> = ArrayWrapper::new(0); let matrix: Matrix<f64, 3, 4> = Matrix { data: [[0.0; 4]; 3], }; }
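Const generics also let functions enforce matching dimensions at compile time. A short sketch (the `dot` helper is illustrative, not part of the course code):

```rust
// Dot product over fixed-size arrays. N is part of the type, so passing
// arrays of different lengths is a compile-time error, not a runtime panic.
fn dot<const N: usize>(a: [f64; N], b: [f64; N]) -> f64 {
    let mut sum = 0.0;
    for i in 0..N {
        sum += a[i] * b[i];
    }
    sum
}

fn main() {
    let a = [1.0, 2.0, 3.0];
    let b = [4.0, 5.0, 6.0];
    println!("{}", dot(a, b)); // 32

    // let c = [1.0, 2.0];
    // dot(a, c); // compile error: expected [f64; 3], found [f64; 2]
}
```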
Type Aliases and Newtype Pattern
// Type alias - just a synonym type Kilometers = i32; type Result<T> = std::result::Result<T, std::io::Error>; // Newtype pattern - creates a distinct type struct Meters(f64); struct Seconds(f64); impl Meters { fn to_feet(&self) -> f64 { self.0 * 3.28084 } } // Prevents mixing units fn calculate_speed(distance: Meters, time: Seconds) -> f64 { distance.0 / time.0 } fn main() { let distance = Meters(100.0); let time = Seconds(9.58); // Type safety prevents this: // let wrong = calculate_speed(time, distance); // Error! let speed = calculate_speed(distance, time); println!("Speed: {} m/s", speed); }
Phantom Types
Phantom types provide compile-time guarantees without runtime cost:
use std::marker::PhantomData; // States for a type-safe builder struct Locked; struct Unlocked; struct Door<State> { name: String, _state: PhantomData<State>, } impl Door<Locked> { fn new(name: String) -> Self { Door { name, _state: PhantomData, } } fn unlock(self) -> Door<Unlocked> { Door { name: self.name, _state: PhantomData, } } } impl Door<Unlocked> { fn open(&self) { println!("Opening door: {}", self.name); } fn lock(self) -> Door<Locked> { Door { name: self.name, _state: PhantomData, } } } fn main() { let door = Door::<Locked>::new("Front".to_string()); // door.open(); // Error: method not found let door = door.unlock(); door.open(); // OK }
Advanced Pattern: Type-Driven Design
#![allow(unused)]
fn main() {
use std::marker::PhantomData;

// Email validation at compile time
struct Unvalidated;
struct Validated;

struct Email<State = Unvalidated> {
    value: String,
    _state: PhantomData<State>,
}

impl Email<Unvalidated> {
    fn new(value: String) -> Self {
        Email {
            value,
            _state: PhantomData,
        }
    }

    fn validate(self) -> Result<Email<Validated>, String> {
        if self.value.contains('@') {
            Ok(Email {
                value: self.value,
                _state: PhantomData,
            })
        } else {
            Err("Invalid email".to_string())
        }
    }
}

impl Email<Validated> {
    fn send(&self) {
        println!("Sending email to: {}", self.value);
    }
}

// Function that only accepts validated emails
fn send_newsletter(email: &Email<Validated>) {
    email.send();
}
}
Common Pitfalls
1. Over-constraining Generics
#![allow(unused)]
fn main() {
use std::fmt::Display;

// Bad: unnecessary Clone bound
fn bad<T: Clone + Display>(value: &T) {
    println!("{}", value); // Clone not needed!
}

// Good: only required bounds
fn good<T: Display>(value: &T) {
    println!("{}", value);
}
}
2. Missing Lifetime Parameters
#![allow(unused)] fn main() { // Won't compile // struct RefHolder<T> { // value: &T, // } // Correct struct RefHolder<'a, T> { value: &'a T, } }
3. Monomorphization Bloat
#![allow(unused)]
fn main() {
use std::fmt::Display;

// Each T creates a new function copy
fn generic<T>(value: T) -> T {
    value
}

// Consider using trait objects for large functions
fn with_trait_object(value: &dyn Display) {
    println!("{}", value);
}
}
Exercise: Generic Priority Queue with Constraints
Create a priority queue system that demonstrates multiple generic programming concepts:
use std::fmt::{Debug, Display}; use std::cmp::Ord; use std::marker::PhantomData; // Part 1: Basic generic queue with trait bounds #[derive(Debug)] struct PriorityQueue<T> where T: Ord + Debug, { items: Vec<T>, } impl<T> PriorityQueue<T> where T: Ord + Debug, { fn new() -> Self { // TODO: Create a new empty priority queue todo!() } fn enqueue(&mut self, item: T) { // TODO: Add item and maintain sorted order (highest priority first) // Hint: Use Vec::push() then Vec::sort() todo!() } fn dequeue(&mut self) -> Option<T> { // TODO: Remove and return the highest priority item // Hint: Use Vec::pop() since we keep items sorted todo!() } fn peek(&self) -> Option<&T> { // TODO: Return reference to highest priority item without removing it todo!() } fn len(&self) -> usize { self.items.len() } fn is_empty(&self) -> bool { self.items.is_empty() } } // Part 2: Generic trait for items that can be prioritized trait Prioritized { type Priority: Ord; fn priority(&self) -> Self::Priority; } // Part 3: Advanced queue that works with any Prioritized type struct AdvancedQueue<T> where T: Prioritized + Debug, { items: Vec<T>, } impl<T> AdvancedQueue<T> where T: Prioritized + Debug, { fn new() -> Self { AdvancedQueue { items: Vec::new() } } fn enqueue(&mut self, item: T) { // TODO: Insert item in correct position based on priority // Use binary search for efficient insertion todo!() } fn dequeue(&mut self) -> Option<T> { // TODO: Remove highest priority item todo!() } } // Part 4: Example types implementing Prioritized #[derive(Debug, Eq, PartialEq)] struct Task { name: String, urgency: u32, } impl Prioritized for Task { type Priority = u32; fn priority(&self) -> Self::Priority { // TODO: Return the urgency level todo!() } } impl Ord for Task { fn cmp(&self, other: &Self) -> std::cmp::Ordering { // TODO: Compare based on urgency (higher urgency = higher priority) todo!() } } impl PartialOrd for Task { fn partial_cmp(&self, other: &Self) -> Option<std::cmp::Ordering> { 
Some(self.cmp(other)) } } // Part 5: Generic function with multiple trait bounds fn process_queue<T, Q>(queue: &mut Q, max_items: usize) -> Vec<T> where T: Debug + Clone, Q: QueueOperations<T>, { // TODO: Process up to max_items from the queue // Return a vector of processed items todo!() } // Part 6: Trait for queue operations (demonstrates trait design) trait QueueOperations<T> { fn enqueue(&mut self, item: T); fn dequeue(&mut self) -> Option<T>; fn len(&self) -> usize; } // TODO: Implement QueueOperations for PriorityQueue<T> fn main() { // Test basic priority queue with numbers let mut num_queue = PriorityQueue::new(); num_queue.enqueue(5); num_queue.enqueue(1); num_queue.enqueue(10); num_queue.enqueue(3); println!("Number queue:"); while let Some(num) = num_queue.dequeue() { println!("Processing: {}", num); } // Test with custom Task type let mut task_queue = PriorityQueue::new(); task_queue.enqueue(Task { name: "Low".to_string(), urgency: 1 }); task_queue.enqueue(Task { name: "High".to_string(), urgency: 5 }); task_queue.enqueue(Task { name: "Medium".to_string(), urgency: 3 }); println!("\nTask queue:"); while let Some(task) = task_queue.dequeue() { println!("Processing: {:?}", task); } // Test advanced queue with Prioritized trait let mut advanced_queue = AdvancedQueue::new(); advanced_queue.enqueue(Task { name: "First".to_string(), urgency: 2 }); advanced_queue.enqueue(Task { name: "Second".to_string(), urgency: 4 }); println!("\nAdvanced queue:"); while let Some(task) = advanced_queue.dequeue() { println!("Processing: {:?}", task); } }
Implementation Guidelines:
- `PriorityQueue` methods:
  - `new()`: Return `PriorityQueue { items: Vec::new() }`
  - `enqueue()`: Push item then sort with `self.items.sort()`
  - `dequeue()`: Use `self.items.pop()` (gets highest after sorting)
  - `peek()`: Use `self.items.last()`
- `Task::priority()`:
  - Return `self.urgency`
- `Task::cmp()`:
  - Use `self.urgency.cmp(&other.urgency)`
- `AdvancedQueue::enqueue()`:
  - Use `binary_search_by_key()` to find the insertion point
  - Use `insert()` to maintain sorted order
- `QueueOperations` trait implementation:
  - Implement for `PriorityQueue<T>` by delegating to existing methods
What this exercise teaches:
- Trait bounds (`Ord + Debug`) restrict generic types
- Associated types in traits (`Priority`)
- Complex where clauses for readable constraints
- Generic trait implementation with multiple bounds
- Real-world generic patterns beyond simple containers
- Trait design for abstraction over different implementations
Key Takeaways
✅ Generics provide type safety without code duplication - Write once, use with many types
✅ Trait bounds specify required functionality - More explicit than C++ templates
✅ Monomorphization means zero runtime cost - Like C++ templates, unlike .NET generics
✅ Const generics enable compile-time computations - Arrays and matrices with known sizes
✅ Phantom types provide compile-time guarantees - State machines in the type system
✅ Type-driven design prevents bugs at compile time - Invalid states are unrepresentable
Next: Chapter 9: Enums & Pattern Matching
Chapter 9: Pattern Matching - Exhaustive Control Flow
Advanced Pattern Matching, Option/Result Handling, and Match Guards
Learning Objectives
By the end of this chapter, you’ll be able to:
- Use exhaustive pattern matching to handle all possible cases
- Apply advanced patterns with destructuring and guards
- Handle Option and Result types idiomatically
- Use if let, while let for conditional pattern matching
- Understand when to use match vs if let vs pattern matching in function parameters
- Write robust error handling with pattern matching
- Apply match guards for complex conditional logic
Pattern Matching vs Switch Statements
Comparison with Other Languages
| Feature | C/C++ switch | C# switch | Rust match |
|---|---|---|---|
| Exhaustiveness | No | Partial (warnings) | Yes (enforced) |
| Complex patterns | No | Limited | Full destructuring |
| Guards | No | Limited (when) | Yes |
| Return values | No | Expression (C# 8+) | Always expression |
| Fall-through | Default (dangerous) | No | Not possible |
Basic Match Expression
#![allow(unused)] fn main() { enum TrafficLight { Red, Yellow, Green, } enum Message { Quit, Move { x: i32, y: i32 }, Write(String), ChangeColor(u8, u8, u8), } fn handle_traffic_light(light: TrafficLight) -> &'static str { match light { TrafficLight::Red => "Stop", TrafficLight::Yellow => "Prepare to stop", TrafficLight::Green => "Go", // Compiler ensures all variants are handled! } } fn handle_message(msg: Message) { match msg { Message::Quit => { println!("Quit message received"); std::process::exit(0); }, Message::Move { x, y } => { println!("Move to coordinates: ({}, {})", x, y); }, Message::Write(text) => { println!("Text message: {}", text); }, Message::ChangeColor(r, g, b) => { println!("Change color to RGB({}, {}, {})", r, g, b); }, } } }
Option and Result Pattern Matching
Handling Option
#![allow(unused)] fn main() { fn divide(x: f64, y: f64) -> Option<f64> { if y != 0.0 { Some(x / y) } else { None } } fn process_division(x: f64, y: f64) { match divide(x, y) { Some(result) => println!("Result: {}", result), None => println!("Cannot divide by zero"), } } // Nested Option handling fn parse_config(input: Option<&str>) -> Option<u32> { match input { Some(s) => match s.parse::<u32>() { Ok(num) => Some(num), Err(_) => None, }, None => None, } } // Better with combinators (covered later) fn parse_config_better(input: Option<&str>) -> Option<u32> { input?.parse().ok() } }
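The combinators hinted at in `parse_config_better` replace many nested matches. A brief sketch of the most common ones on `Option`:

```rust
fn divide(x: f64, y: f64) -> Option<f64> {
    if y != 0.0 { Some(x / y) } else { None }
}

fn main() {
    // map: transform the inner value if present
    let doubled = divide(10.0, 2.0).map(|r| r * 2.0);
    assert_eq!(doubled, Some(10.0));

    // and_then: chain another fallible operation
    let chained = divide(10.0, 2.0).and_then(|r| divide(r, 0.0));
    assert_eq!(chained, None);

    // unwrap_or: supply a fallback for the None case
    let value = divide(1.0, 0.0).unwrap_or(0.0);
    assert_eq!(value, 0.0);
}
```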
Handling Result<T, E>
#![allow(unused)]
fn main() {
use std::fs::File;
use std::io::{self, Read};

fn read_file_contents(filename: &str) -> Result<String, io::Error> {
    let mut file = File::open(filename)?;
    let mut contents = String::new();
    file.read_to_string(&mut contents)?;
    Ok(contents)
}

fn process_file(filename: &str) {
    match read_file_contents(filename) {
        Ok(contents) => {
            println!("File contents ({} bytes):", contents.len());
            println!("{}", contents);
        },
        Err(error) => {
            match error.kind() {
                io::ErrorKind::NotFound => {
                    println!("File '{}' not found", filename);
                },
                io::ErrorKind::PermissionDenied => {
                    println!("Permission denied for file '{}'", filename);
                },
                _ => {
                    println!("Error reading file '{}': {}", filename, error);
                },
            }
        }
    }
}

// Custom error types
#[derive(Debug)]
enum ConfigError {
    MissingFile,
    ParseError(String),
    ValidationError(String),
}

fn load_config(path: &str) -> Result<Config, ConfigError> {
    let contents = std::fs::read_to_string(path)
        .map_err(|_| ConfigError::MissingFile)?;

    let config: Config = serde_json::from_str(&contents)
        .map_err(|e| ConfigError::ParseError(e.to_string()))?;

    validate_config(&config)
        .map_err(|msg| ConfigError::ValidationError(msg))?;

    Ok(config)
}

// Deserialize is needed for serde_json::from_str
// (requires the serde crate with the "derive" feature)
#[derive(Debug, serde::Deserialize)]
struct Config {
    port: u16,
    host: String,
}

fn validate_config(config: &Config) -> Result<(), String> {
    if config.port == 0 {
        return Err("Port cannot be zero".to_string());
    }
    if config.host.is_empty() {
        return Err("Host cannot be empty".to_string());
    }
    Ok(())
}
}
Advanced Patterns
Destructuring and Nested Patterns
#![allow(unused)] fn main() { struct Point { x: i32, y: i32, } enum Shape { Circle { center: Point, radius: f64 }, Rectangle { top_left: Point, bottom_right: Point }, Triangle(Point, Point, Point), } fn analyze_shape(shape: &Shape) { match shape { // Destructure nested structures Shape::Circle { center: Point { x, y }, radius } => { println!("Circle at ({}, {}) with radius {}", x, y, radius); }, // Partial destructuring with .. Shape::Rectangle { top_left: Point { x: x1, y: y1 }, .. } => { println!("Rectangle starting at ({}, {})", x1, y1); }, // Destructure tuple variants Shape::Triangle(p1, p2, p3) => { println!("Triangle with vertices: ({}, {}), ({}, {}), ({}, {})", p1.x, p1.y, p2.x, p2.y, p3.x, p3.y); }, } } // Pattern matching with references and dereferencing fn process_optional_point(point: &Option<Point>) { match point { Some(Point { x, y }) => println!("Point at ({}, {})", x, y), None => println!("No point"), } } // Multiple patterns fn classify_number(n: i32) -> &'static str { match n { 1 | 2 | 3 => "small", 4..=10 => "medium", 11..=100 => "large", _ => "very large", } } // Binding values in patterns fn process_message_advanced(msg: Message) { match msg { Message::Move { x: 0, y } => { println!("Move vertically to y: {}", y); }, Message::Move { x, y: 0 } => { println!("Move horizontally to x: {}", x); }, Message::Move { x, y } if x == y => { println!("Move diagonally to ({}, {})", x, y); }, Message::Move { x, y } => { println!("Move to ({}, {})", x, y); }, msg @ Message::Write(_) => { println!("Received write message: {:?}", msg); }, _ => println!("Other message"), } } }
Match Guards
#![allow(unused)]
fn main() {
fn categorize_temperature(temp: f64, is_celsius: bool) -> &'static str {
    match temp {
        t if is_celsius && t < 0.0 => "freezing (Celsius)",
        t if is_celsius && t > 100.0 => "boiling (Celsius)",
        t if !is_celsius && t < 32.0 => "freezing (Fahrenheit)",
        t if !is_celsius && t > 212.0 => "boiling (Fahrenheit)",
        t if t > 0.0 => "positive temperature",
        // Use a guard rather than a 0.0 literal pattern:
        // matching on floating-point literals is rejected by newer compilers
        t if t == 0.0 => "exactly zero",
        _ => "negative temperature",
    }
}

// Complex guards with destructuring
#[derive(Debug)]
enum Request {
    Get { path: String, authenticated: bool },
    Post { path: String, data: Vec<u8> },
}

fn handle_request(req: Request) -> &'static str {
    match req {
        Request::Get { path, authenticated: true } if path.starts_with("/admin") => {
            "Admin access granted"
        },
        Request::Get { path, authenticated: false } if path.starts_with("/admin") => {
            "Admin access denied"
        },
        Request::Get { .. } => "Regular GET request",
        Request::Post { data, .. } if data.len() > 1024 => {
            "Large POST request"
        },
        Request::Post { .. } => "Regular POST request",
    }
}
}
if let and while let
if let for Simple Cases
#![allow(unused)] fn main() { // Instead of verbose match fn process_option_verbose(opt: Option<i32>) { match opt { Some(value) => println!("Got value: {}", value), None => {}, // Do nothing } } // Use if let for cleaner code fn process_option_clean(opt: Option<i32>) { if let Some(value) = opt { println!("Got value: {}", value); } } // if let with else fn process_result(result: Result<String, &str>) { if let Ok(value) = result { println!("Success: {}", value); } else { println!("Something went wrong"); } } // Chaining if let fn process_nested(opt: Option<Result<i32, &str>>) { if let Some(result) = opt { if let Ok(value) = result { println!("Got nested value: {}", value); } } } }
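Besides `if let`, Rust 1.65 stabilized `let`-`else` for the "destructure or bail out" shape, which avoids the extra indentation level. A small sketch (the `parse_port` helper is hypothetical):

```rust
fn parse_port(input: &str) -> Option<u16> {
    // If the pattern does not match, the else branch must diverge
    // (return, break, continue, or panic)
    let Ok(port) = input.trim().parse::<u16>() else {
        return None;
    };
    Some(port)
}

fn main() {
    assert_eq!(parse_port("8080"), Some(8080));
    assert_eq!(parse_port("not a port"), None);
}
```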
while let for Loops
#![allow(unused)]
fn main() {
fn process_iterator() {
    let mut stack = vec![1, 2, 3, 4, 5];

    // Pop elements while they exist
    while let Some(value) = stack.pop() {
        println!("Processing: {}", value);
    }
}

fn process_lines() {
    use std::io::{self, BufRead};

    let stdin = io::stdin();
    let mut lines = stdin.lock().lines();

    // Process lines until EOF or a read error:
    // Some(Ok(line)) matches only successful reads,
    // so the loop ends cleanly on None (EOF) or Some(Err(_))
    while let Some(Ok(line)) = lines.next() {
        if line.trim() == "quit" {
            break;
        }
        println!("You entered: {}", line);
    }
}
}
Pattern Matching in Function Parameters
Destructuring in Parameters
struct Point {
    x: i32,
    y: i32,
}

// Destructure tuples in parameters
fn print_coordinates((x, y): (i32, i32)) {
    println!("Coordinates: ({}, {})", x, y);
}

// Destructure structs
fn print_point(Point { x, y }: Point) {
    println!("Point: ({}, {})", x, y);
}

// Destructure with references
fn analyze_point_ref(&Point { x, y }: &Point) {
    println!("Analyzing point at ({}, {})", x, y);
}

// Closure patterns
fn main() {
    let points = vec![
        Point { x: 1, y: 2 },
        Point { x: 3, y: 4 },
        Point { x: 5, y: 6 },
    ];

    // Destructure in closure parameters
    points.iter().for_each(|&Point { x, y }| {
        println!("Point: ({}, {})", x, y);
    });

    // Filter needs a boolean test, not a refutable pattern:
    // closure parameters must match every input
    let origin_points: Vec<_> = points
        .into_iter()
        .filter(|p| p.x == 0 && p.y == 0) // Only points at origin
        .collect();
}
Common Pitfalls and Best Practices
Pitfall 1: Incomplete Patterns
#![allow(unused)] fn main() { // BAD: This won't compile - missing Some case fn bad_option_handling(opt: Option<i32>) { match opt { None => println!("Nothing"), // Error: non-exhaustive patterns } } // GOOD: Handle all cases fn good_option_handling(opt: Option<i32>) { match opt { Some(val) => println!("Value: {}", val), None => println!("Nothing"), } } }
Pitfall 2: Unreachable Patterns
#![allow(unused)] fn main() { // BAD: Unreachable pattern fn bad_range_matching(n: i32) { match n { 1..=10 => println!("Small"), 5 => println!("Five"), // This is unreachable! _ => println!("Other"), } } // GOOD: More specific patterns first fn good_range_matching(n: i32) { match n { 5 => println!("Five"), 1..=10 => println!("Small (not five)"), _ => println!("Other"), } } }
Best Practices
#![allow(unused)] fn main() { // 1. Use @ binding to capture while pattern matching fn handle_special_ranges(value: i32) { match value { n @ 1..=5 => println!("Small number: {}", n), n @ 6..=10 => println!("Medium number: {}", n), n => println!("Large number: {}", n), } } // 2. Use .. to ignore fields you don't need struct LargeStruct { important: i32, flag: bool, data1: String, data2: String, data3: Vec<u8>, } fn process_large_struct(s: LargeStruct) { match s { LargeStruct { important, flag: true, .. } => { println!("Important value with flag: {}", important); }, LargeStruct { important, .. } => { println!("Important value without flag: {}", important); }, } } // 3. Prefer early returns with guards fn validate_user_input(input: &str) -> Result<i32, &'static str> { match input.parse::<i32>() { Ok(n) if n >= 0 => Ok(n), Ok(_) => Err("Number must be non-negative"), Err(_) => Err("Invalid number format"), } } }
Exercise: HTTP Status Handler
Create a function that handles different HTTP status codes using pattern matching:
#![allow(unused)] fn main() { #[derive(Debug)] enum HttpStatus { Ok, // 200 NotFound, // 404 ServerError, // 500 Custom(u16), // Any other code } #[derive(Debug)] struct HttpResponse { status: HttpStatus, body: Option<String>, headers: Vec<(String, String)>, } // TODO: Implement this function fn handle_response(response: HttpResponse) -> String { // Pattern match on the response to return appropriate messages: // - Ok with body: "Success: {body}" // - Ok without body: "Success: No content" // - NotFound: "Error: Resource not found" // - ServerError: "Error: Internal server error" // - Custom(code) where code < 400: "Info: Status {code}" // - Custom(code) where code >= 400: "Error: Status {code}" todo!() } }
Key Takeaways
- Exhaustiveness - Rust’s compiler ensures you handle all possible cases
- Pattern matching is an expression - Every match arm must return the same type
- Use if let for simple Option/Result handling instead of verbose match
- Match guards enable complex conditional logic within patterns
- Destructuring allows you to extract values from complex data structures
- Order matters - More specific patterns should come before general ones
- @ binding lets you capture values while pattern matching
- Early returns with guards can make code more readable
Next Up: In Chapter 10, we’ll explore error handling - Rust’s approach to robust error management with Result types and the ? operator.
Chapter 10: Error Handling - Result, ?, and Custom Errors
Robust Error Management in Rust
Learning Objectives
By the end of this chapter, you’ll be able to:
- Use Result<T, E> for recoverable error handling
- Master the ? operator for error propagation
- Create custom error types with proper error handling
- Understand when to use Result vs panic!
- Work with popular error handling crates (anyhow, thiserror)
- Implement error conversion and chaining
- Handle multiple error types gracefully
Rust’s Error Handling Philosophy
Error Categories
| Type | Examples | Rust Approach |
|---|---|---|
| Recoverable | File not found, network timeout | Result<T, E> |
| Unrecoverable | Array out of bounds, null pointer | panic! |
Comparison with Other Languages
| Language | Approach | Pros | Cons |
|---|---|---|---|
| C++ | Exceptions, error codes | Familiar | Runtime overhead, can be ignored |
| C#/.NET | Exceptions | Clean syntax | Performance cost, hidden control flow |
| Go | Explicit error returns | Explicit, fast | Verbose |
| Rust | Result<T, E> | Explicit, zero-cost | Must be handled |
Result<T, E>: The Foundation
Basic Result Usage
use std::fs::File; use std::io::ErrorKind; fn open_file(filename: &str) -> Result<File, std::io::Error> { File::open(filename) } fn main() { // Pattern matching match open_file("test.txt") { Ok(file) => println!("File opened successfully"), Err(error) => match error.kind() { ErrorKind::NotFound => println!("File not found"), ErrorKind::PermissionDenied => println!("Permission denied"), other_error => println!("Other error: {:?}", other_error), }, } // Using if let if let Ok(file) = open_file("test.txt") { println!("File opened with if let"); } // Unwrap variants (use carefully!) // let file1 = open_file("test.txt").unwrap(); // Panics on error // let file2 = open_file("test.txt").expect("Failed to open"); // Panics with message }
The ? Operator: Error Propagation Made Easy
Basic ? Usage
#![allow(unused)] fn main() { use std::fs::File; use std::io::{self, Read}; // Without ? operator (verbose) fn read_file_old_way(filename: &str) -> Result<String, io::Error> { let mut file = match File::open(filename) { Ok(file) => file, Err(e) => return Err(e), }; let mut contents = String::new(); match file.read_to_string(&mut contents) { Ok(_) => Ok(contents), Err(e) => Err(e), } } // With ? operator (concise) fn read_file_new_way(filename: &str) -> Result<String, io::Error> { let mut file = File::open(filename)?; let mut contents = String::new(); file.read_to_string(&mut contents)?; Ok(contents) } // Even more concise fn read_file_shortest(filename: &str) -> Result<String, io::Error> { std::fs::read_to_string(filename) } }
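The `?` operator also works in functions returning `Option<T>`: `None` propagates early exactly the way `Err` does. A short sketch:

```rust
// Returns the first character of the first line, if both exist.
// Each ? returns None from the whole function when the value is absent.
fn first_char_of_first_line(text: &str) -> Option<char> {
    text.lines().next()?.chars().next()
}

fn main() {
    assert_eq!(first_char_of_first_line("hello\nworld"), Some('h'));
    assert_eq!(first_char_of_first_line(""), None);
}
```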
Custom Error Types
Simple Custom Errors
#![allow(unused)] fn main() { use std::fmt; #[derive(Debug)] enum MathError { DivisionByZero, NegativeSquareRoot, } impl fmt::Display for MathError { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { match self { MathError::DivisionByZero => write!(f, "Cannot divide by zero"), MathError::NegativeSquareRoot => write!(f, "Cannot take square root of negative number"), } } } impl std::error::Error for MathError {} fn divide(a: f64, b: f64) -> Result<f64, MathError> { if b == 0.0 { Err(MathError::DivisionByZero) } else { Ok(a / b) } } fn square_root(x: f64) -> Result<f64, MathError> { if x < 0.0 { Err(MathError::NegativeSquareRoot) } else { Ok(x.sqrt()) } } }
Error Conversion and Chaining
The From Trait for Error Conversion
#![allow(unused)]
fn main() {
use std::fmt;
use std::io;
use std::num::ParseIntError;

#[derive(Debug)]
enum AppError {
    Io(io::Error),
    Parse(ParseIntError),
    Custom(String),
}

// Automatic conversion from io::Error
impl From<io::Error> for AppError {
    fn from(error: io::Error) -> Self {
        AppError::Io(error)
    }
}

// Automatic conversion from ParseIntError
impl From<ParseIntError> for AppError {
    fn from(error: ParseIntError) -> Self {
        AppError::Parse(error)
    }
}

impl fmt::Display for AppError {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            AppError::Io(e) => write!(f, "IO error: {}", e),
            AppError::Parse(e) => write!(f, "Parse error: {}", e),
            AppError::Custom(msg) => write!(f, "Error: {}", msg),
        }
    }
}

impl std::error::Error for AppError {}

// Now the ? operator works seamlessly
fn read_number_from_file(filename: &str) -> Result<i32, AppError> {
    let contents = std::fs::read_to_string(filename)?; // io::Error -> AppError
    let number = contents.trim().parse::<i32>()?;      // ParseIntError -> AppError

    if number < 0 {
        return Err(AppError::Custom("Number must be non-negative".to_string()));
    }

    Ok(number)
}
}
Chaining Multiple Operations
#![allow(unused)]
fn main() {
use std::collections::HashMap;
use std::path::Path;

fn process_config_file(path: &Path) -> Result<Config, AppError> {
    std::fs::read_to_string(path)?
        .lines()
        .map(|line| line.trim())
        .filter(|line| !line.is_empty() && !line.starts_with('#'))
        .map(|line| parse_config_line(line))
        .collect::<Result<Vec<_>, _>>()?
        .into_iter()
        .fold(Config::default(), |mut cfg, (key, value)| {
            cfg.set(&key, value);
            cfg
        })
        .validate()
        .map_err(|e| AppError::Custom(e))
}

struct Config {
    settings: HashMap<String, String>,
}

impl Config {
    fn default() -> Self {
        Config { settings: HashMap::new() }
    }

    fn set(&mut self, key: &str, value: String) {
        self.settings.insert(key.to_string(), value);
    }

    fn validate(self) -> Result<Config, String> {
        if self.settings.is_empty() {
            Err("Configuration is empty".to_string())
        } else {
            Ok(self)
        }
    }
}

fn parse_config_line(line: &str) -> Result<(String, String), AppError> {
    let parts: Vec<&str> = line.splitn(2, '=').collect();
    if parts.len() != 2 {
        return Err(AppError::Custom(format!("Invalid config line: {}", line)));
    }
    Ok((parts[0].to_string(), parts[1].to_string()))
}
}
Working with External Error Libraries
Using anyhow for Applications
use anyhow::{Context, Result, bail}; // anyhow::Result is Result<T, anyhow::Error> fn load_config(path: &str) -> Result<Config> { let contents = std::fs::read_to_string(path) .context("Failed to read config file")?; let config: Config = serde_json::from_str(&contents) .context("Failed to parse JSON config")?; if config.port == 0 { bail!("Invalid port: 0"); } Ok(config) } fn main() -> Result<()> { let config = load_config("app.json")?; // Chain multiple operations with context let server = create_server(&config) .context("Failed to create server")?; server.run() .context("Server failed during execution")?; Ok(()) }
Using thiserror for Libraries
#![allow(unused)] fn main() { use thiserror::Error; #[derive(Error, Debug)] enum DataStoreError { #[error("data not found")] NotFound, #[error("permission denied: {0}")] PermissionDenied(String), #[error("invalid input: {msg}")] InvalidInput { msg: String }, #[error("database error")] Database(#[from] sqlx::Error), #[error("serialization error")] Serialization(#[from] serde_json::Error), #[error(transparent)] Other(#[from] anyhow::Error), } // Use in library code fn get_user(id: u64) -> Result<User, DataStoreError> { if id == 0 { return Err(DataStoreError::InvalidInput { msg: "ID cannot be 0".to_string() }); } let user = db::query_user(id)?; // Automatic conversion from sqlx::Error Ok(user) } }
Error Handling Patterns
Early Returns with ?
```rust
fn process_data(input: &str) -> Result<String, Box<dyn std::error::Error>> {
    let parsed = parse_input(input)?;
    let validated = validate(parsed)?;
    let processed = transform(validated)?;
    Ok(format_output(processed))
}

// Compare with nested match statements (avoid this!)
fn process_data_verbose(input: &str) -> Result<String, Box<dyn std::error::Error>> {
    match parse_input(input) {
        Ok(parsed) => match validate(parsed) {
            Ok(validated) => match transform(validated) {
                Ok(processed) => Ok(format_output(processed)),
                Err(e) => Err(e.into()),
            },
            Err(e) => Err(e.into()),
        },
        Err(e) => Err(e.into()),
    }
}
```
Collecting Results
```rust
use std::io; // needed for io::Error

fn process_files(paths: &[&str]) -> Result<Vec<String>, io::Error> {
    paths.iter()
        .map(|path| std::fs::read_to_string(path))
        .collect::<Result<Vec<_>, _>>()
}

// Handle partial success
fn process_files_partial(paths: &[&str]) -> (Vec<String>, Vec<io::Error>) {
    let results: Vec<Result<String, io::Error>> = paths.iter()
        .map(|path| std::fs::read_to_string(path))
        .collect();

    let mut successes = Vec::new();
    let mut failures = Vec::new();
    for result in results {
        match result {
            Ok(content) => successes.push(content),
            Err(e) => failures.push(e),
        }
    }
    (successes, failures)
}
```
Testing Error Cases
```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_division_by_zero() {
        let result = divide(10.0, 0.0);
        assert!(result.is_err());
        match result {
            Err(MathError::DivisionByZero) => (),
            _ => panic!("Expected DivisionByZero error"),
        }
    }

    #[test]
    fn test_file_not_found() {
        let result = read_file_contents("nonexistent.txt");
        assert!(result.is_err());
    }

    #[test]
    #[should_panic(expected = "assertion failed")]
    fn test_panic_condition() {
        assert!(false, "assertion failed");
    }
}
```
Exercise: Build a Configuration Parser
Create a robust configuration parser with proper error handling:
```rust
use std::collections::HashMap;
use std::fs;
use std::path::Path;

#[derive(Debug)]
enum ConfigError {
    IoError(std::io::Error),
    ParseError(String),
    ValidationError(String),
}

// TODO: Implement Display and Error traits for ConfigError
// TODO: Implement From<std::io::Error> for automatic conversion

struct Config {
    settings: HashMap<String, String>,
}

impl Config {
    fn from_file<P: AsRef<Path>>(path: P) -> Result<Self, ConfigError> {
        // TODO: Read file, parse lines, handle comments (#)
        // TODO: Parse key=value pairs
        // TODO: Validate required keys exist
        todo!()
    }

    fn get(&self, key: &str) -> Option<&String> {
        self.settings.get(key)
    }

    fn get_required(&self, key: &str) -> Result<&String, ConfigError> {
        // TODO: Return error if key doesn't exist
        todo!()
    }

    fn get_int(&self, key: &str) -> Result<i32, ConfigError> {
        // TODO: Get value and parse as integer
        todo!()
    }
}

fn main() -> Result<(), ConfigError> {
    let config = Config::from_file("app.conf")?;
    let port = config.get_int("port")?;
    let host = config.get_required("host")?;
    println!("Starting server on {}:{}", host, port);
    Ok(())
}
```
Key Takeaways
- Use Result<T, E> for recoverable errors, panic! for unrecoverable ones
- The ? operator makes error propagation clean and efficient
- Custom error types should implement Display and Error traits
- Error conversion with From trait enables seamless ? usage
- anyhow is great for applications, thiserror for libraries
- Chain operations with Result for clean error handling
- Test error cases as thoroughly as success cases
- Collect multiple errors when appropriate instead of failing fast
Next Up: In Chapter 11, we’ll explore iterators and closures - Rust’s functional programming features that make data processing both efficient and expressive.
Chapter 11: Iterators and Functional Programming
Efficient Data Processing with Rust’s Iterator Pattern
Learning Objectives
By the end of this chapter, you’ll be able to:
- Use iterator adaptors like map, filter, fold effectively
- Understand lazy evaluation and its performance benefits
- Write closures with proper capture semantics
- Choose between loops and iterator chains
- Convert between collections using collect()
- Handle iterator errors gracefully
The Iterator Trait
```rust
trait Iterator {
    type Item;
    fn next(&mut self) -> Option<Self::Item>;
    // 70+ provided methods like map, filter, fold, etc.
}
```
Key Concepts
- Lazy evaluation: Operations don’t execute until consumed
- Zero-cost abstraction: Compiles to same code as hand-written loops
- Composable: Chain multiple operations cleanly
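Laziness is easy to observe directly. The following sketch counts how often a map closure runs (a `Cell` is used so the count can be read while the iterator still borrows it); nothing executes until the chain is consumed by collect():

```rust
use std::cell::Cell;

fn main() {
    let calls = Cell::new(0);

    // Building the chain executes no closure yet - iterators are lazy
    let lazy = (1..=5).map(|x| {
        calls.set(calls.get() + 1);
        x * 2
    });
    assert_eq!(calls.get(), 0); // nothing has run so far

    // Consuming the iterator drives the closures
    let doubled: Vec<i32> = lazy.collect();
    assert_eq!(calls.get(), 5);
    assert_eq!(doubled, vec![2, 4, 6, 8, 10]);

    println!("map closure ran {} times", calls.get());
}
```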
Creating Iterators
```rust
fn iterator_sources() {
    // From collections
    let mut vec = vec![1, 2, 3];
    vec.iter();      // yields &T - borrows
    vec.iter_mut();  // yields &mut T - mutable borrow
    vec.into_iter(); // yields T - takes ownership; vec is moved and unusable afterwards

    // From ranges
    (0..10);  // 0 to 9
    (0..=10); // 0 to 10 inclusive

    // Infinite iterators
    std::iter::repeat(5); // 5, 5, 5, ...
    (0..).step_by(2);     // 0, 2, 4, 6, ...

    // From functions (uses the external rand crate)
    std::iter::from_fn(|| Some(rand::random::<u32>()));
}
```
Essential Iterator Adaptors
Transform: map, flat_map
```rust
fn transformations() {
    let numbers = vec![1, 2, 3, 4];

    // Simple transformation
    let doubled: Vec<i32> = numbers.iter()
        .map(|x| x * 2)
        .collect(); // [2, 4, 6, 8]

    // Parse strings to numbers, handling errors
    let strings = vec!["1", "2", "3"];
    let parsed: Result<Vec<i32>, _> = strings
        .iter()
        .map(|s| s.parse::<i32>())
        .collect(); // Collects into Result<Vec<_>, _>

    // Flatten nested structures
    let nested = vec![vec![1, 2], vec![3, 4]];
    let flat: Vec<i32> = nested
        .into_iter()
        .flat_map(|v| v.into_iter())
        .collect(); // [1, 2, 3, 4]
}
```
Filter and Search
```rust
fn filtering() {
    let numbers = vec![1, 2, 3, 4, 5, 6];

    // Keep only even numbers
    let evens: Vec<_> = numbers.iter()
        .filter(|&&x| x % 2 == 0)
        .cloned()
        .collect(); // [2, 4, 6]

    // Find first match
    let first_even = numbers.iter()
        .find(|&&x| x % 2 == 0); // Some(&2)

    // Check conditions
    let all_positive = numbers.iter().all(|&x| x > 0); // true
    let has_seven = numbers.iter().any(|&x| x == 7);   // false

    // Position of element
    let pos = numbers.iter().position(|&x| x == 4); // Some(3)
}
```
Reduce: fold, reduce, sum
```rust
fn reductions() {
    let numbers = vec![1, 2, 3, 4, 5];

    // Sum all elements
    let sum: i32 = numbers.iter().sum(); // 15

    // Product of all elements
    let product: i32 = numbers.iter().product(); // 120

    // Custom reduction with fold
    let result = numbers.iter()
        .fold(0, |acc, x| acc + x * x); // Sum of squares: 55

    // Build a string
    let words = vec!["Hello", "World"];
    let sentence = words.iter()
        .fold(String::new(), |mut acc, word| {
            if !acc.is_empty() {
                acc.push(' ');
            }
            acc.push_str(word);
            acc
        }); // "Hello World"
}
```
Take and Skip
```rust
fn slicing_iterators() {
    let numbers = 0..100;

    // Take first n elements
    let first_five: Vec<_> = numbers.clone()
        .take(5)
        .collect(); // [0, 1, 2, 3, 4]

    // Skip first n elements
    let after_ten: Vec<_> = numbers.clone()
        .skip(10)
        .take(5)
        .collect(); // [10, 11, 12, 13, 14]

    // Take while condition is true
    let until_ten: Vec<_> = numbers.clone()
        .take_while(|&x| x < 10)
        .collect(); // [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
}
```
Closures: Anonymous Functions
Closure Syntax and Captures
```rust
fn closure_basics() {
    let x = 10;

    // Closure that borrows
    let add_x = |y| x + y;
    println!("{}", add_x(5)); // 15

    // Closure that mutates
    let mut count = 0;
    let mut increment = || {
        count += 1;
        count
    };
    println!("{}", increment()); // 1
    println!("{}", increment()); // 2

    // Move closure - takes ownership
    let message = String::from("Hello");
    let print_message = move || println!("{}", message);
    print_message();
    // message is no longer accessible here
}
```
Fn, FnMut, FnOnce Traits
```rust
// Fn: Can be called multiple times, borrows values
fn apply_twice<F>(f: F) -> i32
where
    F: Fn(i32) -> i32,
{
    f(f(5))
}

// FnMut: Can be called multiple times, mutates values
fn apply_mut<F>(mut f: F)
where
    F: FnMut(),
{
    f();
    f();
}

// FnOnce: Can only be called once, consumes values
fn apply_once<F>(f: F)
where
    F: FnOnce(),
{
    f();
    // f(); // Error: f was consumed
}
```
Common Patterns
Processing Collections
```rust
use std::collections::HashMap;

fn collection_processing() {
    let text = "hello world hello rust";

    // Word frequency counter
    let word_counts: HashMap<&str, usize> = text
        .split_whitespace()
        .fold(HashMap::new(), |mut map, word| {
            *map.entry(word).or_insert(0) += 1;
            map
        });

    // Find most common word
    let most_common = word_counts
        .iter()
        .max_by_key(|(_, &count)| count)
        .map(|(&word, _)| word);

    println!("Most common: {:?}", most_common); // Some("hello")
}
```
Error Handling with Iterators
```rust
fn parse_numbers(input: &[&str]) -> Result<Vec<i32>, std::num::ParseIntError> {
    input.iter()
        .map(|s| s.parse::<i32>())
        .collect() // Collects into Result<Vec<_>, _>
}

fn process_files(paths: &[&str]) -> Vec<Result<String, std::io::Error>> {
    paths.iter()
        .map(|path| std::fs::read_to_string(path))
        .collect() // Collects all results, both Ok and Err
}

// Partition successes and failures
fn partition_results<T, E>(results: Vec<Result<T, E>>) -> (Vec<T>, Vec<E>) {
    let (oks, errs): (Vec<_>, Vec<_>) = results
        .into_iter()
        .partition(|r| r.is_ok());

    let values = oks.into_iter().map(|r| r.unwrap()).collect();
    let errors = errs.into_iter().map(|r| r.unwrap_err()).collect();
    (values, errors)
}
```
Infinite Iterators and Lazy Evaluation
```rust
fn lazy_evaluation() {
    // Generate Fibonacci numbers lazily
    let mut fib = (0u64, 1u64);
    let fibonacci = std::iter::from_fn(move || {
        let next = fib.0;
        fib = (fib.1, fib.0 + fib.1);
        Some(next)
    });

    // Take only what we need
    let first_10: Vec<_> = fibonacci.take(10).collect();
    println!("First 10 Fibonacci: {:?}", first_10);

    // Find first Fibonacci > 1000
    let mut fib2 = (0u64, 1u64);
    let first_large = std::iter::from_fn(move || {
        let next = fib2.0;
        fib2 = (fib2.1, fib2.0 + fib2.1);
        Some(next)
    })
    .find(|&n| n > 1000);
    println!("First > 1000: {:?}", first_large);
}
```
Performance: Iterators vs Loops
```rust
// These compile to identical machine code!
fn sum_squares_loop(nums: &[i32]) -> i32 {
    let mut sum = 0;
    for &n in nums {
        sum += n * n;
    }
    sum
}

fn sum_squares_iter(nums: &[i32]) -> i32 {
    nums.iter()
        .map(|&n| n * n)
        .sum()
}

// The iterator version is:
// - More concise
// - Harder to introduce bugs into
// - Easier to modify (add filter, take, etc.)
// - Same performance!
```
Exercise: Data Pipeline
Build a log analysis pipeline using iterators:
```rust
use std::collections::HashMap;

#[derive(Debug)]
struct LogEntry {
    timestamp: u64,
    level: LogLevel,
    message: String,
}

// Eq and Hash are needed so LogLevel can be a HashMap key in count_by_level
#[derive(Debug, PartialEq, Eq, Hash)]
enum LogLevel {
    Debug,
    Info,
    Warning,
    Error,
}

impl LogEntry {
    fn parse(line: &str) -> Option<LogEntry> {
        // Format: "timestamp|level|message"
        let parts: Vec<&str> = line.split('|').collect();
        if parts.len() != 3 {
            return None;
        }
        let timestamp = parts[0].parse().ok()?;
        let level = match parts[1] {
            "DEBUG" => LogLevel::Debug,
            "INFO" => LogLevel::Info,
            "WARNING" => LogLevel::Warning,
            "ERROR" => LogLevel::Error,
            _ => return None,
        };
        Some(LogEntry {
            timestamp,
            level,
            message: parts[2].to_string(),
        })
    }
}

struct LogAnalyzer<'a> {
    lines: &'a [String],
}

impl<'a> LogAnalyzer<'a> {
    fn new(lines: &'a [String]) -> Self {
        LogAnalyzer { lines }
    }

    fn parse_entries(&self) -> impl Iterator<Item = LogEntry> + '_ {
        // Parse lines into LogEntry, skipping invalid lines
        self.lines.iter()
            .filter_map(|line| LogEntry::parse(line))
    }

    fn errors_only(&self) -> impl Iterator<Item = LogEntry> + '_ {
        // TODO: Return only ERROR level entries
        todo!()
    }

    fn in_time_range(&self, start: u64, end: u64) -> impl Iterator<Item = LogEntry> + '_ {
        // TODO: Return entries within time range
        todo!()
    }

    fn count_by_level(&self) -> HashMap<LogLevel, usize> {
        // TODO: Count entries by log level
        todo!()
    }

    fn most_recent(&self, n: usize) -> Vec<LogEntry> {
        // TODO: Return n most recent entries (highest timestamps)
        todo!()
    }
}

fn main() {
    let log_lines = vec![
        "1000|INFO|Server started".to_string(),
        "1001|DEBUG|Connection received".to_string(),
        "1002|ERROR|Failed to connect to database".to_string(),
        "invalid line".to_string(),
        "1003|WARNING|High memory usage".to_string(),
        "1004|INFO|Request processed".to_string(),
        "1005|ERROR|Timeout error".to_string(),
    ];

    let analyzer = LogAnalyzer::new(&log_lines);

    // Test the methods
    println!("Valid entries: {}", analyzer.parse_entries().count());
    println!("Errors: {:?}", analyzer.errors_only().collect::<Vec<_>>());
    println!("Count by level: {:?}", analyzer.count_by_level());
    println!("Most recent 3: {:?}", analyzer.most_recent(3));
}
```
Key Takeaways
- Iterators are lazy - nothing happens until you consume them
- Zero-cost abstraction - same performance as hand-written loops
- Composable - chain operations for clean, readable code
- collect() is powerful - converts to any collection type
- Closures capture environment - be aware of borrowing vs moving
- Error handling - know when to collect into Result<Vec<T>, E> (fail fast) versus Vec<Result<T, E>> (keep every outcome)
- Prefer iterators over manual loops for clarity and safety
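To make the collect() and error-handling takeaways concrete, here is a small sketch (values are illustrative) showing the same chain collecting into different collections, and the difference between the two Result-collecting strategies:

```rust
use std::collections::{HashMap, HashSet};

fn main() {
    let words = ["a", "b", "a"];

    // The same iterator pattern can collect into different collection types
    let as_vec: Vec<&str> = words.iter().copied().collect();
    let as_set: HashSet<&str> = words.iter().copied().collect();
    let as_map: HashMap<&str, usize> = words.iter().map(|w| (*w, w.len())).collect();

    assert_eq!(as_vec.len(), 3);
    assert_eq!(as_set.len(), 2); // duplicates removed by the set
    assert_eq!(as_map["a"], 1);

    // Result<Vec<T>, E> fails fast on the first error;
    // Vec<Result<T, E>> keeps every individual outcome
    let inputs = ["1", "x", "3"];
    let fail_fast: Result<Vec<i32>, _> = inputs.iter().map(|s| s.parse::<i32>()).collect();
    let keep_all: Vec<Result<i32, _>> = inputs.iter().map(|s| s.parse::<i32>()).collect();

    assert!(fail_fast.is_err());
    assert_eq!(keep_all.iter().filter(|r| r.is_ok()).count(), 2);
}
```

The target type annotation is what tells collect() which collection to build; the iterator chain itself stays unchanged.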
Next Up: In Chapter 12, we’ll explore modules and visibility - essential for organizing larger Rust projects and creating clean APIs.
Chapter 12: Modules and Visibility
Organizing Rust Projects at Scale
Learning Objectives
By the end of this chapter, you’ll be able to:
- Structure Rust projects with modules and submodules
- Control visibility with `pub` and privacy rules
- Use the `use` keyword effectively for imports
- Organize code across multiple files
- Design clean module APIs with proper encapsulation
- Apply the module system to build maintainable projects
- Understand path resolution and the module tree
Module Basics
Defining Modules
```rust
// Modules can be defined inline
mod network {
    pub fn connect() {
        println!("Connecting to network...");
    }

    fn internal_function() {
        // Private by default - not accessible outside this module
        println!("Internal network operation");
    }
}

mod database {
    pub struct Connection {
        // Fields are private by default
        host: String,
        port: u16,
    }

    impl Connection {
        // Public constructor
        pub fn new(host: String, port: u16) -> Self {
            Connection { host, port }
        }

        // Public method
        pub fn execute(&self, query: &str) {
            println!("Executing: {}", query);
        }

        // Private method
        fn validate_query(&self, query: &str) -> bool {
            !query.is_empty()
        }
    }
}

fn main() {
    network::connect();
    // network::internal_function(); // Error: private function

    let conn = database::Connection::new("localhost".to_string(), 5432);
    conn.execute("SELECT * FROM users");
    // println!("{}", conn.host); // Error: private field
}
```
Module Hierarchy
```rust
mod front_of_house {
    pub mod hosting {
        pub fn add_to_waitlist() {
            println!("Added to waitlist");
        }

        fn seat_at_table() {
            println!("Seated at table");
        }
    }

    mod serving {
        fn take_order() {}
        fn serve_order() {}
        fn take_payment() {}
    }
}

// Using paths to access nested modules
pub fn eat_at_restaurant() {
    // Absolute path
    crate::front_of_house::hosting::add_to_waitlist();

    // Relative path
    front_of_house::hosting::add_to_waitlist();
}
```
The use Keyword
Basic Imports
```rust
mod math {
    pub fn add(a: i32, b: i32) -> i32 {
        a + b
    }
    pub fn multiply(a: i32, b: i32) -> i32 {
        a * b
    }

    pub mod advanced {
        pub fn power(base: i32, exp: u32) -> i32 {
            base.pow(exp)
        }
    }
}

// Bring functions into scope
use math::add;
use math::multiply;
use math::advanced::power;

// Group imports (equivalent to the two separate imports above -
// importing the same name twice would be an error):
// use math::{add, multiply};

// Import everything from a module:
// use math::advanced::*;

fn main() {
    let sum = add(2, 3); // No need for the math:: prefix
    let product = multiply(4, 5);
    let result = power(2, 10);
}
```
Re-exporting with pub use
#![allow(unused)] fn main() { mod shapes { pub mod circle { pub struct Circle { pub radius: f64, } impl Circle { pub fn area(&self) -> f64 { std::f64::consts::PI * self.radius * self.radius } } } pub mod rectangle { pub struct Rectangle { pub width: f64, pub height: f64, } impl Rectangle { pub fn area(&self) -> f64 { self.width * self.height } } } } // Re-export to flatten the hierarchy pub use shapes::circle::Circle; pub use shapes::rectangle::Rectangle; // Now users can do: // use your_crate::{Circle, Rectangle}; // Instead of: // use your_crate::shapes::circle::Circle; }
File-based Modules
Project Structure
src/
├── main.rs
├── lib.rs
├── network/
│ ├── mod.rs
│ ├── client.rs
│ └── server.rs
└── utils.rs
Main Module File (src/main.rs or src/lib.rs)
#![allow(unused)] fn main() { // src/lib.rs pub mod network; // Looks for network/mod.rs or network.rs pub mod utils; // Looks for utils.rs // Re-export commonly used items pub use network::client::Client; pub use network::server::Server; }
Module Directory (src/network/mod.rs)
#![allow(unused)] fn main() { // src/network/mod.rs pub mod client; pub mod server; // Common network functionality pub struct Config { pub timeout: u64, pub retry_count: u32, } impl Config { pub fn default() -> Self { Config { timeout: 30, retry_count: 3, } } } }
Submodule Files
#![allow(unused)] fn main() { // src/network/client.rs use super::Config; // Access parent module pub struct Client { config: Config, connected: bool, } impl Client { pub fn new(config: Config) -> Self { Client { config, connected: false, } } pub fn connect(&mut self) -> Result<(), String> { // Connection logic self.connected = true; Ok(()) } } }
#![allow(unused)] fn main() { // src/network/server.rs use super::Config; pub struct Server { config: Config, listening: bool, } impl Server { pub fn new(config: Config) -> Self { Server { config, listening: false, } } pub fn listen(&mut self, port: u16) -> Result<(), String> { println!("Listening on port {}", port); self.listening = true; Ok(()) } } }
Visibility Rules
Privacy Boundaries
mod outer { pub fn public_function() { println!("Public function"); } fn private_function() { println!("Private function"); } pub mod inner { pub fn inner_public() { // Can access parent's private items super::private_function(); } pub(super) fn visible_to_parent() { println!("Only visible to parent module"); } pub(crate) fn visible_in_crate() { println!("Visible throughout the crate"); } } } fn main() { outer::public_function(); outer::inner::inner_public(); // outer::inner::visible_to_parent(); // Error: not visible here outer::inner::visible_in_crate(); // OK: we're in the same crate }
Struct Field Visibility
mod back_of_house { pub struct Breakfast { pub toast: String, // Public field seasonal_fruit: String, // Private field } impl Breakfast { pub fn summer(toast: &str) -> Breakfast { Breakfast { toast: String::from(toast), seasonal_fruit: String::from("peaches"), } } } // All fields must be public for tuple struct to be constructable pub struct Color(pub u8, pub u8, pub u8); } fn main() { let mut meal = back_of_house::Breakfast::summer("Rye"); meal.toast = String::from("Wheat"); // OK: public field // meal.seasonal_fruit = String::from("strawberries"); // Error: private let color = back_of_house::Color(255, 0, 0); // OK: all fields public }
Module Design Patterns
API Design with Modules
// A well-designed module API pub mod database { // Re-export the main types users need pub use self::connection::Connection; pub use self::error::{Error, Result}; mod connection { use super::error::Result; pub struct Connection { // Implementation details hidden url: String, } impl Connection { pub fn open(url: &str) -> Result<Self> { Ok(Connection { url: url.to_string(), }) } pub fn execute(&self, query: &str) -> Result<()> { // Implementation Ok(()) } } } mod error { use std::fmt; #[derive(Debug)] pub struct Error { message: String, } impl fmt::Display for Error { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { write!(f, "Database error: {}", self.message) } } impl std::error::Error for Error {} pub type Result<T> = std::result::Result<T, Error>; } } // Clean usage use database::{Connection, Result}; fn main() -> Result<()> { let conn = Connection::open("postgres://localhost/mydb")?; conn.execute("SELECT * FROM users")?; Ok(()) }
Builder Pattern with Modules
pub mod request { pub struct Request { url: String, method: Method, headers: Vec<(String, String)>, } #[derive(Clone)] pub enum Method { GET, POST, PUT, DELETE, } pub struct RequestBuilder { url: Option<String>, method: Method, headers: Vec<(String, String)>, } impl RequestBuilder { pub fn new() -> Self { RequestBuilder { url: None, method: Method::GET, headers: Vec::new(), } } pub fn url(mut self, url: &str) -> Self { self.url = Some(url.to_string()); self } pub fn method(mut self, method: Method) -> Self { self.method = method; self } pub fn header(mut self, key: &str, value: &str) -> Self { self.headers.push((key.to_string(), value.to_string())); self } pub fn build(self) -> Result<Request, &'static str> { let url = self.url.ok_or("URL is required")?; Ok(Request { url, method: self.method, headers: self.headers, }) } } impl Request { pub fn builder() -> RequestBuilder { RequestBuilder::new() } pub fn send(&self) -> Result<Response, &'static str> { // Send request logic Ok(Response { status: 200 }) } } pub struct Response { pub status: u16, } } use request::{Request, Method}; fn main() { let response = Request::builder() .url("https://api.example.com/data") .method(Method::POST) .header("Content-Type", "application/json") .build() .unwrap() .send() .unwrap(); println!("Response status: {}", response.status); }
Common Patterns and Best Practices
Prelude Pattern
#![allow(unused)] fn main() { // Create a prelude module for commonly used items pub mod prelude { pub use crate::error::{Error, Result}; pub use crate::config::Config; pub use crate::client::Client; pub use crate::server::Server; } // Users can import everything they need with one line: // use your_crate::prelude::*; }
Internal Module Pattern
#![allow(unused)] fn main() { pub mod parser { // Public API pub fn parse(input: &str) -> Result<Expression, Error> { let tokens = internal::tokenize(input)?; internal::build_ast(tokens) } pub struct Expression { // ... } pub struct Error { // ... } // Implementation details in internal module mod internal { use super::*; pub(super) fn tokenize(input: &str) -> Result<Vec<Token>, Error> { // ... } pub(super) fn build_ast(tokens: Vec<Token>) -> Result<Expression, Error> { // ... } struct Token { // Private implementation detail } } } }
Exercise: Create a Library Management System
Design a module structure for a library system:
// TODO: Create the following module structure: // - books module with Book struct and methods // - members module with Member struct // - loans module for managing book loans // - Use proper visibility modifiers mod books { pub struct Book { // TODO: Add fields (some public, some private) } impl Book { // TODO: Add constructor and methods } } mod members { pub struct Member { // TODO: Add fields } impl Member { // TODO: Add methods } } mod loans { use super::books::Book; use super::members::Member; pub struct Loan { // TODO: Reference a Book and Member } impl Loan { // TODO: Implement loan management } } pub mod library { // TODO: Create a public API that uses the above modules // Re-export necessary types } fn main() { // TODO: Use the library module to: // 1. Create some books // 2. Register members // 3. Create loans // 4. Return books }
Key Takeaways
- Modules organize code into logical units with clear boundaries
- Privacy by default - items are private unless marked `pub`
- The `use` keyword brings items into scope for convenience
- File structure mirrors module structure for large projects
- `pub use` for re-exports creates clean public APIs
- Visibility modifiers (`pub(crate)`, `pub(super)`) provide fine-grained control
- Module design should hide implementation details and expose minimal APIs
- Prelude pattern simplifies imports for users of your crate
Congratulations! You’ve completed Day 2 of the Rust course. You now have a solid understanding of Rust’s advanced features including traits, generics, error handling, iterators, and module organization. These concepts form the foundation for building robust, maintainable Rust applications.
Chapter 13: Hardware Hello - ESP32-C3 Basics
Learning Objectives
This chapter covers:
- Set up ESP32-C3 development environment
- Understand the ESP32-C3 hardware capabilities and built-in sensors
- Create your first embedded Rust program that blinks an LED
- Read temperature from the ESP32-C3’s built-in temperature sensor
- Send data over USB Serial for monitoring
- Understand the basics of embedded program structure and entry points
Welcome to Embedded Rust!
After learning Rust fundamentals, it’s time to apply that knowledge to real hardware. The ESP32-C3 is perfect for learning embedded Rust because it has:
- Built-in temperature sensor - No external components needed!
- USB Serial support - Easy debugging and communication
- WiFi capability - For IoT projects
- Rust-first tooling - Good `esp-hal` and ecosystem support
- RISC-V architecture - Modern, open-source instruction set
Why Start with Hardware?
Many embedded courses start with theory, but we’re jumping straight into practical work - making real hardware do real things. This approach helps you:
- See immediate results (LED blinking, temperature readings)
- Understand constraints early (memory, power, timing)
- Build intuition for embedded programming patterns
- Stay motivated with tangible progress
ESP32-C3 Hardware Overview
The ESP32-C3 is a system-on-chip (SoC) that includes:
┌─────────────────────────────────────┐
│ ESP32-C3 SoC │
│ │
│ ┌─────────────┐ ┌─────────────┐ │
│ │ RISC-V Core │ │ WiFi │ │
│ │ 160 MHz │ │ 802.11 b/g/n│ │
│ └─────────────┘ └─────────────┘ │
│ │
│ ┌─────────────┐ ┌─────────────┐ │
│ │ 320KB │ │ GPIO │ │
│ │ RAM │ │ Pins │ │
│ └─────────────┘ └─────────────┘ │
│ │
│ ┌─────────────┐ ┌─────────────┐ │
│ │ 4MB │ │Temperature │ │
│ │ Flash │ │ Sensor │ │ ← We'll use this!
│ └─────────────┘ └─────────────┘ │
└─────────────────────────────────────┘
Key Features for Our Project:
- Built-in Temperature Sensor: Returns readings in digital format
- USB Serial: Built-in USB-to-serial conversion for easy debugging
- GPIO Pin 8: Usually connected to an LED on development boards
- Low Power: Can run on batteries for IoT applications
Development Environment Setup
Prerequisites
# Install Rust targets for ESP32-C3
rustup target add riscv32imc-unknown-none-elf
# Install cargo-espflash for flashing ESP32-C3
cargo install cargo-espflash
# Install probe-rs for debugging (optional, works best on Linux/macOS)
cargo install probe-rs --features cli
# Install serial monitoring tool (optional, for serial communication)
cargo install serialport-rs
Hardware Requirements
- ESP32-C3 development board (like ESP32-C3-DevKitM-1)
- USB-C cable for programming and power
- Computer with USB port
No external sensors or components needed - we’ll use the built-in temperature sensor!
Your First ESP32 Program: LED Blink
Let’s start with the embedded equivalent of “Hello, World!” - blinking an LED:
```rust
#![no_std]
#![no_main]
#![deny(
    clippy::mem_forget,
    reason = "mem::forget is generally not safe to do with esp_hal types"
)]

use esp_hal::clock::CpuClock;
use esp_hal::gpio::{Level, Output, OutputConfig};
use esp_hal::main;
use esp_hal::time::{Duration, Instant};

#[panic_handler]
fn panic(_: &core::panic::PanicInfo) -> ! {
    loop {}
}

// Required by the ESP-IDF bootloader
esp_bootloader_esp_idf::esp_app_desc!();

#[main]
fn main() -> ! {
    // Initialize hardware
    let config = esp_hal::Config::default().with_cpu_clock(CpuClock::max());
    let peripherals = esp_hal::init(config);

    // Configure GPIO 8 as LED output
    let mut led = Output::new(peripherals.GPIO8, Level::Low, OutputConfig::default());

    // Main loop - blink LED every second
    loop {
        led.set_high();
        let delay_start = Instant::now();
        while delay_start.elapsed() < Duration::from_millis(1000) {}

        led.set_low();
        let delay_start = Instant::now();
        while delay_start.elapsed() < Duration::from_millis(1000) {}
    }
}
```
Understanding the Code
Key Differences from Regular Rust:
- `#![no_std]` - No standard library (no heap, no OS services)
- `#![no_main]` - No traditional main function (embedded entry point)
- `#[main]` - ESP-HAL's main macro for embedded programs
- `#[panic_handler]` - Required to handle panics in no_std
- `-> !` - Function never returns (embedded programs run forever)
Hardware Abstraction:
- `esp_hal::init()` - Initialize hardware with configuration
- `gpio::Output` - Type-safe GPIO pin configuration
- `Instant::now()` and `Duration` - Hardware timer-based timing
Why These Patterns?
- Singleton Pattern: Hardware can only have one owner
- Type Safety: GPIO configuration enforced at compile time
- Zero Cost: Abstractions compile to direct hardware access
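The singleton idea can be modelled in plain std Rust. This is a simplified sketch of the pattern HALs use, not esp-hal's actual implementation - all names here are illustrative:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Simplified model of the peripheral singleton pattern.
// In a real HAL each field is a distinct, move-only peripheral type.
struct Gpio8;

struct Peripherals {
    gpio8: Gpio8,
}

static TAKEN: AtomicBool = AtomicBool::new(false);

impl Peripherals {
    // Hands out the peripherals exactly once per program run
    fn take() -> Option<Peripherals> {
        if TAKEN.swap(true, Ordering::SeqCst) {
            None // already taken - hardware has a single owner
        } else {
            Some(Peripherals { gpio8: Gpio8 })
        }
    }
}

fn main() {
    let first = Peripherals::take();
    let second = Peripherals::take();

    assert!(first.is_some());
    assert!(second.is_none()); // a second caller gets nothing

    // Moving a pin out of the struct transfers ownership of that pin
    let _led_pin = first.unwrap().gpio8;
    println!("singleton enforced");
}
```

Because `take()` can only succeed once, the type system (plus one runtime flag) guarantees no two parts of the program drive the same pin concurrently.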
Reading the Built-in Temperature Sensor
Now let’s read the ESP32-C3’s built-in temperature sensor:
```rust
#![no_std]
#![no_main]
#![deny(
    clippy::mem_forget,
    reason = "mem::forget is generally not safe to do with esp_hal types"
)]

use esp_hal::clock::CpuClock;
use esp_hal::gpio::{Level, Output, OutputConfig};
use esp_hal::main;
use esp_hal::time::{Duration, Instant};
use esp_hal::tsens::{Config, TemperatureSensor};

#[panic_handler]
fn panic(_: &core::panic::PanicInfo) -> ! {
    loop {}
}

// Required by the ESP-IDF bootloader
esp_bootloader_esp_idf::esp_app_desc!();

#[main]
fn main() -> ! {
    // Initialize hardware
    let config = esp_hal::Config::default().with_cpu_clock(CpuClock::max());
    let peripherals = esp_hal::init(config);

    // Initialize GPIO for LED on GPIO8
    let mut led = Output::new(peripherals.GPIO8, Level::Low, OutputConfig::default());

    // Initialize the built-in temperature sensor
    let temp_sensor = TemperatureSensor::new(peripherals.TSENS, Config::default()).unwrap();

    // Track reading count
    let mut _reading_count = 0u32;

    // Main monitoring loop
    loop {
        // Small stabilization delay (recommended by ESP-HAL)
        let delay_start = Instant::now();
        while delay_start.elapsed() < Duration::from_micros(200) {}

        // Read temperature from built-in sensor
        let temperature = temp_sensor.get_temperature();
        let temp_celsius = temperature.to_celsius();
        _reading_count += 1;

        // LED feedback based on temperature threshold (52°C)
        if temp_celsius > 52.0 {
            // Fast blink pattern for high temperature
            led.set_high();
            let blink_start = Instant::now();
            while blink_start.elapsed() < Duration::from_millis(100) {}
            led.set_low();
            let blink_start = Instant::now();
            while blink_start.elapsed() < Duration::from_millis(100) {}
            led.set_high();
            let blink_start = Instant::now();
            while blink_start.elapsed() < Duration::from_millis(100) {}
            led.set_low();
        } else {
            // Slow single blink for normal temperature
            led.set_high();
            let blink_start = Instant::now();
            while blink_start.elapsed() < Duration::from_millis(200) {}
            led.set_low();
        }

        // Wait for the remainder of the 2-second interval
        let wait_start = Instant::now();
        while wait_start.elapsed() < Duration::from_millis(1500) {}
    }
}
```
Understanding Temperature Sensor Code
New Concepts:
- `tsens::TemperatureSensor` - Hardware abstraction for the built-in sensor (requires the `unstable` feature)
- `get_temperature()` - Returns a Temperature struct
- `to_celsius()` - Converts it to a Celsius value
- No external wiring - Sensor is built into the chip!
- Temperature threshold - We use 52°C to trigger fast blinking (you can trigger this by touching the chip)
Data Flow:
Hardware Sensor → ADC → Digital Value → Celsius Conversion → Your Code
LED Status Patterns:
- Normal temp (≤52°C): Single slow blink (200ms)
- High temp (>52°C): Fast double blink pattern (3x100ms blinks)
Building and Running on Hardware
Project Structure
Create a new embedded project:
cargo new --bin temp_monitor
cd temp_monitor
Update Cargo.toml:
```toml
[package]
name = "temp_monitor"
version = "0.1.0"
edition = "2024"
rust-version = "1.88"

[[bin]]
name = "temp_monitor"
path = "./src/bin/main.rs"

[dependencies]
esp-hal = { version = "1.0.0", features = ["esp32c3", "unstable"] }
esp-bootloader-esp-idf = { version = "0.4.0", features = ["esp32c3"] }
critical-section = "1.2.0"

[profile.dev]
# Rust debug is too slow for embedded
opt-level = "s"

[profile.release]
codegen-units = 1
debug = 2
debug-assertions = false
incremental = false
lto = 'fat'
opt-level = 's'
overflow-checks = false
```
Building and Flashing
```bash
# Build and flash to ESP32-C3 (recommended method)
cargo run --release

# Alternative: build first, then flash
cargo build --release
cargo espflash flash --monitor target/riscv32imc-unknown-none-elf/release/temp_monitor
```
Serial Monitoring
Connect to see output:
```bash
# Using cargo-espflash (flashes and shows serial output)
cargo run --release

# Or just monitor serial output (after flashing)
cargo espflash monitor

# Alternative: using screen (macOS/Linux)
screen /dev/cu.usbmodem* 115200
```
Expected Output:
```
ESP32-C3 Temperature Monitor
Built-in sensor initialized
Reading temperature every 2 seconds...
Reading #1: Temperature = 24.3°C
Reading #2: Temperature = 24.5°C
Reading #3: Temperature = 24.1°C
...
Reading #10: Temperature = 24.7°C
Status: 10 readings completed
```
Understanding Embedded Program Structure
Program Lifecycle
```rust
// 1. Hardware initialization
let peripherals = Peripherals::take();      // Get hardware ownership
let clocks = ClockControl::max(...);        // Configure clocks
let delay = Delay::new(&clocks);            // Set up timing

// 2. Peripheral configuration
let io = Io::new(...);                      // Initialize GPIO system
let mut led = Output::new(...);             // Configure specific pins
let mut temp_sensor = TemperatureSensor::new(...); // Set up sensors

// 3. Main application loop
loop {
    // Read sensors
    // Process data
    // Control outputs
    // Timing/delays
}
```
Memory and Resource Management
Key Constraints:
- 320KB RAM - All variables must fit in memory
- No heap allocation - Only stack and static allocation
- No garbage collector - Manual memory management
- Real-time constraints - Delays must be predictable
Best Practices:
- Use `delay.delay_millis()` instead of `std::thread::sleep()`
- Prefer fixed-size arrays over dynamic vectors
- Initialize all peripherals before main loop
- Keep critical timing sections short
Error Handling in Embedded
Embedded Rust uses Result<T, E> even more extensively:
```rust
// Temperature sensor can fail
match temp_sensor.read_celsius() {
    Ok(temperature) => {
        esp_println::println!("Temperature: {:.1}°C", temperature);
    }
    Err(e) => {
        esp_println::println!("Sensor error: {:?}", e);
        // Could enter error state, reset, or retry
    }
}

// Alternative: use expect() for prototype code
let temperature = temp_sensor.read_celsius()
    .expect("Temperature sensor failed");
```
Exercise: Your First Temperature Monitor
Build a basic temperature monitoring system with the ESP32-C3’s built-in sensor.
Requirements
- Hardware Setup: ESP32-C3 development board connected via USB
- Temperature Reading: Use built-in temperature sensor
- LED Status: Visual feedback based on temperature
- Serial Output: Temperature readings every 2 seconds
- Status Reporting: Progress summary every 10 readings
Starting Code
Create src/main.rs with this foundation:
```rust
#![no_std]
#![no_main]

use esp_backtrace as _;
use esp_hal::{
    clock::ClockControl,
    delay::Delay,
    gpio::{Io, Level, Output},
    peripherals::Peripherals,
    prelude::*,
    system::SystemControl,
    temperature_sensor::{TemperatureSensor, TempSensorConfig},
};

#[entry]
fn main() -> ! {
    // TODO: Initialize hardware
    // TODO: Set up temperature sensor

    // TODO: Main monitoring loop
    loop {
        // TODO: Read temperature
        // TODO: Control LED based on temperature
        // TODO: Print reading with status
        // TODO: Wait for next reading
    }
}
```
Implementation Tasks
- Initialize Hardware:

```rust
let peripherals = Peripherals::take();
let system = SystemControl::new(peripherals.SYSTEM);
let clocks = ClockControl::max(system.clock_control).freeze();
let delay = Delay::new(&clocks);
let io = Io::new(peripherals.GPIO, peripherals.IO_MUX);
let mut led = Output::new(io.pins.gpio8, Level::Low);
```

- Configure Temperature Sensor:

```rust
let temp_config = TempSensorConfig::default();
let mut temp_sensor = TemperatureSensor::new(
    peripherals.TEMP_SENSOR,
    temp_config,
);
```
Main Monitoring Loop:
- Read temperature with `temp_sensor.read_celsius()`
- Control LED: fast blink if >25°C, slow if ≤25°C
- Print “Reading #N: Temperature = X.X°C”
- Status summary every 10 readings
- 2-second intervals between readings
- Test on Hardware:
- Build and flash to ESP32-C3
- Verify temperature readings and LED behavior
- Try warming the chip with your finger
Success Criteria
- Program compiles without warnings
- ESP32-C3 boots and shows startup message
- Temperature readings displayed every 2 seconds
- LED blinks with different patterns based on temperature
- Status summary appears every 10 readings
- Temperature values are reasonable (20-40°C typically)
Expected Serial Output
```
ESP32-C3 Temperature Monitor
Built-in sensor initialized
Reading temperature every 2 seconds...
Reading #1: Temperature = 24.3°C
Reading #2: Temperature = 24.5°C
Reading #3: Temperature = 24.1°C
Reading #4: Temperature = 24.8°C
Reading #5: Temperature = 25.2°C  ← LED should blink faster now
...
Reading #10: Temperature = 24.7°C
Status: 10 readings completed
Reading #11: Temperature = 24.9°C
...
```
Extension Challenges
- Temperature Threshold: Make threshold adjustable via const
- LED Patterns: Different patterns for different temperature ranges
- Statistics: Track min/max temperatures
- Timing: More precise 2-second intervals
- Error Handling: Handle sensor reading failures gracefully
Troubleshooting Tips
Build Errors:
- Ensure the target is installed: `rustup target add riscv32imc-unknown-none-elf`
- Check that feature flags match your ESP32-C3 variant
Flash Errors:
- Ensure cargo-espflash is installed: `cargo install cargo-espflash`
- Check USB cable and ESP32-C3 connection
- Try: `cargo espflash flash target/riscv32imc-unknown-none-elf/release/temp_monitor`
No Serial Output:
- Verify baud rate (115200)
- Try different serial monitor tools
- Check USB device enumeration
Sensor Issues:
- Temperature readings should be 20-40°C typically
- Values outside this range might indicate calibration issues
- Warm the chip gently with your finger to test responsiveness
Key Takeaways
✅ Hardware First: Starting with real hardware creates immediate engagement and practical learning
✅ Built-in Sensors: ESP32-C3’s temperature sensor eliminates external component complexity
✅ Embedded Patterns: #[no_std], #[no_main], and loop are fundamental embedded concepts
✅ Real-time Constraints: Understanding timing and resource limitations from the start
✅ Type Safety: Rust’s ownership system prevents common embedded bugs even on bare metal
✅ Immediate Feedback: LED status and serial output provide instant verification of functionality
ESP32-C3 Troubleshooting Guide
Hardware Issues
Device Not Found / Flashing Fails:
- Check USB-C cable is properly connected
- Try a different USB-C cable (some are power-only)
- Press and hold BOOT button while connecting USB
- Check device enumeration: `ls /dev/cu.*` (macOS) or `ls /dev/ttyUSB*` (Linux)
- Install USB drivers if needed: `brew install --cask silicon-labs-vcp-driver` (macOS)
No Serial Output:
- Verify baud rate is 115200
- Try a different terminal: `screen /dev/cu.usbmodem* 115200`
- Check if the device is already open in another terminal
- Press RESET button on ESP32-C3 to restart program
Sensor Readings Look Wrong:
- Temperature should be 20-40°C typically for room temperature
- Very high values (>80°C) may indicate calibration issues
- Try warming chip gently with finger to test responsiveness
- Compare with room thermometer for validation
Software Issues
Build Errors:
```bash
# Install required targets and tools
rustup target add riscv32imc-unknown-none-elf
cargo install cargo-espflash
cargo install probe-rs --features cli

# Update tools if outdated
cargo install-update -a
```
Linker Errors:
- Check Cargo.toml dependencies match examples exactly
- Verify feature flags: `features = ["esp32c3", "unstable"]`
- Clean and rebuild: `cargo clean && cargo build`
Runtime Panics:
- Check temperature sensor initialization succeeds
- Verify GPIO pin 8 is available (built-in LED)
- Add more delay if sensor readings fail intermittently
Performance Issues:
- Use `opt-level = "s"` in Cargo.toml for size optimization
- Debug builds are very slow - always test with `--release`
- Monitor memory usage if you experience strange behavior
Development Tips
Faster Development Cycle:
- Use `cargo run --release` for combined build + flash + monitor
- Keep one terminal open for monitoring, another for building
- Save modified code before flashing (auto-save recommended)
Serial Monitoring:
```bash
# Built-in monitoring
cargo espflash monitor

# External tools
screen /dev/cu.usbmodem* 115200   # macOS/Linux
picocom /dev/ttyUSB0 -b 115200    # Linux alternative
# Exit screen: Ctrl+A then K, then Y
```
When Things Go Wrong:
- Try different USB cable/port
- Power cycle ESP32-C3 (unplug + replug)
- Press RESET button
- Clean build: `cargo clean`
- Check for conflicting cargo processes: `pkill cargo`
Common Error Messages
espflash::connection_failed:
- Device not in bootloader mode
- Wrong serial port selected
- Driver issues
failed to parse elf:
- Build failed but cargo didn’t catch it
- Run `cargo build` first to see the actual error
- Check that the target architecture matches
timer not found:
- Old esp-hal version - update dependencies
- Feature flag mismatch in Cargo.toml
If problems persist, check the ESP32-C3 documentation and esp-rs community.
Next: In Chapter 14, we’ll build proper data structures for storing and processing these temperature readings using embedded-friendly no_std patterns.
Chapter 14: Embedded Foundations - no_std from the Start
Learning Objectives
This chapter covers:
- Understand the difference between the `core`, `alloc`, and `std` libraries
- Create temperature data structures that work in embedded environments
- Use heapless collections for fixed-capacity storage
- Implement const functions for compile-time configuration
- Build a circular buffer for continuous sensor data collection
- Calculate statistics without dynamic allocation
Task: Build Memory-Efficient Temperature Storage
In Chapter 13, we successfully read temperature values from the ESP32-C3’s built-in sensor. Now we need to build a system that stores and analyzes those readings efficiently.
Your Mission:
- Store multiple readings in a fixed-size circular buffer
- Calculate statistics (average, min, max) without heap allocation
- Use only 2 bytes per temperature reading (vs 4 bytes for f32)
- Handle buffer overflow gracefully with circular behavior
- Monitor memory usage and system performance
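The circular-buffer behavior in the mission can be sketched as a desktop-runnable snippet. This uses a plain array in place of `heapless::Vec`; the field names and layout are illustrative assumptions, not the chapter's final implementation:

```rust
// Sketch of "overwrite oldest when full" circular push behavior.
struct Ring<const N: usize> {
    data: [i16; N], // tenths of a degree, fixed capacity
    len: usize,     // how many slots are occupied
    total: u32,     // total readings ever pushed
}

impl<const N: usize> Ring<N> {
    const fn new() -> Self {
        Self { data: [0; N], len: 0, total: 0 }
    }

    fn push(&mut self, v: i16) {
        if self.len < N {
            // Buffer not full yet: append
            self.data[self.len] = v;
            self.len += 1;
        } else {
            // Buffer full: overwrite the oldest slot
            self.data[(self.total as usize) % N] = v;
        }
        self.total += 1;
    }
}

fn main() {
    let mut r = Ring::<3>::new();
    for v in [1i16, 2, 3, 4] {
        r.push(v);
    }
    // Slot 0 held the oldest value (1) and was overwritten by 4
    assert_eq!(r.data, [4, 2, 3]);
    assert_eq!(r.total, 4);
}
```

The memory cost is fixed at compile time (`N × 2` bytes plus two counters), which is exactly the predictability embedded systems need.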
Why This Matters: This chapter teaches essential embedded patterns:
- Memory-efficient data structures
- Fixed-capacity collections with `heapless`
- Const generics for compile-time configuration
- Statistics without dynamic allocation
Understanding no_std: The Embedded Reality
Why no_std?
Desktop programs can use:
- Unlimited memory (well, gigabytes via virtual memory)
- Dynamic allocation (`Vec`, `HashMap`, `String`)
- Operating system services (files, network, threads)
Embedded programs must work with:
- Fixed memory (320KB RAM total on ESP32-C3)
- No heap allocator (or very limited heap)
- No operating system (we are the operating system!)
```rust
// ❌ This won't work in no_std embedded
use std::collections::HashMap;
use std::vec::Vec;

fn desktop_approach() {
    let mut readings = Vec::new();    // Heap allocation
    let mut sensors = HashMap::new(); // Dynamic sizing
    readings.push(23.5);              // Can grow infinitely
    sensors.insert("temp1", 24.1);    // Hash table overhead
}

// ✅ This is the embedded way
use heapless::Vec;
use heapless::FnvIndexMap;

fn embedded_approach() {
    let mut readings: Vec<f32, 32> = Vec::new();                     // Fixed capacity
    let mut sensors: FnvIndexMap<&str, f32, 8> = FnvIndexMap::new(); // Known limits
    readings.push(23.5).ok();           // Handles full buffer
    sensors.insert("temp1", 24.1).ok(); // Graceful failure
}
```
The Three-Layer Architecture
Rust’s libraries are organized in layers:
```
┌─────────────────────────────────────┐
│                std                  │
│  File I/O, networking, threads,     │ ← Desktop applications
│  HashMap, process management        │
├─────────────────────────────────────┤
│               alloc                 │
│  Vec, String, Box, Rc,              │ ← Embedded with heap
│  heap-allocated collections         │
├─────────────────────────────────────┤
│               core                  │
│  Option, Result, Iterator,          │ ← Minimal embedded
│  basic traits, no allocation        │
└─────────────────────────────────────┘
```
For our ESP32-C3 project, we’ll use core + heapless collections.
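As a small illustration of the bottom layer: logic written against `core` alone — iterators, `Option`, fixed-size arrays — compiles unchanged in a `no_std` build, and can still be run on desktop because `std` re-exports `core`'s items. This sketch assumes tenths-of-a-degree `i16` readings:

```rust
// This function uses only items from `core` (Iterator, Option),
// so it works identically with and without #![no_std].
fn max_reading(readings: &[i16]) -> Option<i16> {
    readings.iter().copied().max()
}

fn main() {
    // A fixed-size array: no Vec, no heap.
    let readings: [i16; 4] = [235, 241, 228, 239];
    assert_eq!(max_reading(&readings), Some(241));
    assert_eq!(max_reading(&[]), None);
}
```

Code structured this way is also the easiest to test on desktop, a point Chapter 15 develops further.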
Creating an Embedded Temperature Type
Let’s build a temperature type designed for embedded use:
```rust
use core::fmt;

/// Temperature reading optimized for embedded systems
#[derive(Debug, Clone, Copy, PartialEq)]
pub struct Temperature {
    // Store as i16 to save memory (16-bit vs 32-bit f32)
    // Resolution: 0.1°C, Range: -3276.8°C to +3276.7°C
    // More than enough for ESP32-C3's typical -40°C to +125°C range
    celsius_tenths: i16,
}

impl Temperature {
    /// Create temperature from Celsius value
    pub const fn from_celsius(celsius: f32) -> Self {
        Self {
            celsius_tenths: (celsius * 10.0) as i16,
        }
    }

    /// Create temperature from raw ESP32 sensor reading
    pub const fn from_sensor_raw(raw_value: u16) -> Self {
        // ESP32-C3 temperature sensor specific conversion
        // This is a simplified conversion - real implementation depends on calibration
        let celsius = (raw_value as f32 - 1000.0) / 10.0;
        Self::from_celsius(celsius)
    }

    /// Get temperature as Celsius f32
    pub fn celsius(&self) -> f32 {
        self.celsius_tenths as f32 / 10.0
    }

    /// Get temperature as Fahrenheit f32
    pub fn fahrenheit(&self) -> f32 {
        self.celsius() * 9.0 / 5.0 + 32.0
    }

    /// Check if temperature is within normal range
    pub const fn is_normal_range(&self) -> bool {
        // Normal room temperature: 15-35°C
        self.celsius_tenths >= 150 && self.celsius_tenths <= 350
    }

    /// Check if temperature is too high (potential overheating)
    pub const fn is_overheating(&self) -> bool {
        self.celsius_tenths > 1000 // > 100°C
    }
}

// Implement Display for serial output
impl fmt::Display for Temperature {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{:.1}°C", self.celsius())
    }
}

// Example usage in embedded code
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_temperature_creation() {
        let temp = Temperature::from_celsius(23.5);
        assert_eq!(temp.celsius(), 23.5);
        // Compare floats with a tolerance rather than exact equality
        assert!((temp.fahrenheit() - 74.3).abs() < 0.01);
        assert!(temp.is_normal_range());
        assert!(!temp.is_overheating());
    }

    #[test]
    fn test_memory_efficiency() {
        // Temperature struct should be small - just 2 bytes!
        assert_eq!(core::mem::size_of::<Temperature>(), 2);
    }
}
```
Why This Design?
Memory Efficiency:
- `i16` (2 bytes) instead of `f32` (4 bytes) saves 50% memory
- 0.1°C resolution is more than adequate for most applications
- Fits in CPU registers for fast operations
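A minimal desktop-runnable sketch of the tenths-of-a-degree encoding behind these numbers (the helper names are illustrative):

```rust
// Encode/decode Celsius as tenths of a degree in an i16.
fn encode(celsius: f32) -> i16 { (celsius * 10.0) as i16 }
fn decode(tenths: i16) -> f32 { tenths as f32 / 10.0 }

fn main() {
    let t = encode(23.5);
    assert_eq!(t, 235);                       // 0.1 °C resolution
    assert_eq!(std::mem::size_of_val(&t), 2); // half the size of an f32
    assert!((decode(t) - 23.5).abs() < 0.05); // round-trips within resolution
}
```

The trade-off: values beyond ±3276.7 °C cannot be represented, and sub-0.1 °C precision is deliberately discarded — both acceptable for this sensor's range.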
Const Functions:
- `const fn from_celsius()` - Computed at compile time
- `const fn is_normal_range()` - Zero runtime cost
- Perfect for configuration and thresholds
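A quick sketch of what "computed at compile time" buys you: the constant below is fully evaluated during compilation, so there is no runtime cost at all (integer math is used here for brevity; the chapter's code uses `f32`):

```rust
// Const-evaluated threshold: the multiplication happens at compile time.
const fn celsius_to_tenths(celsius: i32) -> i16 {
    (celsius * 10) as i16
}

// Baked into the binary as the literal 520 - no runtime computation.
const WARNING_TENTHS: i16 = celsius_to_tenths(52);

fn main() {
    assert_eq!(WARNING_TENTHS, 520);
}
```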
No Heap Usage:
- Copy trait means values are stack-allocated
- No hidden allocations or indirection
Heapless Collections for Sensor Data
Now let’s store multiple temperature readings efficiently:
```rust
use core::fmt;
use heapless::Vec;

/// Fixed-capacity temperature buffer for embedded systems
pub struct TemperatureBuffer<const N: usize> {
    readings: Vec<Temperature, N>,
    total_readings: u32, // Track total for statistics
}

impl<const N: usize> TemperatureBuffer<N> {
    /// Create new buffer with compile-time capacity
    pub const fn new() -> Self {
        Self {
            readings: Vec::new(),
            total_readings: 0,
        }
    }

    /// Add a temperature reading (circular buffer behavior)
    pub fn push(&mut self, temperature: Temperature) {
        if self.readings.len() < N {
            // Buffer not full yet - just add
            self.readings.push(temperature).ok();
        } else {
            // Buffer full - use circular indexing (more efficient than remove(0))
            let oldest_index = (self.total_readings as usize) % N;
            self.readings[oldest_index] = temperature;
        }
        self.total_readings += 1;
    }

    /// Get current number of readings
    pub fn len(&self) -> usize {
        self.readings.len()
    }

    /// Check if buffer is empty
    pub fn is_empty(&self) -> bool {
        self.readings.is_empty()
    }

    /// Get buffer capacity
    pub const fn capacity(&self) -> usize {
        N
    }

    /// Get the latest reading
    /// Note: once the buffer has wrapped, entries are no longer in
    /// chronological order, so first/last only reflect insertion slots.
    pub fn latest(&self) -> Option<Temperature> {
        self.readings.last().copied()
    }

    /// Get the oldest reading in buffer (same wraparound caveat as latest())
    pub fn oldest(&self) -> Option<Temperature> {
        self.readings.first().copied()
    }

    /// Calculate average temperature
    pub fn average(&self) -> Option<Temperature> {
        if self.readings.is_empty() {
            return None;
        }
        let sum: i32 = self.readings.iter()
            .map(|t| t.celsius_tenths as i32)
            .sum();
        let avg_tenths = sum / self.readings.len() as i32;
        Some(Temperature { celsius_tenths: avg_tenths as i16 })
    }

    /// Find minimum temperature in buffer
    pub fn min(&self) -> Option<Temperature> {
        self.readings.iter()
            .min_by_key(|t| t.celsius_tenths)
            .copied()
    }

    /// Find maximum temperature in buffer
    pub fn max(&self) -> Option<Temperature> {
        self.readings.iter()
            .max_by_key(|t| t.celsius_tenths)
            .copied()
    }

    /// Get total readings processed (including overwritten ones)
    pub fn total_readings(&self) -> u32 {
        self.total_readings
    }

    /// Clear all readings
    pub fn clear(&mut self) {
        self.readings.clear();
        self.total_readings = 0;
    }

    /// Get statistics summary
    pub fn stats(&self) -> Option<TemperatureStats> {
        if self.readings.is_empty() {
            return None;
        }
        Some(TemperatureStats {
            count: self.readings.len(),
            total_count: self.total_readings,
            average: self.average()?,
            min: self.min()?,
            max: self.max()?,
        })
    }
}

/// Statistics summary for temperature readings
#[derive(Debug, Clone, Copy)]
pub struct TemperatureStats {
    pub count: usize,     // Current readings in buffer
    pub total_count: u32, // Total readings ever processed
    pub average: Temperature,
    pub min: Temperature,
    pub max: Temperature,
}

impl fmt::Display for TemperatureStats {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(
            f,
            "Stats: {} readings (total: {}), Avg: {}, Min: {}, Max: {}",
            self.count, self.total_count, self.average, self.min, self.max
        )
    }
}
```
Understanding Heapless Collections
Key Differences from std:
| Feature | std::Vec | heapless::Vec |
|---|---|---|
| Capacity | Dynamic (grows) | Fixed at compile time |
| Memory | Heap allocated | Stack or static |
| Failure | Panic on OOM | Returns Result |
| Performance | Allocation overhead | Zero allocation |
When to Use Each Pattern:
```rust
// ✅ Use const generics for compile-time capacity
type SmallBuffer = TemperatureBuffer<16>;  // 16 readings max
type LargeBuffer = TemperatureBuffer<128>; // 128 readings max

// ✅ Handle full buffer gracefully
let mut buffer = TemperatureBuffer::<10>::new();
for i in 0..20 {
    let temp = Temperature::from_celsius(20.0 + i as f32);
    buffer.push(temp); // Automatically overwrites oldest when full
}

// ✅ Check capacity and adjust behavior
if buffer.len() >= buffer.capacity() {
    esp_println::println!("Buffer full, overwriting oldest data");
}
```
Const Configuration for Embedded Systems
Embedded systems benefit from compile-time configuration:
```rust
/// System configuration computed at compile time
pub struct SystemConfig;

impl SystemConfig {
    /// ESP32-C3 system clock frequency
    pub const CLOCK_HZ: u32 = 160_000_000; // 160 MHz

    /// Temperature monitoring configuration
    pub const TEMP_SAMPLE_RATE_HZ: u32 = 1;         // 1 reading per second
    pub const TEMP_BUFFER_SIZE: usize = 60;         // 1 minute of readings
    pub const TEMP_WARNING_THRESHOLD: f32 = 52.0;   // 52°C warning threshold
    pub const TEMP_CRITICAL_THRESHOLD: f32 = 100.0; // 100°C critical threshold

    /// Calculate timer interval for sampling rate
    pub const fn sample_interval_ms() -> u32 {
        1000 / Self::TEMP_SAMPLE_RATE_HZ
    }

    /// Create temperature thresholds at compile time
    pub const fn warning_threshold() -> Temperature {
        Temperature::from_celsius(Self::TEMP_WARNING_THRESHOLD)
    }

    pub const fn critical_threshold() -> Temperature {
        Temperature::from_celsius(Self::TEMP_CRITICAL_THRESHOLD)
    }

    /// Validate buffer size is reasonable
    pub const fn validate_buffer_size() -> bool {
        // Buffer should hold 1-300 seconds of data
        Self::TEMP_BUFFER_SIZE >= Self::TEMP_SAMPLE_RATE_HZ as usize
            && Self::TEMP_BUFFER_SIZE <= (Self::TEMP_SAMPLE_RATE_HZ * 300) as usize
    }
}

// Compile-time assertions (will fail at compile time if invalid)
const _: () = assert!(SystemConfig::validate_buffer_size());
const _: () = assert!(SystemConfig::TEMP_SAMPLE_RATE_HZ > 0);
const _: () = assert!(SystemConfig::TEMP_BUFFER_SIZE > 0);

// Pre-computed constants (zero runtime cost)
pub const SAMPLE_INTERVAL: u32 = SystemConfig::sample_interval_ms();
pub const WARNING_TEMP: Temperature = SystemConfig::warning_threshold();
pub const CRITICAL_TEMP: Temperature = SystemConfig::critical_threshold();
```
Integrating with ESP32-C3 Hardware
Let’s update our temperature monitor to use these new data structures:
```rust
#![no_std]
#![no_main]
#![deny(
    clippy::mem_forget,
    reason = "mem::forget is generally not safe to do with esp_hal types"
)]

use esp_hal::clock::CpuClock;
use esp_hal::gpio::{Level, Output, OutputConfig};
use esp_hal::main;
use esp_hal::time::{Duration, Instant};
use esp_hal::tsens::{Config, TemperatureSensor};
use heapless::Vec;

// Temperature types from earlier in this chapter

/// Temperature reading optimized for embedded systems
#[derive(Debug, Clone, Copy, PartialEq)]
struct Temperature {
    celsius_tenths: i16,
}

impl Temperature {
    const fn from_celsius(celsius: f32) -> Self {
        Self {
            celsius_tenths: (celsius * 10.0) as i16,
        }
    }

    fn celsius(&self) -> f32 {
        self.celsius_tenths as f32 / 10.0
    }

    fn fahrenheit(&self) -> f32 {
        self.celsius() * 9.0 / 5.0 + 32.0
    }

    const fn is_normal_range(&self) -> bool {
        // Normal room temperature: 15-35°C
        self.celsius_tenths >= 150 && self.celsius_tenths <= 350
    }

    const fn is_overheating(&self) -> bool {
        self.celsius_tenths > 1000 // > 100°C
    }
}

/// Fixed-capacity temperature buffer
struct TemperatureBuffer<const N: usize> {
    readings: Vec<Temperature, N>,
    total_readings: u32,
}

impl<const N: usize> TemperatureBuffer<N> {
    const fn new() -> Self {
        Self {
            readings: Vec::new(),
            total_readings: 0,
        }
    }

    fn push(&mut self, temperature: Temperature) {
        if self.readings.len() < N {
            self.readings.push(temperature).ok();
        } else {
            // Circular buffer - overwrite oldest
            let oldest_index = (self.total_readings as usize) % N;
            self.readings[oldest_index] = temperature;
        }
        self.total_readings += 1;
    }

    fn total_readings(&self) -> u32 {
        self.total_readings
    }

    fn stats(&self) -> Option<TemperatureStats> {
        if self.readings.is_empty() {
            return None;
        }
        let sum: i32 = self.readings.iter()
            .map(|t| t.celsius_tenths as i32)
            .sum();
        let avg_tenths = sum / self.readings.len() as i32;
        let average = Temperature { celsius_tenths: avg_tenths as i16 };
        let min = *self.readings.iter().min_by_key(|t| t.celsius_tenths)?;
        let max = *self.readings.iter().max_by_key(|t| t.celsius_tenths)?;
        Some(TemperatureStats {
            count: self.readings.len(),
            total_count: self.total_readings,
            average,
            min,
            max,
        })
    }
}

#[derive(Debug, Clone, Copy)]
struct TemperatureStats {
    count: usize,
    total_count: u32,
    average: Temperature,
    min: Temperature,
    max: Temperature,
}

const BUFFER_SIZE: usize = 20;
const SAMPLE_INTERVAL_MS: u64 = 1000; // 1 second

#[panic_handler]
fn panic(info: &core::panic::PanicInfo) -> ! {
    esp_println::println!("💥 SYSTEM PANIC: {}", info);
    loop {}
}

esp_bootloader_esp_idf::esp_app_desc!();

#[main]
fn main() -> ! {
    // Initialize hardware
    let config = esp_hal::Config::default().with_cpu_clock(CpuClock::max());
    let peripherals = esp_hal::init(config);

    // Initialize GPIO for LED on GPIO8
    let mut led = Output::new(peripherals.GPIO8, Level::Low, OutputConfig::default());

    // Initialize the built-in temperature sensor
    let temp_sensor = TemperatureSensor::new(peripherals.TSENS, Config::default()).unwrap();

    // Create fixed-capacity temperature buffer
    let mut temp_buffer = TemperatureBuffer::<BUFFER_SIZE>::new();

    // Startup messages
    esp_println::println!("ESP32-C3 Temperature Monitor with Data Storage");
    esp_println::println!("Buffer capacity: {} readings", BUFFER_SIZE);
    esp_println::println!("Sample rate: {} second intervals", SAMPLE_INTERVAL_MS / 1000);
    esp_println::println!(
        "Temperature stored as {} bytes per reading",
        core::mem::size_of::<Temperature>()
    );
    esp_println::println!();

    // Main monitoring loop
    loop {
        // Small stabilization delay (recommended by ESP-HAL)
        let delay_start = Instant::now();
        while delay_start.elapsed() < Duration::from_micros(200) {}

        // Read temperature from built-in sensor
        let esp_temperature = temp_sensor.get_temperature();
        let temp_celsius = esp_temperature.to_celsius();
        let temperature = Temperature::from_celsius(temp_celsius);

        // Store in buffer
        temp_buffer.push(temperature);

        // LED status based on temperature
        if temperature.is_overheating() {
            // Rapid triple blink for overheating (>100°C)
            for _ in 0..3 {
                led.set_high();
                let blink_start = Instant::now();
                while blink_start.elapsed() < Duration::from_millis(100) {}
                led.set_low();
                let blink_start = Instant::now();
                while blink_start.elapsed() < Duration::from_millis(100) {}
            }
        } else if !temperature.is_normal_range() {
            // Double blink for out of normal range (not 15-35°C)
            led.set_high();
            let blink_start = Instant::now();
            while blink_start.elapsed() < Duration::from_millis(150) {}
            led.set_low();
            let blink_start = Instant::now();
            while blink_start.elapsed() < Duration::from_millis(100) {}
            led.set_high();
            let blink_start = Instant::now();
            while blink_start.elapsed() < Duration::from_millis(150) {}
            led.set_low();
        } else {
            // Single blink for normal temperature
            led.set_high();
            let blink_start = Instant::now();
            while blink_start.elapsed() < Duration::from_millis(200) {}
            led.set_low();
        }

        // Print current reading
        esp_println::println!(
            "Reading #{}: {:.1}°C ({:.1}°F)",
            temp_buffer.total_readings(),
            temperature.celsius(),
            temperature.fahrenheit()
        );

        // Print statistics every 5 readings
        if temp_buffer.total_readings() % 5 == 0 {
            if let Some(stats) = temp_buffer.stats() {
                esp_println::println!(
                    "Stats: {} readings (total: {}), Avg: {:.1}°C, Min: {:.1}°C, Max: {:.1}°C",
                    stats.count,
                    stats.total_count,
                    stats.average.celsius(),
                    stats.min.celsius(),
                    stats.max.celsius()
                );

                // Memory usage info
                let buffer_size = core::mem::size_of::<TemperatureBuffer<BUFFER_SIZE>>();
                esp_println::println!(
                    "Memory: Buffer using {} of {} slots ({} bytes total)",
                    stats.count,
                    BUFFER_SIZE,
                    buffer_size
                );

                // Buffer status
                if stats.count >= BUFFER_SIZE {
                    esp_println::println!(
                        "Buffer full - circular mode active (overwriting oldest data)"
                    );
                }
                esp_println::println!();
            }
        }

        // Wait for next sample
        let wait_start = Instant::now();
        while wait_start.elapsed() < Duration::from_millis(SAMPLE_INTERVAL_MS) {}
    }
}
```

Exercise: Temperature Data Collection System
Build an embedded data collection system that stores and analyzes temperature readings.
Requirements
- Temperature Type: Create an efficient embedded temperature representation
- Circular Buffer: Fixed-capacity storage with automatic oldest-data replacement
- Statistics: Real-time calculation of min, max, average
- Configuration: Compile-time system parameters
- Memory Efficiency: Minimize RAM usage while maintaining functionality
Starting Project Structure
Create new module files:

```rust
// src/temperature.rs
#![no_std]

use core::fmt;
use heapless::Vec;

#[derive(Debug, Clone, Copy, PartialEq)]
pub struct Temperature {
    // TODO: Implement memory-efficient temperature storage
}

impl Temperature {
    pub const fn from_celsius(celsius: f32) -> Self {
        // TODO: Convert f32 to efficient internal representation
        unimplemented!()
    }

    pub fn celsius(&self) -> f32 {
        // TODO: Convert back to f32
        unimplemented!()
    }

    pub const fn is_overheating(&self) -> bool {
        // TODO: Check if temperature > 100°C
        unimplemented!()
    }
}

pub struct TemperatureBuffer<const N: usize> {
    // TODO: Implement fixed-capacity circular buffer
}

impl<const N: usize> TemperatureBuffer<N> {
    pub const fn new() -> Self {
        // TODO: Initialize empty buffer
        unimplemented!()
    }

    pub fn push(&mut self, temperature: Temperature) {
        // TODO: Add reading with circular buffer behavior
        unimplemented!()
    }

    pub fn stats(&self) -> Option<TemperatureStats> {
        // TODO: Calculate min, max, average
        unimplemented!()
    }
}

#[derive(Debug, Clone, Copy)]
pub struct TemperatureStats {
    pub count: usize,
    pub average: Temperature,
    pub min: Temperature,
    pub max: Temperature,
}
```

```rust
// src/main.rs
#![no_std]
#![no_main]

mod temperature;
use temperature::{Temperature, TemperatureBuffer};

#[entry]
fn main() -> ! {
    // TODO: Initialize hardware (from Chapter 13)
    // TODO: Create temperature buffer with capacity 20

    loop {
        // TODO: Read temperature sensor
        // TODO: Store in buffer
        // TODO: Display statistics every 5 readings
        // TODO: LED status based on temperature
        // TODO: Wait 2 seconds between readings
    }
}
```
Implementation Tasks
- Efficient Temperature Type:
  - Use `i16` to store temperature × 10 (0.1°C resolution)
  - Implement `from_celsius()` and `celsius()` conversion
  - Add `is_overheating()` check for > 100°C
  - Implement the `Display` trait for printing
- Circular Buffer Implementation:
  - Use `heapless::Vec<Temperature, N>` for storage
  - Implement `push()` with oldest-data replacement when full
  - Track total readings processed
  - Add `len()`, `capacity()`, `latest()` methods
- Statistics Calculation:
  - Implement `min()`, `max()`, `average()` functions
  - Create a `TemperatureStats` struct
  - Handle the empty-buffer case gracefully
  - Use efficient integer-based calculations
- Integration Testing:
- Build and flash to ESP32-C3
- Verify buffer behavior and statistics
- Test with temperature changes
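The "efficient integer-based calculations" point can be sketched like this — averaging `i16` tenths with `i32` arithmetic and no floating point (desktop-runnable; names are illustrative, not the exercise solution):

```rust
// Integer-only average over tenths-of-a-degree readings.
// Summing into i32 avoids i16 overflow for any realistic buffer size.
fn average_tenths(readings: &[i16]) -> Option<i16> {
    if readings.is_empty() {
        return None; // empty buffer has no average
    }
    let sum: i32 = readings.iter().map(|&t| t as i32).sum();
    Some((sum / readings.len() as i32) as i16)
}

fn main() {
    // 24.3, 24.5, 24.1 °C stored as tenths → average 24.3 °C
    assert_eq!(average_tenths(&[243, 245, 241]), Some(243));
    assert_eq!(average_tenths(&[]), None);
}
```

Integer division truncates toward zero, which costs at most 0.1 °C of accuracy — well within the representation's resolution.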
Expected Output
```
ESP32-C3 Temperature Monitor with Data Storage
Sample rate: 1 Hz
Buffer capacity: 20 readings

Reading #1: 24.3°C
Reading #2: 24.5°C
Reading #3: 24.1°C
Reading #4: 24.8°C
Reading #5: 25.2°C
Stats: 5 readings, Avg: 24.6°C, Min: 24.1°C, Max: 25.2°C
Memory: Buffer using 5 of 20 slots
...
Reading #25: 24.7°C
Stats: 20 readings, Avg: 24.4°C, Min: 23.8°C, Max: 25.3°C
Memory: Buffer using 20 of 20 slots (circular mode active)
```
Success Criteria
- Temperature stored efficiently in 2 bytes per reading
- Buffer correctly implements circular behavior when full
- Statistics calculated accurately without floating-point overhead
- LED indicates overheating condition
- Memory usage is predictable and bounded
- No heap allocation or dynamic memory
Extension Challenges
- Compile-time Configuration: Move buffer size and thresholds to const
- Temperature Trends: Track if temperature is rising or falling
- Alarm Conditions: Multiple threshold levels with different LED patterns
- Data Persistence: Retain readings across ESP32 resets (use RTC memory)
- Memory Analysis: Measure actual RAM usage of data structures
Understanding Memory Usage
```rust
// Check memory footprint of your types
const TEMP_SIZE: usize = core::mem::size_of::<Temperature>();
const BUFFER_SIZE: usize = core::mem::size_of::<TemperatureBuffer<20>>();
const STATS_SIZE: usize = core::mem::size_of::<TemperatureStats>();

esp_println::println!("Memory usage:");
esp_println::println!("  Temperature: {} bytes", TEMP_SIZE);
esp_println::println!("  Buffer (20 readings): {} bytes", BUFFER_SIZE);
esp_println::println!("  Stats: {} bytes", STATS_SIZE);
esp_println::println!("  Total: {} bytes", BUFFER_SIZE + STATS_SIZE);
```
Target: Less than 100 bytes total for 20 temperature readings + metadata.
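You can sanity-check that budget on desktop with `std::mem::size_of`. This sketch assumes the `i16`-based `Temperature` and models the 20-slot buffer with plain types (`heapless::Vec<T, N>` stores roughly `[T; N]` plus a length field, so the real figure will be close but not identical):

```rust
// Stand-ins for the chapter's types, for a rough size estimate only.
struct Temperature {
    _celsius_tenths: i16,
}

struct Buffer {
    _readings: [Temperature; 20], // 20 × 2 = 40 bytes of payload
    _len: usize,                  // occupancy counter
    _total: u32,                  // total readings ever pushed
}

fn main() {
    assert_eq!(std::mem::size_of::<Temperature>(), 2);
    let total = std::mem::size_of::<Buffer>();
    println!("estimated buffer size: {total} bytes");
    assert!(total < 100); // 40 bytes of readings + counters + padding
}
```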
Key Takeaways
✅ Memory Efficiency: Using i16 instead of f32 saves 50% memory without losing precision
✅ Fixed Allocation: heapless::Vec provides dynamic behavior with static memory
✅ Const Configuration: Compile-time parameters eliminate runtime overhead
✅ Circular Buffers: Essential pattern for continuous data collection in embedded systems
✅ Statistical Processing: Can calculate aggregates efficiently without external libraries
✅ Type Safety: Rust’s type system prevents common embedded errors like buffer overflows
Next: In Chapter 15, we’ll add proper testing strategies for embedded code, including how to test no_std code on desktop and validate hardware behavior.
Chapter 15: Testing Embedded Code
Learning Objectives
This chapter covers:
- Test no_std code on your desktop using conditional compilation
- Create hardware abstraction layers (HAL) for testable embedded code
- Write unit tests for temperature data structures and algorithms
- Mock hardware dependencies for isolated testing
- Use integration tests to validate ESP32-C3 behavior
- Debug embedded code efficiently using both tests and hardware
Task: Test Embedded Code on Desktop
Building on chapters 13-14, where we built temperature monitoring with custom data structures, we now need to ensure that code is robust and correct.
Your Mission:
- Test no_std code on desktop using conditional compilation
- Mock hardware dependencies (temperature sensor, GPIO) for isolated testing
- Validate algorithms (circular buffer, statistics) without hardware
- Create testable abstractions that work both on embedded targets and on desktop
- Add comprehensive test coverage including edge cases and error conditions
Why This Matters:
- Faster development: Test business logic without flashing hardware
- Better reliability: Catch bugs before they reach embedded systems
- Easier debugging: Desktop tools are more powerful than embedded debuggers
- Continuous Integration: Automated testing in CI/CD pipelines
The Challenge:
- Code runs on ESP32-C3 (RISC-V), but tests run on desktop (x86/ARM)
- No access to GPIO, sensors, or timers in test environment
- Need to test no_std code using std tools
Conditional Compilation Strategy
The key insight: Your business logic doesn’t need hardware to be tested.
```rust
// This works in both embedded and test environments
#[cfg(test)]
use std::vec::Vec; // Tests can use std

#[cfg(not(test))]
use heapless::Vec; // Embedded uses heapless

// The rest of your code works with either Vec!
fn calculate_average(readings: &[f32]) -> Option<f32> {
    if readings.is_empty() {
        return None;
    }
    let sum: f32 = readings.iter().sum();
    Some(sum / readings.len() as f32)
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_average_calculation() {
        let readings = vec![20.0, 25.0, 30.0]; // std's vec! in tests
        let avg = calculate_average(&readings).unwrap();
        assert!((avg - 25.0).abs() < 0.01);
    }

    #[test]
    fn test_empty_readings() {
        let readings: Vec<f32> = vec![];
        assert_eq!(calculate_average(&readings), None);
    }
}
```
Project Setup for Testable Embedded Code
First, let’s set up our project to support both embedded and testing targets:
[package]
name = "chapter15_testing"
version = "0.1.0"
edition = "2024"
rust-version = "1.88"
[[bin]]
name = "chapter15_testing"
path = "./src/bin/main.rs"
[lib]
name = "chapter15_testing"
path = "src/lib.rs"
[dependencies]
# Only include ESP dependencies when not testing
esp-hal = { version = "1.0.0", features = ["esp32c3", "unstable"], optional = true }
heapless = "0.8"
esp-println = { version = "0.16", features = ["esp32c3"], optional = true }
esp-bootloader-esp-idf = { version = "0.4.0", features = ["esp32c3"], optional = true }
critical-section = "1.2.0"
[features]
default = ["esp-hal", "esp-println", "esp-bootloader-esp-idf"]
embedded = ["esp-hal", "esp-println", "esp-bootloader-esp-idf"]
[profile.dev]
opt-level = "s"
[profile.release]
codegen-units = 1
debug = 2
debug-assertions = false
incremental = false
lto = 'fat'
opt-level = 's'
overflow-checks = false
Key Setup Details:
- Optional ESP dependencies: Only included when building for embedded target
- Feature flags: Control when ESP-specific code is compiled
- Library + Binary: Allows testing the library separately from main embedded binary
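In code, this plays out through the implicit feature that Cargo creates for each optional dependency, which you can gate on with `cfg`. A minimal sketch (the function names are made up for illustration): `cargo test --no-default-features` compiles the desktop fallback, while a device build with the `embedded` feature compiles the ESP path.

```rust
// Sketch: code gated on the implicit `esp-hal` feature of the optional dependency.
#[cfg(feature = "esp-hal")]
fn log_backend() -> &'static str {
    "esp-println" // on-device logging path
}

#[cfg(not(feature = "esp-hal"))]
fn log_backend() -> &'static str {
    "std" // desktop fallback used by tests
}

fn main() {
    // Compiled without the feature (as on a desktop test build):
    assert_eq!(log_backend(), "std");
    println!("logging backend: {}", log_backend());
}
```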
Testing the Temperature Types from Chapter 14
Let’s add comprehensive tests to our embedded temperature code:
```rust
// src/lib.rs - Testable embedded temperature library
#![cfg_attr(not(test), no_std)]

use core::fmt;

// Conditional imports for testing
#[cfg(test)]
use std::vec::Vec; // Tests can use std
#[cfg(not(test))]
use heapless::Vec; // Embedded uses heapless

/// Temperature reading optimized for embedded systems
#[derive(Debug, Clone, Copy, PartialEq)]
pub struct Temperature {
    // Store as i16 to save memory (16-bit vs 32-bit f32)
    // Resolution: 0.1°C, Range: -3276.8°C to +3276.7°C
    pub(crate) celsius_tenths: i16,
}

impl Temperature {
    /// Create temperature from Celsius value
    pub const fn from_celsius(celsius: f32) -> Self {
        Self {
            celsius_tenths: (celsius * 10.0) as i16,
        }
    }

    /// Get temperature as Celsius f32
    pub fn celsius(&self) -> f32 {
        self.celsius_tenths as f32 / 10.0
    }

    pub fn fahrenheit(&self) -> f32 {
        self.celsius() * 9.0 / 5.0 + 32.0
    }

    pub const fn is_overheating(&self) -> bool {
        self.celsius_tenths > 500 // > 50°C
    }

    pub const fn is_normal_range(&self) -> bool {
        self.celsius_tenths >= 150 && self.celsius_tenths <= 350 // 15-35°C
    }
}

impl fmt::Display for Temperature {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{:.1}°C", self.celsius())
    }
}

pub struct TemperatureBuffer<const N: usize> {
    #[cfg(test)]
    readings: Vec<Temperature>, // std::vec::Vec for tests
    #[cfg(not(test))]
    readings: Vec<Temperature, N>, // heapless::Vec for embedded
    total_readings: u32,
}

impl<const N: usize> TemperatureBuffer<N> {
    pub const fn new() -> Self {
        Self {
            readings: Vec::new(),
            total_readings: 0,
        }
    }

    pub fn push(&mut self, temperature: Temperature) {
        #[cfg(test)]
        {
            // In tests, emulate the circular buffer with remove(0)
            if self.readings.len() >= N {
                self.readings.remove(0);
            }
            self.readings.push(temperature);
        }
        #[cfg(not(test))]
        {
            // In embedded, handle fixed capacity with circular buffer
            if self.readings.len() < N {
                self.readings.push(temperature).ok();
            } else {
                // Use circular indexing (O(1) vs remove(0) which is O(n))
                let oldest_index = (self.total_readings as usize) % N;
                self.readings[oldest_index] = temperature;
            }
        }
        self.total_readings += 1;
    }

    pub fn len(&self) -> usize {
        self.readings.len()
    }

    pub const fn capacity(&self) -> usize {
        N
    }

    pub fn latest(&self) -> Option<Temperature> {
        #[cfg(test)]
        {
            // The test Vec keeps insertion order, so the last element is newest
            return self.readings.last().copied();
        }
        #[cfg(not(test))]
        {
            // After the circular buffer wraps, the newest reading is no longer
            // the last element, so index it from total_readings instead
            if self.total_readings == 0 {
                return None;
            }
            if self.readings.len() < N {
                return self.readings.last().copied();
            }
            let newest = ((self.total_readings - 1) as usize) % N;
            return self.readings.get(newest).copied();
        }
    }

    pub fn average(&self) -> Option<Temperature> {
        if self.readings.is_empty() {
            return None;
        }
        let sum: i32 = self.readings.iter()
            .map(|t| t.celsius_tenths as i32)
            .sum();
        let avg_tenths = sum / self.readings.len() as i32;
        Some(Temperature { celsius_tenths: avg_tenths as i16 })
    }

    pub fn min(&self) -> Option<Temperature> {
        self.readings.iter()
            .min_by_key(|t| t.celsius_tenths)
            .copied()
    }

    pub fn max(&self) -> Option<Temperature> {
        self.readings.iter()
            .max_by_key(|t| t.celsius_tenths)
            .copied()
    }

    pub fn total_readings(&self) -> u32 {
        self.total_readings
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_temperature_creation_and_conversion() {
        let temp = Temperature::from_celsius(23.5);

        // Test precision
        assert!((temp.celsius() - 23.5).abs() < 0.1);

        // Test Fahrenheit conversion
        let fahrenheit = temp.fahrenheit();
        assert!((fahrenheit - 74.3).abs() < 0.1);

        // Test memory efficiency
        assert_eq!(core::mem::size_of::<Temperature>(), 2);
    }

    #[test]
    fn test_temperature_ranges() {
        let normal = Temperature::from_celsius(25.0);
        assert!(normal.is_normal_range());
        assert!(!normal.is_overheating());

        let hot = Temperature::from_celsius(55.0);
        assert!(!hot.is_normal_range());
        assert!(hot.is_overheating());

        let cold = Temperature::from_celsius(5.0);
        assert!(!cold.is_normal_range());
        assert!(!cold.is_overheating());
    }

    #[test]
    fn test_temperature_edge_cases() {
        // Test extreme values
        let extreme_hot = Temperature::from_celsius(3276.0);
        let extreme_cold = Temperature::from_celsius(-3276.0);

        assert!(extreme_hot.celsius() > 3000.0);
        assert!(extreme_cold.celsius() < -3000.0);
    }

    #[test]
    fn test_buffer_basic_operations() {
        let mut buffer = TemperatureBuffer::<5>::new();

        assert_eq!(buffer.len(), 0);
        assert_eq!(buffer.capacity(), 5);
        assert_eq!(buffer.latest(), None);

        // Add some readings
        buffer.push(Temperature::from_celsius(20.0));
        buffer.push(Temperature::from_celsius(25.0));
        buffer.push(Temperature::from_celsius(30.0));

        assert_eq!(buffer.len(), 3);
        assert_eq!(buffer.total_readings(), 3);
        assert_eq!(buffer.latest().unwrap().celsius(), 30.0);
    }

    #[test]
    fn test_buffer_circular_behavior() {
        let mut buffer = TemperatureBuffer::<3>::new();

        // Fill buffer exactly
        buffer.push(Temperature::from_celsius(10.0));
        buffer.push(Temperature::from_celsius(20.0));
        buffer.push(Temperature::from_celsius(30.0));
        assert_eq!(buffer.len(), 3);

        // Add one more - should overwrite oldest
        buffer.push(Temperature::from_celsius(40.0));
        assert_eq!(buffer.len(), 3); // Still full
        assert_eq!(buffer.total_readings(), 4); // But total increased

        // First reading (10.0) should be gone
        assert_eq!(buffer.min().unwrap().celsius(), 20.0); // Min is now 20
        assert_eq!(buffer.max().unwrap().celsius(), 40.0); // Max is 40
    }

    #[test]
    fn test_buffer_statistics() {
        let mut buffer = TemperatureBuffer::<10>::new();

        // Add test data: 20, 21, 22, 23, 24
        for i in 0..5 {
            buffer.push(Temperature::from_celsius(20.0 + i as f32));
        }

        let avg = buffer.average().unwrap();
        assert!((avg.celsius() - 22.0).abs() < 0.1);
        assert_eq!(buffer.min().unwrap().celsius(), 20.0);
        assert_eq!(buffer.max().unwrap().celsius(), 24.0);
    }

    #[test]
    fn test_buffer_empty_statistics() {
        let buffer = TemperatureBuffer::<5>::new();

        assert_eq!(buffer.average(), None);
        assert_eq!(buffer.min(), None);
        assert_eq!(buffer.max(), None);
    }

    #[test]
    fn test_buffer_single_reading() {
        let mut buffer = TemperatureBuffer::<5>::new();
        buffer.push(Temperature::from_celsius(25.0));

        let avg = buffer.average().unwrap();
        assert_eq!(avg.celsius(), 25.0);
        assert_eq!(buffer.min().unwrap().celsius(), 25.0);
        assert_eq!(buffer.max().unwrap().celsius(), 25.0);
    }

    #[test]
    fn test_temperature_display() {
        let temp = Temperature::from_celsius(23.7);
        let display_str = format!("{}", temp);
        assert_eq!(display_str, "23.7°C");
    }

    #[test]
    fn test_memory_usage() {
        // Verify our types are memory efficient
        let temp_size = core::mem::size_of::<Temperature>();
        let buffer_size = core::mem::size_of::<TemperatureBuffer<20>>();

        println!("Temperature size: {} bytes", temp_size);
        println!("Buffer size (20 readings): {} bytes", buffer_size);

        assert_eq!(temp_size, 2); // Should be exactly 2 bytes
        // Buffer size will be larger in tests due to std::Vec
    }
}
```
Hardware Abstraction Layer (HAL) for Testing
To test hardware-dependent code, create an abstraction layer:
```rust
// src/hal.rs - Hardware abstraction layer

#[cfg(test)]
use std::cell::RefCell;

/// Trait for reading temperature from any source
pub trait TemperatureSensorHal {
    type Error;

    fn read_celsius(&mut self) -> Result<f32, Self::Error>;
    fn sensor_id(&self) -> &str;
}

/// Real ESP32 temperature sensor implementation
#[cfg(not(test))]
pub struct Esp32TemperatureSensor {
    sensor: esp_hal::temperature_sensor::TemperatureSensor,
}

#[cfg(not(test))]
impl Esp32TemperatureSensor {
    pub fn new(sensor: esp_hal::temperature_sensor::TemperatureSensor) -> Self {
        Self { sensor }
    }
}

#[cfg(not(test))]
impl TemperatureSensorHal for Esp32TemperatureSensor {
    type Error = ();

    fn read_celsius(&mut self) -> Result<f32, Self::Error> {
        Ok(self.sensor.read_celsius())
    }

    fn sensor_id(&self) -> &str {
        "ESP32-C3 Built-in"
    }
}

/// Mock sensor for testing
#[cfg(test)]
pub struct MockTemperatureSensor {
    temperatures: RefCell<Vec<f32>>,
    current_index: RefCell<usize>,
    id: String,
}

#[cfg(test)]
impl MockTemperatureSensor {
    pub fn new(id: String) -> Self {
        Self {
            temperatures: RefCell::new(vec![25.0]), // Default temperature
            current_index: RefCell::new(0),
            id,
        }
    }

    pub fn set_temperatures(&self, temps: Vec<f32>) {
        *self.temperatures.borrow_mut() = temps;
        *self.current_index.borrow_mut() = 0;
    }

    pub fn set_single_temperature(&self, temp: f32) {
        *self.temperatures.borrow_mut() = vec![temp];
        *self.current_index.borrow_mut() = 0;
    }
}

#[cfg(test)]
impl TemperatureSensorHal for MockTemperatureSensor {
    type Error = &'static str;

    fn read_celsius(&mut self) -> Result<f32, Self::Error> {
        let temps = self.temperatures.borrow();
        let mut index = self.current_index.borrow_mut();

        if temps.is_empty() {
            return Err("No temperature data configured");
        }

        let temp = temps[*index];
        *index = (*index + 1) % temps.len(); // Cycle through temperatures
        Ok(temp)
    }

    fn sensor_id(&self) -> &str {
        &self.id
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_mock_sensor_single_value() {
        let mut sensor = MockTemperatureSensor::new("test-sensor".to_string());
        sensor.set_single_temperature(23.5);

        let temp1 = sensor.read_celsius().unwrap();
        let temp2 = sensor.read_celsius().unwrap();

        assert_eq!(temp1, 23.5);
        assert_eq!(temp2, 23.5); // Should repeat same value
        assert_eq!(sensor.sensor_id(), "test-sensor");
    }

    #[test]
    fn test_mock_sensor_cycling_values() {
        let mut sensor = MockTemperatureSensor::new("cycle-test".to_string());
        sensor.set_temperatures(vec![20.0, 25.0, 30.0]);

        assert_eq!(sensor.read_celsius().unwrap(), 20.0);
        assert_eq!(sensor.read_celsius().unwrap(), 25.0);
        assert_eq!(sensor.read_celsius().unwrap(), 30.0);
        assert_eq!(sensor.read_celsius().unwrap(), 20.0); // Cycles back
    }

    #[test]
    fn test_mock_sensor_empty_data() {
        let mut sensor = MockTemperatureSensor::new("empty-test".to_string());
        sensor.set_temperatures(vec![]);

        assert!(sensor.read_celsius().is_err());
    }
}
```
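The payoff of the trait is that business logic can be written once, generic over any TemperatureSensorHal implementation. The self-contained sketch below illustrates that dependency-injection pattern with a simplified trait and a hypothetical FixedSensor stub (not the chapter's full types):

```rust
// Simplified version of the HAL trait, just enough to show injection
trait TemperatureSensorHal {
    type Error;
    fn read_celsius(&mut self) -> Result<f32, Self::Error>;
}

// Hypothetical stub sensor that always returns a fixed value
struct FixedSensor(f32);

impl TemperatureSensorHal for FixedSensor {
    type Error = ();
    fn read_celsius(&mut self) -> Result<f32, ()> {
        Ok(self.0)
    }
}

// Business logic written once, generic over the sensor implementation
fn is_overheating<S: TemperatureSensorHal>(sensor: &mut S) -> Result<bool, S::Error> {
    Ok(sensor.read_celsius()? > 50.0)
}

fn main() {
    let mut hot = FixedSensor(55.0);
    let mut normal = FixedSensor(25.0);
    assert!(is_overheating(&mut hot).unwrap());
    assert!(!is_overheating(&mut normal).unwrap());
    println!("trait-based injection works");
}
```

On hardware the same `is_overheating` would be called with the real Esp32TemperatureSensor; only the trait implementation changes.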
Integration Testing on Hardware
For testing actual hardware behavior, create integration tests:
```rust
// tests/integration_tests.rs - Hardware integration tests
use chapter15_testing::{Temperature, TemperatureBuffer}; // library name from Cargo.toml

#[cfg(target_arch = "riscv32")] // Only run on ESP32
#[test]
fn test_hardware_sensor_reading() {
    // This test would run on actual ESP32 hardware
    // (Implementation depends on test framework like defmt-test)
}

// Cross-platform integration tests
#[test]
fn test_temperature_monitor_workflow() {
    // Test the complete workflow without hardware
    let mut buffer = TemperatureBuffer::<5>::new();

    // Simulate sensor readings
    let readings = vec![22.0, 23.0, 24.0, 25.0, 26.0, 27.0];
    for temp_celsius in readings {
        let temp = Temperature::from_celsius(temp_celsius);
        buffer.push(temp);
    }

    // Verify circular buffer behavior
    assert_eq!(buffer.len(), 5);
    assert_eq!(buffer.total_readings(), 6);

    // Verify statistics
    let stats = buffer.average().unwrap();
    assert!((stats.celsius() - 25.0).abs() < 0.1); // Should be ~25°C average
    assert_eq!(buffer.min().unwrap().celsius(), 23.0); // Oldest (22.0) was overwritten
    assert_eq!(buffer.max().unwrap().celsius(), 27.0);
}

#[test]
fn test_overheating_detection() {
    let normal_temp = Temperature::from_celsius(25.0);
    let hot_temp = Temperature::from_celsius(55.0);
    let very_hot_temp = Temperature::from_celsius(75.0);

    assert!(!normal_temp.is_overheating());
    assert!(hot_temp.is_overheating());
    assert!(very_hot_temp.is_overheating());

    // Test with buffer
    let mut buffer = TemperatureBuffer::<3>::new();
    buffer.push(normal_temp);
    buffer.push(hot_temp);
    buffer.push(very_hot_temp);

    // Should average to overheating territory
    let avg = buffer.average().unwrap();
    assert!(avg.is_overheating());
}
```
Running Tests
Desktop Tests
# Run all tests on desktop (skip the ESP-only default features,
# which cannot be compiled for the host)
cargo test --no-default-features
# Run specific test module
cargo test temperature::tests
# Run with output
cargo test -- --nocapture
# Run tests in verbose mode
cargo test --verbose
Test Output Example
$ cargo test --no-default-features
Compiling chapter15_testing v0.1.0
Finished test [unoptimized + debuginfo] target(s) in 1.23s
Running unittests src/lib.rs
running 13 tests
test temperature::tests::test_temperature_creation_and_conversion ... ok
test temperature::tests::test_temperature_ranges ... ok
test temperature::tests::test_temperature_edge_cases ... ok
test temperature::tests::test_buffer_basic_operations ... ok
test temperature::tests::test_buffer_circular_behavior ... ok
test temperature::tests::test_buffer_statistics ... ok
test temperature::tests::test_buffer_empty_statistics ... ok
test temperature::tests::test_buffer_single_reading ... ok
test temperature::tests::test_temperature_display ... ok
test temperature::tests::test_memory_usage ... ok
test hal::tests::test_mock_sensor_single_value ... ok
test hal::tests::test_mock_sensor_cycling_values ... ok
test hal::tests::test_mock_sensor_empty_data ... ok
test result: ok. 13 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Running tests/integration_tests.rs
running 2 tests
test test_temperature_monitor_workflow ... ok
test test_overheating_detection ... ok
test result: ok. 2 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Building for Embedded Target
When you’re ready to test on hardware:
# Build and flash to ESP32-C3 (recommended)
cargo run --release --features embedded
# Alternative: Build then flash separately
cargo build --release --target riscv32imc-unknown-none-elf --features embedded
cargo espflash flash target/riscv32imc-unknown-none-elf/release/chapter15_testing
Key Testing Patterns Learned
✅ Conditional Compilation: Use #[cfg(test)] and #[cfg(not(test))] to create testable embedded code
✅ Hardware Abstraction: Create traits that can be mocked for testing hardware dependencies
✅ Memory Efficiency Testing: Verify size and memory usage in unit tests
✅ Edge Case Testing: Test boundary conditions like buffer overflow, empty data, extreme values
✅ Integration Testing: Test complete workflows without hardware dependencies
Next: In Chapter 16, we’ll add communication capabilities to send structured data like JSON over serial connections.
Hardware Validation
# Build and flash test version (recommended; requires a `hardware-test`
# feature declared in Cargo.toml's [features] section)
cargo run --release --features hardware-test
# Alternative: Build then flash
cargo build --release --target riscv32imc-unknown-none-elf --features hardware-test
cargo espflash flash target/riscv32imc-unknown-none-elf/release/chapter15_testing
# Expected hardware output:
# Running hardware validation...
# ✅ Temperature sensor responding
# ✅ LED control working
# ✅ Buffer operations correct
# ✅ Statistics calculation accurate
# Hardware tests passed!
Test-Driven Development for Embedded
Use TDD to develop new features:
```rust
// 1. Write failing test first
#[test]
fn test_temperature_trend_detection() {
    let mut buffer = TemperatureBuffer::<5>::new();

    // Rising temperature trend
    buffer.push(Temperature::from_celsius(20.0));
    buffer.push(Temperature::from_celsius(22.0));
    buffer.push(Temperature::from_celsius(24.0));

    // This will fail until we implement it
    assert_eq!(buffer.trend(), Some(TemperatureTrend::Rising));
}

// 2. Implement minimal code to make test pass
#[derive(Debug, PartialEq)]
pub enum TemperatureTrend {
    Rising,
    Falling,
    Stable,
}

impl<const N: usize> TemperatureBuffer<N> {
    pub fn trend(&self) -> Option<TemperatureTrend> {
        if self.readings.len() < 3 {
            return None;
        }

        // Simple trend detection - compare first and last
        let first = self.readings.first().unwrap().celsius_tenths;
        let last = self.readings.last().unwrap().celsius_tenths;

        if last > first + 20 {
            // More than 2°C increase
            Some(TemperatureTrend::Rising)
        } else if last < first - 20 {
            // More than 2°C decrease
            Some(TemperatureTrend::Falling)
        } else {
            Some(TemperatureTrend::Stable)
        }
    }
}

// 3. Refactor and add more test cases
```
Exercise: Add Comprehensive Testing
Add a full test suite to your temperature monitoring code.
Requirements
- Unit Tests: Test all temperature and buffer functions
- Mock Hardware: Create testable hardware abstraction
- Integration Tests: Test complete workflows
- Error Cases: Test edge cases and error conditions
- Performance: Verify memory usage and efficiency
Tasks
- Setup Test Environment:
  - Add conditional compilation for tests
  - Create src/lib.rs to expose modules for testing
  - Update Cargo.toml with test dependencies
- Unit Tests for Temperature:

```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_temperature_precision() {
        // TODO: Test 0.1°C precision
    }

    #[test]
    fn test_conversion_roundtrip() {
        // TODO: celsius -> internal -> celsius should be stable
    }

    #[test]
    fn test_extreme_temperatures() {
        // TODO: Test very hot and cold values
    }
}
```

- Unit Tests for Buffer:

```rust
#[test]
fn test_buffer_capacity_limits() {
    // TODO: Test buffer behavior at capacity
}

#[test]
fn test_statistics_accuracy() {
    // TODO: Verify min/max/average calculations
}

#[test]
fn test_circular_replacement() {
    // TODO: Ensure oldest data is properly replaced
}
```

- Hardware Abstraction Tests:
  - Create mock sensor implementation
  - Test sensor trait with controlled data
  - Verify error handling
- Run and Validate:
  - Execute test suite with cargo test
  - Verify all tests pass
  - Check test coverage
Expected Test Results
running 15 tests
test temperature::tests::test_temperature_precision ... ok
test temperature::tests::test_conversion_roundtrip ... ok
test temperature::tests::test_extreme_temperatures ... ok
test temperature::tests::test_buffer_capacity_limits ... ok
test temperature::tests::test_statistics_accuracy ... ok
test temperature::tests::test_circular_replacement ... ok
test hal::tests::test_mock_sensor ... ok
test integration::test_complete_workflow ... ok
...
test result: ok. 15 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
Memory usage:
Temperature: 2 bytes
Buffer (20 readings): 86 bytes
Total: 88 bytes ✅
Success Criteria
- All unit tests pass on desktop
- Mock sensor provides controlled test data
- Integration tests verify complete workflows
- Edge cases are handled gracefully
- Memory usage is within expected bounds
- Tests run quickly (< 1 second total)
Extension Challenges
- Property-Based Testing: Use quickcheck to test with random data
- Benchmark Tests: Measure performance of temperature calculations
- Hardware-in-the-Loop: Run tests on actual ESP32 hardware
- Coverage Analysis: Use cargo tarpaulin to measure test coverage
- Fuzzing: Test with invalid input data
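For the first challenge, the property a quickcheck test would assert can be sketched by hand without any dev-dependencies: converting any in-range Celsius value to tenths and back must never lose more than 0.1°C. A standalone sketch, with a tiny xorshift PRNG standing in for quickcheck's generator (the conversion helpers mirror the chapter's Temperature logic):

```rust
fn to_tenths(celsius: f32) -> i16 {
    (celsius * 10.0) as i16 // same truncation as Temperature::from_celsius
}

fn from_tenths(tenths: i16) -> f32 {
    tenths as f32 / 10.0
}

fn main() {
    let mut x: u32 = 0x1234_5678; // xorshift32 state
    for _ in 0..10_000 {
        x ^= x << 13;
        x ^= x >> 17;
        x ^= x << 5;
        // random hundredths in -300.00 .. 299.99 °C, well inside the i16 range
        let celsius = (x % 60_000) as f32 / 100.0 - 300.0;
        let recovered = from_tenths(to_tenths(celsius));
        assert!(
            (recovered - celsius).abs() < 0.11,
            "lost more than 0.1 °C at {celsius}"
        );
    }
    println!("property held for 10000 random inputs");
}
```

With quickcheck the loop and generator disappear; you state only the property and the framework searches for counterexamples and shrinks them.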
Debugging Embedded Code
Test-First Debugging
When hardware doesn’t behave as expected:
- Write Test for Expected Behavior:

```rust
#[test]
fn test_sensor_reading_should_be_realistic() {
    let reading = mock_esp32_reading(1500); // ADC value
    let temp = Temperature::from_sensor_raw(reading);
    assert!(temp.celsius() > 15.0 && temp.celsius() < 40.0);
}
```

- Run Test on Desktop to verify the logic
- Compare with Hardware output
- Identify the Discrepancy and fix it
Serial Debug Output
```rust
// Add debug output to embedded code
esp_println::println!("Debug: ADC raw = {}, converted = {}°C",
    raw_value, temperature.celsius());

// Compare with test expectations
#[test]
fn test_debug_conversion() {
    let temp = Temperature::from_sensor_raw(1500);
    println!("Test: ADC raw = 1500, converted = {}°C", temp.celsius());
    // Should match hardware output
}
```
Test-Driven Hardware Validation
```rust
#[cfg(feature = "hardware-test")]
pub fn validate_hardware() {
    // This function runs on hardware to validate assumptions
    let mut sensor = /* initialize real sensor */;

    for _ in 0..10 {
        let reading = sensor.read_celsius();
        esp_println::println!("Hardware reading: {:.1}°C", reading);

        // Sanity checks
        assert!(reading > -50.0 && reading < 100.0, "Reading out of range");
    }

    esp_println::println!("✅ Hardware validation passed");
}
```
Key Takeaways
✅ Conditional Compilation: Use #[cfg(test)] to test no_std code on desktop
✅ Hardware Abstraction: Create traits to mock hardware dependencies
✅ Test Structure: Unit tests for logic, integration tests for workflows
✅ TDD for Embedded: Write tests first, even for hardware-dependent features
✅ Debug Strategy: Combine desktop tests with serial debugging on hardware
✅ Performance Testing: Verify memory usage and timing in tests
Next: In Chapter 16, we’ll add communication capabilities to send our temperature data in structured formats like JSON and binary protocols.
Chapter 16: Data & Communication
Learning Objectives
This chapter covers how to:
- Use Serde for serialization in no_std embedded environments
- Send structured temperature data as JSON over USB Serial
- Implement efficient binary protocols with postcard
- Create command/response interfaces for embedded systems
- Handle communication errors gracefully in resource-constrained environments
- Design protocols optimized for IoT and embedded applications
Task: Send Structured Temperature Data via JSON
Building on chapters 13-15, where we built and tested temperature monitoring, we now need to enable communication with external systems.
Your Mission:
- Add serialization support to temperature data structures using Serde
- Send JSON data over USB Serial for monitoring dashboards
- Implement command/response protocol for remote control
- Use fixed-size strings and heapless collections for efficiency
- Handle communication errors gracefully in a resource-constrained environment
Why This Matters:
- Remote monitoring: Send data to dashboards and cloud services
- Remote control: Change settings without reflashing firmware
- Interoperability: JSON works with any programming language
- Debugging: Structured data makes debugging easier than raw values
The Challenge:
- No heap allocation for JSON serialization
- Fixed-size buffers for serial communication
- Error handling without panicking
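The no-heap constraint can be met by formatting directly into a fixed stack buffer via core::fmt::Write, which is the same idea serde-json-core applies internally. A sketch with a hypothetical minimal writer (not the real serde-json-core machinery) that returns an error instead of allocating or panicking when the buffer is full:

```rust
use core::fmt::Write;

// Hypothetical fixed-capacity writer: bytes go into a stack array
struct FixedBuf<const N: usize> {
    buf: [u8; N],
    len: usize,
}

impl<const N: usize> Write for FixedBuf<N> {
    fn write_str(&mut self, s: &str) -> core::fmt::Result {
        let bytes = s.as_bytes();
        if self.len + bytes.len() > N {
            return Err(core::fmt::Error); // fail instead of allocating or panicking
        }
        self.buf[self.len..self.len + bytes.len()].copy_from_slice(bytes);
        self.len += bytes.len();
        Ok(())
    }
}

fn main() {
    let mut out = FixedBuf::<64> { buf: [0; 64], len: 0 };
    let celsius = 23.5f32;
    write!(out, "{{\"temperature_c\":{:.1}}}", celsius).unwrap();
    let json = core::str::from_utf8(&out.buf[..out.len]).unwrap();
    assert_eq!(json, "{\"temperature_c\":23.5}");
    println!("{}", json);
}
```

serde-json-core packages exactly this pattern behind Serde's derive macros, returning a heapless String of a fixed capacity you choose.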
Serde in no_std: Serialization for Embedded
Serde is Rust’s premier serialization framework, and it works great in no_std environments:
[package]
name = "chapter16_communication"
version = "0.1.0"
edition = "2024"
rust-version = "1.88"
[[bin]]
name = "chapter16_communication"
path = "./src/bin/main.rs"
[lib]
name = "chapter16_communication"
path = "src/lib.rs"
[dependencies]
# Only include ESP dependencies when not testing
esp-hal = { version = "1.0.0", features = ["esp32c3", "unstable"], optional = true }
esp-bootloader-esp-idf = { version = "0.4.0", features = ["esp32c3"], optional = true }
esp-println = { version = "0.16", features = ["esp32c3"], optional = true }
# Core dependencies
critical-section = "1.2.0"
heapless = { version = "0.8", features = ["serde"] } # serde feature: needed to serialize String<N>
# Serialization
serde = { version = "1.0", default-features = false, features = ["derive"] }
serde-json-core = "0.6"
postcard = "1.0" # binary serialization, used later in this chapter
[features]
default = ["esp-hal", "esp-println", "esp-bootloader-esp-idf"]
embedded = ["esp-hal", "esp-println", "esp-bootloader-esp-idf"]
Making Temperature Data Serializable
Let’s update our temperature types to support serialization:
```rust
// src/temperature.rs - Updated with serde support
// (the crate-level #![cfg_attr(not(test), no_std)] attribute lives in src/lib.rs)

use core::fmt;
use serde::{Deserialize, Serialize};

#[cfg(test)]
use std::vec::Vec;
#[cfg(not(test))]
use heapless::Vec;

#[derive(Debug, Clone, Copy, PartialEq, Serialize, Deserialize)]
pub struct Temperature {
    celsius_tenths: i16,
}

impl Temperature {
    pub const fn from_celsius(celsius: f32) -> Self {
        Self {
            celsius_tenths: (celsius * 10.0) as i16,
        }
    }

    pub fn celsius(&self) -> f32 {
        self.celsius_tenths as f32 / 10.0
    }

    pub fn fahrenheit(&self) -> f32 {
        self.celsius() * 9.0 / 5.0 + 32.0
    }

    pub const fn is_overheating(&self) -> bool {
        self.celsius_tenths > 500 // > 50°C
    }

    // Helper for JSON serialization with nice format
    // (note: f32::round needs std or the libm crate; fine for desktop tests)
    pub fn to_celsius_rounded(&self) -> f32 {
        (self.celsius() * 10.0).round() / 10.0
    }
}

#[derive(Debug, Clone, Copy, Serialize, Deserialize)]
pub struct TemperatureReading {
    pub temperature: Temperature,
    pub timestamp_ms: u32,
    pub sensor_id: u8, // Compact sensor identifier
}

impl TemperatureReading {
    pub fn new(temperature: Temperature, timestamp_ms: u32, sensor_id: u8) -> Self {
        Self {
            temperature,
            timestamp_ms,
            sensor_id,
        }
    }

    pub fn current_time(temperature: Temperature) -> Self {
        // In a real implementation, this would read an actual timestamp.
        // For now, use a simple counter (not interrupt-safe!)
        static mut TIMESTAMP: u32 = 0;
        unsafe {
            TIMESTAMP += 1000; // Simulate 1-second intervals
            Self::new(temperature, TIMESTAMP, 0)
        }
    }
}

#[derive(Debug, Clone, Copy, Serialize, Deserialize)]
pub struct TemperatureStats {
    pub count: u16, // Use u16 to save space
    pub total_count: u32,
    pub min_celsius: f32, // Store as f32 for JSON compatibility
    pub max_celsius: f32,
    pub avg_celsius: f32,
    pub timestamp_ms: u32,
}

impl TemperatureStats {
    pub fn from_buffer<const N: usize>(
        buffer: &TemperatureBuffer<N>,
        timestamp_ms: u32,
    ) -> Option<Self> {
        if buffer.len() == 0 {
            return None;
        }

        let min = buffer.min()?.celsius();
        let max = buffer.max()?.celsius();
        let avg = buffer.average()?.celsius();

        Some(Self {
            count: buffer.len() as u16,
            total_count: buffer.total_readings(),
            min_celsius: min,
            max_celsius: max,
            avg_celsius: avg,
            timestamp_ms,
        })
    }
}
```
JSON Serialization with serde-json-core
For IoT integration, JSON is widely supported but needs special handling in no_std:
```rust
// src/communication.rs - JSON communication module
// (the crate-level #![cfg_attr(not(test), no_std)] attribute lives in src/lib.rs)

use heapless::String;
use serde::{Deserialize, Serialize};
use serde_json_core;

use crate::temperature::{Temperature, TemperatureBuffer, TemperatureReading, TemperatureStats};

/// Commands that can be sent to the temperature monitor
#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum Command {
    GetStatus,
    GetLatestReading,
    GetStats,
    SetSampleRate { rate_hz: u8 },
    SetThreshold { threshold_celsius: f32 },
    Reset,
}

/// Responses from the temperature monitor
#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum Response {
    Status {
        uptime_ms: u32,
        sample_rate_hz: u8,
        threshold_celsius: f32,
        buffer_usage: u8, // Percentage full
    },
    Reading(TemperatureReading),
    Stats(TemperatureStats),
    SampleRateSet(u8),
    ThresholdSet(f32),
    ResetComplete,
    Error { code: u8, message: String<32> },
}

impl Response {
    pub fn error(code: u8, message: &str) -> Self {
        let mut error_message = String::new();
        error_message.push_str(message).ok();
        Self::Error {
            code,
            message: error_message,
        }
    }
}

/// Communication handler for temperature monitor
pub struct TemperatureComm {
    sample_rate_hz: u8,
    threshold_celsius: f32,
    start_time_ms: u32,
}

impl TemperatureComm {
    pub const fn new() -> Self {
        Self {
            sample_rate_hz: 1, // 1 Hz default
            threshold_celsius: 35.0,
            start_time_ms: 0,
        }
    }

    pub fn init(&mut self, start_time_ms: u32) {
        self.start_time_ms = start_time_ms;
    }

    /// Process a command and return appropriate response
    pub fn process_command<const N: usize>(
        &mut self,
        command: Command,
        buffer: &TemperatureBuffer<N>,
        current_time_ms: u32,
    ) -> Response {
        match command {
            Command::GetStatus => {
                let uptime = current_time_ms.saturating_sub(self.start_time_ms);
                let buffer_usage = if buffer.capacity() > 0 {
                    ((buffer.len() * 100) / buffer.capacity()) as u8
                } else {
                    0
                };
                Response::Status {
                    uptime_ms: uptime,
                    sample_rate_hz: self.sample_rate_hz,
                    threshold_celsius: self.threshold_celsius,
                    buffer_usage,
                }
            }
            Command::GetLatestReading => {
                if let Some(temp) = buffer.latest() {
                    let reading = TemperatureReading::new(temp, current_time_ms, 0);
                    Response::Reading(reading)
                } else {
                    Response::error(1, "No readings available")
                }
            }
            Command::GetStats => {
                if let Some(stats) = TemperatureStats::from_buffer(buffer, current_time_ms) {
                    Response::Stats(stats)
                } else {
                    Response::error(2, "No data for statistics")
                }
            }
            Command::SetSampleRate { rate_hz } => {
                if rate_hz > 0 && rate_hz <= 10 {
                    self.sample_rate_hz = rate_hz;
                    Response::SampleRateSet(rate_hz)
                } else {
                    Response::error(3, "Rate must be 1-10 Hz")
                }
            }
            Command::SetThreshold { threshold_celsius } => {
                if threshold_celsius > 0.0 && threshold_celsius < 100.0 {
                    self.threshold_celsius = threshold_celsius;
                    Response::ThresholdSet(threshold_celsius)
                } else {
                    Response::error(4, "Threshold must be 0-100°C")
                }
            }
            Command::Reset => {
                self.start_time_ms = current_time_ms;
                self.sample_rate_hz = 1;
                self.threshold_celsius = 35.0;
                Response::ResetComplete
            }
        }
    }

    /// Serialize response to JSON string for transmission
    pub fn response_to_json(&self, response: &Response) -> Result<String<512>, ()> {
        // Use heapless String with fixed capacity
        match serde_json_core::to_string::<_, 512>(response) {
            Ok(json) => Ok(json),
            Err(_) => Err(()),
        }
    }

    /// Deserialize command from JSON string
    pub fn json_to_command(&self, json: &str) -> Result<Command, ()> {
        // serde_json_core returns the value plus the number of bytes consumed
        match serde_json_core::from_str(json) {
            Ok((command, _consumed)) => Ok(command),
            Err(_) => Err(()),
        }
    }

    /// Create a status response as JSON
    pub fn status_json<const N: usize>(
        &mut self,
        buffer: &TemperatureBuffer<N>,
        current_time_ms: u32,
    ) -> String<512> {
        let status = self.process_command(Command::GetStatus, buffer, current_time_ms);
        self.response_to_json(&status).unwrap_or_else(|_| {
            let mut error = String::new();
            error.push_str("{\"error\":\"serialization_failed\"}").ok();
            error
        })
    }

    /// Create latest reading as JSON
    pub fn reading_json<const N: usize>(
        &mut self,
        buffer: &TemperatureBuffer<N>,
        current_time_ms: u32,
    ) -> String<512> {
        let reading = self.process_command(Command::GetLatestReading, buffer, current_time_ms);
        self.response_to_json(&reading).unwrap_or_else(|_| {
            let mut error = String::new();
            error.push_str("{\"error\":\"no_reading\"}").ok();
            error
        })
    }

    pub fn sample_rate(&self) -> u8 {
        self.sample_rate_hz
    }

    pub fn threshold(&self) -> f32 {
        self.threshold_celsius
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_json_serialization() {
        let temp = Temperature::from_celsius(23.5);
        let reading = TemperatureReading::new(temp, 1000, 0);

        // Test command serialization
        let command = Command::GetStatus;
        let json = serde_json_core::to_string::<_, 64>(&command).unwrap();
        assert_eq!(json, "\"GetStatus\"");

        // Test response serialization
        let response = Response::Reading(reading);
        let json = serde_json_core::to_string::<_, 256>(&response).unwrap();
        assert!(json.contains("Reading"));
        assert!(json.contains("235")); // 23.5°C is stored as 235 tenths
    }

    #[test]
    fn test_command_processing() {
        let mut comm = TemperatureComm::new();
        comm.init(0);
        let buffer = TemperatureBuffer::<5>::new();

        // Test status command
        let status_resp = comm.process_command(Command::GetStatus, &buffer, 5000);
        if let Response::Status { uptime_ms, .. } = status_resp {
            assert_eq!(uptime_ms, 5000);
        } else {
            panic!("Expected status response");
        }

        // Test rate setting
        let rate_resp = comm.process_command(
            Command::SetSampleRate { rate_hz: 5 },
            &buffer,
            5000,
        );
        assert!(matches!(rate_resp, Response::SampleRateSet(5)));
        assert_eq!(comm.sample_rate(), 5);
    }

    #[test]
    fn test_json_roundtrip() {
        let comm = TemperatureComm::new();

        // Test command deserialization
        let json_cmd = "\"GetStatus\"";
        let command = comm.json_to_command(json_cmd).unwrap();
        assert!(matches!(command, Command::GetStatus));

        // Test response serialization
        let response = Response::ResetComplete;
        let json_resp = comm.response_to_json(&response).unwrap();
        assert_eq!(json_resp, "\"ResetComplete\"");
    }

    #[test]
    fn test_error_handling() {
        let mut comm = TemperatureComm::new();
        let buffer = TemperatureBuffer::<5>::new();

        // Test invalid sample rate
        let response = comm.process_command(
            Command::SetSampleRate { rate_hz: 20 }, // Invalid: too high
            &buffer,
            0,
        );
        if let Response::Error { code, message } = response {
            assert_eq!(code, 3);
            assert!(message.contains("Rate must be"));
        } else {
            panic!("Expected error response");
        }
    }
}
```
Binary Serialization with postcard
For bandwidth-constrained applications, binary serialization is more efficient:
#![allow(unused)]
fn main() {
// src/binary_comm.rs - Binary communication with postcard
#![cfg_attr(not(test), no_std)]

use heapless::Vec;
use serde::{Deserialize, Serialize};
use postcard;

use crate::communication::{Command, Response};

/// Binary communication handler
pub struct BinaryComm;

impl BinaryComm {
    /// Serialize command to binary format
    pub fn command_to_binary(command: &Command) -> Result<Vec<u8, 64>, postcard::Error> {
        postcard::to_vec(command)
    }

    /// Deserialize command from binary format
    pub fn binary_to_command(data: &[u8]) -> Result<Command, postcard::Error> {
        postcard::from_bytes(data)
    }

    /// Serialize response to binary format
    pub fn response_to_binary(response: &Response) -> Result<Vec<u8, 256>, postcard::Error> {
        postcard::to_vec(response)
    }

    /// Deserialize response from binary format
    pub fn binary_to_response(data: &[u8]) -> Result<Response, postcard::Error> {
        postcard::from_bytes(data)
    }

    /// Get size of serialized command
    pub fn command_size(command: &Command) -> usize {
        Self::command_to_binary(command).map(|v| v.len()).unwrap_or(0)
    }

    /// Get size of serialized response
    pub fn response_size(response: &Response) -> usize {
        Self::response_to_binary(response).map(|v| v.len()).unwrap_or(0)
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use crate::temperature::{Temperature, TemperatureReading};

    #[test]
    fn test_binary_command_serialization() {
        let command = Command::SetSampleRate { rate_hz: 5 };

        // Serialize to binary
        let binary = BinaryComm::command_to_binary(&command).unwrap();

        // Deserialize back
        let deserialized = BinaryComm::binary_to_command(&binary).unwrap();
        if let Command::SetSampleRate { rate_hz } = deserialized {
            assert_eq!(rate_hz, 5);
        } else {
            panic!("Deserialization failed");
        }
    }

    #[test]
    fn test_binary_response_serialization() {
        let temp = Temperature::from_celsius(25.0);
        let reading = TemperatureReading::new(temp, 1000, 0);
        let response = Response::Reading(reading);

        // Serialize to binary
        let binary = BinaryComm::response_to_binary(&response).unwrap();

        // Should be much smaller than JSON
        println!("Binary size: {} bytes", binary.len());
        assert!(binary.len() < 20); // Much smaller than JSON

        // Deserialize back
        let deserialized = BinaryComm::binary_to_response(&binary).unwrap();
        if let Response::Reading(r) = deserialized {
            assert!((r.temperature.celsius() - 25.0).abs() < 0.1);
            assert_eq!(r.timestamp_ms, 1000);
        } else {
            panic!("Deserialization failed");
        }
    }

    #[test]
    fn test_size_comparison() {
        let temp = Temperature::from_celsius(23.5);
        let reading = TemperatureReading::new(temp, 1000, 0);
        let response = Response::Reading(reading);

        // Binary size
        let binary_size = BinaryComm::response_size(&response);

        // JSON size (approximate)
        let json = serde_json_core::to_string::<_, 256>(&response).unwrap();
        let json_size = json.len();

        println!("Binary: {} bytes, JSON: {} bytes", binary_size, json_size);
        println!(
            "Binary is {}% smaller",
            ((json_size - binary_size) * 100) / json_size
        );

        assert!(binary_size < json_size);
        assert!(binary_size < 16); // Binary should be very compact
    }
}
}
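Much of postcard's compactness comes from its wire format: multi-byte integers are LEB128 varint-encoded, so small values occupy a single byte and no field names are transmitted at all. The following std-only sketch of LEB128 varint encoding (for intuition only, not postcard's actual implementation) shows why a timestamp like 1000 costs two bytes on the wire:

```rust
// Sketch of LEB128-style varint encoding, similar in spirit to what
// postcard uses for integers. Each byte carries 7 payload bits; the
// high bit signals whether more bytes follow.
fn encode_varint(mut value: u32, out: &mut Vec<u8>) {
    loop {
        let byte = (value & 0x7F) as u8; // low 7 bits
        value >>= 7;
        if value == 0 {
            out.push(byte); // final byte: continuation bit clear
            break;
        }
        out.push(byte | 0x80); // continuation bit set: more bytes follow
    }
}

fn main() {
    let mut buf = Vec::new();
    encode_varint(1000, &mut buf); // e.g. timestamp_ms = 1000
    // 1000 = 0b111_1101000 -> two bytes on the wire
    assert_eq!(buf, vec![0xE8, 0x07]);

    // Values below 128 need only a single byte
    let mut small = Vec::new();
    encode_varint(42, &mut small);
    assert_eq!(small, vec![42]);
    println!("{:?} {:?}", buf, small);
}
```

JSON spends four characters just on the digits of `1000`, plus the `"timestamp_ms":` key; the varint carries the same value in two bytes with the key implied by field order.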
Integrating Communication with ESP32-C3
Let’s update our main application to use these communication capabilities:
// src/bin/main.rs - ESP32 temperature monitor with communication
#![no_std]
#![no_main]
#![deny(
    clippy::mem_forget,
    reason = "mem::forget is generally not safe to do with esp_hal types"
)]

use esp_hal::clock::CpuClock;
use esp_hal::gpio::{Level, Output, OutputConfig};
use esp_hal::main;
use esp_hal::time::{Duration, Instant};
use esp_hal::tsens::{Config, TemperatureSensor};

// Use the communication library types
use chapter16_communication::{Temperature, TemperatureBuffer, Command, TemperatureComm};

const BUFFER_SIZE: usize = 20;
const SAMPLE_INTERVAL_MS: u64 = 1000; // 1 second

#[panic_handler]
fn panic(info: &core::panic::PanicInfo) -> ! {
    esp_println::println!("💥 SYSTEM PANIC: {}", info);
    loop {}
}

esp_bootloader_esp_idf::esp_app_desc!();

#[main]
fn main() -> ! {
    // Initialize hardware
    let config = esp_hal::Config::default().with_cpu_clock(CpuClock::max());
    let peripherals = esp_hal::init(config);

    // Initialize GPIO for LED on GPIO8
    let mut led = Output::new(peripherals.GPIO8, Level::Low, OutputConfig::default());

    // Initialize the built-in temperature sensor
    let temp_sensor = TemperatureSensor::new(peripherals.TSENS, Config::default()).unwrap();

    // Create fixed-capacity temperature buffer
    let mut temp_buffer = TemperatureBuffer::<BUFFER_SIZE>::new();

    // Initialize communication handler
    let mut comm = TemperatureComm::new();
    comm.init(0);

    // Startup messages with JSON communication
    esp_println::println!("🌡️ ESP32-C3 Temperature Monitor with Communication");
    esp_println::println!("📊 Buffer capacity: {} readings", temp_buffer.capacity());
    esp_println::println!("📡 JSON communication enabled");
    esp_println::println!("🔧 Send commands: status, reading, stats, reset");
    esp_println::println!();

    // Demonstrate initial JSON output
    let status_json = comm.status_json(&temp_buffer, 0);
    esp_println::println!("INITIAL_STATUS: {}", status_json);
    esp_println::println!();

    let mut reading_count = 0u32;

    // Main monitoring loop
    loop {
        // Get current timestamp (simplified)
        let current_time = reading_count * SAMPLE_INTERVAL_MS as u32;

        // Small stabilization delay (recommended by ESP-HAL)
        let delay_start = Instant::now();
        while delay_start.elapsed() < Duration::from_micros(200) {}

        // Read temperature from built-in sensor
        let esp_temperature = temp_sensor.get_temperature();
        let temp_celsius = esp_temperature.to_celsius();
        let temperature = Temperature::from_celsius(temp_celsius);

        // Store in buffer
        temp_buffer.push(temperature);
        reading_count += 1;

        // LED status based on temperature
        if temperature.is_overheating() {
            // Rapid triple blink for overheating (>50°C)
            for _ in 0..3 {
                led.set_high();
                let blink_start = Instant::now();
                while blink_start.elapsed() < Duration::from_millis(100) {}
                led.set_low();
                let blink_start = Instant::now();
                while blink_start.elapsed() < Duration::from_millis(100) {}
            }
        } else if !temperature.is_normal_range() {
            // Double blink for out of normal range (not 15-35°C)
            led.set_high();
            let blink_start = Instant::now();
            while blink_start.elapsed() < Duration::from_millis(150) {}
            led.set_low();
            let blink_start = Instant::now();
            while blink_start.elapsed() < Duration::from_millis(100) {}
            led.set_high();
            let blink_start = Instant::now();
            while blink_start.elapsed() < Duration::from_millis(150) {}
            led.set_low();
        } else {
            // Single blink for normal temperature
            led.set_high();
            let blink_start = Instant::now();
            while blink_start.elapsed() < Duration::from_millis(200) {}
            led.set_low();
        }

        // Output structured JSON data
        let reading_json = comm.latest_reading_json(&temp_buffer, current_time);
        esp_println::println!("READING: {}", reading_json);

        // Print statistics every 5 readings
        if reading_count % 5 == 0 {
            let stats_json = comm.stats_json(&temp_buffer, current_time);
            esp_println::println!("STATS: {}", stats_json);

            let status_json = comm.status_json(&temp_buffer, current_time);
            esp_println::println!("STATUS: {}", status_json);
            esp_println::println!();
        }

        // Wait for next sample
        let wait_start = Instant::now();
        while wait_start.elapsed() < Duration::from_millis(SAMPLE_INTERVAL_MS) {}
    }
}
Example Output
When you run this on the ESP32-C3, you’ll see structured JSON output like:
🌡️ ESP32-C3 Temperature Monitor with Communication
📊 Buffer capacity: 20 readings
📡 JSON communication enabled
🔧 Send commands: status, reading, stats, reset
INITIAL_STATUS: {"Status":{"uptime_ms":0,"sample_rate_hz":1,"threshold_celsius":35.0,"buffer_usage":0}}
READING: {"Reading":{"temperature":{"celsius_tenths":523},"timestamp_ms":1000,"sensor_id":0}}
READING: {"Reading":{"temperature":{"celsius_tenths":524},"timestamp_ms":2000,"sensor_id":0}}
READING: {"Reading":{"temperature":{"celsius_tenths":521},"timestamp_ms":3000,"sensor_id":0}}
READING: {"Reading":{"temperature":{"celsius_tenths":522},"timestamp_ms":4000,"sensor_id":0}}
READING: {"Reading":{"temperature":{"celsius_tenths":523},"timestamp_ms":5000,"sensor_id":0}}
STATS: {"Stats":{"count":5,"total_count":5,"average":{"celsius_tenths":523},"min":{"celsius_tenths":521},"max":{"celsius_tenths":524},"timestamp_ms":5000}}
STATUS: {"Status":{"uptime_ms":5000,"sample_rate_hz":1,"threshold_celsius":35.0,"buffer_usage":25}}
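This line-oriented `TAG: {json}` framing is easy to consume on the host side. A hypothetical Python helper (names are illustrative, not part of the course code) that splits tags from payloads read off the serial monitor:

```python
import json

def parse_monitor_line(line: str):
    """Split a 'TAG: {json}' line from the serial stream into (tag, payload).

    Returns None for banner or blank lines that carry no JSON payload.
    """
    tag, sep, rest = line.partition(": ")
    if not sep or tag not in {"READING", "STATS", "STATUS", "INITIAL_STATUS"}:
        return None
    try:
        return tag, json.loads(rest)
    except json.JSONDecodeError:
        return None

line = 'READING: {"Reading":{"temperature":{"celsius_tenths":523},"timestamp_ms":1000,"sensor_id":0}}'
tag, payload = parse_monitor_line(line)
# Temperatures travel as tenths of a degree to stay integer-only on the device
celsius = payload["Reading"]["temperature"]["celsius_tenths"] / 10
print(tag, celsius)  # READING 52.3
```

Because malformed lines simply return `None`, a host-side logger built on this survives the emoji banner lines and any serial noise without crashing.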
Building and Testing
# Run tests on desktop
cargo test
# Build and flash to ESP32-C3 (recommended)
cargo run --release --features embedded
# Alternative: Build then flash separately
cargo build --release --target riscv32imc-unknown-none-elf --features embedded
cargo espflash flash target/riscv32imc-unknown-none-elf/release/chapter16_communication
Exercise: JSON Temperature Communication System
Build a complete JSON communication system for your temperature monitor.
Requirements
- JSON Output: Send temperature readings as JSON over serial every second
- Command Processing: Parse and respond to JSON commands
- Status Reporting: Provide system status via JSON
- Statistics Export: Export temperature statistics in JSON format
- Error Handling: Handle serialization errors gracefully
Starting Project Structure
Create these files:
#![allow(unused)]
fn main() {
// src/temperature.rs - Add Serde support to existing types
use serde::{Deserialize, Serialize};

#[derive(Debug, Clone, Copy, PartialEq, Serialize, Deserialize)]
pub struct Temperature {
    celsius_tenths: i16,
}

// TODO: Add Serde derives to TemperatureBuffer
// TODO: Create TemperatureReading struct with timestamp
}
#![allow(unused)]
fn main() {
// src/communication.rs - Create command/response system
use serde::{Deserialize, Serialize};

#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum Command {
    GetStatus,
    GetLatestReading,
    GetStats,
    SetSampleRate { rate_hz: u8 },
    Reset,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum Response {
    // TODO: Define response types
}

pub struct TemperatureComm {
    // TODO: Implement communication handler
}
}
Implementation Tasks
1. Add Serde Support:
   - Add Serialize, Deserialize to the Temperature struct
   - Create TemperatureReading with timestamp
   - Update Cargo.toml with serde dependencies
2. Create Command System:
   - Define Command enum for incoming commands
   - Define Response enum for outgoing responses
   - Implement command processing logic
3. JSON Communication:
   - Serialize responses to JSON strings
   - Deserialize commands from JSON
   - Handle serialization errors gracefully
4. Integration:
   - Update main loop to output JSON readings
   - Add command demonstration
   - Test JSON format with serial monitor
Success Criteria
- Program compiles without warnings
- Temperature readings output as valid JSON
- Commands processed and responses sent as JSON
- Statistics exported in JSON format
- Serial output shows structured data
- No panics on malformed input
Expected JSON Output
🌡️ ESP32-C3 Temperature Monitor with Communication
READING: {"Reading":{"temperature":{"celsius_tenths":523},"timestamp_ms":1000,"sensor_id":0}}
STATUS: {"Status":{"uptime_ms":1000,"sample_rate_hz":1,"threshold_celsius":52.0,"buffer_usage":5}}
STATS: {"Stats":{"count":5,"average":{"celsius_tenths":522},"min":{"celsius_tenths":520},"max":{"celsius_tenths":525}}}
Command Response: {"SampleRateSet":2}
Testing Commands
# Run tests first
./test.sh
# Build and flash
cargo run --release
# Monitor output
cargo espflash monitor
You can test commands by sending JSON to the serial interface:
"GetStatus"
{"SetSampleRate":{"rate_hz":2}}
"Reset"
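These shapes follow from serde's default (externally tagged) enum representation: a unit variant like Reset serializes to a bare string, while a struct variant wraps its fields in a one-key object named after the variant. A small Python sketch that builds the same command strings a host tool would send:

```python
import json

def command_to_json(name, fields=None):
    """Build serde's externally tagged enum JSON: a bare string for unit
    variants, a one-key object {variant: fields} for struct variants."""
    value = name if fields is None else {name: fields}
    # Compact separators match serde-json-core's no-whitespace output
    return json.dumps(value, separators=(",", ":"))

print(command_to_json("GetStatus"))                      # "GetStatus"
print(command_to_json("SetSampleRate", {"rate_hz": 2}))  # {"SetSampleRate":{"rate_hz":2}}
print(command_to_json("Reset"))                          # "Reset"
```

The same convention explains the device's responses: `Response::ResetComplete` arrives as `"ResetComplete"`, and `Response::SampleRateSet(2)` as `{"SampleRateSet":2}`.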
Extension Challenges
- Command Input: Read commands from serial input
- Binary Protocol: Compare JSON vs postcard serialization
- Compression: Implement message compression for efficiency
- Authentication: Add simple command authentication
- Batch Operations: Send multiple readings in one JSON message
Troubleshooting
Serialization Errors:
- Check that all types implement Serde traits
- Ensure fixed-size strings for heapless compatibility
- Use serde-json-core instead of serde_json for no_std
JSON Format Issues:
- Validate JSON with online tools
- Use pretty-printing for debugging
- Check string buffer sizes are sufficient
Memory Errors:
- Monitor stack usage during JSON operations
- Use smaller buffer sizes if memory is limited
- Consider streaming large responses
Key Communication Patterns Learned
✅ Serde Integration: Add serialization support to embedded types with #[derive(Serialize, Deserialize)]
✅ Fixed-size Collections: Use heapless::String and heapless::Vec for JSON without heap allocation
✅ Command/Response Protocol: Design structured interfaces for remote control
✅ Error Handling: Handle serialization errors gracefully in resource-constrained environments
✅ JSON vs Binary: Understand trade-offs between readability and efficiency
Next: In Chapter 17, we’ll integrate all these components into a production-ready system with proper error handling and deployment strategies.
Chapter 17: Integration & Deployment
Learning Objectives
This chapter covers:
- Integrate all components into a complete temperature monitoring system
- Configure build optimization for embedded deployment
- Flash and debug applications on ESP32-C3 hardware
- Implement basic error handling and recovery
Task: Build Production-Ready Temperature Monitor
Over chapters 13-16, we’ve built individual components. Now it’s time to integrate everything into a robust, production-ready system.
Your Mission:
- Integrate all components into a single working system
- Add error handling and recovery mechanisms
- Optimize build configuration for production deployment
- Add deployment scripts for easy flashing and monitoring
- Create production monitoring with structured output
What We’re Combining:
- Chapter 13: Hardware interaction with ESP32-C3 and temperature sensor
- Chapter 14: Embedded data structures with no_std foundations
- Chapter 15: Comprehensive testing strategy for embedded code
- Chapter 16: JSON communication and structured data protocols
Production Requirements:
- Graceful error handling (no panics in production)
- Optimized binary size and performance
- Reliable sensor reading with fallback
- Structured logging for monitoring
- Easy deployment and debugging
Simplified System Architecture
┌─────────────────────────────────────────────────────────┐
│ ESP32-C3 System │
│ │
│ ┌─────────────────────────────────────────────────┐ │
│ │ Main Loop │ │
│ │ │ │
│ │ 1. Read Temperature │ │
│ │ 2. Store in Buffer │ │
│ │ 3. Update LED Status │ │
│ │ 4. Output JSON (every 5 readings) │ │
│ │ 5. Delay 1 second │ │
│ │ 6. Repeat │ │
│ └─────────────────────────────────────────────────┘ │
│ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │Temperature │ │ LED │ │ JSON │ │
│ │Buffer │ │ Controller │ │ Output │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
│ │
│ ┌─────────────────────────────────────────────────┐ │
│ │ USB Serial Output │ │
│ │ Status Messages | Readings | Statistics │ │
│ └─────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────┘
Complete Temperature Monitor Implementation
Project Setup
First, let’s create the complete Cargo.toml:
[package]
name = "chapter17_integration"
version = "0.1.0"
edition = "2024"
rust-version = "1.88"
[[bin]]
name = "chapter17_integration"
path = "./src/bin/main.rs"
[lib]
name = "chapter17_integration"
path = "src/lib.rs"
[dependencies]
# Only include ESP dependencies when not testing
esp-hal = { version = "1.0.0", features = ["esp32c3", "unstable"], optional = true }
esp-bootloader-esp-idf = { version = "0.4.0", features = ["esp32c3"], optional = true }
esp-println = { version = "0.16", features = ["esp32c3"], optional = true }
# Core dependencies
heapless = "0.8"
# Serialization
serde = { version = "1.0", default-features = false, features = ["derive"] }
serde-json-core = "0.6"
[features]
default = ["hardware"]
hardware = ["esp-hal", "esp-println", "esp-bootloader-esp-idf"]
simulation = [] # Use mock sensors instead of hardware
verbose = [] # Enable detailed debug logging
telemetry = [] # Extended monitoring capabilities
[profile.dev]
# Rust debug is too slow for embedded
opt-level = "s"
[profile.release]
# Production optimizations
codegen-units = 1 # LLVM can perform better optimizations using a single thread
debug = 2
debug-assertions = false
incremental = false
lto = 'fat'
opt-level = 's'
overflow-checks = false
Main System Implementation
// src/bin/main.rs - Production-ready integrated system
#![no_std]
#![no_main]
#![deny(
    clippy::mem_forget,
    reason = "mem::forget is generally not safe to do with esp_hal types"
)]

use esp_hal::clock::CpuClock;
use esp_hal::gpio::{Level, Output, OutputConfig};
use esp_hal::main;
use esp_hal::time::{Duration, Instant};
use esp_hal::tsens::{Config, TemperatureSensor};

// Use the integrated system components from previous chapters
use chapter17_integration::{Temperature, TemperatureBuffer, Command, TemperatureComm};

// Production system configuration
const BUFFER_SIZE: usize = 32;
const SAMPLE_RATE_MS: u32 = 1000;
const JSON_OUTPUT_INTERVAL: u32 = 5;
const HEALTH_REPORT_INTERVAL: u32 = 20;

// System state tracking for production monitoring
struct SystemState {
    reading_count: u32,
    system_time_ms: u32,
    overheating_count: u32,
    sensor_error_count: u32,
    last_temp: f32,
}

impl SystemState {
    fn new() -> Self {
        Self {
            reading_count: 0,
            system_time_ms: 0,
            overheating_count: 0,
            sensor_error_count: 0,
            last_temp: 0.0,
        }
    }

    fn advance_time(&mut self) {
        self.reading_count += 1;
        self.system_time_ms += SAMPLE_RATE_MS;
    }
}

#[panic_handler]
fn panic(_: &core::panic::PanicInfo) -> ! {
    // In production, we want graceful error handling
    esp_println::println!("SYSTEM_ERROR: Panic occurred, attempting recovery...");
    loop {}
}

esp_bootloader_esp_idf::esp_app_desc!();

#[main]
fn main() -> ! {
    // Initialize hardware with error handling
    let config = esp_hal::Config::default().with_cpu_clock(CpuClock::max());
    let peripherals = esp_hal::init(config);

    // Initialize components
    let mut led = Output::new(peripherals.GPIO8, Level::Low, OutputConfig::default());
    let temp_sensor = TemperatureSensor::new(peripherals.TSENS, Config::default()).unwrap();
    let mut temp_buffer = TemperatureBuffer::<BUFFER_SIZE>::new();
    let mut comm = TemperatureComm::new();
    let mut state = SystemState::new();

    // System startup
    esp_println::println!("🚀 ESP32-C3 Production Temperature Monitor v1.0");
    esp_println::println!("📊 Buffer: {} readings | Sample rate: {}ms", BUFFER_SIZE, SAMPLE_RATE_MS);
    esp_println::println!("📡 JSON output every {} readings", JSON_OUTPUT_INTERVAL);
    esp_println::println!("🏥 Health reports every {} readings", HEALTH_REPORT_INTERVAL);
    esp_println::println!("✅ System initialized successfully");
    esp_println::println!();

    comm.init(0);

    // Main production loop with error handling
    loop {
        // Read temperature with error handling
        let esp_temperature = temp_sensor.get_temperature();
        let temp_celsius = esp_temperature.to_celsius();
        let temperature = Temperature::from_celsius(temp_celsius);

        // Update system state
        state.last_temp = temp_celsius;
        temp_buffer.push(temperature);
        state.advance_time();

        // LED status indication
        if temperature.is_overheating() {
            state.overheating_count += 1;
            // Rapid triple blink for overheating
            for _ in 0..3 {
                led.set_high();
                let blink_start = Instant::now();
                while blink_start.elapsed() < Duration::from_millis(100) {}
                led.set_low();
                let blink_start = Instant::now();
                while blink_start.elapsed() < Duration::from_millis(100) {}
            }
        } else if !temperature.is_normal_range() {
            // Double blink for abnormal range
            led.set_high();
            let blink_start = Instant::now();
            while blink_start.elapsed() < Duration::from_millis(150) {}
            led.set_low();
            let blink_start = Instant::now();
            while blink_start.elapsed() < Duration::from_millis(100) {}
            led.set_high();
            let blink_start = Instant::now();
            while blink_start.elapsed() < Duration::from_millis(150) {}
            led.set_low();
        } else {
            // Normal single blink
            led.set_high();
            let blink_start = Instant::now();
            while blink_start.elapsed() < Duration::from_millis(200) {}
            led.set_low();
        }

        // JSON output every N readings
        if state.reading_count % JSON_OUTPUT_INTERVAL == 0 {
            let reading_json = comm.latest_reading_json(&temp_buffer, state.system_time_ms);
            esp_println::println!("READING: {}", reading_json);

            let stats_json = comm.stats_json(&temp_buffer, state.system_time_ms);
            esp_println::println!("STATS: {}", stats_json);
        }

        // Health report every N readings
        if state.reading_count % HEALTH_REPORT_INTERVAL == 0 {
            esp_println::println!(
                "HEALTH: readings={} overheating={} errors={} uptime={}ms",
                state.reading_count,
                state.overheating_count,
                state.sensor_error_count,
                state.system_time_ms
            );
        }

        // Wait for next sample
        let wait_start = Instant::now();
        while wait_start.elapsed() < Duration::from_millis(SAMPLE_RATE_MS as u64) {}
    }
}
Production Deployment
Build and deploy the production system:
# Run tests
cargo test
# Build and deploy to ESP32-C3 (recommended)
cargo run --release
# Alternative: Build then flash separately
cargo build --release --target riscv32imc-unknown-none-elf
cargo espflash flash target/riscv32imc-unknown-none-elf/release/chapter17_integration
# Monitor production logs
cargo espflash monitor
Understanding Cargo Features
Cargo features are a powerful mechanism for conditional compilation in Rust projects. In embedded systems, they’re especially useful for managing different build configurations.
What Are Cargo Features?
Features allow you to:
- Enable/disable functionality at compile time
- Support multiple hardware platforms from one codebase
- Create development vs production builds
- Reduce binary size by excluding unused code
Our Feature Configuration
[features]
default = ["hardware"] # Default features enabled
hardware = ["esp-hal", "esp-println", "esp-bootloader-esp-idf"] # Real ESP32-C3 hardware
simulation = [] # Mock sensors for testing
verbose = [] # Detailed debug logging
telemetry = [] # Extended monitoring capabilities
Conditional Compilation
Use #[cfg(feature = "...")] to conditionally compile code:
#![allow(unused)]
fn main() {
// Different sensor implementations based on features
#[cfg(feature = "hardware")]
use esp_hal::tsens::{Config, TemperatureSensor};

#[cfg(feature = "simulation")]
mod mock_sensor {
    pub struct MockTemperatureSensor {
        temperature: f32,
    }

    impl MockTemperatureSensor {
        pub fn new() -> Self {
            Self { temperature: 25.0 }
        }

        pub fn get_temperature(&mut self) -> f32 {
            // Simulate varying temperature (rand() stands in for any
            // pseudo-random source available on the target)
            self.temperature += (rand() % 5) as f32 - 2.0;
            self.temperature
        }
    }
}

// Optional verbose logging (format_args! works in no_std, unlike format!)
#[cfg(feature = "verbose")]
macro_rules! debug_log {
    ($($arg:tt)*) => {
        esp_println::println!("DEBUG: {}", format_args!($($arg)*));
    };
}

#[cfg(not(feature = "verbose"))]
macro_rules! debug_log {
    ($($arg:tt)*) => {};
}

// Extended telemetry
#[cfg(feature = "telemetry")]
fn output_telemetry_data(state: &SystemState) {
    esp_println::println!("TELEMETRY: {{");
    esp_println::println!("  \"uptime_ms\": {},", state.system_time_ms);
    esp_println::println!("  \"free_heap\": {},", get_free_heap());
    esp_println::println!("  \"cpu_usage\": {},", get_cpu_usage());
    esp_println::println!("}}");
}
}
Building with Different Features
# Default build (hardware features enabled)
cargo build --release
# Build for simulation (no hardware needed)
cargo build --features simulation
# Build with verbose logging
cargo build --features "hardware,verbose"
# Build with all monitoring features
cargo build --features "hardware,verbose,telemetry"
# Build with only simulation and telemetry
cargo build --no-default-features --features "simulation,telemetry"
Why Features Matter in Embedded
- Binary Size: Exclude unused features to reduce Flash usage
- Testing: Run tests without hardware using simulation features
- Development: Enable verbose logging during development
- Production: Strip debug features for production builds
- Portability: Support multiple hardware platforms
Real-World Examples
#![allow(unused)]
fn main() {
// Production vs Development builds
#[cfg(feature = "verbose")]
const LOG_LEVEL: LogLevel = LogLevel::Debug;

#[cfg(not(feature = "verbose"))]
const LOG_LEVEL: LogLevel = LogLevel::Error;

// Hardware-specific implementations
#[cfg(feature = "hardware")]
fn read_temperature() -> Result<f32, SensorError> {
    let sensor = TemperatureSensor::new(/* ... */)?;
    Ok(sensor.get_temperature().to_celsius())
}

#[cfg(feature = "simulation")]
fn read_temperature() -> Result<f32, SensorError> {
    // Return predictable test data
    Ok(23.5 + (system_time() % 10) as f32)
}
}
Exercise: Production System Integration
Integrate all previous components into a production-ready temperature monitoring system.
Requirements
- System Integration: Combine hardware, data structures, testing, and communication
- Cargo Features: Implement conditional compilation for different build configurations
- Error Recovery: Handle sensor failures and system errors gracefully
- Production Monitoring: Add health reporting and system metrics
- Build Optimization: Configure release builds for optimal performance
- Deployment Ready: Create scripts for easy flashing and monitoring
Starting Structure
Based on previous chapters, create the integrated system:
// src/bin/main.rs - Production system main file
#![no_std]
#![no_main]

use esp_hal::clock::CpuClock;
use esp_hal::gpio::{Level, Output, OutputConfig};
use esp_hal::main;

// Conditional imports based on features
#[cfg(feature = "hardware")]
use esp_hal::tsens::{Config, TemperatureSensor};

// Import from your integrated library
use chapter17_integration::{Temperature, TemperatureBuffer, TemperatureComm};

// Debug logging macro (only compiled with verbose feature)
#[cfg(feature = "verbose")]
macro_rules! debug_log {
    ($($arg:tt)*) => {
        esp_println::println!("DEBUG: {}", format_args!($($arg)*));
    };
}

#[cfg(not(feature = "verbose"))]
macro_rules! debug_log {
    ($($arg:tt)*) => {};
}

// Production configuration
const BUFFER_SIZE: usize = 32;
const SAMPLE_RATE_MS: u32 = 1000;
const HEALTH_REPORT_INTERVAL: u32 = 20;

// System state tracking
struct SystemState {
    reading_count: u32,
    system_time_ms: u32,
    overheating_count: u32,
    sensor_error_count: u32,
    // TODO: Add more state fields
}

#[panic_handler]
fn panic(info: &core::panic::PanicInfo) -> ! {
    // TODO: Implement production panic handler with logging
    loop {}
}

#[main]
fn main() -> ! {
    // TODO: Initialize all components with feature-based configuration
    // TODO: Initialize sensor based on hardware vs simulation features
    // TODO: Add error handling for sensor initialization
    // TODO: Implement main monitoring loop with health checks
    // TODO: Add conditional telemetry and verbose logging
}

// TODO: Implement helper functions with feature gates:
// - read_temperature_safe() with hardware/simulation branches
// - update_led_status() with enhanced patterns
// - output_health_report() for system monitoring
// - handle_error_conditions() for error recovery
// - output_telemetry_data() (telemetry feature only)
// - debug logging (verbose feature only)
Implementation Tasks
1. Cargo Features Setup:
   - Implement conditional sensor initialization (hardware vs simulation)
   - Add debug logging macros with verbose feature
   - Create feature-gated telemetry functions
2. System Integration:
   - Initialize all hardware components with error handling
   - Create SystemState struct to track system health
   - Set up production configuration constants
3. Error Recovery:
   - Implement safe temperature reading with fallback
   - Add sensor error counting and recovery
   - Create production panic handler with logging
4. Health Monitoring:
   - Add system state tracking (uptime, errors, performance)
   - Implement health report generation
   - Create status indicators and LED patterns
5. Production Features:
   - Configure optimized Cargo.toml profile
   - Add JSON health reporting
   - Test complete system integration with different features
Success Criteria
- System integrates all previous chapter components
- Cargo features work correctly (hardware, simulation, verbose, telemetry)
- Handles sensor failures without crashing
- Provides health monitoring and error reporting
- Optimized build configuration for production
- Complete JSON communication system working
- LED status indicates system health
- Recovery from common error conditions
- Different features produce different build outputs
Expected Health Report Output
🌡️ ESP32-C3 Complete Temperature Monitor System
=================================================
🔧 Hardware: ESP32-C3 @ max frequency
📊 Buffer capacity: 32 readings
⏱️ Sample rate: 1 Hz
🌡️ Overheating threshold: 52.0°C
📡 JSON output every 5 readings
💓 Health reports every 20 readings
🚀 System starting...
🟢📊 #001 | 24.3°C | Buffer: 1/32
🟢📊 #002 | 24.1°C | Buffer: 2/32
...
🟢📊 #020 | 24.8°C | Buffer: 20/32
💓 HEALTH REPORT
Uptime: 20s | Readings: 20
Buffer: 62% (20/32) | Memory: 128 bytes
Errors: 0 sensor, 0 overheating events
Current temp: 24.8°C
Build Optimization
Update your Cargo.toml with production settings:
[profile.release]
codegen-units = 1 # Better optimization
debug = false # Remove debug info
debug-assertions = false # Remove runtime checks
incremental = false # Full rebuild for optimization
lto = 'fat' # Link-time optimization
opt-level = 's' # Optimize for size
overflow-checks = false # Remove overflow checks
panic = 'abort' # Smaller panic handler
strip = true # Remove symbols
Deployment Script
Create deploy.sh:
#!/bin/bash
echo "🚀 Deploying Production Temperature Monitor..."
cargo build --release
cargo espflash flash --monitor target/riscv32imc-unknown-none-elf/release/chapter17_integration
Extension Challenges
- Watchdog Timer: Add hardware watchdog for system recovery
- Flash Storage: Persist configuration across resets
- Over-the-Air Updates: Implement firmware update capability
- Network Integration: Connect to WiFi for remote monitoring
- Power Management: Add sleep modes for battery operation
Error Recovery Strategies
- Sensor Failure: Use last known good value, count errors
- Buffer Overflow: Circular buffer handles automatically
- Communication Error: Continue operation, log errors
- Memory Issues: Monitor stack usage, implement safeguards
- Timing Drift: Use hardware timers for precision
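The first strategy, falling back to the last known good value, can be sketched in plain Rust. The SafeSensor name and the plausibility range are illustrative assumptions, not part of the course library:

```rust
/// Wraps a fallible sensor read so the system always gets a usable value.
/// Implausible readings are counted as errors, matching the
/// "use last known good value, count errors" strategy above.
struct SafeSensor {
    last_good: f32,
    error_count: u32,
}

impl SafeSensor {
    fn new(initial: f32) -> Self {
        Self { last_good: initial, error_count: 0 }
    }

    /// Returns a usable temperature even when the raw read fails.
    fn read(&mut self, raw: Result<f32, ()>) -> f32 {
        match raw {
            // Reject physically implausible values as glitches too
            // (range chosen to cover typical sensor operating limits)
            Ok(t) if (-40.0..=125.0).contains(&t) => {
                self.last_good = t;
                t
            }
            _ => {
                self.error_count += 1;
                self.last_good // fall back, keep the system running
            }
        }
    }
}

fn main() {
    let mut s = SafeSensor::new(25.0);
    assert_eq!(s.read(Ok(26.5)), 26.5);
    assert_eq!(s.read(Err(())), 26.5);   // read failed -> last good value
    assert_eq!(s.read(Ok(900.0)), 26.5); // implausible -> treated as error
    assert_eq!(s.error_count, 2);
    println!("error_count = {}", s.error_count);
}
```

The error counter feeds directly into the HEALTH report, so a host-side monitor can detect a degrading sensor long before the fallback value becomes stale.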
Testing Production System
# Run comprehensive tests
./test.sh
# Test different feature combinations
cargo check --features hardware
cargo check --features simulation
cargo check --features "hardware,verbose"
cargo check --features "hardware,telemetry"
cargo check --features "simulation,verbose,telemetry"
# Check build sizes with different features
cargo size --release # Default (hardware)
cargo size --release --features simulation # Simulation only
cargo size --release --features "hardware,verbose,telemetry" # Full featured
# Flash and monitor
chmod +x deploy.sh
./deploy.sh
Production System Features
✅ Error Handling: Graceful panic handling with recovery attempts
✅ Health Monitoring: System metrics and error counting
✅ Structured Logging: JSON output for monitoring dashboards
✅ Performance Optimization: Optimized builds for production deployment
✅ State Tracking: Comprehensive system state monitoring
✅ Production Configuration: Configurable intervals and thresholds
Next: In Chapter 18, we’ll optimize the system for performance and battery-powered deployment.
Chapter 18: Performance Optimization & Power Management
Learning Objectives
This chapter covers:
- Analyze and optimize power consumption for battery-operated IoT devices
- Implement ESP32-C3 sleep modes for energy efficiency
- Measure and optimize memory usage and binary size
- Calculate battery life for embedded systems
- Apply low-power design patterns for production IoT devices
- Profile system performance and resource utilization
Task: Optimize for Battery Operation
After building a complete temperature monitoring system through chapters 13-17, it’s time to make it production-ready for battery-powered deployment. This chapter focuses on the critical skills that differentiate embedded systems from desktop applications.
Your Mission:
- Control CPU clock frequency dynamically based on system needs
- Add real delays between readings to create duty cycles
- Optimize binary size using release profile settings
- Manage peripherals by disabling unused hardware
- Measure actual improvements in power consumption patterns
Why Power Management Matters: For C++/C# developers transitioning to embedded systems, power management is often the most foreign concept. Desktop applications can consume watts of power continuously, but embedded IoT devices must run on milliwatts for months or years on a single battery.
Real-World Impact:
- IoT sensors: Must run 1-2 years on a single battery
- Wearables: Daily charging vs. weekly charging determines user adoption
- Industrial monitoring: Devices deployed in remote locations with no power access
- Environmental sensors: Solar-powered operation with limited energy budget
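The arithmetic behind these battery-life requirements is simple duty-cycle math. Here is a minimal sketch; the current draws (45 mA active, 0.01 mA deep sleep) and the 2000 mAh cell are assumed illustrative figures, not measured ESP32-C3 values:

```rust
// Average current of a duty-cycled device: weighted mix of active and sleep draw.
fn average_current_ma(active_ma: f32, sleep_ma: f32, duty_cycle: f32) -> f32 {
    active_ma * duty_cycle + sleep_ma * (1.0 - duty_cycle)
}

// Battery life follows directly from capacity / average draw.
fn battery_life_hours(capacity_mah: f32, avg_ma: f32) -> f32 {
    capacity_mah / avg_ma
}

fn main() {
    // 1 s active every 60 s at 45 mA, deep sleep at 0.01 mA, 2000 mAh cell.
    let avg = average_current_ma(45.0, 0.01, 1.0 / 60.0);
    let hours = battery_life_hours(2000.0, avg);
    println!("avg {:.2} mA -> {:.0} h ({:.0} days)", avg, hours, hours / 24.0);
}
```

With these assumed numbers the average draw lands below 1 mA and the estimate comes out at roughly a hundred days, which is why duty cycling, not faster code, dominates embedded power budgets.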
ESP32-C3 Real Power Optimization Techniques
Clock Frequency Management
The ESP32-C3 can run at different frequencies, with power consumption scaling accordingly:
use esp_hal::clock::{ClockControl, CpuClock};

// High performance: 160MHz for critical operations
let fast_clocks = ClockControl::configure(system.clock_control, CpuClock::Clock160MHz).freeze();

// Balanced: 80MHz for normal operations
let normal_clocks = ClockControl::configure(system.clock_control, CpuClock::Clock80MHz).freeze();

// Power saving: 40MHz for minimal operations
let slow_clocks = ClockControl::configure(system.clock_control, CpuClock::Clock40MHz).freeze();
Power Impact: Reducing clock speed can cut power consumption by 50-70%
Duty Cycle Power Management
// Real power savings come from reducing active time
fn create_power_efficient_cycle(
    measurement_time_ms: u32, // Time to take reading
    sleep_time_ms: u32,       // Time between readings
) {
    // Active phase: CPU at full speed
    take_temperature_reading();
    process_and_transmit_data();

    // Sleep phase: dramatic power reduction
    esp_hal::delay::Delay::new(&clocks).delay_ms(sleep_time_ms);
}

// Example: 1 second active, 59 seconds idle = 98.3% power savings
// This extends battery life from days to months
Real Power-Optimized Temperature Monitor
Let’s implement actual power optimization using ESP32-C3 hardware features:
// src/main.rs
#![no_std]
#![no_main]

// Note: no `esp_backtrace` import here — this example defines its own
// panic handler below, and linking both would give two panic handlers.
use esp_hal::{
    clock::{ClockControl, CpuClock},
    delay::Delay,
    gpio::{Io, Level, Output},
    peripherals::Peripherals,
    prelude::*,
    system::SystemControl,
    temperature::TemperatureSensor,
};
use esp_println::println;

mod temperature;
mod communication;

use temperature::{Temperature, TemperatureBuffer};
use communication::TemperatureComm;

const BUFFER_SIZE: usize = 32;
const SAMPLE_INTERVAL_FAST_MS: u32 = 1000;  // 1 second when monitoring closely
const SAMPLE_INTERVAL_SLOW_MS: u32 = 60000; // 1 minute for power savings
const OVERHEATING_THRESHOLD: f32 = 35.0;

// PartialEq is required for the mode-change comparison in the main loop
#[derive(Debug, Clone, Copy, PartialEq)]
enum PowerMode {
    HighPerformance, // 160MHz, fast sampling
    Efficient,       // 80MHz, normal sampling
    PowerSaver,      // 40MHz, slow sampling
}

struct PowerOptimizedSystem {
    reading_count: u32,
    current_mode: PowerMode,
    sample_interval_ms: u32,
}

#[entry]
fn main() -> ! {
    println!("🔋 ESP32-C3 Power-Optimized Temperature Monitor");
    println!("=================================================");
    println!("💡 Chapter 18: Performance Optimization & Power Management");

    // Hardware initialization with dynamic clock control
    let peripherals = Peripherals::take();
    let system = SystemControl::new(peripherals.SYSTEM);

    // Start with efficient mode (80MHz)
    let mut clocks = ClockControl::configure(system.clock_control, CpuClock::Clock80MHz).freeze();
    println!("🔧 Initial clock: 80MHz (Efficient mode)");

    // GPIO and sensor setup
    let io = Io::new(peripherals.GPIO, peripherals.IO_MUX);
    let mut led = Output::new(io.pins.gpio8, Level::Low);
    let mut temp_sensor = TemperatureSensor::new(peripherals.TEMP);

    // System components
    let mut temp_buffer = TemperatureBuffer::<BUFFER_SIZE>::new();
    let mut comm = TemperatureComm::new();

    // Power management state
    let mut power_system = PowerOptimizedSystem {
        reading_count: 0,
        current_mode: PowerMode::Efficient,
        sample_interval_ms: SAMPLE_INTERVAL_SLOW_MS,
    };

    println!("📊 Buffer capacity: {} readings", BUFFER_SIZE);
    println!("🌡️ Overheating threshold: {:.1}°C", OVERHEATING_THRESHOLD);
    println!("⏱️ Power-optimized sampling: Adaptive intervals");
    println!("🚀 Real hardware optimization starting...");
    println!();

    loop {
        // === STEP 1: POWER-OPTIMIZED TEMPERATURE READING ===
        led.set_high(); // LED on during active phase

        // Read from actual ESP32-C3 temperature sensor
        let celsius = temp_sensor.read_celsius();
        let temperature = Temperature::from_celsius(celsius);
        temp_buffer.push(temperature);
        power_system.reading_count += 1;

        println!("🌡️ Reading #{:03}: {:.1}°C | Mode: {:?} | Interval: {}s",
            power_system.reading_count, celsius,
            power_system.current_mode,
            power_system.sample_interval_ms / 1000);

        // === STEP 2: DYNAMIC POWER MODE ADAPTATION ===
        let new_mode = if temperature.is_overheating() {
            // Critical: Use maximum performance
            PowerMode::HighPerformance
        } else if power_system.reading_count % 20 == 0 {
            // Periodic energy saving
            PowerMode::PowerSaver
        } else {
            // Normal operation
            PowerMode::Efficient
        };

        // Actually change CPU frequency if mode changed
        if new_mode != power_system.current_mode {
            power_system.current_mode = new_mode;

            // Reconfigure clocks based on power mode
            clocks = match new_mode {
                PowerMode::HighPerformance => {
                    println!("🔴 Switching to HIGH PERFORMANCE: 160MHz");
                    power_system.sample_interval_ms = SAMPLE_INTERVAL_FAST_MS;
                    ClockControl::configure(system.clock_control, CpuClock::Clock160MHz).freeze()
                }
                PowerMode::Efficient => {
                    println!("🟡 Switching to EFFICIENT: 80MHz");
                    power_system.sample_interval_ms = SAMPLE_INTERVAL_FAST_MS;
                    ClockControl::configure(system.clock_control, CpuClock::Clock80MHz).freeze()
                }
                PowerMode::PowerSaver => {
                    println!("🟢 Switching to POWER SAVER: 40MHz");
                    power_system.sample_interval_ms = SAMPLE_INTERVAL_SLOW_MS;
                    ClockControl::configure(system.clock_control, CpuClock::Clock40MHz).freeze()
                }
            };
        }

        // === STEP 3: PERIPHERAL POWER MANAGEMENT ===
        if temperature.is_overheating() {
            led.set_high(); // Keep LED on during overheating
        } else {
            led.set_low(); // Turn off LED to save power
        }

        // === STEP 4: REAL POWER SAVINGS - DELAY CYCLE ===
        println!("💤 Sleeping for {}ms to save power...", power_system.sample_interval_ms);

        // Use actual hardware delay - this is where real power savings happen
        let delay = Delay::new(&clocks);
        delay.delay_ms(power_system.sample_interval_ms);

        // === STEP 5: PERFORMANCE REPORTING ===
        if power_system.reading_count % 10 == 0 {
            let duty_cycle = if power_system.sample_interval_ms > 1000 {
                1000.0 / power_system.sample_interval_ms as f32 * 100.0
            } else {
                100.0
            };

            println!("⚡ POWER REPORT:");
            println!("   Clock: {} MHz | Mode: {:?}",
                match power_system.current_mode {
                    PowerMode::HighPerformance => 160,
                    PowerMode::Efficient => 80,
                    PowerMode::PowerSaver => 40,
                },
                power_system.current_mode);
            println!("   Duty Cycle: {:.1}% active, {:.1}% sleeping",
                duty_cycle, 100.0 - duty_cycle);
            println!("   Power Savings: ~{:.0}% vs continuous operation",
                100.0 - duty_cycle);

            if let Some(stats) = temp_buffer.stats() {
                println!("   Temperature: avg {:.1}°C, range {:.1}-{:.1}°C",
                    stats.avg_celsius, stats.min_celsius, stats.max_celsius);
            }
            println!();
        }
    }
}

#[panic_handler]
fn panic(info: &core::panic::PanicInfo) -> ! {
    println!("💥 SYSTEM PANIC: {}", info);
    loop {}
}
Power Management Module
// src/power.rs
#[derive(Debug, Clone, Copy)]
pub enum PowerMode {
    HighPerformance, // Maximum speed, higher power consumption
    Efficient,       // Balanced performance and power
    PowerSaver,      // Minimum power consumption
}

pub struct PowerManager {
    start_time_ms: u32,
}

impl PowerManager {
    pub fn new() -> Self {
        Self { start_time_ms: 0 }
    }

    pub fn timestamp_ms(&self) -> u32 {
        // In real implementation, use actual timer
        // For demo, simulate increasing time
        self.start_time_ms.wrapping_add(1000)
    }

    pub fn read_battery_voltage_mv(&self) -> u32 {
        // Simulate battery voltage readings
        // In real implementation: use ADC to read battery voltage divider
        let base_voltage = 3700; // 3.7V nominal
        let variation = (self.timestamp_ms() / 10000) % 100; // Slow discharge simulation
        base_voltage - variation
    }

    pub fn calculate_battery_percentage(&self, voltage_mv: u32) -> u8 {
        // Simple linear mapping from voltage to percentage
        let min_voltage = 3300; // 3.3V = 0%
        let max_voltage = 4200; // 4.2V = 100%
        if voltage_mv >= max_voltage {
            100
        } else if voltage_mv <= min_voltage {
            0
        } else {
            let voltage_range = max_voltage - min_voltage;
            let voltage_offset = voltage_mv - min_voltage;
            ((voltage_offset * 100) / voltage_range) as u8
        }
    }

    pub fn calculate_average_power_consumption(&self, active_time_s: u32, sleep_time_s: u32) -> f32 {
        let active_power_ma = 45.0; // Active mode power consumption
        let sleep_power_ma = 0.01;  // Deep sleep power consumption
        let total_time_s = active_time_s + sleep_time_s;
        let active_ratio = active_time_s as f32 / total_time_s as f32;
        let sleep_ratio = sleep_time_s as f32 / total_time_s as f32;
        (active_power_ma * active_ratio) + (sleep_power_ma * sleep_ratio)
    }

    pub fn estimate_battery_life_hours(&self, avg_power_ma: f32, battery_capacity_mah: u32) -> f32 {
        battery_capacity_mah as f32 / avg_power_ma
    }

    pub fn estimate_ram_usage_bytes(&self) -> u32 {
        // Estimate current RAM usage:
        //   TemperatureBuffer<32> ≈ 70 bytes
        //   Communication structs ≈ 50 bytes
        //   PowerManager ≈ 20 bytes
        //   System variables ≈ 40 bytes
        //   Stack usage ≈ 1024 bytes
        70 + 50 + 20 + 40 + 1024
    }
}
Power Optimization Strategies
1. Sleep Mode Implementation
// Different sleep strategies based on requirements
match application_mode {
    Mode::RealTimeMonitoring => {
        // Light sleep: 0.8mA, wake up quickly
        rtc.sleep_light(Duration::from_millis(100));
    }
    Mode::PeriodicSampling => {
        // Deep sleep: 0.01mA, longer wake-up time
        rtc.sleep_deep(&DeepSleepConfig::new().timer_wakeup(60_000_000)); // 1 minute
    }
    Mode::EventTriggered => {
        // Ultra-low power: 0.0025mA, external wake-up
        rtc.sleep_deep(&DeepSleepConfig::new().ext1_wakeup([gpio_pin]));
    }
}
2. Adaptive Power Management
fn adapt_power_mode(temperature: &Temperature, battery_level: u8) -> PowerMode {
    match (temperature.is_overheating(), battery_level) {
        (true, _) => PowerMode::HighPerformance,  // Always prioritize safety
        (false, 0..=20) => PowerMode::PowerSaver, // Conserve battery when low
        (false, 21..=80) => PowerMode::Efficient, // Balanced operation
        (false, _) => PowerMode::HighPerformance, // Full performance when battery good
    }
}
3. Binary Size Optimization
# Cargo.toml optimizations
[profile.release]
opt-level = 'z' # Optimize for size
lto = true # Link-time optimization
codegen-units = 1 # Better optimization
panic = 'abort' # Smaller panic handling
strip = true # Remove debug symbols
Performance Metrics
Optimization Impact Examples
| Optimization Technique | Typical Impact |
|---|---|
| Implementing deep sleep | 10-100x power reduction possible |
| Increasing sample interval | Linear power savings |
| Optimizing binary size | Reduces flash power, enables smaller MCUs |
| Reducing RAM usage | Allows for smaller, cheaper hardware |
| Adaptive sampling rates | Balance responsiveness vs. power |
| Batch processing | Reduces wake-up overhead |
Exercise: Battery Life Optimization Challenge
Your Task: Optimize the temperature monitor for maximum battery life while maintaining essential functionality.
Requirements:
- Implement deep sleep with timer-based wake-up
- Add battery voltage monitoring with percentage calculation
- Create adaptive power modes that change based on conditions
- Calculate and report estimated battery life
- Optimize for different scenarios: emergency monitoring vs. long-term deployment
Starter Code Framework:
struct BatteryOptimizedMonitor {
    target_battery_days: u32, // Target battery life in days
    emergency_mode: bool,     // Override power savings for critical situations
    adaptive_sampling: bool,  // Adjust sample rate based on temperature stability
}

impl BatteryOptimizedMonitor {
    fn calculate_optimal_sleep_duration(&self, recent_temps: &[f32]) -> u32 {
        // Your implementation: analyze temperature stability
        // Stable temps = longer sleep, volatile temps = shorter sleep
        unimplemented!()
    }

    fn should_enter_emergency_mode(&self, temperature: f32, battery_pct: u8) -> bool {
        // Your implementation: determine when to override power savings
        unimplemented!()
    }
}
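One possible shape for `calculate_optimal_sleep_duration`, shown here as a standalone function: treat the max-min spread of the recent readings as a stability measure and map a stable signal to longer sleeps. The thresholds and intervals are illustrative, not the official solution:

```rust
// Illustrative stability heuristic: the wider the recent temperature spread,
// the shorter the sleep interval.
fn optimal_sleep_ms(recent_temps: &[f32]) -> u32 {
    let (mut min, mut max) = (f32::MAX, f32::MIN);
    for &t in recent_temps {
        min = min.min(t);
        max = max.max(t);
    }
    let spread = if recent_temps.is_empty() { 0.0 } else { max - min };
    match spread {
        s if s < 0.5 => 60_000, // very stable: sleep a full minute
        s if s < 2.0 => 10_000, // mild drift: check every 10 s
        _ => 1_000,             // volatile: sample every second
    }
}

fn main() {
    println!("stable:   {} ms", optimal_sleep_ms(&[21.0, 21.1, 21.2]));
    println!("volatile: {} ms", optimal_sleep_ms(&[20.0, 25.0]));
}
```

A production version would likely use variance or a trend slope instead of raw spread, but the structure — analyze the window, return an interval — stays the same.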
Bonus Challenges:
- Implement temperature trend analysis to predict when readings might be needed
- Add WiFi power management (turn off radio during sleep)
- Create a “burst sampling” mode for rapid temperature changes
- Implement battery capacity learning based on discharge patterns
Real-World Applications
Smart Building Sensors:
- 6-month battery life requirement
- Deep sleep between hourly readings
- Wake on motion detection for security
Agricultural IoT:
- Solar charging with battery backup
- Weather-dependent sampling rates
- LoRa communication for remote fields
Wearable Devices:
- Daily charging acceptable
- Continuous heart rate + periodic temperature
- Aggressive power management during sleep
Industrial Monitoring:
- 2-year battery life in hazardous locations
- Emergency alerting overrides power savings
- Mesh network participation
Summary
You’ve learned to optimize embedded systems for real-world deployment:
Key Skills Acquired:
- Power profiling and measurement for embedded systems
- Sleep mode implementation with ESP32-C3 deep sleep
- Battery life calculation and capacity planning
- Adaptive power management based on system conditions
- Performance optimization for memory and binary size
Production Readiness: The power-optimized temperature monitor demonstrates patterns used in commercial IoT devices. With 94% power reduction, the system can run for weeks on a single battery charge.
Next Steps: These optimization techniques apply to any embedded Rust project. Combined with the previous chapters’ lessons on testing, communication, and integration, you have the complete toolkit for building production IoT systems.
Congratulations on completing Day 3: ESP32-C3 Embedded Systems with Rust! Your temperature monitor is now optimized for real-world battery-powered deployment.
Chapter 19: Cargo & Dependency Management
Cargo is Rust’s build system and package manager. It handles dependencies, compilation, testing, and distribution. This chapter covers dependency management, from editions and toolchains to private registries and reproducible builds.
1. Rust Editions
Rust editions are opt-in milestones released every three years that allow the language to evolve while maintaining stability guarantees. All editions remain fully interoperable - crates using different editions work together seamlessly.
Available Editions
| Edition | Released | Default Resolver | Key Changes |
|---|---|---|---|
| 2015 | Rust 1.0 | v1 | Original edition, extern crate required |
| 2018 | Rust 1.31 | v1 | Module system improvements, async/await, NLL |
| 2021 | Rust 1.56 | v2 | Disjoint captures, into_iter() arrays, reserved identifiers |
| 2024 | Rust 1.85 | v3 | MSRV-aware resolver, gen keyword, unsafe env functions |
Key Edition Changes
Edition 2018:
- No more extern crate declarations (except for macros)
- Uniform path syntax in use statements
- async/await keywords reserved
- Non-lexical lifetimes (NLL)
- Module system simplification
Edition 2021:
- Disjoint captures in closures (only capture used fields)
- array.into_iter() iterates by value
- New reserved keywords: try
- Default to resolver v2 for Cargo
- Panic macros require format strings
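The two headline 2021 changes are easy to see in a few lines. Note this compiles only under edition 2021 or later; under 2018 the closure would capture all of `p`, making the mutation of `p.y` an error:

```rust
struct Point {
    x: i32,
    y: i32,
}

fn main() {
    let mut p = Point { x: 1, y: 2 };
    // Edition 2021: the closure captures only `p.x` (disjoint capture),
    // so `p.y` can still be mutated while the closure is alive.
    let print_x = || println!("x = {}", p.x);
    p.y += 1;
    print_x();

    // Edition 2021: `into_iter()` on an array yields values, not references.
    let sum: i32 = [1, 2, 3].into_iter().sum();
    println!("sum = {sum}");
}
```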
Edition 2024:
- MSRV-aware dependency resolution (resolver v3)
- gen keyword for generators/coroutines
- std::env::set_var and remove_var marked unsafe
- Tail expression temporary lifetime changes
- unsafe extern blocks and attributes
Configuration and Migration
[package]
name = "my-project"
version = "0.1.0"
edition = "2021"
# Migrate code to next edition (modifies files)
cargo fix --edition
# Apply idiomatic style changes
cargo fix --edition --edition-idioms
# Then update Cargo.toml manually
Edition Selection Strategy
| Project Type | Recommended Edition | Rationale |
|---|---|---|
| New projects | Latest stable | Access to all improvements |
| Libraries | Conservative (2018/2021) | Wider compatibility |
| Applications | Latest stable | Modern features |
| Legacy code | Keep current | Migrate when beneficial |
2. Toolchain Channels
Rust uses a release train model with three channels:
Nightly (daily) → Beta (6 weeks) → Stable (6 weeks)
| Channel | Release Cycle | Stability | Use Case |
|---|---|---|---|
| Stable | 6 weeks | Guaranteed stable | Production |
| Beta | 6 weeks | Generally stable | Testing upcoming releases |
| Nightly | Daily | May break | Experimental features |
Stable Channel
# Install or switch to stable
rustup default stable
# Use specific stable version
rustup install 1.82.0
rustup default 1.82.0
Beta Channel
# Switch to beta
rustup default beta
# Test with beta in CI
rustup run beta cargo test
Nightly Channel
# Use nightly for specific project
rustup override set nightly
# Install specific nightly
rustup install nightly-2024-11-28
Enabling unstable features:
// Only works on nightly
#![feature(generators)]
#![feature(type_alias_impl_trait)]
Project Toolchain Configuration
# rust-toolchain.toml
[toolchain]
channel = "1.82.0" # Or "stable", "beta", "nightly"
components = ["rustfmt", "clippy"]
targets = ["wasm32-unknown-unknown"]
Override commands:
# Set override for current directory
rustup override set nightly
# Run command with specific toolchain
cargo +nightly build
cargo +1.82.0 test
CI/CD Multi-Channel Testing
# GitHub Actions
strategy:
matrix:
rust: [stable, beta, nightly]
continue-on-error: ${{ matrix.rust == 'nightly' }}
steps:
- uses: actions-rs/toolchain@v1
with:
toolchain: ${{ matrix.rust }}
override: true
3. Dependency Resolution
Version Requirements
Cargo uses semantic versioning (SemVer) with various requirement operators:
[dependencies]
# Caret (default) - compatible versions
serde = "1.0" # means ^1.0.0
# Exact version
exact = "=1.0.0"
# Range
range = ">=1.2, <1.5"
# Wildcard
wildcard = "1.0.*"
# Tilde - patch updates only
tilde = "~1.0.0"
Transitive Dependencies
Cargo builds a dependency graph and resolves versions using maximum version strategy:
Your Project
├── crate-a = "1.0"
│ └── shared = "2.1" # Transitive dependency
└── crate-b = "2.0"
└── shared = "2.3" # Same dependency, different version
Resolution: Cargo picks shared = "2.3" (highest compatible version).
Resolver Versions
| Resolver | Default For | Behavior |
|---|---|---|
| v1 | Edition 2015/2018 | Unifies features across all uses |
| v2 | Edition 2021 | Independent feature resolution per target |
| v3 | Edition 2024 (Rust 1.84+) | MSRV-aware dependency selection, default in 2024 |
# Explicit resolver configuration
[package]
edition = "2018"
resolver = "2" # Opt into v2 resolver
# For workspaces
[workspace]
members = ["crate-a", "crate-b"]
resolver = "2"
Key v2 differences:
- Platform-specific dependencies don’t affect other platforms
- Build dependencies don’t share features with normal dependencies
- Dev dependencies only activate features when building tests/examples
Key v3 additions (Edition 2024 default):
- MSRV-aware dependency resolution when rust-version is specified
- Falls back to compatible versions when newer versions require higher MSRV
- Better support for workspaces with mixed Rust versions
4. Cargo.lock
The Cargo.lock file pins exact dependency versions for reproducible builds.
When to Commit
| Project Type | Commit? | Reason |
|---|---|---|
| Binary/Application | Yes | Reproducible builds |
| Library | No | Allow flexible version resolution |
| Workspace root | Yes | Consistent versions across workspace |
Lock File Usage
# Build with exact lock file versions
cargo build --locked
# Update all dependencies
cargo update
# Update specific dependency
cargo update -p serde
# Update to specific version
cargo update -p tokio --precise 1.21.0
5. Minimum Supported Rust Version (MSRV)
[package]
rust-version = "1.74" # Minimum Rust version
Finding and Testing MSRV
# Install cargo-msrv
cargo install cargo-msrv
# Find minimum version
cargo msrv find
# Verify declared MSRV
cargo msrv verify
CI Testing
# GitHub Actions
- name: Test MSRV
run: |
rustup install $(grep rust-version Cargo.toml | cut -d'"' -f2)
cargo test --locked
MSRV Policy Guidelines
| Project Type | Suggested MSRV | Rationale |
|---|---|---|
| Foundational libraries | 6-12 months old | Maximum compatibility |
| Application libraries | 3-6 months old | Balance features/compatibility |
| Applications | Current stable | Use latest features |
| Internal tools | Latest stable | No external users |
6. Workspace Management
Workspaces allow managing multiple related crates in a single repository:
# Root Cargo.toml
[workspace]
members = ["crate-a", "crate-b", "crate-c"]
resolver = "2"
[workspace.dependencies]
serde = { version = "1.0", features = ["derive"] }
tokio = "1.47"
[workspace.package]
authors = ["Your Name"]
edition = "2021"
license = "MIT"
repository = "https://github.com/user/repo"
Member crates inherit workspace configuration:
# crate-a/Cargo.toml
[package]
name = "crate-a"
version = "0.1.0"
authors.workspace = true
edition.workspace = true
[dependencies]
serde.workspace = true
tokio.workspace = true
Workspace Commands
# Build all workspace members
cargo build --workspace
# Test specific member
cargo test -p crate-a
# Run example from workspace member
cargo run -p crate-b --example demo
7. Private Registries
Registry Options
| Solution | Type | Best For |
|---|---|---|
| Kellnr | Self-hosted registry | Small-medium teams |
| Alexandrie | Alternative registry | Custom deployments |
| Panamax | Mirror | Offline development |
| Artifactory | Enterprise | Large organizations |
Kellnr Setup
# .cargo/config.toml
[registries.kellnr]
index = "git://your-kellnr-host:9418/index"
token = "your-auth-token"
Docker deployment:
docker run -p 8000:8000 \
-e "KELLNR_ORIGIN__HOSTNAME=your-domain" \
ghcr.io/kellnr/kellnr:latest
Alexandrie Configuration
# alexandrie.toml
[database]
url = "postgresql://localhost/alexandrie"
[storage]
type = "s3"
bucket = "my-crates"
region = "us-east-1"
Panamax Mirror
# Initialize mirror
panamax init my-mirror
# Sync crates.io into the mirror
panamax sync my-mirror
# Serve mirror
panamax serve my-mirror --port 8080
Client configuration:
# .cargo/config.toml
[source.my-mirror]
registry = "http://panamax.internal/crates.io-index"
[source.crates-io]
replace-with = "my-mirror"
Artifactory Setup
# .cargo/config.toml
[registries.artifactory]
index = "https://artifactory.company.com/artifactory/api/cargo/rust-local"
Publishing:
cargo publish --registry artifactory \
--token "Bearer <access-token>"
8. Build Configuration
Profiles
[profile.dev]
opt-level = 0
debug = true
overflow-checks = true
[profile.release]
opt-level = 3
lto = true
codegen-units = 1
strip = true
[profile.bench]
inherits = "release"
# Custom profile
[profile.production]
inherits = "release"
lto = "fat"
panic = "abort"
Build Scripts
// build.rs
fn main() {
    // Link system libraries
    println!("cargo:rustc-link-lib=ssl");

    // Rerun if files change
    println!("cargo:rerun-if-changed=src/native.c");

    // Compile C code
    cc::Build::new()
        .file("src/native.c")
        .compile("native");

    // Set environment variables
    println!("cargo:rustc-env=BUILD_TIME={}", chrono::Utc::now().to_rfc3339());
}
Build Dependencies
[build-dependencies]
cc = "1.0"
chrono = "0.4"
9. Dependencies
Dependency Types
[dependencies]
normal = "1.0"
[dev-dependencies]
criterion = "0.5"
proptest = "1.0"
[build-dependencies]
cc = "1.0"
[target.'cfg(windows)'.dependencies]
winapi = "0.3"
[target.'cfg(unix)'.dependencies]
libc = "0.2"
Features
[dependencies]
tokio = { version = "1.47", default-features = false, features = ["rt-multi-thread", "macros"] }
[features]
default = ["std"]
std = ["serde/std"]
alloc = ["serde/alloc"]
performance = ["lto", "parallel"]
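Features gate code at compile time via `cfg` attributes. A minimal sketch, using the hypothetical `std` feature from the table above:

```rust
// With `--features std` the first definition is compiled; without it,
// the fallback is. Exactly one version exists in the final binary.
#[cfg(feature = "std")]
fn backend() -> &'static str {
    "std"
}

#[cfg(not(feature = "std"))]
fn backend() -> &'static str {
    "no_std fallback"
}

fn main() {
    println!("compiled backend: {}", backend());
}
```

Because the unused branch is removed before codegen, feature-gated code costs nothing when disabled — which is why crates like tokio split functionality into many small features.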
Git and Path Dependencies
[dependencies]
# Git repository
from-git = { git = "https://github.com/user/repo", branch = "main" }
# Specific commit
specific = { git = "https://github.com/user/repo", rev = "abc123" }
# Local path
local = { path = "../local-crate" }
# Published with override
override = { version = "1.0", path = "../override" }
10. Documentation
Writing Documentation
//! Module-level documentation
//!
//! This module provides small numeric utilities.

/// Calculate factorial of n
///
/// # Examples
///
/// ```
/// assert_eq!(factorial(5), 120);
/// assert_eq!(factorial(0), 1);
/// ```
///
/// # Panics
///
/// Panics if the result would overflow.
pub fn factorial(n: u32) -> u32 {
    match n {
        0 => 1,
        _ => n * factorial(n - 1),
    }
}
Documentation Commands
# Generate and open docs
cargo doc --open
# Skip documenting dependencies
cargo doc --no-deps
# Document private items
cargo doc --document-private-items
# Test documentation examples
cargo test --doc
11. Examples Directory
Structure example code for users:
examples/
├── basic.rs # cargo run --example basic
├── advanced.rs # cargo run --example advanced
└── multi-file/ # Multi-file example
├── main.rs
└── helper.rs
# Cargo.toml
[[example]]
name = "multi-file"
path = "examples/multi-file/main.rs"
12. Benchmarking with Criterion
[dev-dependencies]
criterion = { version = "0.5", features = ["html_reports"] }
[[bench]]
name = "my_benchmark"
harness = false
// benches/my_benchmark.rs
use criterion::{black_box, criterion_group, criterion_main, BenchmarkId, Criterion};

fn fibonacci(n: u64) -> u64 {
    match n {
        0 | 1 => 1,
        n => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

fn bench_fibonacci(c: &mut Criterion) {
    let mut group = c.benchmark_group("fibonacci");
    for i in [20, 30, 35].iter() {
        group.bench_with_input(BenchmarkId::from_parameter(i), i, |b, &i| {
            b.iter(|| fibonacci(black_box(i)));
        });
    }
    group.finish();
}

criterion_group!(benches, bench_fibonacci);
criterion_main!(benches);
Run benchmarks:
cargo bench
# Save baseline
cargo bench -- --save-baseline main
# Compare with baseline
cargo bench -- --baseline main
13. Security
Dependency Auditing
# Install audit tools
cargo install cargo-audit
cargo install cargo-deny
# Check for vulnerabilities
cargo audit
# Audit with fix suggestions
cargo audit fix
Deny Configuration
# deny.toml
[bans]
multiple-versions = "warn"
wildcards = "deny"
skip = []
[licenses]
allow = ["MIT", "Apache-2.0", "BSD-3-Clause"]
deny = ["GPL-3.0"]
[sources]
unknown-registry = "deny"
unknown-git = "warn"
cargo deny check
14. Dependency Update Strategies
Manual Updates
# Update all dependencies
cargo update
# Update specific crate
cargo update -p serde
# See outdated dependencies
cargo install cargo-outdated
cargo outdated
Automated Updates with Dependabot
# .github/dependabot.yml
version: 2
updates:
- package-ecosystem: "cargo"
directory: "/"
schedule:
interval: "weekly"
open-pull-requests-limit: 5
groups:
aws:
patterns: ["aws-*"]
tokio:
patterns: ["tokio*"]
Renovate Configuration
{
"extends": ["config:base"],
"cargo": {
"enabled": true,
"rangeStrategy": "bump"
},
"packageRules": [{
"matchManagers": ["cargo"],
"matchPackagePatterns": ["^aws-"],
"groupName": "AWS SDK"
}]
}
15. Reproducible Builds
Ensure reproducibility with:
- Committed Cargo.lock for applications
- Pinned toolchain via rust-toolchain.toml
- --locked flag in CI builds
- Vendored dependencies for offline builds
Docker Example
FROM rust:1.82 AS builder
WORKDIR /app
# Cache dependencies
COPY Cargo.lock Cargo.toml ./
RUN mkdir src && echo "fn main() {}" > src/main.rs
RUN cargo build --release --locked
RUN rm -rf src
# Build application
COPY . .
RUN touch src/main.rs && cargo build --release --locked
FROM debian:bookworm-slim
COPY --from=builder /app/target/release/app /usr/local/bin/
CMD ["app"]
Vendoring Dependencies
# Vendor all dependencies
cargo vendor
# Configure to use vendored dependencies
mkdir .cargo
cat > .cargo/config.toml << EOF
[source.crates-io]
replace-with = "vendored-sources"
[source.vendored-sources]
directory = "vendor"
EOF
# Build offline
cargo build --offline
16. Useful Commands
# Dependency tree
cargo tree
cargo tree -d # Show duplicates
cargo tree -i serde # Inverse dependencies
cargo tree -e features # Show features
# Workspace commands
cargo build --workspace # Build all members
cargo test --workspace # Test all members
cargo publish --dry-run # Verify before publishing
# Check commands
cargo check # Fast compilation check
cargo clippy # Linting
cargo fmt # Format code
# Cache management
cargo clean # Remove target directory
cargo clean -p specific-crate # Clean specific crate
# Package management
cargo new myproject --lib # Create library
cargo init # Initialize in existing directory
cargo package # Create distributable package
cargo publish # Publish to crates.io
Additional Resources
Chapter 20: Code Coverage with cargo llvm-cov
Code coverage measures test effectiveness and identifies untested code paths. The cargo llvm-cov tool provides source-based code coverage using LLVM’s instrumentation capabilities.
1. Installation and Setup
# Install from crates.io
cargo install cargo-llvm-cov
# Install required LLVM tools
rustup component add llvm-tools-preview
# Verify installation
cargo llvm-cov --version
System Requirements
- Rust 1.60.0 or newer
- LLVM tools preview component
- Supported platforms: Linux, macOS, Windows
- LLVM versions by Rust version:
- Rust 1.60-1.77: LLVM 14-17
- Rust 1.78-1.81: LLVM 18
- Rust 1.82+: LLVM 19+
2. Basic Usage
Generate Coverage
# Run tests and generate coverage
cargo llvm-cov
# Clean and regenerate
cargo llvm-cov clean
cargo llvm-cov
# Generate HTML report and open
cargo llvm-cov --open
# HTML report without opening
cargo llvm-cov --html
Example Output
Filename Regions Missed Regions Cover Functions Missed Functions Executed Lines Missed Lines Cover Branches Missed Branches Cover
----------------------------------------------------------------------------------------------------------------------------------------------------------------
src/calculator.rs 12 2 83.33% 4 0 100.00% 45 3 93.33% 8 2 75.00%
src/parser.rs 25 5 80.00% 8 1 87.50% 120 15 87.50% 20 4 80.00%
src/lib.rs 8 0 100.00% 3 0 100.00% 30 0 100.00% 4 0 100.00%
----------------------------------------------------------------------------------------------------------------------------------------------------------------
TOTAL 45 7 84.44% 15 1 93.33% 195 18 90.77% 32 6 81.25%
3. Report Formats
HTML Reports
# Generate HTML report
cargo llvm-cov --html
# Output: target/llvm-cov/html/index.html
# With custom output directory
cargo llvm-cov --html --output-dir coverage
JSON Format
# Generate JSON report
cargo llvm-cov --json --output-path coverage.json
LCOV Format
# Generate LCOV for coverage services
cargo llvm-cov --lcov --output-path lcov.info
Cobertura XML
# Generate Cobertura for CI/CD tools
cargo llvm-cov --cobertura --output-path cobertura.xml
Text Summary
# Display only summary
cargo llvm-cov --summary-only
# Text report with specific format
cargo llvm-cov --text
4. Practical Example: Calculator Library
Project Structure
// src/lib.rs
#[derive(Debug, Clone, PartialEq)]
pub enum Operation {
    Add,
    Subtract,
    Multiply,
    Divide,
}

pub struct Calculator {
    precision: usize,
}

impl Calculator {
    pub fn new() -> Self {
        Self { precision: 2 }
    }

    pub fn with_precision(precision: usize) -> Self {
        Self { precision }
    }

    pub fn calculate(&self, op: Operation, a: f64, b: f64) -> Result<f64, String> {
        let result = match op {
            Operation::Add => a + b,
            Operation::Subtract => a - b,
            Operation::Multiply => a * b,
            Operation::Divide => {
                if b == 0.0 {
                    return Err("Division by zero".to_string());
                }
                a / b
            }
        };
        Ok(self.round_to_precision(result))
    }

    fn round_to_precision(&self, value: f64) -> f64 {
        let multiplier = 10_f64.powi(self.precision as i32);
        (value * multiplier).round() / multiplier
    }

    pub fn chain_operations(
        &self,
        initial: f64,
        operations: Vec<(Operation, f64)>,
    ) -> Result<f64, String> {
        operations.iter().try_fold(initial, |acc, (op, value)| {
            self.calculate(op.clone(), acc, *value)
        })
    }
}

impl Default for Calculator {
    fn default() -> Self {
        Self::new()
    }
}
Tests
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_basic_operations() {
        let calc = Calculator::new();
        assert_eq!(calc.calculate(Operation::Add, 5.0, 3.0), Ok(8.0));
        assert_eq!(calc.calculate(Operation::Subtract, 5.0, 3.0), Ok(2.0));
        assert_eq!(calc.calculate(Operation::Multiply, 5.0, 3.0), Ok(15.0));
        assert_eq!(calc.calculate(Operation::Divide, 15.0, 3.0), Ok(5.0));
    }

    #[test]
    fn test_division_by_zero() {
        let calc = Calculator::new();
        assert!(calc.calculate(Operation::Divide, 5.0, 0.0).is_err());
    }

    #[test]
    fn test_precision() {
        let calc = Calculator::with_precision(3);
        assert_eq!(calc.calculate(Operation::Divide, 10.0, 3.0), Ok(3.333));
    }

    #[test]
    fn test_chain_operations() {
        let calc = Calculator::new();
        let operations = vec![
            (Operation::Add, 5.0),
            (Operation::Multiply, 2.0),
            (Operation::Subtract, 3.0),
        ];
        assert_eq!(calc.chain_operations(10.0, operations), Ok(27.0));
    }
}
Coverage Analysis
# Run coverage
cargo llvm-cov
# Generate detailed HTML report
cargo llvm-cov --html --open
# Check specific test coverage
cargo llvm-cov --lib
5. Filtering and Exclusions
Include/Exclude Patterns
# Include only library code
cargo llvm-cov --lib
# Include only binary
cargo llvm-cov --bin my-binary
# Exclude tests from coverage
cargo llvm-cov --ignore-filename-regex='tests/'
Coverage Attributes
// Exclude a function when running under cargo-tarpaulin
// (this cfg is tarpaulin-specific; cargo llvm-cov ignores it)
#[cfg(not(tarpaulin_include))]
fn debug_only_function() {
    // This won't be included in tarpaulin's coverage
}

// With cargo llvm-cov, the nightly-only coverage attribute
// (formerly no_coverage) excludes a function from instrumentation
#[coverage(off)]
fn internal_helper() {
    // Implementation
}
Configuration File
# .cargo/llvm-cov.toml
[llvm-cov]
ignore-filename-regex = ["tests/", "benches/", "examples/"]
output-dir = "coverage"
html = true
6. Workspace Coverage
Multi-Crate Workspaces
# Coverage for entire workspace
cargo llvm-cov --workspace
# Specific workspace members
cargo llvm-cov --package crate1 --package crate2
# Exclude specific packages
cargo llvm-cov --workspace --exclude integration-tests
Workspace Configuration
# Cargo.toml (workspace root)
[workspace]
members = ["core", "utils", "app"]
[workspace.metadata.llvm-cov]
ignore-filename-regex = ["mock", "test_"]
Aggregated Reports
# Generate workspace-wide HTML report
cargo llvm-cov --workspace --html
# Combined LCOV for all crates
cargo llvm-cov --workspace --lcov --output-path workspace.lcov
7. CI/CD Integration
GitHub Actions
name: Coverage
on: [push, pull_request]
jobs:
coverage:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install Rust
uses: dtolnay/rust-toolchain@stable
with:
components: llvm-tools-preview
- name: Install cargo-llvm-cov
uses: taiki-e/install-action@cargo-llvm-cov
- name: Generate coverage
run: cargo llvm-cov --workspace --lcov --output-path lcov.info
- name: Upload to Codecov
uses: codecov/codecov-action@v3
with:
files: lcov.info
fail_ci_if_error: true
GitLab CI
coverage:
stage: test
image: rust:latest
before_script:
- rustup component add llvm-tools-preview
- cargo install cargo-llvm-cov
script:
- cargo llvm-cov --workspace --lcov --output-path lcov.info
- cargo llvm-cov --workspace --cobertura --output-path cobertura.xml
coverage: '/TOTAL.*\s+(\d+\.\d+)%/'
artifacts:
reports:
coverage_report:
coverage_format: cobertura
path: cobertura.xml
Coverage Badges
<!-- README.md -->
[![codecov](https://codecov.io/gh/username/repo/branch/main/graph/badge.svg)](https://codecov.io/gh/username/repo)
[![Coverage Status](https://coveralls.io/repos/github/username/repo/badge.svg?branch=main)](https://coveralls.io/github/username/repo?branch=main)
8. Integration with Coverage Services
Codecov
# Upload to Codecov
cargo llvm-cov --lcov --output-path lcov.info
bash <(curl -s https://codecov.io/bash) -f lcov.info
# codecov.yml
coverage:
precision: 2
round: down
range: "70...100"
status:
project:
default:
target: 80%
threshold: 2%
patch:
default:
target: 90%
Coveralls
# GitHub Actions with Coveralls
- name: Upload to Coveralls
uses: coverallsapp/github-action@v2
with:
file: lcov.info
9. Advanced Configuration
Custom Test Binaries
# Coverage for specific test binary
cargo llvm-cov --test integration_test
# Coverage for doc tests
cargo llvm-cov --doctests
# Coverage for examples
cargo llvm-cov --example my_example
# Integration with nextest (faster test runner)
cargo llvm-cov nextest
# Nextest with specific options
cargo llvm-cov nextest --workspace --exclude integration-tests
Environment Variables
# Set custom LLVM profile directory
export CARGO_LLVM_COV_TARGET_DIR=/tmp/coverage
# Merge multiple runs
export CARGO_LLVM_COV_MERGE=1
cargo llvm-cov --no-report
cargo llvm-cov --no-run --html
Profile-Guided Optimization
# Note: coverage profiles are not PGO profiles. For PGO, build with
# -Cprofile-generate, run a representative workload, then merge and reuse:
RUSTFLAGS="-Cprofile-generate=/tmp/pgo-data" cargo build --release
./target/release/my-binary   # run a representative workload
llvm-profdata merge -o /tmp/pgo-data/merged.profdata /tmp/pgo-data
RUSTFLAGS="-Cprofile-use=/tmp/pgo-data/merged.profdata" cargo build --release
10. Comparison with Other Tools
cargo-tarpaulin vs cargo-llvm-cov
| Feature | cargo-tarpaulin | cargo-llvm-cov |
|---|---|---|
| Coverage Type | Line-based | Source-based |
| Platform Support | Linux only | Cross-platform |
| Speed | Slower | Faster |
| Accuracy | Good | More precise |
| Report Formats | HTML, XML, LCOV | HTML, JSON, LCOV, Cobertura |
| Integration | ptrace-based | LLVM-based |
When to Use Each
- cargo-llvm-cov: Recommended for most projects, especially cross-platform
- cargo-tarpaulin: Legacy projects, specific Linux features
- grcov: Mozilla projects, Firefox integration
11. Best Practices
Coverage Goals
# .github/coverage.toml
[coverage]
minimum_total = 80
minimum_file = 60
exclude_patterns = ["tests/*", "benches/*"]
Meaningful Coverage
- Focus on Critical Paths: Prioritize business logic over boilerplate
- Test Edge Cases: Don’t just test happy paths
- Avoid Coverage Gaming: 100% coverage doesn’t mean bug-free
- Regular Reviews: Monitor coverage trends over time
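To make the "coverage gaming" point concrete, here is a small hypothetical function whose tests execute every line (100% line coverage) yet still miss a bug:

```rust
// Returns the next even number strictly greater than n.
fn next_even(n: i32) -> i32 {
    if n % 2 == 0 { n + 2 } else { n + 1 }
}

fn main() {
    // Both branches run, so line coverage is 100%...
    assert_eq!(next_even(2), 4);
    assert_eq!(next_even(3), 4);
    // ...yet next_even(i32::MAX) would still overflow
    // (a panic in debug builds) - covered is not bug-free.
    println!("fully covered, not fully correct");
}
```

This is why the HTML report should guide where to add tests, not serve as a scorecard.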
Coverage Improvement Strategy
# Find uncovered code
cargo llvm-cov --html
# Review HTML report for red lines
# Generate JSON for analysis
cargo llvm-cov --json --output-path coverage.json
# Parse and analyze with scripts
jq '.data[0].files[] | select(.summary.lines.percent < 80) | .filename' coverage.json
12. Troubleshooting
Common Issues
Issue: No coverage data generated
# Ensure tests actually run
cargo test
# Then run coverage
cargo llvm-cov clean
cargo llvm-cov
Issue: Incorrect coverage numbers
# Clean all artifacts
cargo clean
rm -rf target/llvm-cov
cargo llvm-cov
Issue: Missing functions in report
// Ensure functions are called in tests
#[inline(never)] // Prevent inlining
pub fn my_function() {
    // Implementation
}
Performance Optimization
# Use release mode for faster execution
cargo llvm-cov --release
# Parallel test execution
cargo llvm-cov -- --test-threads=4
# Skip expensive tests
cargo llvm-cov -- --skip expensive_test
13. Real-World Example: Web Service
// src/server.rs
use actix_web::{web, HttpResponse};
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
pub struct User {
    id: u32,
    name: String,
}

pub async fn get_user(id: web::Path<u32>) -> HttpResponse {
    // Simulate database lookup
    if *id == 0 {
        return HttpResponse::NotFound().finish();
    }
    HttpResponse::Ok().json(User {
        id: *id,
        name: format!("User{}", id),
    })
}

pub async fn create_user(user: web::Json<User>) -> HttpResponse {
    HttpResponse::Created().json(&user.into_inner())
}

#[cfg(test)]
mod tests {
    use super::*;
    use actix_web::{test, web, App};

    #[actix_web::test]
    async fn test_get_user() {
        let app = test::init_service(
            App::new().route("/user/{id}", web::get().to(get_user))
        ).await;

        let req = test::TestRequest::get()
            .uri("/user/1")
            .to_request();
        let resp = test::call_service(&app, req).await;
        assert!(resp.status().is_success());
    }

    #[actix_web::test]
    async fn test_user_not_found() {
        let app = test::init_service(
            App::new().route("/user/{id}", web::get().to(get_user))
        ).await;

        let req = test::TestRequest::get()
            .uri("/user/0")
            .to_request();
        let resp = test::call_service(&app, req).await;
        assert_eq!(resp.status(), 404);
    }
}
Coverage Commands for Web Service
# Run with integration tests
cargo llvm-cov --all-features
# Generate comprehensive report
cargo llvm-cov --workspace --html --open
# CI-friendly output
cargo llvm-cov --workspace --lcov --output-path lcov.info --summary-only
Summary
Code coverage with cargo llvm-cov provides:
- Accurate metrics using LLVM instrumentation
- Multiple report formats for different use cases
- CI/CD integration with major platforms
- Workspace support for complex projects
- Cross-platform compatibility unlike alternatives
Remember: coverage is a tool for finding untested code, not a goal in itself. Focus on meaningful tests that verify behavior rather than achieving arbitrary coverage percentages.
Additional Resources
- cargo-llvm-cov Documentation
- LLVM Coverage Mapping Format
- Codecov Documentation
- GitHub Actions for Rust
Chapter 21: Macros & Code Generation
Macros are Rust’s metaprogramming feature - code that writes other code. They run at compile time, generating Rust code that gets compiled with the rest of your program. This chapter covers declarative macros with macro_rules! and introduces procedural macros.
What are Macros?
Macros enable code generation at compile time, reducing boilerplate and enabling domain-specific languages (DSLs). Unlike functions, macros:
- Operate on syntax trees, not values
- Can take a variable number of arguments
- Generate code before type checking
- Can create new syntax patterns
// This macro call
println!("Hello, {}!", "world");

// Expands to something like this (simplified)
std::io::_print(format_args!("Hello, {}!\n", "world"));
Declarative Macros with macro_rules!
Basic Syntax
macro_rules! say_hello {
    () => {
        println!("Hello!");
    };
}

say_hello!(); // Prints: Hello!
Pattern Matching Types
Macros use pattern matching with specific fragment specifiers:
1. item - Items like functions, structs, modules
macro_rules! create_function {
    ($func_name:ident) => {
        fn $func_name() {
            println!("You called {}!", stringify!($func_name));
        }
    };
}

create_function!(foo);
foo(); // Prints: You called foo!
2. block - Code blocks
macro_rules! time_it {
    ($block:block) => {
        let start = std::time::Instant::now();
        $block
        println!("Took: {:?}", start.elapsed());
    };
}

time_it!({
    std::thread::sleep(std::time::Duration::from_millis(100));
    println!("Work done!");
});
3. stmt - Statements
macro_rules! debug_stmt {
    ($stmt:stmt) => {
        println!("Executing: {}", stringify!($stmt));
        $stmt;
    };
}

// Note: the stmt fragment does not consume the trailing semicolon,
// so the invocation passes the statement without one.
debug_stmt!(let x = 42);
4. expr - Expressions
macro_rules! double {
    ($e:expr) => {
        $e * 2
    };
}

let result = double!(5 + 3); // 16
Note (Edition 2024): the expr fragment now also matches const and _ expressions. For backwards compatibility, use expr_2021 if you need the old behavior that doesn't match these expressions.
5. ty - Types
macro_rules! create_struct {
    ($name:ident, $field_type:ty) => {
        struct $name {
            value: $field_type,
        }
    };
}

create_struct!(MyStruct, i32);
6. ident - Identifiers
macro_rules! getter {
    ($field:ident) => {
        fn $field(&self) -> &str {
            &self.$field
        }
    };
}
7. path - Paths like std::vec::Vec
macro_rules! use_type {
    ($path:path) => {
        let _instance: $path = Default::default();
    };
}

use_type!(std::collections::HashMap<String, i32>);
8. literal - Literal values
macro_rules! print_literal {
    ($lit:literal) => {
        println!("Literal: {}", $lit);
    };
}

print_literal!("hello");
print_literal!(42);
9. tt - Token trees (any valid tokens)
macro_rules! capture_tokens {
    ($($tt:tt)*) => {
        println!("Tokens: {}", stringify!($($tt)*));
    };
}

capture_tokens!(fn main() { println!("hello"); });
10. pat - Patterns
macro_rules! match_pattern {
    ($val:expr, $($pat:pat => $result:expr),+) => {
        match $val {
            $($pat => $result,)+
        }
    };
}

let x = match_pattern!(5,
    0..=3 => "low",
    4..=6 => "medium",
    _ => "high"
);
11. vis - Visibility qualifiers
macro_rules! make_struct {
    ($vis:vis struct $name:ident) => {
        $vis struct $name {
            value: i32,
        }
    };
}

make_struct!(pub struct PublicStruct);
12. lifetime - Lifetime parameters
macro_rules! with_lifetime {
    ($lt:lifetime) => {
        struct Ref<$lt> {
            data: &$lt str,
        }
    };
}

with_lifetime!('a);
13. meta - Attributes
macro_rules! with_attributes {
    ($(#[$meta:meta])* struct $name:ident) => {
        $(#[$meta])*
        struct $name {
            value: i32,
        }
    };
}

with_attributes! {
    #[derive(Debug, Clone)]
    struct MyStruct
}
Multiple Patterns
macro_rules! vec_shorthand {
    // Empty vector
    () => {
        Vec::new()
    };
    // Vector with elements
    ($($x:expr),+ $(,)?) => {
        {
            let mut vec = Vec::new();
            $(vec.push($x);)+
            vec
        }
    };
}

let v1: Vec<i32> = vec_shorthand!(); // empty case needs a type annotation
let v2 = vec_shorthand![1, 2, 3];
let v3 = vec_shorthand![1, 2, 3,]; // Trailing comma ok
Repetition Operators
- * - Zero or more repetitions
- + - One or more repetitions
- ? - Zero or one (optional)
macro_rules! create_enum {
    ($name:ident { $($variant:ident),* }) => {
        enum $name {
            $($variant,)*
        }
    };
}

create_enum!(Color { Red, Green, Blue });

macro_rules! sum {
    ($x:expr) => ($x);
    ($x:expr, $($rest:expr),+) => {
        $x + sum!($($rest),+)
    };
}

let total = sum!(1, 2, 3, 4); // 10
Advanced Macro Patterns
Incremental TT Munching
macro_rules! replace_expr {
    ($_t:tt $sub:expr) => { $sub };
}

macro_rules! count_tts {
    () => { 0usize };
    ($_head:tt $($tail:tt)*) => { 1usize + count_tts!($($tail)*) };
}

let count = count_tts!(a b c d); // 4
Push-down Accumulation
macro_rules! reverse {
    ([] $($reversed:tt)*) => {
        ($($reversed),*)
    };
    ([$head:tt $($tail:tt)*] $($reversed:tt)*) => {
        reverse!([$($tail)*] $head $($reversed)*)
    };
}

let rev = reverse!([1 2 3 4]); // (4, 3, 2, 1)
Internal Rules
macro_rules! my_macro {
    // Public API
    ($($input:expr),*) => {
        my_macro!(@internal [$($input),*] [])
    };
    // Internal implementation
    (@internal [] [$($result:expr),*]) => {
        vec![$($result),*]
    };
    (@internal [$head:expr $(, $tail:expr)*] [$($result:expr),*]) => {
        my_macro!(@internal [$($tail),*] [$($result,)* $head * 2])
    };
}

let doubled = my_macro!(1, 2, 3); // vec![2, 4, 6]
Hygienic Macros
Rust macros are hygienic - they don’t accidentally capture or interfere with variables:
macro_rules! using_a {
    ($e:expr) => {
        {
            let a = 42;
            $e
        }
    };
}

let a = "outer";
let result = using_a!(a); // Uses outer 'a', not the one in the macro
To intentionally break hygiene:
macro_rules! create_and_use {
    ($name:ident) => {
        let $name = 42;
        println!("{}", $name);
    };
}

create_and_use!(my_var); // Creates my_var in the caller's scope
Debugging Macros
Using trace_macros!
#![feature(trace_macros)] // nightly only

fn main() {
    trace_macros!(true);
    my_macro!(args);
    trace_macros!(false);
}
Using log_syntax!
#![feature(log_syntax)] // nightly only

macro_rules! debug_macro {
    ($($arg:tt)*) => {
        log_syntax!($($arg)*);
    };
}
Cargo Expand
cargo install cargo-expand
cargo expand
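Before reaching for cargo expand, small macros can be sanity-checked by comparing a call against its hand-written expansion. The maxim! macro below is a made-up example for illustration:

```rust
// A trivial macro whose expansion we can write out by hand.
macro_rules! maxim {
    ($a:expr, $b:expr) => {
        if $a > $b { $a } else { $b }
    };
}

fn main() {
    let via_macro = maxim!(3, 7);
    // What cargo expand would show for this call site, written manually:
    let expanded = if 3 > 7 { 3 } else { 7 };
    assert_eq!(via_macro, expanded);
    println!("expansion matches: {}", via_macro);
}
```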
Procedural Macros
Procedural macros are more powerful but require a separate crate:
Types of Procedural Macros
- Custom Derive Macros
- Attribute Macros
- Function-like Macros
Setup
# Cargo.toml
[lib]
proc-macro = true
[dependencies]
syn = "2.0"
quote = "1.0"
proc-macro2 = "1.0"
Custom Derive Macro Example
// src/lib.rs
use proc_macro::TokenStream;
use quote::quote;
use syn::{parse_macro_input, DeriveInput};

#[proc_macro_derive(HelloMacro)]
pub fn hello_macro_derive(input: TokenStream) -> TokenStream {
    let ast = parse_macro_input!(input as DeriveInput);
    let name = &ast.ident;
    // Note: `gen` is a reserved keyword in Edition 2024
    let generated = quote! {
        impl HelloMacro for #name {
            fn hello() {
                println!("Hello from {}!", stringify!(#name));
            }
        }
    };
    generated.into()
}
Usage:
trait HelloMacro {
    fn hello();
}

#[derive(HelloMacro)]
struct MyStruct;

MyStruct::hello(); // Prints: Hello from MyStruct!
Attribute Macro Example
#[proc_macro_attribute]
pub fn route(args: TokenStream, input: TokenStream) -> TokenStream {
    let item = parse_macro_input!(input as syn::ItemFn);
    let args = parse_macro_input!(args as syn::LitStr);

    // Re-emit the function with an attribute derived from the arguments
    quote! {
        #[web::route(#args)]
        #item
    }.into()
}
Usage:
#[route("/api/users")]
async fn get_users() -> Response {
    // Handler implementation
}
Function-like Procedural Macro
#[proc_macro]
pub fn sql(input: TokenStream) -> TokenStream {
    let input = parse_macro_input!(input as syn::LitStr);
    // Parse SQL and generate code
    quote! {
        // Generated code here
    }.into()
}
Usage:
let query = sql!("SELECT * FROM users WHERE id = ?");
Real-World Examples
Builder Pattern Macro
// Requires the paste crate for identifier concatenation
macro_rules! builder {
    ($name:ident { $($field:ident: $type:ty),* }) => {
        pub struct $name {
            $(pub $field: $type,)*
        }

        paste::paste! {
            pub struct [<$name Builder>] {
                $($field: Option<$type>,)*
            }

            impl [<$name Builder>] {
                pub fn new() -> Self {
                    Self { $($field: None,)* }
                }

                $(
                    pub fn $field(mut self, value: $type) -> Self {
                        self.$field = Some(value);
                        self
                    }
                )*

                pub fn build(self) -> Result<$name, &'static str> {
                    Ok($name {
                        $($field: self.$field
                            .ok_or(concat!("Missing field: ", stringify!($field)))?,)*
                    })
                }
            }
        }
    };
}

builder!(Person {
    name: String,
    age: u32,
    email: String
});

let person = PersonBuilder::new()
    .name("Alice".to_string())
    .age(30)
    .email("alice@example.com".to_string())
    .build()?;
Test Generator Macro
macro_rules! test_cases {
    ($($name:ident: $input:expr => $expected:expr),*) => {
        $(
            #[test]
            fn $name() {
                let result = process($input);
                assert_eq!(result, $expected);
            }
        )*
    };
}

test_cases! {
    test_zero: 0 => 0,
    test_one: 1 => 1,
    test_negative: -5 => 5,
    test_large: 1000 => 1000
}
DSL for State Machines
macro_rules! state_machine {
    (
        $name:ident {
            states: [$($state:ident),*],
            events: [$($event:ident),*],
            transitions: [
                $($from:ident + $on:ident => $to:ident),*
            ]
        }
    ) => {
        #[derive(Debug, Clone, Copy, PartialEq)]
        enum $name {
            $($state,)*
        }

        #[derive(Debug)]
        enum Event {
            $($event,)*
        }

        impl $name {
            fn transition(self, event: Event) -> Option<Self> {
                match (self, event) {
                    $(
                        (Self::$from, Event::$on) => Some(Self::$to),
                    )*
                    _ => None,
                }
            }
        }
    };
}

state_machine! {
    DoorState {
        states: [Open, Closed, Locked],
        events: [OpenDoor, CloseDoor, LockDoor, UnlockDoor],
        transitions: [
            Closed + OpenDoor => Open,
            Open + CloseDoor => Closed,
            Closed + LockDoor => Locked,
            Locked + UnlockDoor => Closed
        ]
    }
}
Common Macro Patterns
Callback Pattern
macro_rules! with_callback {
    ($setup:expr, $callback:expr) => {{
        let result = $setup;
        $callback(&result);
        result
    }};
}

let data = with_callback!(
    vec![1, 2, 3],
    |v| println!("Created vector with {} elements", v.len())
);
Configuration Pattern
macro_rules! config {
    ($($key:ident : $value:expr),* $(,)?) => {{
        #[derive(Debug)]
        struct Config {
            $($key: std::option::Option<String>,)*
        }

        Config {
            $($key: Some($value.to_string()),)*
        }
    }};
}

let cfg = config! {
    host: "localhost",
    port: "8080",
    database: "mydb"
};
Best Practices
- Prefer Functions Over Macros: Use macros only when functions can’t achieve your goal
- Keep Macros Simple: Complex macros are hard to debug and maintain
- Document Macro Behavior: Include examples and expansion examples
- Use Internal Rules: Hide implementation details with @-prefixed rules
- Test Macro Expansions: Use cargo expand to verify generated code
- Consider Procedural Macros: For complex transformations, proc macros are clearer
- Maintain Hygiene: Avoid capturing external variables unless intentional
Limitations and Gotchas
- Type Information: Macros run before type checking
- Error Messages: Macro errors can be cryptic
- IDE Support: Limited autocomplete and navigation
- Compilation Time: Heavy macro use increases compile times
- Debugging: Harder to debug than regular code
Summary
Macros are a powerful metaprogramming tool in Rust:
- Declarative macros (macro_rules!) for pattern-based code generation
- Procedural macros for more complex AST transformations
- Hygiene prevents accidental variable capture
- Pattern matching on various syntax elements
- Repetition and recursion enable complex patterns
Use macros judiciously to eliminate boilerplate while maintaining code clarity.
Chapter 22: Unsafe Rust & FFI
This chapter covers unsafe Rust operations and Foreign Function Interface (FFI) for interfacing with C/C++ code. Unsafe Rust provides low-level control when needed while FFI enables integration with existing system libraries and codebases.
Edition 2024 Note: The unsafe extern syntax was stabilized in Rust 1.82, and in Edition 2024 all extern blocks must be marked unsafe extern, making the unsafety of FFI declarations explicit. This change improves clarity about where unsafe operations occur.
Part 1: Unsafe Rust Foundations
The Five Unsafe Superpowers
Unsafe Rust enables five specific operations that bypass Rust’s safety guarantees:
- Dereference raw pointers - Direct memory access
- Call unsafe functions/methods - Including FFI functions
- Access/modify mutable statics - Global state management
- Implement unsafe traits - Like Send and Sync
- Access union fields - Memory reinterpretation
Raw Pointers
// Creating raw pointers
let mut num = 5;
let r1 = &num as *const i32;   // Immutable raw pointer
let r2 = &mut num as *mut i32; // Mutable raw pointer

// Dereferencing requires unsafe
unsafe {
    println!("r1: {}", *r1);
    *r2 = 10;
    println!("r2: {}", *r2);
}

// Pointer arithmetic
unsafe {
    let array = [1, 2, 3, 4, 5];
    let ptr = array.as_ptr();
    for i in 0..5 {
        println!("Value at offset {}: {}", i, *ptr.add(i));
    }
}
Unsafe Functions and Methods
unsafe fn dangerous() {
    // Function body can perform unsafe operations
}

// Calling unsafe functions
unsafe {
    dangerous();
}

// Safe abstraction over unsafe code
fn split_at_mut(values: &mut [i32], mid: usize) -> (&mut [i32], &mut [i32]) {
    let len = values.len();
    let ptr = values.as_mut_ptr();

    assert!(mid <= len);

    unsafe {
        (
            std::slice::from_raw_parts_mut(ptr, mid),
            std::slice::from_raw_parts_mut(ptr.add(mid), len - mid),
        )
    }
}
Mutable Static Variables
static mut COUNTER: u32 = 0;

fn increment_counter() {
    unsafe {
        COUNTER += 1;
    }
}

fn get_counter() -> u32 {
    unsafe { COUNTER }
}

// Better alternative: use atomic types
use std::sync::atomic::{AtomicU32, Ordering};

static ATOMIC_COUNTER: AtomicU32 = AtomicU32::new(0);

fn safe_increment() {
    ATOMIC_COUNTER.fetch_add(1, Ordering::SeqCst);
}
Unsafe Traits
unsafe trait Zeroable {
    // Trait is unsafe because the implementor must guarantee safety
}

unsafe impl Zeroable for i32 {
    // We guarantee i32 can be safely zeroed
}

// Send and Sync are unsafe traits
struct RawPointer(*const u8);

unsafe impl Send for RawPointer {}
unsafe impl Sync for RawPointer {}
Unions
#[repr(C)]
union IntOrFloat {
    i: i32,
    f: f32,
}

let mut u = IntOrFloat { i: 42 };

unsafe {
    // Accessing union fields is unsafe
    u.f = 3.14;
    println!("Float: {}", u.f);

    // Type punning: reads the bits of the f32 as an i32.
    // Valid here because every bit pattern is a valid i32, but reading
    // the "wrong" field is undefined behavior for types with invalid
    // bit patterns (e.g. bool or references).
    println!("As int: {}", u.i);
}
Part 2: Calling C/C++ from Rust
Manual FFI Bindings
use std::ffi::CString;
use std::os::raw::c_char;

// Link to system libraries
#[link(name = "m")] // Math library
extern "C" {
    fn sqrt(x: f64) -> f64;
    fn pow(base: f64, exponent: f64) -> f64;
}

// Note: In Edition 2024, extern blocks must be marked unsafe:
// unsafe extern "C" {
//     fn sqrt(x: f64) -> f64;
// }

// Safe wrapper
pub fn safe_sqrt(x: f64) -> f64 {
    if x < 0.0 {
        panic!("Cannot take square root of negative number");
    }
    unsafe { sqrt(x) }
}

// Working with strings
extern "C" {
    fn strlen(s: *const c_char) -> usize;
}

pub fn string_length(s: &str) -> usize {
    let c_string = CString::new(s).expect("CString creation failed");
    unsafe { strlen(c_string.as_ptr()) }
}
Complex C Structures
#[repr(C)]
struct Point {
    x: f64,
    y: f64,
}

#[repr(C)]
struct Rectangle {
    top_left: Point,
    bottom_right: Point,
}

extern "C" {
    fn calculate_area(rect: *const Rectangle) -> f64;
}

pub fn rect_area(rect: &Rectangle) -> f64 {
    unsafe { calculate_area(rect as *const Rectangle) }
}
Using Bindgen
# Cargo.toml
[build-dependencies]
bindgen = "0.70"
cc = "1.1"
// build.rs
use std::env;
use std::path::PathBuf;

fn main() {
    // Compile C code
    cc::Build::new()
        .file("src/native.c")
        .compile("native");

    // Generate bindings (CargoCallbacks::new() replaces the
    // deprecated unit struct in recent bindgen versions)
    let bindings = bindgen::Builder::default()
        .header("src/wrapper.h")
        .parse_callbacks(Box::new(bindgen::CargoCallbacks::new()))
        .generate()
        .expect("Unable to generate bindings");

    let out_path = PathBuf::from(env::var("OUT_DIR").unwrap());
    bindings
        .write_to_file(out_path.join("bindings.rs"))
        .expect("Couldn't write bindings!");
}
// src/lib.rs
include!(concat!(env!("OUT_DIR"), "/bindings.rs"));

// Use generated bindings
pub fn use_native_function() {
    unsafe {
        let result = native_function(42);
        println!("Result: {}", result);
    }
}
Part 3: Exposing Rust to C/C++
Using cbindgen
# Cargo.toml
[lib]
crate-type = ["cdylib", "staticlib"]
[build-dependencies]
cbindgen = "0.26"
// src/lib.rs
use std::ffi::{c_char, CStr, CString};
use std::os::raw::c_int;

#[no_mangle]
pub extern "C" fn rust_add(a: c_int, b: c_int) -> c_int {
    a + b
}

#[no_mangle]
pub extern "C" fn rust_greet(name: *const c_char) -> *mut c_char {
    let name = unsafe {
        assert!(!name.is_null());
        CStr::from_ptr(name)
    };
    let greeting = format!("Hello, {}!", name.to_string_lossy());
    let c_string = CString::new(greeting).unwrap();
    c_string.into_raw()
}

#[no_mangle]
pub extern "C" fn rust_free_string(s: *mut c_char) {
    if s.is_null() {
        return;
    }
    unsafe {
        let _ = CString::from_raw(s);
    }
}
// build.rs
use std::env;

fn main() {
    let crate_dir = env::var("CARGO_MANIFEST_DIR").unwrap();

    cbindgen::Builder::new()
        .with_crate(crate_dir)
        .with_language(cbindgen::Language::C)
        .generate()
        .expect("Unable to generate bindings")
        .write_to_file("include/rust_lib.h");
}
Part 4: C++ Integration with cxx
Using cxx for Safe C++ FFI
# Cargo.toml
[dependencies]
cxx = "1.0"
[build-dependencies]
cxx-build = "1.0"
// src/lib.rs
#[cxx::bridge]
mod ffi {
    unsafe extern "C++" {
        include!("cpp/include/blobstore.h");

        type BlobstoreClient;

        fn new_blobstore_client() -> UniquePtr<BlobstoreClient>;
        fn put(&self, key: &str, value: &[u8]) -> Result<()>;
        fn get(&self, key: &str) -> Vec<u8>;
    }

    extern "Rust" {
        fn process_blob(data: &[u8]) -> Vec<u8>;
    }
}

pub fn process_blob(data: &[u8]) -> Vec<u8> {
    // Rust implementation
    data.iter().map(|&b| b.wrapping_add(1)).collect()
}

pub fn use_blobstore() -> Result<(), Box<dyn std::error::Error>> {
    let client = ffi::new_blobstore_client();
    let key = "test_key";
    let data = b"hello world";

    client.put(key, data)?;
    let _retrieved = client.get(key);
    Ok(())
}
// build.rs
fn main() {
    cxx_build::bridge("src/lib.rs")
        .file("cpp/src/blobstore.cc")
        .std("c++17")
        .compile("cxx-demo");

    println!("cargo:rerun-if-changed=src/lib.rs");
    println!("cargo:rerun-if-changed=cpp/include/blobstore.h");
    println!("cargo:rerun-if-changed=cpp/src/blobstore.cc");
}
Part 5: Platform-Specific Code
Conditional Compilation
#[cfg(target_os = "windows")]
mod windows {
    use winapi::um::fileapi::GetFileAttributesW;
    use winapi::um::winnt::FILE_ATTRIBUTE_HIDDEN;
    use std::os::windows::ffi::OsStrExt;
    use std::ffi::OsStr;

    pub fn is_hidden(path: &std::path::Path) -> bool {
        let wide: Vec<u16> = OsStr::new(path)
            .encode_wide()
            .chain(Some(0))
            .collect();
        unsafe {
            let attrs = GetFileAttributesW(wide.as_ptr());
            attrs != u32::MAX && (attrs & FILE_ATTRIBUTE_HIDDEN) != 0
        }
    }
}

#[cfg(target_os = "linux")]
mod linux {
    pub fn is_hidden(path: &std::path::Path) -> bool {
        path.file_name()
            .and_then(|name| name.to_str())
            .map(|name| name.starts_with('.'))
            .unwrap_or(false)
    }
}
SIMD Operations
#[cfg(target_arch = "x86_64")]
use std::arch::x86_64::*;

// Caller must verify AVX support (e.g. via is_x86_feature_detected!)
#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "avx")]
unsafe fn dot_product_simd(a: &[f32], b: &[f32]) -> f32 {
    assert_eq!(a.len(), b.len());
    assert!(a.len() % 8 == 0);

    let mut sum = _mm256_setzero_ps();

    for i in (0..a.len()).step_by(8) {
        let a_vec = _mm256_loadu_ps(a.as_ptr().add(i));
        let b_vec = _mm256_loadu_ps(b.as_ptr().add(i));
        let prod = _mm256_mul_ps(a_vec, b_vec);
        sum = _mm256_add_ps(sum, prod);
    }

    // Horizontal sum
    let mut result = [0.0f32; 8];
    _mm256_storeu_ps(result.as_mut_ptr(), sum);
    result.iter().sum()
}
Part 6: Safety Patterns and Best Practices
Safe Abstraction Pattern
pub struct SafeWrapper {
    ptr: *mut SomeFFIType,
}

impl SafeWrapper {
    pub fn new() -> Option<Self> {
        unsafe {
            let ptr = ffi_create_object();
            if ptr.is_null() {
                None
            } else {
                Some(SafeWrapper { ptr })
            }
        }
    }

    pub fn do_something(&self) -> Result<i32, String> {
        unsafe {
            let result = ffi_do_something(self.ptr);
            if result < 0 {
                Err("Operation failed".to_string())
            } else {
                Ok(result)
            }
        }
    }
}

impl Drop for SafeWrapper {
    fn drop(&mut self) {
        unsafe {
            if !self.ptr.is_null() {
                ffi_destroy_object(self.ptr);
            }
        }
    }
}

// Only implement these if the C library is actually thread-safe
unsafe impl Send for SafeWrapper {}
unsafe impl Sync for SafeWrapper {}
Error Handling Across FFI
use std::ffi::{CStr, CString};
use std::os::raw::c_char;
use std::ptr;

#[repr(C)]
pub struct ErrorInfo {
    code: i32,
    message: *mut c_char,
}

#[no_mangle]
pub extern "C" fn rust_operation(
    input: *const c_char,
    error: *mut ErrorInfo,
) -> *mut c_char {
    // Clear error initially
    if !error.is_null() {
        unsafe {
            (*error).code = 0;
            (*error).message = ptr::null_mut();
        }
    }

    // Parse input
    let input_str = unsafe {
        if input.is_null() {
            set_error(error, 1, "Null input");
            return ptr::null_mut();
        }
        match CStr::from_ptr(input).to_str() {
            Ok(s) => s,
            Err(_) => {
                set_error(error, 2, "Invalid UTF-8");
                return ptr::null_mut();
            }
        }
    };

    // Perform operation
    match perform_operation(input_str) {
        Ok(result) => {
            CString::new(result)
                .map(|s| s.into_raw())
                .unwrap_or_else(|_| {
                    set_error(error, 3, "Failed to create result");
                    ptr::null_mut()
                })
        }
        Err(e) => {
            set_error(error, 4, &e.to_string());
            ptr::null_mut()
        }
    }
}

fn set_error(error: *mut ErrorInfo, code: i32, message: &str) {
    if !error.is_null() {
        unsafe {
            (*error).code = code;
            (*error).message = CString::new(message)
                .map(|s| s.into_raw())
                .unwrap_or(ptr::null_mut());
        }
    }
}

fn perform_operation(input: &str) -> Result<String, Box<dyn std::error::Error>> {
    // Your actual operation here
    Ok(format!("Processed: {}", input))
}
Part 7: Testing FFI Code
Unit Testing with Mocking
#![allow(unused)] fn main() { #[cfg(test)] mod tests { use super::*; #[test] fn test_ffi_wrapper() { // Mock the FFI functions in tests struct MockFFI; impl MockFFI { fn mock_function(&self, input: i32) -> i32 { input * 2 } } let mock = MockFFI; assert_eq!(mock.mock_function(21), 42); } #[test] fn test_error_handling() { let mut error = ErrorInfo { code: 0, message: ptr::null_mut(), }; let result = rust_operation( ptr::null(), &mut error as *mut ErrorInfo, ); assert!(result.is_null()); // Reading a field of a local struct needs no unsafe block assert_eq!(error.code, 1); } } }
Integration Testing
#![allow(unused)] fn main() { // tests/integration_test.rs #[test] fn test_full_ffi_roundtrip() { // Load the library let lib = unsafe { libloading::Library::new("./target/debug/libmylib.so") .expect("Failed to load library") }; // Get function symbols let add_fn: libloading::Symbol<unsafe extern "C" fn(i32, i32) -> i32> = unsafe { lib.get(b"rust_add").expect("Failed to load symbol") }; // Test the function let result = unsafe { add_fn(10, 32) }; assert_eq!(result, 42); } }
Best Practices
- Minimize Unsafe Code: Keep unsafe blocks small and isolated
- Document Safety Requirements: Clearly state what callers must guarantee
- Use Safe Abstractions: Wrap unsafe code in safe APIs
- Validate All Inputs: Never trust data from FFI boundaries
- Handle Errors Gracefully: Convert panics to error codes at FFI boundaries
- Test Thoroughly: Include fuzzing and property-based testing
- Use Tools: Run Miri, Valgrind, and sanitizers on FFI code
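To make the second item concrete: the Rust convention is a `# Safety` doc section on every `unsafe fn`, spelling out exactly what the caller must uphold. The `c_strlen` helper below is a hypothetical illustration of the pattern, not part of the course exercises:

```rust
/// Returns the length of a NUL-terminated byte string.
///
/// # Safety
/// `ptr` must be non-null and point to a valid, NUL-terminated
/// buffer that stays alive and unmodified for the whole call.
unsafe fn c_strlen(mut ptr: *const u8) -> usize {
    let mut len = 0;
    while *ptr != 0 {
        len += 1;
        ptr = ptr.add(1);
    }
    len
}

fn main() {
    let s = b"hello\0";
    // The unsafe block is the caller acknowledging the contract above.
    assert_eq!(unsafe { c_strlen(s.as_ptr()) }, 5);
}
```

Clippy's `missing_safety_doc` lint can flag public `unsafe fn`s that lack such a section.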
Common Pitfalls
- Memory Management: Ensure consistent allocation/deallocation across FFI
- String Encoding: C uses null-terminated strings, Rust doesn’t
- ABI Compatibility: Always use #[repr(C)] for FFI structs
- Lifetime Management: Raw pointers don’t encode lifetimes
- Thread Safety: Verify thread safety of external libraries
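The first pitfall, mismatched allocators, deserves a sketch: memory allocated by Rust must be freed by Rust. The usual pattern is a matched `into_raw`/`from_raw` pair; the `make_greeting`/`free_greeting` names below are hypothetical, not from a real library:

```rust
use std::ffi::{CStr, CString};
use std::os::raw::c_char;

/// Hands a Rust-allocated string across the FFI boundary.
#[no_mangle]
pub extern "C" fn make_greeting() -> *mut c_char {
    CString::new("hello").unwrap().into_raw()
}

/// Reclaims a string previously returned by `make_greeting`.
/// Freeing it with C's `free()` instead would mix allocators
/// and is undefined behavior.
///
/// # Safety
/// `s` must be null or a pointer obtained from `make_greeting`.
#[no_mangle]
pub unsafe extern "C" fn free_greeting(s: *mut c_char) {
    if !s.is_null() {
        drop(CString::from_raw(s)); // freed by Rust's allocator
    }
}

fn main() {
    let p = make_greeting();
    assert_eq!(unsafe { CStr::from_ptr(p) }.to_str().unwrap(), "hello");
    unsafe { free_greeting(p) };
}
```

C callers get a documented `free_*` function for every `make_*` function, so ownership always round-trips through Rust.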
Summary
Unsafe Rust and FFI provide powerful tools for systems programming:
- Unsafe Rust enables low-level operations with explicit opt-in
- FFI allows seamless integration with C/C++ codebases
- Safe abstractions wrap unsafe code in safe interfaces
- Tools like bindgen and cbindgen automate binding generation
- cxx provides safe C++ interop
Always prefer safe Rust, use unsafe only when necessary, and wrap it in safe abstractions.
Additional Resources
Chapter 23: Embedded HAL - Hardware Register Access & Volatile Memory
This chapter covers hardware abstraction in embedded Rust, focusing on memory-mapped I/O, volatile access patterns, and the embedded-hal ecosystem. These concepts are essential for writing safe, portable embedded code.
Part 1: Why Volatile Access Matters
The Compiler Optimization Problem
When accessing regular memory, the compiler assumes it has complete control and can optimize away “redundant” operations:
#![allow(unused)] fn main() { // Regular memory access - compiler can optimize fn regular_memory() { let mut value = 0u32; value = 1; // Compiler might optimize away value = 2; // Only this write matters value = 3; // And this one let x = value; // Reads 3 let y = value; // Compiler might reuse x instead of reading again } }
Hardware registers are different - they’re windows into hardware state that can change independently:
#![allow(unused)] fn main() { // Hardware register at address 0x4000_0000 const GPIO_OUT: *mut u32 = 0x4000_0000 as *mut u32; unsafe fn bad_gpio_control() { // ❌ WRONG: Compiler might optimize these away! *GPIO_OUT = 0b0001; // Turn on LED 1 *GPIO_OUT = 0b0010; // Turn on LED 2 *GPIO_OUT = 0b0100; // Turn on LED 3 // Compiler might only emit the last write! } unsafe fn good_gpio_control() { use core::ptr; // ✅ CORRECT: Volatile writes are never optimized away ptr::write_volatile(GPIO_OUT, 0b0001); // Turn on LED 1 ptr::write_volatile(GPIO_OUT, 0b0010); // Turn on LED 2 ptr::write_volatile(GPIO_OUT, 0b0100); // Turn on LED 3 // All three writes will happen! } }
Memory-Mapped I/O Fundamentals
In embedded systems, hardware peripherals appear as memory addresses:
#![allow(unused)] fn main() { // ESP32-C3 GPIO registers (simplified) const GPIO_BASE: usize = 0x6000_4000; // GPIO output registers const GPIO_OUT_W1TS: *mut u32 = (GPIO_BASE + 0x0008) as *mut u32; // Set bits const GPIO_OUT_W1TC: *mut u32 = (GPIO_BASE + 0x000C) as *mut u32; // Clear bits const GPIO_OUT: *mut u32 = (GPIO_BASE + 0x0004) as *mut u32; // Direct write const GPIO_IN: *const u32 = (GPIO_BASE + 0x003C) as *const u32; // Read input unsafe fn control_gpio() { use core::ptr; // Set pin 5 high (write 1 to set) ptr::write_volatile(GPIO_OUT_W1TS, 1 << 5); // Clear pin 5 (write 1 to clear - yes, really!) ptr::write_volatile(GPIO_OUT_W1TC, 1 << 5); // Read current pin states let pins = ptr::read_volatile(GPIO_IN); let pin5_state = (pins >> 5) & 1; } }
Why Each Access Must Be Volatile
Hardware registers can change at any time due to:
- External signals (button presses, sensor readings)
- Hardware state machines (timers, DMA completion)
- Interrupt handlers modifying registers
- Peripheral operations completing
#![allow(unused)] fn main() { // Timer register that counts up automatically const TIMER_COUNTER: *const u32 = 0x6002_0000 as *const u32; unsafe fn wait_for_timeout() { use core::ptr; // ❌ WRONG: Compiler might read once and cache while *TIMER_COUNTER < 1000 { // Infinite loop - compiler assumes value never changes! } // ✅ CORRECT: Each read goes to hardware while ptr::read_volatile(TIMER_COUNTER) < 1000 { // Works correctly - reads actual hardware value } } }
Part 2: Safe Register Abstractions
Building Type-Safe Register Access
#![allow(unused)] fn main() { use core::marker::PhantomData; /// Type-safe register wrapper pub struct Register<T> { address: *mut T, } impl<T> Register<T> { pub const fn new(address: usize) -> Self { Self { address: address as *mut T, } } pub fn read(&self) -> T where T: Copy, { unsafe { core::ptr::read_volatile(self.address) } } pub fn write(&self, value: T) { unsafe { core::ptr::write_volatile(self.address, value) } } pub fn modify<F>(&self, f: F) where T: Copy, F: FnOnce(T) -> T, { self.write(f(self.read())); } } // Usage const GPIO_OUT: Register<u32> = Register::new(0x4000_0004); fn toggle_led() { GPIO_OUT.modify(|val| val ^ (1 << 5)); // Toggle bit 5 } }
Field Access with Bitfields
#![allow(unused)] fn main() { use modular_bitfield::prelude::*; #[bitfield] #[derive(Clone, Copy)] pub struct TimerControl { pub enable: bool, // Bit 0 pub interrupt: bool, // Bit 1 pub mode: B2, // Bits 2-3 #[skip] __: B4, // Bits 4-7 reserved pub prescaler: B8, // Bits 8-15 pub reload: B16, // Bits 16-31 } pub struct TimerPeripheral { control: Register<TimerControl>, counter: Register<u32>, } impl TimerPeripheral { pub fn configure(&self, prescaler: u8, reload: u16) { let mut ctrl = self.control.read(); ctrl.set_prescaler(prescaler); ctrl.set_reload(reload); ctrl.set_enable(true); self.control.write(ctrl); } } }
Part 3: PAC Generation with svd2rust
What is an SVD File?
System View Description (SVD) files describe microcontroller peripherals in XML format. The svd2rust tool generates Rust code from these descriptions.
Generated PAC Structure
#![allow(unused)] fn main() { // Generated by svd2rust from manufacturer SVD pub mod gpio { use core::ptr; pub struct RegisterBlock { pub moder: MODER, // Mode register pub otyper: OTYPER, // Output type register pub ospeedr: OSPEEDR, // Output speed register pub pupdr: PUPDR, // Pull-up/pull-down register pub idr: IDR, // Input data register pub odr: ODR, // Output data register pub bsrr: BSRR, // Bit set/reset register } pub struct MODER { register: vcell::VolatileCell<u32>, } impl MODER { pub fn read(&self) -> u32 { self.register.get() } pub fn write(&self, value: u32) { self.register.set(value) } pub fn modify<F>(&self, f: F) where F: FnOnce(u32) -> u32, { self.write(f(self.read())); } } } // Safe peripheral access pub struct Peripherals { pub GPIO: gpio::RegisterBlock, } impl Peripherals { pub fn take() -> Option<Self> { // Ensure single instance (singleton pattern) static mut TAKEN: bool = false; cortex_m::interrupt::free(|_| unsafe { if TAKEN { None } else { TAKEN = true; Some(Peripherals { GPIO: gpio::RegisterBlock { // Initialize with hardware addresses }, }) } }) } } }
Using a PAC
#![allow(unused)] fn main() { use esp32c3_pac::{Peripherals, GPIO}; fn configure_gpio() { let peripherals = Peripherals::take().unwrap(); let gpio = peripherals.GPIO; // Configure pin as output gpio.enable_w1ts.write(|w| w.bits(1 << 5)); gpio.func5_out_sel_cfg.write(|w| w.out_sel().bits(0x80)); // Set pin high gpio.out_w1ts.write(|w| w.bits(1 << 5)); } }
Modern Alternatives (2024): While svd2rust remains popular, newer tools like chiptool and metapac offer alternative approaches. Metapac provides additional metadata (memory layout, interrupt tables) alongside register access, useful for HAL frameworks like Embassy.
Part 4: The Embedded HAL Traits
Core Traits
The embedded-hal provides standard traits for common peripherals:
#![allow(unused)] fn main() { // Simplified definitions of the embedded-hal 0.2 GPIO traits; the real ones live in embedded_hal::digital::v2, alongside the blocking::delay, blocking::spi and blocking::i2c traits pub trait OutputPin { type Error; fn set_low(&mut self) -> Result<(), Self::Error>; fn set_high(&mut self) -> Result<(), Self::Error>; } pub trait InputPin { type Error; fn is_high(&self) -> Result<bool, Self::Error>; fn is_low(&self) -> Result<bool, Self::Error>; } }
Implementing HAL Traits
#![allow(unused)] fn main() { use embedded_hal::digital::v2::OutputPin; use core::convert::Infallible; pub struct GpioPin { pin_number: u8, gpio_out: &'static Register<u32>, } impl OutputPin for GpioPin { type Error = Infallible; fn set_high(&mut self) -> Result<(), Self::Error> { self.gpio_out.modify(|val| val | (1 << self.pin_number)); Ok(()) } fn set_low(&mut self) -> Result<(), Self::Error> { self.gpio_out.modify(|val| val & !(1 << self.pin_number)); Ok(()) } } }
Driver Portability
Write drivers that work with any HAL implementation:
#![allow(unused)] fn main() { use embedded_hal::blocking::delay::DelayMs; use embedded_hal::digital::v2::OutputPin; pub struct Led<P: OutputPin> { pin: P, } impl<P: OutputPin> Led<P> { pub fn new(pin: P) -> Self { Led { pin } } pub fn on(&mut self) -> Result<(), P::Error> { self.pin.set_high() } pub fn off(&mut self) -> Result<(), P::Error> { self.pin.set_low() } pub fn blink<D: DelayMs<u32>>( &mut self, delay: &mut D, ms: u32, ) -> Result<(), P::Error> { self.on()?; delay.delay_ms(ms); self.off()?; delay.delay_ms(ms); Ok(()) } } }
Part 5: Real-World Example - SPI Display Driver
Portable Display Driver
#![allow(unused)] fn main() { use embedded_hal::blocking::spi::Write; use embedded_hal::digital::v2::OutputPin; use embedded_hal::blocking::delay::DelayMs; pub struct ST7789<SPI, DC, RST, DELAY> { spi: SPI, dc: DC, rst: RST, delay: DELAY, } impl<SPI, DC, RST, DELAY> ST7789<SPI, DC, RST, DELAY> where SPI: Write<u8>, DC: OutputPin, RST: OutputPin, DELAY: DelayMs<u32>, { pub fn new(spi: SPI, dc: DC, rst: RST, delay: DELAY) -> Self { ST7789 { spi, dc, rst, delay } } pub fn init(&mut self) -> Result<(), Error> { // Reset sequence self.rst.set_low().map_err(|_| Error::Gpio)?; self.delay.delay_ms(10); self.rst.set_high().map_err(|_| Error::Gpio)?; self.delay.delay_ms(120); // Initialization commands self.command(0x01)?; // Software reset self.delay.delay_ms(150); self.command(0x11)?; // Sleep out self.delay.delay_ms(10); self.command(0x3A)?; // Pixel format self.data(&[0x55])?; // 16-bit color self.command(0x29)?; // Display on Ok(()) } fn command(&mut self, cmd: u8) -> Result<(), Error> { self.dc.set_low().map_err(|_| Error::Gpio)?; self.spi.write(&[cmd]).map_err(|_| Error::Spi)?; Ok(()) } fn data(&mut self, data: &[u8]) -> Result<(), Error> { self.dc.set_high().map_err(|_| Error::Gpio)?; self.spi.write(data).map_err(|_| Error::Spi)?; Ok(()) } pub fn draw_pixel(&mut self, x: u16, y: u16, color: u16) -> Result<(), Error> { self.set_window(x, y, x, y)?; self.command(0x2C)?; // Memory write self.data(&color.to_be_bytes())?; Ok(()) } fn set_window(&mut self, x0: u16, y0: u16, x1: u16, y1: u16) -> Result<(), Error> { self.command(0x2A)?; // Column address set self.data(&x0.to_be_bytes())?; self.data(&x1.to_be_bytes())?; self.command(0x2B)?; // Row address set self.data(&y0.to_be_bytes())?; self.data(&y1.to_be_bytes())?; Ok(()) } } #[derive(Debug)] pub enum Error { Spi, Gpio, } }
Part 6: Interrupt Handling
Critical Sections and Atomics
use cortex_m::interrupt; use core::cell::RefCell; use cortex_m::interrupt::Mutex; // Shared state between interrupt and main static COUNTER: Mutex<RefCell<u32>> = Mutex::new(RefCell::new(0)); #[interrupt] fn TIMER0() { interrupt::free(|cs| { let mut counter = COUNTER.borrow(cs).borrow_mut(); *counter += 1; }); } fn main() { // Access shared state safely let count = interrupt::free(|cs| { *COUNTER.borrow(cs).borrow() }); }
DMA with Volatile Buffers
#![allow(unused)] fn main() { use core::sync::atomic::{AtomicBool, Ordering}; #[repr(C, align(4))] struct DmaBuffer { data: [u8; 1024], } static mut DMA_BUFFER: DmaBuffer = DmaBuffer { data: [0; 1024] }; static DMA_COMPLETE: AtomicBool = AtomicBool::new(false); fn start_dma_transfer() { unsafe { // Configure DMA to write to DMA_BUFFER let buffer_addr = &DMA_BUFFER as *const _ as u32; // Set up DMA registers (hardware-specific) const DMA_SRC: *mut u32 = 0x4002_0000 as *mut u32; const DMA_DST: *mut u32 = 0x4002_0004 as *mut u32; const DMA_LEN: *mut u32 = 0x4002_0008 as *mut u32; const DMA_CTRL: *mut u32 = 0x4002_000C as *mut u32; core::ptr::write_volatile(DMA_SRC, 0x2000_0000); // Source address core::ptr::write_volatile(DMA_DST, buffer_addr); // Destination core::ptr::write_volatile(DMA_LEN, 1024); // Transfer length core::ptr::write_volatile(DMA_CTRL, 0x01); // Start transfer } } #[interrupt] fn DMA_DONE() { DMA_COMPLETE.store(true, Ordering::Release); } fn wait_for_dma() { while !DMA_COMPLETE.load(Ordering::Acquire) { cortex_m::asm::wfi(); // Wait for interrupt } // DMA complete - buffer contents are valid unsafe { // Must use volatile reads since DMA wrote the data let first_byte = core::ptr::read_volatile(&DMA_BUFFER.data[0]); } } }
Part 7: Power Management
Low-Power Modes
#![allow(unused)] fn main() { pub enum PowerMode { Active, Sleep, DeepSleep, Hibernate, } pub struct PowerController { pwr_ctrl: &'static Register<u32>, } impl PowerController { pub fn set_mode(&self, mode: PowerMode) { let ctrl_value = match mode { PowerMode::Active => 0x00, PowerMode::Sleep => 0x01, PowerMode::DeepSleep => 0x02, PowerMode::Hibernate => 0x03, }; self.pwr_ctrl.write(ctrl_value); // Execute wait-for-interrupt to enter low-power mode cortex_m::asm::wfi(); } pub fn configure_wakeup_sources(&self, sources: u32) { const WAKEUP_EN: Register<u32> = Register::new(0x4000_1000); WAKEUP_EN.write(sources); } } }
Part 8: Real Hardware Example - ESP32-C3
Complete Blinky Example
#![no_std] #![no_main] use esp32c3_hal::{clock::ClockControl, pac::Peripherals, prelude::*, timer::TimerGroup, Rtc, IO}; use esp_backtrace as _; use riscv_rt::entry; #[entry] fn main() -> ! { let peripherals = Peripherals::take().unwrap(); let system = peripherals.SYSTEM.split(); let clocks = ClockControl::boot_defaults(system.clock_control).freeze(); let mut rtc = Rtc::new(peripherals.RTC_CNTL); let timer_group0 = TimerGroup::new(peripherals.TIMG0, &clocks); let mut wdt0 = timer_group0.wdt; let timer_group1 = TimerGroup::new(peripherals.TIMG1, &clocks); let mut wdt1 = timer_group1.wdt; // Disable watchdogs rtc.rwdt.disable(); wdt0.disable(); wdt1.disable(); // Configure GPIO let io = IO::new(peripherals.GPIO, peripherals.IO_MUX); let mut led = io.pins.gpio7.into_push_pull_output(); // Main loop loop { led.toggle().unwrap(); delay(500_000); } } fn delay(cycles: u32) { for _ in 0..cycles { unsafe { riscv::asm::nop() }; } }
Best Practices
- Always Use Volatile: Hardware registers require volatile access
- Type Safety: Use strong types to prevent register misuse
- Singleton Pattern: Ensure single ownership of peripherals
- Critical Sections: Protect shared state in interrupts
- Zero-Cost Abstractions: HAL traits compile to direct register access
- Test on Hardware: Emulators may not match real hardware behavior
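The singleton idea (item 3) can be sketched with plain std types. Here a `Mutex<Option<...>>` stands in for the critical section a real PAC uses, and the `Peripherals` struct is a stand-in, not a real PAC type:

```rust
use std::sync::Mutex;

/// Stand-in for a PAC's peripheral struct (hypothetical).
pub struct Peripherals {
    pub gpio: u32,
}

// The Option starts as Some; take() hands out the single instance.
static PERIPHERALS: Mutex<Option<Peripherals>> =
    Mutex::new(Some(Peripherals { gpio: 0 }));

impl Peripherals {
    pub fn take() -> Option<Peripherals> {
        PERIPHERALS.lock().unwrap().take()
    }
}

fn main() {
    let first = Peripherals::take();
    assert!(first.is_some()); // hardware handed to exactly one owner
    assert!(Peripherals::take().is_none()); // a second take is refused
}
```

Because only one `Peripherals` value ever exists, the borrow checker can then enforce exclusive access to each register block through ordinary ownership.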
Common Pitfalls
- Forgetting Volatile: Regular access leads to optimization bugs
- Race Conditions: Unprotected access from interrupts
- Alignment Issues: DMA buffers need proper alignment
- Clock Configuration: Wrong clock setup causes timing issues
- Power States: Peripherals may need re-initialization after sleep
Summary
Embedded HAL in Rust provides:
- Volatile access patterns for hardware registers
- Type-safe abstractions over raw memory access
- PAC generation from SVD files
- Portable drivers via HAL traits
- Memory safety in embedded contexts
The embedded-hal ecosystem enables writing portable, reusable drivers while maintaining the performance of direct hardware access.
Additional Resources
Chapter 24: Async and Concurrency
Learning Objectives
- Master thread-based concurrency with Arc, Mutex, and channels
- Understand async/await syntax and the Future trait
- Compare threads vs async for different workloads
- Build concurrent applications with Tokio
- Apply synchronization patterns effectively
Concurrency in Rust: Two Approaches
Rust provides two main models for concurrent programming, each with distinct advantages:
| Aspect | Threads | Async/Await |
|---|---|---|
| Best for | CPU-intensive work | I/O-bound operations |
| Memory overhead | ~2MB per thread | ~2KB per task |
| Scheduling | OS kernel | User-space runtime |
| Blocking operations | Normal | Must use async variants |
| Ecosystem maturity | Complete | Growing rapidly |
| Learning curve | Moderate | Steeper initially |
Part 1: Thread-Based Concurrency
The Problem with Shared Mutable State
Rust prevents data races at compile time through its ownership system:
#![allow(unused)] fn main() { use std::thread; // This won't compile - Rust prevents the data race fn broken_example() { let mut counter = 0; let handle = thread::spawn(|| { counter += 1; // Error: closure may outlive `counter`, which it borrows mutably }); handle.join().unwrap(); } }
Arc: Shared Ownership Across Threads
Arc<T> (Atomic Reference Counting) enables multiple threads to share ownership of the same data:
#![allow(unused)] fn main() { use std::sync::Arc; use std::thread; fn share_immutable_data() { let data = Arc::new(vec![1, 2, 3, 4, 5]); let mut handles = vec![]; for i in 0..3 { let data_clone = Arc::clone(&data); let handle = thread::spawn(move || { println!("Thread {}: sum = {}", i, data_clone.iter().sum::<i32>()); }); handles.push(handle); } for handle in handles { handle.join().unwrap(); } } }
Key properties of Arc:
- Reference counting is atomic (thread-safe)
- Cloning is cheap (only increments counter)
- Data is immutable by default
- Memory freed when last reference drops
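These properties are easy to observe directly with `Arc::strong_count`; a small sketch:

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    let data = Arc::new(vec![1, 2, 3]);
    assert_eq!(Arc::strong_count(&data), 1);

    let clone = Arc::clone(&data); // cheap: only bumps an atomic counter
    assert_eq!(Arc::strong_count(&data), 2);

    // The clone moves into the thread; the vector itself is not copied
    let handle = thread::spawn(move || clone.iter().sum::<i32>());
    assert_eq!(handle.join().unwrap(), 6);

    // The thread's clone has dropped; one reference remains
    assert_eq!(Arc::strong_count(&data), 1);
}
```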
Mutex: Safe Mutable Access
Mutex<T> provides mutual exclusion for mutable data:
#![allow(unused)] fn main() { use std::sync::{Arc, Mutex}; use std::thread; fn safe_shared_counter() { let counter = Arc::new(Mutex::new(0)); let mut handles = vec![]; for _ in 0..10 { let counter_clone = Arc::clone(&counter); let handle = thread::spawn(move || { for _ in 0..100 { let mut num = counter_clone.lock().unwrap(); *num += 1; // Lock automatically released when guard drops } }); handles.push(handle); } for handle in handles { handle.join().unwrap(); } println!("Final count: {}", *counter.lock().unwrap()); } }
RwLock: Optimizing for Readers
When reads significantly outnumber writes, RwLock<T> provides better performance:
#![allow(unused)] fn main() { use std::sync::{Arc, RwLock}; use std::thread; use std::time::Duration; fn reader_writer_pattern() { let data = Arc::new(RwLock::new(vec![1, 2, 3])); let mut handles = vec![]; // Multiple readers can access simultaneously for i in 0..5 { let data = Arc::clone(&data); handles.push(thread::spawn(move || { let guard = data.read().unwrap(); println!("Reader {}: {:?}", i, *guard); })); } // Single writer waits for all readers let data_clone = Arc::clone(&data); handles.push(thread::spawn(move || { let mut guard = data_clone.write().unwrap(); guard.push(4); println!("Writer: added element"); })); for handle in handles { handle.join().unwrap(); } } }
Channels: Message Passing
Channels avoid shared state entirely through message passing:
#![allow(unused)] fn main() { use std::sync::mpsc; use std::thread; fn channel_example() { let (tx, rx) = mpsc::channel(); thread::spawn(move || { let values = vec!["hello", "from", "thread"]; for val in values { tx.send(val).unwrap(); } }); for received in rx { println!("Got: {}", received); } } // Multiple producers fn fan_in_pattern() { let (tx, rx) = mpsc::channel(); for i in 0..3 { let tx_clone = tx.clone(); thread::spawn(move || { tx_clone.send(format!("Message from thread {}", i)).unwrap(); }); } drop(tx); // Close original sender for msg in rx { println!("{}", msg); } } }
Synchronization Patterns
Worker Pool
#![allow(unused)] fn main() { use std::sync::{Arc, Mutex, mpsc}; use std::thread; struct ThreadPool { workers: Vec<thread::JoinHandle<()>>, sender: mpsc::Sender<Box<dyn FnOnce() + Send + 'static>>, } impl ThreadPool { fn new(size: usize) -> Self { let (sender, receiver) = mpsc::channel(); let receiver = Arc::new(Mutex::new(receiver)); let mut workers = Vec::with_capacity(size); for id in 0..size { let receiver = Arc::clone(&receiver); let worker = thread::spawn(move || loop { let job = receiver.lock().unwrap().recv(); match job { Ok(job) => { println!("Worker {} executing job", id); job(); } Err(_) => { println!("Worker {} shutting down", id); break; } } }); workers.push(worker); } ThreadPool { workers, sender } } fn execute<F>(&self, f: F) where F: FnOnce() + Send + 'static, { self.sender.send(Box::new(f)).unwrap(); } } }
Part 2: Async Programming
Understanding Futures
Futures represent values that will be available at some point:
#![allow(unused)] fn main() { use std::future::Future; use std::pin::Pin; use std::task::{Context, Poll}; // Futures are state machines polled to completion trait SimpleFuture { type Output; fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output>; } // async/await is syntactic sugar for futures async fn simple_async() -> i32 { 42 // Returns impl Future<Output = i32> } }
The Tokio Runtime
Tokio provides a production-ready async runtime:
use tokio::time::{sleep, Duration}; #[tokio::main] async fn main() { println!("Starting"); sleep(Duration::from_millis(100)).await; println!("Done after 100ms"); } // Alternative runtime configurations fn runtime_options() { // Single-threaded runtime let rt = tokio::runtime::Builder::new_current_thread() .enable_all() .build() .unwrap(); // Multi-threaded runtime let rt = tokio::runtime::Builder::new_multi_thread() .worker_threads(4) .enable_all() .build() .unwrap(); rt.block_on(async { // Your async code here }); }
Concurrent Async Operations
Multiple futures can run concurrently without threads:
#![allow(unused)] fn main() { use tokio::time::{sleep, Duration}; async fn concurrent_operations() { // Sequential - takes 300ms total operation("A", 100).await; operation("B", 100).await; operation("C", 100).await; // Concurrent - takes 100ms total tokio::join!( operation("X", 100), operation("Y", 100), operation("Z", 100) ); } async fn operation(name: &str, ms: u64) { println!("Starting {}", name); sleep(Duration::from_millis(ms)).await; println!("Completed {}", name); } }
Spawning Async Tasks
Tasks are the async equivalent of threads:
#![allow(unused)] fn main() { use tokio::task; async fn spawn_tasks() { let mut handles = vec![]; for i in 0..10 { let handle = task::spawn(async move { tokio::time::sleep(Duration::from_millis(100)).await; i * i // Return value }); handles.push(handle); } let mut results = vec![]; for handle in handles { results.push(handle.await.unwrap()); } println!("Results: {:?}", results); } }
Select: Racing Futures
The select! macro enables complex control flow:
#![allow(unused)] fn main() { use tokio::time::{sleep, Duration, timeout}; async fn select_example() { loop { tokio::select! { _ = sleep(Duration::from_secs(1)) => { println!("Timer expired"); } result = async_operation() => { println!("Operation completed: {}", result); break; } _ = tokio::signal::ctrl_c() => { println!("Interrupted"); break; } } } } async fn async_operation() -> String { sleep(Duration::from_millis(500)).await; "Success".to_string() } }
Async I/O Operations
Async excels at I/O-bound work:
#![allow(unused)] fn main() { use tokio::fs::File; use tokio::io::{AsyncReadExt, AsyncWriteExt}; use tokio::net::{TcpListener, TcpStream}; async fn file_io() -> Result<(), Box<dyn std::error::Error>> { // Read file let mut file = File::open("input.txt").await?; let mut contents = String::new(); file.read_to_string(&mut contents).await?; // Write file let mut output = File::create("output.txt").await?; output.write_all(contents.as_bytes()).await?; Ok(()) } async fn tcp_server() -> Result<(), Box<dyn std::error::Error>> { let listener = TcpListener::bind("127.0.0.1:8080").await?; loop { let (mut socket, addr) = listener.accept().await?; tokio::spawn(async move { let mut buf = vec![0; 1024]; loop { let n = match socket.read(&mut buf).await { Ok(n) if n == 0 => return, // Connection closed Ok(n) => n, Err(e) => { eprintln!("Failed to read: {}", e); return; } }; if let Err(e) = socket.write_all(&buf[0..n]).await { eprintln!("Failed to write: {}", e); return; } } }); } } }
Error Handling in Async Code
Error handling follows the same patterns with async-specific considerations:
#![allow(unused)] fn main() { use std::future::Future; use std::time::Duration; use tokio::time::timeout; async fn with_timeout() -> Result<String, Box<dyn std::error::Error>> { // Timeout wraps the future: the outer ? handles the timeout error, // the inner ? the operation's own error Ok(timeout(Duration::from_secs(5), long_operation()).await??) } async fn long_operation() -> Result<String, std::io::Error> { // Simulated long operation tokio::time::sleep(Duration::from_secs(2)).await; Ok("Completed".to_string()) } // Retry with exponential backoff async fn retry_operation<F, Fut, T, E>( mut f: F, max_attempts: u32, ) -> Result<T, E> where F: FnMut() -> Fut, Fut: Future<Output = Result<T, E>>, E: std::fmt::Debug, { let mut delay = Duration::from_millis(100); for attempt in 1..=max_attempts { match f().await { Ok(val) => return Ok(val), Err(e) if attempt == max_attempts => return Err(e), Err(e) => { eprintln!("Attempt {} failed: {:?}, retrying...", attempt, e); tokio::time::sleep(delay).await; delay *= 2; // Exponential backoff } } } unreachable!() } }
Choosing Between Threads and Async
When to Use Threads
Threads are optimal for:
- CPU-intensive work: Computation, data processing, cryptography
- Parallel algorithms: Matrix operations, image processing
- Blocking operations: Legacy libraries, system calls
- Simple concurrency: Independent units of work
Example of CPU-bound work better suited for threads:
#![allow(unused)] fn main() { use std::thread; fn parallel_computation(data: Vec<u64>) -> u64 { let chunk_size = data.len() / num_cpus::get(); let mut handles = vec![]; for chunk in data.chunks(chunk_size) { let chunk = chunk.to_vec(); let handle = thread::spawn(move || { chunk.iter().map(|&x| x * x).sum::<u64>() }); handles.push(handle); } handles.into_iter() .map(|h| h.join().unwrap()) .sum() } }
When to Use Async
Async is optimal for:
- I/O-bound work: Network requests, file operations, databases
- Many concurrent operations: Thousands of connections
- Resource efficiency: Limited memory environments
- Coordinated I/O: Complex workflows with dependencies
Example of I/O-bound work better suited for async:
#![allow(unused)] fn main() { async fn fetch_many_urls(urls: Vec<String>) -> Vec<Result<String, reqwest::Error>> { let futures = urls.into_iter().map(|url| { async move { reqwest::get(&url).await?.text().await } }); futures::future::join_all(futures).await } }
Hybrid Approaches
Sometimes combining both models is optimal:
#![allow(unused)] fn main() { use tokio::task; async fn hybrid_processing(data: Vec<Data>) -> Vec<Result<Processed, Error>> { let mut handles = vec![]; for chunk in data.chunks(100) { let chunk = chunk.to_vec(); // Spawn blocking task for CPU work let handle = task::spawn_blocking(move || { process_cpu_intensive(chunk) }); handles.push(handle); } // Await all CPU tasks let mut results = vec![]; for handle in handles { results.extend(handle.await?); } // Async I/O for results store_results_async(results).await } }
Common Pitfalls and Solutions
Blocking in Async Context
#![allow(unused)] fn main() { // BAD: Blocks the async runtime async fn bad_example() { std::thread::sleep(Duration::from_secs(1)); // Blocks executor } // GOOD: Use async sleep async fn good_example() { tokio::time::sleep(Duration::from_secs(1)).await; } // GOOD: Move blocking work to dedicated thread async fn blocking_work() { let result = tokio::task::spawn_blocking(|| { // CPU-intensive or blocking operation expensive_computation() }).await.unwrap(); } }
Async Mutex vs Sync Mutex
#![allow(unused)] fn main() { // Use tokio::sync::Mutex for async contexts use tokio::sync::Mutex as AsyncMutex; use std::sync::Mutex as SyncMutex; async fn async_mutex_example() { let data = Arc::new(AsyncMutex::new(vec![])); let data_clone = Arc::clone(&data); tokio::spawn(async move { let mut guard = data_clone.lock().await; // Async lock guard.push(1); }); } // Use std::sync::Mutex only for brief critical sections fn sync_mutex_in_async() { let data = Arc::new(SyncMutex::new(vec![])); // OK if lock is held briefly and doesn't cross await points { let mut guard = data.lock().unwrap(); guard.push(1); } // Lock released before any await } }
Performance Considerations
Memory Usage
- Thread: ~2MB stack per thread (configurable)
- Async task: ~2KB per task
- Implication: Can spawn thousands of async tasks vs hundreds of threads
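The per-thread figure is only std's default, not a hard limit: `std::thread::Builder` lets you shrink the stack when you need many threads in limited memory. A small sketch:

```rust
use std::thread;

fn main() {
    // The ~2 MiB figure is std's default; Builder lets you trade
    // stack headroom for the ability to spawn more threads.
    let handle = thread::Builder::new()
        .name("small-stack".into())
        .stack_size(64 * 1024) // 64 KiB instead of the default
        .spawn(|| (0..1000u32).sum::<u32>())
        .unwrap();
    assert_eq!(handle.join().unwrap(), 499_500);
}
```

Too small a stack risks overflow on deep recursion or large locals, so this is a measured trade-off, not free memory.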
Context Switching
- Threads: Kernel-level context switch (~1-10μs)
- Async tasks: User-space task switch (~100ns)
- Implication: Much lower overhead for many concurrent operations
Throughput vs Latency
- Threads: Better for consistent latency requirements
- Async: Better for maximizing throughput with many connections
Best Practices
- Start simple: Use threads for CPU work, async for I/O
- Avoid blocking: Never block the async runtime
- Choose appropriate synchronization: Arc+Mutex for threads, channels for both
- Profile and measure: Don’t assume, benchmark your specific use case
- Handle errors properly: Both models require careful error handling
- Consider the ecosystem: Check library support for your chosen model
Summary
Rust provides two powerful concurrency models:
- Threads: Best for CPU-intensive work and simple parallelism
- Async: Best for I/O-bound work and massive concurrency
Both models provide:
- Memory safety without garbage collection
- Data race prevention at compile time
- Zero-cost abstractions
- Excellent performance
Choose based on your workload characteristics, and don’t hesitate to combine both approaches when appropriate. The key is understanding the trade-offs and selecting the right tool for each part of your application.
Chapter 25: Rust Patterns
Learning Objectives
- Master memory management patterns from C++/.NET to Rust
- Understand Option<T> for null safety
- Apply type system patterns and explicit conversions
- Use traits for composition over inheritance
- Write idiomatic Rust code
Memory Management Patterns
From RAII to Ownership
The transition from C++ RAII or .NET garbage collection to Rust ownership requires a fundamental mindset shift:
| Aspect | C++ | .NET | Rust |
|---|---|---|---|
| Memory control | Manual/RAII | Garbage collector | Ownership system |
| Safety guarantees | Runtime checks | Runtime managed | Compile-time |
| Performance | Predictable | GC pauses | Zero-cost |
| Resource cleanup | Destructors | Finalizers (unreliable) | Drop trait |
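The Drop column deserves a sketch: unlike .NET finalizers, `Drop::drop` runs deterministically, in reverse declaration order, the moment a value leaves scope. The `Connection` type below is a hypothetical example that records its own drop order:

```rust
use std::cell::RefCell;
use std::rc::Rc;

/// Hypothetical resource that records when it is dropped.
struct Connection {
    id: u32,
    log: Rc<RefCell<Vec<u32>>>,
}

impl Drop for Connection {
    fn drop(&mut self) {
        // Deterministic cleanup: runs the moment the value leaves
        // scope - unlike .NET finalizers, which run at some
        // unspecified later time (or never).
        self.log.borrow_mut().push(self.id);
    }
}

fn main() {
    let log = Rc::new(RefCell::new(Vec::new()));
    {
        let _a = Connection { id: 1, log: Rc::clone(&log) };
        let _b = Connection { id: 2, log: Rc::clone(&log) };
    } // _b drops first, then _a (reverse declaration order)
    assert_eq!(*log.borrow(), vec![2, 1]);
}
```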
Resource Management Pattern
C++ RAII:
class FileHandler {
std::unique_ptr<FILE, decltype(&fclose)> file;
public:
FileHandler(const char* path)
: file(fopen(path, "r"), fclose) {
if (!file) throw std::runtime_error("Failed to open");
}
// Manual destructor, copy prevention, etc.
};
Rust Ownership:
use std::fs::File;
use std::io::{BufRead, BufReader, Result};

struct FileHandler {
    reader: BufReader<File>,
}

impl FileHandler {
    fn new(path: &str) -> Result<Self> {
        Ok(FileHandler {
            reader: BufReader::new(File::open(path)?),
        })
    }

    fn read_lines(&mut self) -> Result<Vec<String>> {
        self.reader.by_ref().lines().collect()
    }

    // Drop automatically implemented - no manual cleanup needed
}
Shared State Patterns
C++ Shared Pointer:
std::shared_ptr<Data> data = std::make_shared<Data>();
auto data2 = data; // Reference counted
Rust Arc (Atomic Reference Counting):
use std::sync::Arc;

let data = Arc::new(Data::new());
let data2 = Arc::clone(&data); // Explicit clone for clarity
Interior Mutability
When you need to mutate data behind a shared reference:
use std::cell::RefCell;
use std::rc::Rc;

// Single-threaded interior mutability
let data = Rc::new(RefCell::new(vec![1, 2, 3]));
data.borrow_mut().push(4);

// Multi-threaded interior mutability
use std::sync::{Arc, Mutex};

let shared = Arc::new(Mutex::new(vec![1, 2, 3]));
shared.lock().unwrap().push(4);
Null Safety with Option
Eliminating Null Pointer Exceptions
Tony Hoare’s “billion-dollar mistake”, the null reference, is eliminated in Rust:
C++/C# Nullable:
std::string* find_user(int id) {
if (id == 1) return new std::string("Alice");
return nullptr; // Potential crash
}
Rust Option:
fn find_user(id: u32) -> Option<String> {
    if id == 1 {
        Some("Alice".to_string())
    } else {
        None
    }
}

fn use_user() {
    match find_user(42) {
        Some(name) => println!("Found: {}", name),
        None => println!("Not found"),
    }

    // Or use combinators
    let name = find_user(1)
        .map(|n| n.to_uppercase())
        .unwrap_or_else(|| "ANONYMOUS".to_string());
}
Option Combinators
fn process_optional_data(input: Option<i32>) -> i32 {
    input
        .map(|x| x * 2)      // Transform if Some
        .filter(|&x| x > 10) // Keep only if predicate true
        .unwrap_or(0)        // Provide default
}

// Chaining operations
fn get_config_value() -> Option<String> {
    std::env::var("CONFIG_PATH").ok()
        .and_then(|path| std::fs::read_to_string(path).ok())
        .and_then(|contents| contents.lines().next().map(String::from))
}
Type System Patterns
No Implicit Conversions
Rust requires explicit type conversions for safety:
fn process(value: f64) {}

fn main() {
    let x: i32 = 42;
    // process(x); // ERROR: expected f64
    process(x as f64);     // Explicit cast
    process(f64::from(x)); // Type conversion

    // String conversions are explicit
    let s = String::from("hello");
    let slice: &str = &s;
    let owned = slice.to_string();
}
Newtype Pattern
Wrap primitive types for type safety:
struct Kilometers(f64);
struct Miles(f64);
struct Liters(f64);
struct KmPerLiter(f64);

impl Kilometers {
    fn to_miles(&self) -> Miles {
        Miles(self.0 * 0.621371)
    }
}

fn calculate_fuel_efficiency(distance: Kilometers, fuel: Liters) -> KmPerLiter {
    KmPerLiter(distance.0 / fuel.0)
}
Builder Pattern
For complex object construction:
use std::time::Duration;

#[derive(Debug, Default)]
pub struct ServerConfig {
    host: String,
    port: u16,
    max_connections: usize,
    timeout: Duration,
}

impl ServerConfig {
    fn builder() -> ServerConfigBuilder {
        ServerConfigBuilder::default()
    }
}

#[derive(Default)]
pub struct ServerConfigBuilder {
    host: Option<String>,
    port: Option<u16>,
    max_connections: Option<usize>,
    timeout: Option<Duration>,
}

impl ServerConfigBuilder {
    pub fn host(mut self, host: impl Into<String>) -> Self {
        self.host = Some(host.into());
        self
    }

    pub fn port(mut self, port: u16) -> Self {
        self.port = Some(port);
        self
    }

    pub fn max_connections(mut self, max: usize) -> Self {
        self.max_connections = Some(max);
        self
    }

    pub fn timeout(mut self, timeout: Duration) -> Self {
        self.timeout = Some(timeout);
        self
    }

    pub fn build(self) -> Result<ServerConfig, &'static str> {
        Ok(ServerConfig {
            host: self.host.ok_or("host required")?,
            port: self.port.unwrap_or(8080),
            max_connections: self.max_connections.unwrap_or(100),
            timeout: self.timeout.unwrap_or(Duration::from_secs(30)),
        })
    }
}

// Usage
let config = ServerConfig::builder()
    .host("localhost")
    .port(3000)
    .build()?;
Traits vs Inheritance
Composition Over Inheritance
C++ Inheritance:
class Animal { virtual void make_sound() = 0; };
class Dog : public Animal {
void make_sound() override { cout << "Woof"; }
};
Rust Traits:
trait Animal {
    fn make_sound(&self);
}

struct Dog {
    name: String,
}

impl Animal for Dog {
    fn make_sound(&self) {
        println!("{} says Woof", self.name);
    }
}

// Multiple trait implementation
trait Swimmer {
    fn swim(&self);
}

impl Swimmer for Dog {
    fn swim(&self) {
        println!("{} is swimming", self.name);
    }
}
Trait Objects for Runtime Polymorphism
// Static dispatch (monomorphization)
fn feed_animal<T: Animal>(animal: &T) {
    animal.make_sound();
}

// Dynamic dispatch (trait objects)
fn feed_any_animal(animal: &dyn Animal) {
    animal.make_sound();
}

// A second Animal implementor for the heterogeneous collection below
struct Cat {
    name: String,
}

impl Animal for Cat {
    fn make_sound(&self) {
        println!("{} says Meow", self.name);
    }
}

// Storing heterogeneous collections
let animals: Vec<Box<dyn Animal>> = vec![
    Box::new(Dog { name: "Rex".into() }),
    Box::new(Cat { name: "Whiskers".into() }),
];
Extension Traits
Add methods to existing types:
trait StringExt {
    fn words(&self) -> Vec<&str>;
}

impl StringExt for str {
    fn words(&self) -> Vec<&str> {
        self.split_whitespace().collect()
    }
}

// Now available on all &str
let words = "hello world".words();
Error Handling Patterns
Result Type Pattern
Replace exceptions with explicit error handling:
#[derive(Debug)]
enum DataError {
    NotFound,
    ParseError(String),
    IoError(std::io::Error),
}

impl From<std::io::Error> for DataError {
    fn from(err: std::io::Error) -> Self {
        DataError::IoError(err)
    }
}

fn load_data(path: &str) -> Result<Data, DataError> {
    let contents = std::fs::read_to_string(path)?; // ? operator for propagation
    parse_data(&contents).ok_or(DataError::ParseError("Invalid format".into()))
}

// Error handling at call site
match load_data("config.json") {
    Ok(data) => process(data),
    Err(DataError::NotFound) => use_defaults(),
    Err(e) => eprintln!("Error: {:?}", e),
}
Custom Error Types
use std::fmt;

#[derive(Debug)]
struct ValidationError {
    field: String,
    message: String,
}

impl fmt::Display for ValidationError {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "{}: {}", self.field, self.message)
    }
}

impl std::error::Error for ValidationError {}

// Result type alias for cleaner signatures
type ValidationResult<T> = Result<T, ValidationError>;

fn validate_email(email: &str) -> ValidationResult<()> {
    if !email.contains('@') {
        return Err(ValidationError {
            field: "email".into(),
            message: "Invalid email format".into(),
        });
    }
    Ok(())
}
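When a function can fail in several unrelated ways, `Box<dyn Error>` erases the concrete error type so one signature can propagate all of them. A hedged sketch (`ParseFailure` and `parse_port` are illustrative names, not a standard API):

```rust
use std::error::Error;
use std::fmt;

#[derive(Debug)]
struct ParseFailure(String);

impl fmt::Display for ParseFailure {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "parse failure: {}", self.0)
    }
}

impl Error for ParseFailure {}

// One signature covers both the stdlib ParseIntError and our custom error.
fn parse_port(s: &str) -> Result<u16, Box<dyn Error>> {
    let n: u16 = s.trim().parse()?; // ParseIntError converts via From
    if n == 0 {
        return Err(Box::new(ParseFailure("port 0 is reserved".into())));
    }
    Ok(n)
}

fn main() {
    assert_eq!(parse_port("8080").unwrap(), 8080);
    assert!(parse_port("0").is_err());
    assert!(parse_port("not-a-number").is_err());
}
```

Concrete enums like `DataError` above are preferable when callers need to match on the variant; `Box<dyn Error>` suits application code that only reports errors.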
Functional Patterns
Iterator Chains
Transform data without intermediate allocations:
let result: Vec<_> = data
    .iter()
    .filter(|x| x.is_valid())
    .map(|x| x.transform())
    .take(10)
    .collect();

// Lazy evaluation - no work done until collect()
let lazy_iter = (0..)
    .map(|x| x * x)
    .filter(|x| x % 2 == 0)
    .take(5);
Closures and Higher-Order Functions
// Note: assumes max_attempts >= 1 (0 would underflow the range below)
fn retry<F, T, E>(mut f: F, max_attempts: u32) -> Result<T, E>
where
    F: FnMut() -> Result<T, E>,
{
    for _ in 0..max_attempts - 1 {
        if let Ok(result) = f() {
            return Ok(result);
        }
    }
    f() // Last attempt
}

// Usage with closure
let result = retry(|| risky_operation(), 3)?;
Smart Pointer Patterns
Box for Heap Allocation
// Recursive types need Box
enum List<T> {
    Node(T, Box<List<T>>),
    Nil,
}

// Trait objects need Box
let drawable: Box<dyn Draw> = Box::new(Circle::new());
Rc for Shared Ownership (Single-threaded)
use std::rc::Rc;

let data = Rc::new(vec![1, 2, 3]);
let data2 = Rc::clone(&data);

println!("Reference count: {}", Rc::strong_count(&data));
State Machine Pattern
Model state transitions at compile time:
struct Draft;
struct PendingReview;
struct Published;

struct Post<State> {
    content: String,
    state: State,
}

impl Post<Draft> {
    fn new() -> Self {
        Post {
            content: String::new(),
            state: Draft,
        }
    }

    fn submit(self) -> Post<PendingReview> {
        Post {
            content: self.content,
            state: PendingReview,
        }
    }
}

impl Post<PendingReview> {
    fn approve(self) -> Post<Published> {
        Post {
            content: self.content,
            state: Published,
        }
    }

    fn reject(self) -> Post<Draft> {
        Post {
            content: self.content,
            state: Draft,
        }
    }
}

impl Post<Published> {
    fn content(&self) -> &str {
        &self.content
    }
}

// Usage enforces correct state transitions at compile time
let post = Post::new()
    .submit()
    .approve();
println!("{}", post.content());
RAII and Drop Pattern
Automatic resource management:
use std::path::PathBuf;

struct TempFile {
    path: PathBuf,
}

impl TempFile {
    fn new(content: &str) -> std::io::Result<Self> {
        // uuid is an external crate; any unique-name scheme works here
        let path = std::env::temp_dir().join(format!("temp_{}.txt", uuid::Uuid::new_v4()));
        std::fs::write(&path, content)?;
        Ok(TempFile { path })
    }
}

impl Drop for TempFile {
    fn drop(&mut self) {
        let _ = std::fs::remove_file(&self.path); // Clean up automatically
    }
}

// File automatically deleted when temp_file goes out of scope
{
    let temp_file = TempFile::new("temporary data")?;
    // Use temp_file
} // Deleted here
Performance Patterns
Zero-Copy Operations
// Borrowing instead of cloning
fn process(data: &[u8]) {
    // Work with borrowed data
}

// String slicing without allocation
let s = "hello world";
let hello = &s[0..5]; // No allocation

// Using Cow for conditional cloning
use std::borrow::Cow;

fn normalize<'a>(input: &'a str) -> Cow<'a, str> {
    if input.contains('\n') {
        Cow::Owned(input.replace('\n', " "))
    } else {
        Cow::Borrowed(input) // No allocation if unchanged
    }
}
Memory Layout Control
#[repr(C)] // C-compatible layout
struct NetworkPacket {
    header: [u8; 4],
    length: u32,
    payload: [u8; 1024],
}

#[repr(C, packed)] // Remove padding
struct CompactData {
    a: u8,
    b: u32,
    c: u8,
}
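The effect of padding can be checked directly with `std::mem::size_of`. A small sketch comparing the two layouts (field types mirror `CompactData` above):

```rust
use std::mem::size_of;

#[repr(C)]
struct Padded {
    a: u8,  // offset 0, then 3 bytes padding
    b: u32, // offset 4 (aligned to 4)
    c: u8,  // offset 8, then 3 bytes tail padding
}

#[repr(C, packed)]
struct Packed {
    a: u8,
    b: u32,
    c: u8,
}

fn main() {
    // C layout aligns `b` to 4 bytes: 1 + 3 + 4 + 1 + 3 = 12.
    assert_eq!(size_of::<Padded>(), 12);
    // Packed removes all padding: 1 + 4 + 1 = 6.
    assert_eq!(size_of::<Packed>(), 6);
}
```

The trade-off: packed fields may be misaligned, so taking references to them is restricted; packed layouts belong in wire formats, not general-purpose types.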
Best Practices
- Prefer borrowing over owning when possible
- Use iterators instead of indexing loops
- Make invalid states unrepresentable using the type system
- Fail fast with Result instead of panicking
- Document ownership in complex APIs
- Use clippy to catch unidiomatic patterns
- Prefer composition over inheritance-like patterns
- Be explicit about type conversions and error handling
Summary
Rust patterns emphasize:
- Ownership for automatic memory management
- Option/Result for explicit error handling
- Traits for polymorphism without inheritance
- Zero-cost abstractions for performance
- Type safety to catch errors at compile time
These patterns work together to create systems that are both safe and fast, catching entire categories of bugs at compile time while maintaining C++ level performance.