Trace Manipulation

A deep exploration of Fugue's runtime system and trace manipulation capabilities. This tutorial demonstrates how traces enable sophisticated probabilistic programming techniques including replay, scoring, custom inference, and debugging. Learn how Fugue's execution history recording makes advanced inference algorithms possible while maintaining full type safety.

Learning Objectives

By the end of this tutorial, you will understand:

  • Trace System Architecture: How execution history is recorded and structured
  • Runtime Interpreters: Different ways to execute the same probabilistic model
  • Replay Mechanics: How traces enable MCMC and other inference algorithms
  • Custom Handlers: Building specialized execution strategies for specific needs
  • Memory Optimization: Production-ready techniques for high-throughput scenarios
  • Diagnostic Tools: Convergence assessment and debugging problematic models

The Execution History Problem

Traditional programming languages execute once and discard their execution history. In probabilistic programming, we need to record, manipulate, and reason about random choices to enable sophisticated inference algorithms. Fugue's trace system solves this fundamental challenge.

graph TD
    A["Model Specification"] --> B["Handler Selection"]
    B --> C["PriorHandler<br/>Forward Sampling"]
    B --> D["ReplayHandler<br/>MCMC Proposals"]  
    B --> E["ScoreGivenTrace<br/>Importance Sampling"]
    B --> F["Custom Handler<br/>Specialized Logic"]
    
    C --> G["Execution Trace"]
    D --> G
    E --> G
    F --> G
    
    G --> H["Choice Records"]
    G --> I["Log-Weight Components"]
    G --> J["Type-Safe Values"]
    
    H --> K["Replay"]
    H --> L["Scoring"]
    H --> M["Conditioning"]
    H --> N["Debugging"]
    
    style G fill:#ccffcc
    style K fill:#e1f5fe
    style L fill:#e1f5fe
    style M fill:#e1f5fe
    style N fill:#e1f5fe

Mathematical Foundation

Trace Formalization

A trace records the complete execution history of a probabilistic model, formally represented as:

$$\mathcal{T} = (\sigma, \ell_{\text{prior}}, \ell_{\text{lik}}, \ell_{\text{fact}})$$

Where:

  • $\sigma$: Map from addresses to choices
  • $\ell_{\text{prior}}$: Accumulated prior log-probability
  • $\ell_{\text{lik}}$: Accumulated observation log-probability
  • $\ell_{\text{fact}}$: Accumulated factor weights

Total Log-Weight

The total unnormalized log-probability is:

$$\log w(\mathcal{T}) = \ell_{\text{prior}} + \ell_{\text{lik}} + \ell_{\text{fact}}$$

This decomposition enables sophisticated inference algorithms to reason about different sources of probability mass.

Trace Properties

Consistency: For a valid execution, $\log w(\mathcal{T})$ represents the unnormalized log-probability of that specific execution path.

Replayability: Given trace $\mathcal{T}$, the model can be deterministically re-executed to produce the same result and weight.

Compositionality: Traces can be modified, combined, and analyzed to implement complex inference strategies.

Basic Trace Inspection

Let's start by understanding how Fugue records execution history:

use fugue::*;
use fugue::runtime::interpreters::PriorHandler;
use rand::{SeedableRng, rngs::StdRng};
// Demonstrate basic trace inspection and manipulation
fn basic_trace_inspection() {
    println!("=== Basic Trace Inspection ===\n");

    // Define a model with multiple types of choices
    let model = prob!(
        let coin <- sample(addr!("coin"), Bernoulli::new(0.7).unwrap());
        let count <- sample(addr!("count"), Poisson::new(3.0).unwrap());
        let category <- sample(addr!("category"), Categorical::uniform(3).unwrap());
        let measurement <- sample(addr!("measurement"), Normal::new(0.0, 1.0).unwrap());

        // Observation adds to likelihood
        let _obs <- observe(addr!("obs"), Normal::new(measurement, 0.1).unwrap(), 0.5);

        pure((coin, count, category, measurement))
    );

    // Execute and inspect the trace
    let mut rng = StdRng::seed_from_u64(12345);
    let (result, trace) = runtime::handler::run(
        PriorHandler {
            rng: &mut rng,
            trace: Trace::default(),
        },
        model,
    );

    println!("🔍 Trace Inspection:");
    println!("   - Total choices: {}", trace.choices.len());
    println!("   - Prior log-weight: {:.4}", trace.log_prior);
    println!("   - Likelihood log-weight: {:.4}", trace.log_likelihood);
    println!("   - Factor log-weight: {:.4}", trace.log_factors);
    println!("   - Total log-weight: {:.4}", trace.total_log_weight());
    println!();

    println!("📊 Individual Choices:");
    for (addr, choice) in &trace.choices {
        println!(
            "   - {}: {:?} (logp: {:.4})",
            addr, choice.value, choice.logp
        );
    }
    println!();

    println!("🎯 Type-Safe Value Access:");
    println!("   - Coin (bool): {:?}", trace.get_bool(&addr!("coin")));
    println!("   - Count (u64): {:?}", trace.get_u64(&addr!("count")));
    println!(
        "   - Category (usize): {:?}",
        trace.get_usize(&addr!("category"))
    );
    println!(
        "   - Measurement (f64): {:?}",
        trace.get_f64(&addr!("measurement"))
    );

    let (coin, count, category, measurement) = result;
    println!(
        "   - Result: coin={}, count={}, category={}, measurement={:.3}",
        coin, count, category, measurement
    );
    println!();
}

Trace Structure Analysis

Every trace contains three critical components:

  1. Choices Map: Records every random decision with its address, value, and log-probability
  2. Weight Decomposition: Separates prior, likelihood, and factor contributions
  3. Type-Safe Values: Maintains natural types throughout execution
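
Conceptually, a trace bundles exactly these pieces. The sketch below is inferred from the fields used throughout this tutorial (the map type is an assumption); see fugue::runtime::trace for the authoritative definition:

use fugue::*;
use std::collections::BTreeMap; // assumption: the real map type may differ

// A conceptual sketch of the trace structure, inferred from the fields used
// in this tutorial. It shadows the real fugue Trace; see fugue::runtime::trace
// for the authoritative definition.
pub struct Trace {
    pub choices: BTreeMap<Address, Choice>, // every random choice, keyed by address
    pub log_prior: f64,                     // accumulated log-prob of sampled values
    pub log_likelihood: f64,                // accumulated log-prob of observations
    pub log_factors: f64,                   // accumulated explicit factor() weights
}

impl Trace {
    pub fn total_log_weight(&self) -> f64 {
        self.log_prior + self.log_likelihood + self.log_factors
    }
}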

Debugging with Traces

The trace decomposition immediately shows you:

  • Prior weight: How likely your parameter values are under priors
  • Likelihood weight: How well your model fits the observed data
  • Factor weight: Contribution from explicit factor() statements
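
A minimal sketch of turning that decomposition into a quick health check (the helper name is ours, not a Fugue API):

use fugue::*;

// Hypothetical helper (not a Fugue API): report which weight component
// is non-finite so you know where to look first.
fn diagnose_trace(trace: &Trace) {
    for (name, w) in [
        ("prior", trace.log_prior),
        ("likelihood", trace.log_likelihood),
        ("factors", trace.log_factors),
    ] {
        if !w.is_finite() {
            eprintln!("⚠️ non-finite log-{} weight: {}", name, w);
        }
    }
}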

Replay Mechanics

The replay system is the foundation of MCMC algorithms. It allows deterministic re-execution with modified random choices:

use fugue::*;
use fugue::runtime::interpreters::*;
use fugue::runtime::trace::*;
use rand::{SeedableRng, rngs::StdRng};
// Demonstrate trace replay mechanics for MCMC
fn replay_mechanics() {
    println!("=== Trace Replay Mechanics ===\n");

    // Define a simple model
    let make_model = || {
        prob!(
            let mu <- sample(addr!("mu"), Normal::new(0.0, 2.0).unwrap());
            let _obs <- observe(addr!("y"), Normal::new(mu, 1.0).unwrap(), 1.5);
            pure(mu)
        )
    };

    // 1. Generate initial trace
    let mut rng = StdRng::seed_from_u64(42);
    let (mu1, trace1) = runtime::handler::run(
        PriorHandler {
            rng: &mut rng,
            trace: Trace::default(),
        },
        make_model(),
    );

    println!("🎲 Original Execution:");
    println!("   - mu = {:.3}", mu1);
    println!("   - Prior logp: {:.3}", trace1.log_prior);
    println!("   - Likelihood logp: {:.3}", trace1.log_likelihood);
    println!("   - Total logp: {:.3}", trace1.total_log_weight());
    println!();

    // 2. Replay with exact same trace
    let mut rng2 = StdRng::seed_from_u64(42); // New RNG for replay
    let (mu2, trace2) = runtime::handler::run(
        ReplayHandler {
            rng: &mut rng2,
            base: trace1.clone(),
            trace: Trace::default(),
        },
        make_model(),
    );

    println!("🔄 Exact Replay:");
    println!("   - mu = {:.3} (should match original)", mu2);
    println!("   - Values match: {}", mu1 == mu2);
    println!(
        "   - Traces match: {}",
        trace1.total_log_weight() == trace2.total_log_weight()
    );
    println!();

    // 3. Modify trace for proposal
    let mut modified_trace = trace1.clone();
    // Modify the mu value (MCMC proposal)
    if let Some(choice) = modified_trace.choices.get_mut(&addr!("mu")) {
        let old_value = choice.value.as_f64().unwrap();
        let new_value = old_value + 0.1; // Small proposal step
        choice.value = ChoiceValue::F64(new_value);

        // Recompute log-probability under the distribution
        let normal_dist = Normal::new(0.0, 2.0).unwrap();
        choice.logp = normal_dist.log_prob(&new_value);

        println!("🔧 Modified Trace (Proposal):");
        println!("   - Old mu: {:.3}", old_value);
        println!("   - New mu: {:.3}", new_value);
        println!(
            "   - Old logp: {:.3}",
            trace1
                .get_f64(&addr!("mu"))
                .map(|v| Normal::new(0.0, 2.0).unwrap().log_prob(&v))
                .unwrap_or(0.0)
        );
        println!("   - New logp: {:.3}", choice.logp);
    }

    // 4. Re-run the model against the modified trace (the proposal)
    let mut rng3 = StdRng::seed_from_u64(42); // New RNG for proposal
    let (mu3, trace3) = runtime::handler::run(
        ReplayHandler {
            rng: &mut rng3,
            base: modified_trace,
            trace: Trace::default(),
        },
        make_model(),
    );

    println!("   - Proposal result: mu = {:.3}", mu3);
    println!("   - Proposal total logp: {:.3}", trace3.total_log_weight());
    println!(
        "   - Accept/Reject ratio: {:.3}",
        (trace3.total_log_weight() - trace1.total_log_weight()).exp()
    );
    println!();
}

MCMC Proposal Mechanism

sequenceDiagram
    participant M as Model
    participant T1 as Current Trace
    participant T2 as Proposal Trace
    participant A as Accept/Reject
    
    M->>T1: Execute with PriorHandler
    Note over T1: Record current state
    
    T1->>T2: Modify choice values
    Note over T2: Create proposal
    
    M->>T2: Execute with ReplayHandler  
    Note over T2: Score proposal
    
    T2->>A: Compare log-weights
    Note over A: Accept if log(u) < Δlog(w)
    
    A->>T1: Keep current (if rejected)
    A->>T2: Accept proposal (if accepted)

The key insight: the same model specification can be executed with different random choices by manipulating the trace and using replay.
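
In code, the Metropolis test reduces to comparing total log-weights. A minimal sketch (assuming `current` and `proposal` are the two traces):

use fugue::*;
use rand::Rng;

// Accept the proposal when log(u) < Δlog(w), u ~ Uniform(0, 1).
fn metropolis_accept(current: &Trace, proposal: &Trace, rng: &mut impl Rng) -> bool {
    let log_alpha = proposal.total_log_weight() - current.total_log_weight();
    log_alpha > 0.0 || rng.gen::<f64>().ln() < log_alpha
}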

Custom Handlers

Handlers define how probabilistic effects are interpreted. You can create custom handlers for specialized inference algorithms:

use fugue::*;
use fugue::runtime::{handler::Handler, trace::*};
use rand::{SeedableRng, rngs::StdRng};
// Demonstrate custom handler implementation
struct DebugHandler<R: rand::Rng> {
    rng: R,
    trace: Trace,
    debug_info: Vec<String>,
}

impl<R: rand::Rng> DebugHandler<R> {
    fn new(rng: R) -> Self {
        Self {
            rng,
            trace: Trace::default(),
            debug_info: Vec::new(),
        }
    }
}

impl<R: rand::Rng> Handler for DebugHandler<R> {
    fn on_sample_f64(&mut self, addr: &Address, dist: &dyn Distribution<f64>) -> f64 {
        let value = dist.sample(&mut self.rng);
        let logp = dist.log_prob(&value);

        self.debug_info.push(format!(
            "SAMPLE f64 at {}: {} (logp: {:.3})",
            addr, value, logp
        ));

        self.trace.choices.insert(
            addr.clone(),
            Choice {
                addr: addr.clone(),
                value: ChoiceValue::F64(value),
                logp,
            },
        );
        self.trace.log_prior += logp;

        value
    }

    fn on_sample_bool(&mut self, addr: &Address, dist: &dyn Distribution<bool>) -> bool {
        let value = dist.sample(&mut self.rng);
        let logp = dist.log_prob(&value);

        self.debug_info.push(format!(
            "SAMPLE bool at {}: {} (logp: {:.3})",
            addr, value, logp
        ));

        self.trace.choices.insert(
            addr.clone(),
            Choice {
                addr: addr.clone(),
                value: ChoiceValue::Bool(value),
                logp,
            },
        );
        self.trace.log_prior += logp;

        value
    }

    fn on_sample_u64(&mut self, addr: &Address, dist: &dyn Distribution<u64>) -> u64 {
        let value = dist.sample(&mut self.rng);
        let logp = dist.log_prob(&value);

        self.debug_info.push(format!(
            "SAMPLE u64 at {}: {} (logp: {:.3})",
            addr, value, logp
        ));

        self.trace.choices.insert(
            addr.clone(),
            Choice {
                addr: addr.clone(),
                value: ChoiceValue::U64(value),
                logp,
            },
        );
        self.trace.log_prior += logp;

        value
    }

    fn on_sample_usize(&mut self, addr: &Address, dist: &dyn Distribution<usize>) -> usize {
        let value = dist.sample(&mut self.rng);
        let logp = dist.log_prob(&value);

        self.debug_info.push(format!(
            "SAMPLE usize at {}: {} (logp: {:.3})",
            addr, value, logp
        ));

        self.trace.choices.insert(
            addr.clone(),
            Choice {
                addr: addr.clone(),
                value: ChoiceValue::Usize(value),
                logp,
            },
        );
        self.trace.log_prior += logp;

        value
    }

    fn on_observe_f64(&mut self, addr: &Address, dist: &dyn Distribution<f64>, value: f64) {
        let logp = dist.log_prob(&value);

        self.debug_info.push(format!(
            "OBSERVE f64 at {}: {} (logp: {:.3})",
            addr, value, logp
        ));

        self.trace.log_likelihood += logp;
    }

    fn on_observe_bool(&mut self, addr: &Address, dist: &dyn Distribution<bool>, value: bool) {
        let logp = dist.log_prob(&value);

        self.debug_info.push(format!(
            "OBSERVE bool at {}: {} (logp: {:.3})",
            addr, value, logp
        ));

        self.trace.log_likelihood += logp;
    }

    fn on_observe_u64(&mut self, addr: &Address, dist: &dyn Distribution<u64>, value: u64) {
        let logp = dist.log_prob(&value);

        self.debug_info.push(format!(
            "OBSERVE u64 at {}: {} (logp: {:.3})",
            addr, value, logp
        ));

        self.trace.log_likelihood += logp;
    }

    fn on_observe_usize(&mut self, addr: &Address, dist: &dyn Distribution<usize>, value: usize) {
        let logp = dist.log_prob(&value);

        self.debug_info.push(format!(
            "OBSERVE usize at {}: {} (logp: {:.3})",
            addr, value, logp
        ));

        self.trace.log_likelihood += logp;
    }

    fn on_factor(&mut self, logw: f64) {
        self.debug_info.push(format!("FACTOR: {:.3}", logw));
        self.trace.log_factors += logw;
    }

    fn finish(self) -> Trace {
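        // debug_info could be flushed to a logger here before returning.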
        self.trace
    }
}

fn custom_handler_demo() {
    println!("=== Custom Handler Demo ===\n");

    let model = prob!(
        let prior_mu <- sample(addr!("mu"), Normal::new(0.0, 1.0).unwrap());
        let success <- sample(addr!("success"), Bernoulli::new(0.6).unwrap());

        let _obs1 <- observe(addr!("data1"), Normal::new(prior_mu, 0.5).unwrap(), 1.2);
        let _obs2 <- observe(addr!("data2"), Bernoulli::new(if success { 0.8 } else { 0.2 }).unwrap(), true);

        // Add a factor for soft constraints
        let _factor_result <- factor(if prior_mu > 0.0 { 0.1 } else { -0.1 });

        pure((prior_mu, success))
    );

    let rng = StdRng::seed_from_u64(67890);
    let debug_handler = DebugHandler::new(rng);

    let (result, final_trace) = runtime::handler::run(debug_handler, model);

    println!("🔍 Debug Handler Output:");
    println!("   - Result: {:?}", result);
    println!(
        "   - Total log-weight: {:.4}",
        final_trace.total_log_weight()
    );
    println!();

    println!("📝 Execution Log:");
    println!("   - {} operations recorded", final_trace.choices.len());

    println!();
}

Handler Architecture

graph TD
    A["Model Effects"] --> B["Handler Dispatch"]
    B --> C["on_sample_f64()"]
    B --> D["on_sample_bool()"] 
    B --> E["on_sample_u64()"]
    B --> F["on_sample_usize()"]
    B --> G["on_observe_*()"]
    B --> H["on_factor()"]
    
    C --> I["Custom Logic"]
    D --> I
    E --> I
    F --> I
    G --> I
    H --> I
    
    I --> J["Trace Update"]
    I --> K["Side Effects"]
    I --> L["Logging/Debug"]
    
    style I fill:#ccffcc

Built-in Handler Types

| Handler | Purpose | Use Case |
|---------|---------|----------|
| PriorHandler | Forward sampling | Generate data, initialization |
| ReplayHandler | Deterministic replay | MCMC, validation |
| ScoreGivenTrace | Compute log-probability | Importance sampling |
| SafeReplayHandler | Error-resilient replay | Production MCMC |
| SafeScoreGivenTrace | Safe scoring | Robust inference |

Trace Scoring

Scoring computes the log-probability of a specific execution path, essential for importance sampling and model comparison:

use fugue::*;
use fugue::runtime::interpreters::*;
use fugue::runtime::trace::*;
use rand::{SeedableRng, rngs::StdRng};
// Demonstrate trace scoring for importance sampling
fn trace_scoring_demo() {
    println!("=== Trace Scoring Demo ===\n");

    let make_model = || {
        prob!(
            let theta <- sample(addr!("theta"), Beta::new(2.0, 2.0).unwrap());

            // Multiple observations
            let _obs1 <- observe(addr!("y1"), Bernoulli::new(theta).unwrap(), true);
            let _obs2 <- observe(addr!("y2"), Bernoulli::new(theta).unwrap(), true);
            let _obs3 <- observe(addr!("y3"), Bernoulli::new(theta).unwrap(), false);

            pure(theta)
        )
    };

    // Generate a trace from the prior
    let mut rng = StdRng::seed_from_u64(111);
    let (theta_val, prior_trace) = runtime::handler::run(
        PriorHandler {
            rng: &mut rng,
            trace: Trace::default(),
        },
        make_model(),
    );

    println!("🎲 Prior Sample:");
    println!("   - theta = {:.3}", theta_val);
    println!("   - Prior logp: {:.3}", prior_trace.log_prior);
    println!("   - Likelihood logp: {:.3}", prior_trace.log_likelihood);
    println!("   - Total logp: {:.3}", prior_trace.total_log_weight());
    println!();

    // Now score this trace under the model (should get same result)
    let (theta_scored, scored_trace) = runtime::handler::run(
        ScoreGivenTrace {
            base: prior_trace.clone(),
            trace: Trace::default(),
        },
        make_model(),
    );

    println!("📊 Scoring Same Trace:");
    println!("   - theta = {:.3} (should match)", theta_scored);
    println!("   - Prior logp: {:.3}", scored_trace.log_prior);
    println!("   - Likelihood logp: {:.3}", scored_trace.log_likelihood);
    println!("   - Total logp: {:.3}", scored_trace.total_log_weight());
    println!(
        "   - Weights match: {}",
        (prior_trace.total_log_weight() - scored_trace.total_log_weight()).abs() < 1e-10
    );
    println!();

    // Create a modified trace for importance sampling
    let mut importance_trace = prior_trace.clone();

    // Change theta to a different value
    if let Some(choice) = importance_trace.choices.get_mut(&addr!("theta")) {
        let new_theta = 0.8; // High success probability
        choice.value = ChoiceValue::F64(new_theta);
        choice.logp = Beta::new(2.0, 2.0).unwrap().log_prob(&new_theta);

        println!("🎯 Importance Sample:");
        println!("   - Modified theta to: {:.3}", new_theta);
        println!("   - New prior logp: {:.3}", choice.logp);
    }

    // Score under original model
    let (theta_is, is_trace) = runtime::handler::run(
        ScoreGivenTrace {
            base: importance_trace,
            trace: Trace::default(),
        },
        make_model(),
    );

    println!("   - IS result: theta = {:.3}", theta_is);
    println!("   - IS total logp: {:.3}", is_trace.total_log_weight());
    println!(
        "   - Importance weight: {:.3}",
        is_trace.total_log_weight() - prior_trace.total_log_weight()
    );
    println!();
}

Importance Sampling Theory

Given a proposal trace $\mathcal{T}_i \sim q$ and target model $p$, expectations are estimated by self-normalized importance sampling:

$$\mathbb{E}_{p}[f(\mathcal{T})] \approx \frac{\sum_i w_i \, f(\mathcal{T}_i)}{\sum_i w_i}$$

Where the importance weight is:

$$w_i = \frac{p(\mathcal{T}_i)}{q(\mathcal{T}_i)}, \qquad \log w_i = \log p(\mathcal{T}_i) - \log q(\mathcal{T}_i)$$

Fugue's scoring system automatically computes $\log p(\mathcal{T})$ for any trace under any model.

Numerical Stability

Always work in log-space for importance weights. Direct probability ratios quickly underflow or overflow for realistic models.
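
One standard pattern is the log-sum-exp trick for normalizing a batch of log-weights; a generic sketch (plain Rust, not a Fugue API):

// Normalize log-weights into probabilities without overflow: subtracting
// the maximum keeps every exp() argument at or below zero.
fn normalize_log_weights(log_w: &[f64]) -> Vec<f64> {
    let max = log_w.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let sum: f64 = log_w.iter().map(|lw| (lw - max).exp()).sum();
    log_w.iter().map(|lw| (lw - max).exp() / sum).collect()
}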

Memory Optimization

For production workloads, efficient memory management is crucial:

use fugue::*;
use fugue::runtime::{interpreters::PriorHandler, memory::*};
use rand::{SeedableRng, rngs::StdRng};
// Demonstrate memory-optimized trace handling
fn memory_optimization_demo() {
    println!("=== Memory Optimization Demo ===\n");

    // Simple model for batch processing
    let make_model = |obs_val: f64| {
        prob!(
            let mu <- sample(addr!("mu"), Normal::new(0.0, 1.0).unwrap());
            let sigma <- sample(addr!("sigma"), Gamma::new(2.0, 0.5).unwrap());
            let _obs <- observe(addr!("y"), Normal::new(mu, sigma).unwrap(), obs_val);
            pure((mu, sigma))
        )
    };

    println!("🏭 Batch Processing with Memory Pool:");

    // Simulate batch inference with trace reuse
    let observations = [1.0, 1.2, 0.8, 1.5, 0.9];
    let mut results = Vec::new();

    // Use copy-on-write traces for efficiency
    let base_trace = CowTrace::new();

    for (i, &obs) in observations.iter().enumerate() {
        let mut rng = StdRng::seed_from_u64(200 + i as u64);
        let handler = PriorHandler {
            rng: &mut rng,
            trace: base_trace.to_trace(), // Convert to regular trace
        };

        let (result, trace) = runtime::handler::run(handler, make_model(obs));
        results.push((result, trace));

        println!(
            "   Sample {}: mu={:.3}, sigma={:.3}, obs={:.1}, logp={:.3}",
            i + 1,
            result.0,
            result.1,
            obs,
            results[i].1.total_log_weight()
        );
    }

    println!();
    println!("📊 Batch Statistics:");
    let mu_mean = results.iter().map(|((mu, _), _)| mu).sum::<f64>() / results.len() as f64;
    let sigma_mean =
        results.iter().map(|((_, sigma), _)| sigma).sum::<f64>() / results.len() as f64;
    let logp_mean = results
        .iter()
        .map(|(_, trace)| trace.total_log_weight())
        .sum::<f64>()
        / results.len() as f64;

    println!("   - Average mu: {:.3}", mu_mean);
    println!("   - Average sigma: {:.3}", sigma_mean);
    println!("   - Average log-probability: {:.3}", logp_mean);
    println!();

    println!("🔧 Trace Builder Demo:");

    // Demonstrate efficient trace building
    let _builder = TraceBuilder::new();
    // Note: the TraceBuilder API may not expose a reserve_choices method;
    // this is a conceptual example of memory pre-allocation.

    // Manually construct a trace (rarely needed, but shows internals)
    let demo_trace = Trace {
        choices: [
            (
                addr!("param1"),
                Choice {
                    addr: addr!("param1"),
                    value: ChoiceValue::F64(0.5),
                    logp: -1.4,
                },
            ),
            (
                addr!("param2"),
                Choice {
                    addr: addr!("param2"),
                    value: ChoiceValue::Bool(true),
                    logp: -0.7,
                },
            ),
        ]
        .iter()
        .cloned()
        .collect(),
        log_prior: -2.1,
        log_likelihood: -0.5,
        log_factors: 0.0,
    };

    println!(
        "   - Manual trace: {} choices, total logp: {:.3}",
        demo_trace.choices.len(),
        demo_trace.total_log_weight()
    );
    println!();
}

Production Memory Strategies

  1. Copy-on-Write Traces: Share read-only data, copy only when modified
  2. Trace Pooling: Reuse allocated memory across multiple inferences
  3. Pre-sized Allocation: Reserve space for expected number of choices
  4. Batch Processing: Amortize allocation costs across many executions

Memory Benchmarking

For high-throughput scenarios:

  • Use TracePool for batch processing (see the toy sketch after this list)
  • Pre-size trace builders when choice count is predictable
  • Profile memory allocation patterns in your specific use case
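
The pooling idea itself is simple. A toy, library-agnostic sketch (deliberately not Fugue's TracePool, whose API may differ) of why reuse pays off:

use fugue::*;

// Toy pool (illustrative only): hand out recycled Trace values instead of
// allocating fresh ones for every execution.
struct ToyTracePool {
    free: Vec<Trace>,
}

impl ToyTracePool {
    fn get(&mut self) -> Trace {
        match self.free.pop() {
            Some(mut t) => {
                // Reset contents so the trace can be reused.
                t.choices.clear();
                t.log_prior = 0.0;
                t.log_likelihood = 0.0;
                t.log_factors = 0.0;
                t
            }
            None => Trace::default(),
        }
    }

    fn put(&mut self, trace: Trace) {
        self.free.push(trace); // return the trace for later reuse
    }
}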

Diagnostic Tools

Fugue provides comprehensive tools for analyzing trace quality and convergence:

use fugue::*;
use fugue::runtime::interpreters::PriorHandler;
use fugue::inference::diagnostics::*;
use rand::{SeedableRng, rngs::StdRng};
// Demonstrate diagnostic tools for trace analysis
fn diagnostic_tools_demo() {
    println!("=== Diagnostic Tools Demo ===\n");

    // Generate multiple MCMC-like traces for diagnostics
    let make_model = || {
        prob!(
            let mu <- sample(addr!("mu"), Normal::new(0.0, 2.0).unwrap());
            let precision <- sample(addr!("precision"), Gamma::new(2.0, 1.0).unwrap());

            // Multiple observations
            let _obs1 <- observe(addr!("y1"), Normal::new(mu, 1.0/precision.sqrt()).unwrap(), 1.0);
            let _obs2 <- observe(addr!("y2"), Normal::new(mu, 1.0/precision.sqrt()).unwrap(), 1.2);
            let _obs3 <- observe(addr!("y3"), Normal::new(mu, 1.0/precision.sqrt()).unwrap(), 0.8);

            pure((mu, precision))
        )
    };

    // Simulate two chains
    println!("🔗 Generating MCMC-like traces:");
    let mut chain1 = Vec::new();
    let mut chain2 = Vec::new();

    // Chain 1
    for i in 0..20 {
        let mut rng = StdRng::seed_from_u64(300 + i);
        let (_, trace) = runtime::handler::run(
            PriorHandler {
                rng: &mut rng,
                trace: Trace::default(),
            },
            make_model(),
        );
        chain1.push(trace);
    }

    // Chain 2 (different seed)
    for i in 0..20 {
        let mut rng = StdRng::seed_from_u64(400 + i);
        let (_, trace) = runtime::handler::run(
            PriorHandler {
                rng: &mut rng,
                trace: Trace::default(),
            },
            make_model(),
        );
        chain2.push(trace);
    }

    println!("   - Chain 1: {} samples", chain1.len());
    println!("   - Chain 2: {} samples", chain2.len());
    println!();

    // Extract parameter values
    let mu_values1 = extract_f64_values(&chain1, &addr!("mu"));
    let mu_values2 = extract_f64_values(&chain2, &addr!("mu"));
    let precision_values1 = extract_f64_values(&chain1, &addr!("precision"));
    let _precision_values2 = extract_f64_values(&chain2, &addr!("precision"));

    println!("📈 Parameter Summaries:");
    println!(
        "   - mu chain1: mean={:.3}, min={:.3}, max={:.3}",
        mu_values1.iter().sum::<f64>() / mu_values1.len() as f64,
        mu_values1.iter().fold(f64::INFINITY, |a, &b| a.min(b)),
        mu_values1.iter().fold(f64::NEG_INFINITY, |a, &b| a.max(b))
    );

    println!(
        "   - mu chain2: mean={:.3}, min={:.3}, max={:.3}",
        mu_values2.iter().sum::<f64>() / mu_values2.len() as f64,
        mu_values2.iter().fold(f64::INFINITY, |a, &b| a.min(b)),
        mu_values2.iter().fold(f64::NEG_INFINITY, |a, &b| a.max(b))
    );

    println!(
        "   - precision chain1: mean={:.3}, min={:.3}, max={:.3}",
        precision_values1.iter().sum::<f64>() / precision_values1.len() as f64,
        precision_values1
            .iter()
            .fold(f64::INFINITY, |a, &b| a.min(b)),
        precision_values1
            .iter()
            .fold(f64::NEG_INFINITY, |a, &b| a.max(b))
    );

    // Compute R-hat (simplified version)
    let chains_mu = vec![chain1.clone(), chain2.clone()];
    let r_hat_mu = r_hat_f64(&chains_mu, &addr!("mu"));
    let r_hat_precision = r_hat_f64(&chains_mu, &addr!("precision"));

    println!();
    println!("🎯 Convergence Diagnostics:");
    println!("   - R-hat for mu: {:.4} (< 1.1 is good)", r_hat_mu);
    println!(
        "   - R-hat for precision: {:.4} (< 1.1 is good)",
        r_hat_precision
    );

    // Parameter summary
    let mu_summary = summarize_f64_parameter(&chains_mu, &addr!("mu"));
    let q5 = mu_summary.quantiles.get("2.5%").unwrap_or(&f64::NAN);
    let q95 = mu_summary.quantiles.get("97.5%").unwrap_or(&f64::NAN);
    println!(
        "   - mu summary: mean={:.3}, std={:.3}, q2.5={:.3}, q97.5={:.3}",
        mu_summary.mean, mu_summary.std, q5, q95
    );

    println!();
    println!("📊 Trace Quality Assessment:");
    let total_logp_chain1: f64 = chain1.iter().map(|t| t.total_log_weight()).sum();
    let total_logp_chain2: f64 = chain2.iter().map(|t| t.total_log_weight()).sum();
    let avg_logp1 = total_logp_chain1 / chain1.len() as f64;
    let avg_logp2 = total_logp_chain2 / chain2.len() as f64;

    println!("   - Chain 1 avg log-probability: {:.3}", avg_logp1);
    println!("   - Chain 2 avg log-probability: {:.3}", avg_logp2);
    println!(
        "   - Chains similar quality: {}",
        (avg_logp1 - avg_logp2).abs() < 0.5
    );
    println!();
}

Convergence Assessment

R-hat Statistic: Compares between-chain variance to within-chain variance:

$$\hat{R} = \sqrt{\frac{\widehat{\operatorname{Var}}^{+}(\theta)}{W}}$$

Where:

  • $\widehat{\operatorname{Var}}^{+}(\theta)$: Estimated marginal posterior variance
  • $W$: Within-chain variance

Interpretation:

  • $\hat{R} < 1.1$: Good convergence
  • $\hat{R} \geq 1.1$: Chains haven't mixed well, need more samples
  • $\hat{R} \gg 1.1$: Poor convergence, investigate model or algorithm
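
To make the formula concrete, here is a plain-Rust sketch of the classic (non-split) computation; prefer Fugue's r_hat_f64 in practice:

// Plain-Rust sketch of (non-split) R-hat for m chains of length n.
fn r_hat(chains: &[Vec<f64>]) -> f64 {
    let m = chains.len() as f64;
    let n = chains[0].len() as f64;
    let chain_means: Vec<f64> = chains.iter().map(|c| c.iter().sum::<f64>() / n).collect();
    let grand_mean = chain_means.iter().sum::<f64>() / m;
    // Between-chain variance B
    let b = n / (m - 1.0)
        * chain_means.iter().map(|cm| (cm - grand_mean).powi(2)).sum::<f64>();
    // Within-chain variance W: average of per-chain sample variances
    let w = chains
        .iter()
        .zip(&chain_means)
        .map(|(c, cm)| c.iter().map(|x| (x - cm).powi(2)).sum::<f64>() / (n - 1.0))
        .sum::<f64>()
        / m;
    // Estimated marginal posterior variance
    let var_plus = (n - 1.0) / n * w + b / n;
    (var_plus / w).sqrt()
}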

Parameter Summaries

For each parameter, compute:

  • Mean and Standard Deviation: Central tendency and spread
  • Quantiles: e.g., 2.5%, 25%, 50%, 75%, 97.5% for uncertainty intervals
  • Effective Sample Size: Accounting for autocorrelation
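
Effective sample size discounts the nominal sample count by autocorrelation; a simplified sketch that truncates at the first non-positive autocorrelation:

// Sketch: ESS = n / (1 + 2 * sum of positive lag autocorrelations).
fn effective_sample_size(x: &[f64]) -> f64 {
    let n = x.len() as f64;
    let mean = x.iter().sum::<f64>() / n;
    let var = x.iter().map(|v| (v - mean).powi(2)).sum::<f64>() / n;
    let mut rho_sum = 0.0;
    for lag in 1..x.len() {
        let acov: f64 = x[..x.len() - lag]
            .iter()
            .zip(&x[lag..])
            .map(|(a, b)| (a - mean) * (b - mean))
            .sum::<f64>()
            / n;
        let rho = acov / var;
        if rho <= 0.0 {
            break; // truncate once autocorrelation turns non-positive
        }
        rho_sum += rho;
    }
    n / (1.0 + 2.0 * rho_sum)
}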

Advanced Debugging

When models behave unexpectedly, trace analysis reveals the root causes:

use fugue::*;
use fugue::runtime::interpreters::*;
use fugue::runtime::trace::*;
use rand::{SeedableRng, rngs::StdRng};
// Demonstrate advanced debugging techniques
fn advanced_debugging_demo() {
    println!("=== Advanced Debugging Techniques ===\n");

    // Model with potential numerical issues: a tiny `scale` can yield an
    // extreme `precision`, and the tight observation (sd = 0.01 at y = 10.0)
    // may conflict sharply with the prior. The model is rebuilt inside the
    // loop below because each run consumes it.

    println!("🚨 Debugging Problematic Model:");

    // Try multiple executions to find issues
    for attempt in 1..=5 {
        let mut rng = StdRng::seed_from_u64(500 + attempt);
        let problematic_model = prob!(
            let scale <- sample(addr!("scale"), Exponential::new(1.0).unwrap());

            // This could cause numerical issues if scale is very small
            let precision <- sample(addr!("precision"), Gamma::new(1.0, scale).unwrap());

            let mu <- sample(addr!("mu"), Normal::new(0.0, 1.0 / precision.sqrt()).unwrap());

            // Observation that might conflict
            let _obs <- observe(addr!("y"), Normal::new(mu, 0.01).unwrap(), 10.0);

            pure((scale, precision, mu))
        );

        let (result, trace) = runtime::handler::run(
            PriorHandler {
                rng: &mut rng,
                trace: Trace::default(),
            },
            problematic_model,
        );

        let (scale, precision, mu) = result;
        let total_logp = trace.total_log_weight();

        println!(
            "   Attempt {}: scale={:.6}, precision={:.6}, mu={:.3}, logp={:.3}",
            attempt, scale, precision, mu, total_logp
        );

        // Check for numerical issues
        if !total_logp.is_finite() {
            println!("     ⚠️ Non-finite log-probability detected!");
        }

        if precision < 1e-6 {
            println!("     ⚠️ Very small precision: {:.8}", precision);
        }

        if mu.abs() > 5.0 {
            println!("     ⚠️ Extreme mu value: {:.3}", mu);
        }

        // Examine individual components
        println!(
            "     Components: prior={:.3}, likelihood={:.3}, factors={:.3}",
            trace.log_prior, trace.log_likelihood, trace.log_factors
        );
    }

    println!();
    println!("🔍 Trace Validation:");

    // Create a trace with known good values for validation
    let validation_trace = Trace {
        choices: [
            (
                addr!("scale"),
                Choice {
                    addr: addr!("scale"),
                    value: ChoiceValue::F64(1.0),
                    logp: Exponential::new(1.0).unwrap().log_prob(&1.0),
                },
            ),
            (
                addr!("precision"),
                Choice {
                    addr: addr!("precision"),
                    value: ChoiceValue::F64(2.0),
                    logp: Gamma::new(1.0, 1.0).unwrap().log_prob(&2.0),
                },
            ),
            (
                addr!("mu"),
                Choice {
                    addr: addr!("mu"),
                    value: ChoiceValue::F64(0.5),
                    logp: Normal::new(0.0, 1.0 / (2.0_f64).sqrt())
                        .unwrap()
                        .log_prob(&0.5),
                },
            ),
        ]
        .iter()
        .cloned()
        .collect(),
        log_prior: 0.0,
        log_likelihood: 0.0,
        log_factors: 0.0,
    };

    // Score this validation trace
    let validation_model = prob!(
        let scale <- sample(addr!("scale"), Exponential::new(1.0).unwrap());
        let precision <- sample(addr!("precision"), Gamma::new(1.0, scale).unwrap());
        let mu <- sample(addr!("mu"), Normal::new(0.0, 1.0 / precision.sqrt()).unwrap());
        let _obs <- observe(addr!("y"), Normal::new(mu, 0.01).unwrap(), 10.0);
        pure((scale, precision, mu))
    );

    let (val_result, val_trace) = runtime::handler::run(
        ScoreGivenTrace {
            base: validation_trace,
            trace: Trace::default(),
        },
        validation_model,
    );

    println!("   - Validation result: {:?}", val_result);
    println!("   - Validation logp: {:.3}", val_trace.total_log_weight());
    println!(
        "   - Validation finite: {}",
        val_trace.total_log_weight().is_finite()
    );
    println!();
}

Debugging Strategy

graph TD
    A["Model Issues"] --> B["Check Log-Weights"]
    B --> C["Infinite/NaN Values?"]
    B --> D["Extreme Parameter Values?"]
    B --> E["Prior-Likelihood Conflict?"]
    
    C --> F["Examine Individual Choices"]
    D --> G["Check Parameter Ranges"]
    E --> H["Validate Observations"]
    
    F --> I["Fix Distribution Parameters"]
    G --> J["Add Constraints/Priors"] 
    H --> K["Verify Data Consistency"]
    
    I --> L["Test with Validation Trace"]
    J --> L
    K --> L
    
    style C fill:#ffcccc
    style D fill:#ffcccc
    style E fill:#ffcccc

Common Issues and Solutions

| Problem | Symptom | Solution |
|---------|---------|----------|
| Numerical overflow | Inf log-weights | Use log-space throughout |
| Parameter explosion | Extreme values | Add regularizing priors |
| Prior-data conflict | Very negative likelihood | Check data preprocessing |
| Precision issues | Unstable gradients | Use higher precision types |

Production Debugging

Always validate your models with:

  1. Known-good traces with reasonable parameter values
  2. Synthetic data where you know the true parameters
  3. Multiple random seeds to check consistency
  4. Finite-value assertions in your handlers
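
For the last point, a minimal guard (the helper name is ours, not a Fugue API):

use fugue::*;

// Hypothetical production guard (not a Fugue API): fail fast on bad weights.
fn assert_finite(trace: &Trace) {
    assert!(trace.log_prior.is_finite(), "non-finite prior log-weight");
    assert!(trace.log_likelihood.is_finite(), "non-finite likelihood log-weight");
    assert!(trace.log_factors.is_finite(), "non-finite factor log-weight");
}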

Real-World Applications

Custom MCMC Algorithm

use fugue::*;
use fugue::runtime::{interpreters::*, trace::*};

struct CustomMCMC<R: rand::Rng> {
    rng: R,
    current_trace: Trace,
    step_size: f64,
}

impl<R: rand::Rng> CustomMCMC<R> {
    fn step<F>(&mut self, model_fn: F) -> bool 
    where F: Fn() -> Model<f64>
    {
        // Create proposal by modifying current trace
        let mut proposal_trace = self.current_trace.clone();
        
        // Modify a random choice (simplified)
        if let Some((_addr, choice)) = proposal_trace.choices.iter_mut().next() {
            if let Some(current_val) = choice.value.as_f64() {
                let proposal_val = current_val + self.step_size * 
                    Normal::new(0.0, 1.0).unwrap().sample(&mut self.rng);
                choice.value = ChoiceValue::F64(proposal_val);
            }
        }
        
        // Score proposal
        let (_, scored_trace) = runtime::handler::run(
            ScoreGivenTrace {
                base: proposal_trace,
                trace: Trace::default(),
            },
            model_fn(),
        );
        
        // Accept/reject based on Metropolis criterion
        let log_alpha = scored_trace.total_log_weight() - 
                       self.current_trace.total_log_weight();
        
        if log_alpha > 0.0 || 
           self.rng.gen::<f64>().ln() < log_alpha {
            self.current_trace = scored_trace;
            true // Accepted
        } else {
            false // Rejected  
        }
    }
}

Production Inference Pipeline

use fugue::*;
use fugue::runtime::{interpreters::PriorHandler, memory::TracePool};

struct InferencePipeline {
    pool: TracePool,
    diagnostics: Vec<f64>,
}

impl InferencePipeline {
    fn run_batch<F>(&mut self, 
                   model_fn: F, 
                   n_samples: usize) -> Vec<(f64, Trace)>
    where F: Fn() -> Model<f64> + Copy
    {
        let mut results = Vec::with_capacity(n_samples);
        
        for _ in 0..n_samples {
            // Get pooled trace to avoid allocation
            let pooled_trace = self.pool.get_trace();
            
            let mut rng = rand::thread_rng();
            let handler = PriorHandler { 
                rng: &mut rng, 
                trace: pooled_trace 
            };
            
            let (result, trace) = runtime::handler::run(handler, model_fn());
            
            // Record diagnostics
            self.diagnostics.push(trace.total_log_weight());
            
            results.push((result, trace));
        }
        
        results
    }
    
    fn convergence_summary(&self) -> (f64, f64) {
        let mean = self.diagnostics.iter().sum::<f64>() / self.diagnostics.len() as f64;
        let var = self.diagnostics.iter()
            .map(|x| (x - mean).powi(2))
            .sum::<f64>() / (self.diagnostics.len() - 1) as f64;
        (mean, var.sqrt())
    }
}

Best Practices

Handler Development

Custom Handler Guidelines

  1. Type Safety: Always match handler methods to distribution return types
  2. Error Handling: Use Result types for production handlers
  3. State Management: Keep handler state minimal and well-documented
  4. Performance: Pre-allocate collections when possible
  5. Testing: Validate against known-good traces
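
For the testing guideline, one cheap check mirrors the scoring demo above: re-scoring a prior trace must reproduce its total log-weight. A sketch:

use fugue::*;
use fugue::runtime::interpreters::*;
use rand::{SeedableRng, rngs::StdRng};

#[test]
fn prior_and_score_agree() {
    let make_model = || {
        prob!(
            let mu <- sample(addr!("mu"), Normal::new(0.0, 1.0).unwrap());
            let _obs <- observe(addr!("y"), Normal::new(mu, 1.0).unwrap(), 0.3);
            pure(mu)
        )
    };

    // Sample a trace from the prior...
    let mut rng = StdRng::seed_from_u64(7);
    let (_, prior_trace) = runtime::handler::run(
        PriorHandler { rng: &mut rng, trace: Trace::default() },
        make_model(),
    );

    // ...then re-score the same trace; total log-weights must match.
    let (_, scored) = runtime::handler::run(
        ScoreGivenTrace { base: prior_trace.clone(), trace: Trace::default() },
        make_model(),
    );
    assert!((prior_trace.total_log_weight() - scored.total_log_weight()).abs() < 1e-10);
}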

Memory Management

Production Optimization

  1. Profile First: Measure actual memory usage patterns
  2. Pool Strategically: Use TracePool for repeated operations
  3. Size Appropriately: Pre-size traces when choice count is predictable
  4. Monitor Growth: Watch for memory leaks in long-running processes

Debugging Workflow

Systematic Debugging

  1. Check Basics: Verify all log-weights are finite
  2. Isolate Components: Test prior, likelihood, factors separately
  3. Use Validation: Create traces with known-good parameter values
  4. Compare Algorithms: Try different inference methods
  5. Visualize Traces: Plot parameter trajectories over time

Testing Your Understanding

Exercise 1: Custom Proposal Mechanism

Implement a custom handler that uses adaptive proposals based on the acceptance rate history:

use fugue::*;
use fugue::runtime::{handler::Handler, trace::*};

struct AdaptiveMCMCHandler<R: rand::Rng> {
    rng: R,
    current_trace: Trace,
    proposal_scale: f64,
    acceptance_history: Vec<bool>,
    adaptation_interval: usize,
}

// TODO: Implement Handler trait with adaptive step size

Exercise 2: Multi-Chain Diagnostics

Create a system that runs multiple MCMC chains in parallel and automatically assesses convergence:

use fugue::*;
use fugue::inference::diagnostics::*;

fn multi_chain_inference<F>(
    model_fn: F,
    n_chains: usize, 
    n_samples: usize
) -> (Vec<Vec<Trace>>, bool)
where F: Fn() -> Model<f64> + Copy
{
    // TODO: Run multiple chains and check R-hat convergence
    unimplemented!()
}

Exercise 3: Memory-Optimized Batch Processing

Design a system for processing thousands of similar models efficiently:

use fugue::*;
use fugue::runtime::memory::*;

struct BatchProcessor {
    pool: TracePool,
    // TODO: Add fields for efficient batch processing
}

impl BatchProcessor {
    fn process_batch<F>(&mut self, 
                       models: Vec<F>) -> Vec<(f64, Trace)>
    where F: Fn() -> Model<f64>
    {
        // TODO: Implement memory-efficient batch processing
        unimplemented!()
    }
}

Key Takeaways

Trace Manipulation Mastery

  1. Execution History: Traces record complete probabilistic execution paths
  2. Handler Flexibility: The same model can be executed in radically different ways
  3. Replay Foundation: MCMC and other algorithms depend on deterministic replay
  4. Custom Strategies: Implement specialized inference through custom handlers
  5. Production Ready: Memory optimization and diagnostics enable robust deployment
  6. Debugging Power: Trace analysis reveals numerical issues and convergence problems

Core Capabilities:

  • Complete execution recording with type safety and weight decomposition
  • Flexible interpretation through the handler system
  • MCMC foundation via deterministic replay mechanics
  • Custom inference algorithms through handler extensibility
  • Production optimization with memory pooling and efficient allocation
  • Comprehensive diagnostics for convergence assessment and debugging

Further Reading

  • Custom Handlers Guide - Building specialized interpreters
  • Optimizing Performance - Production deployment strategies
  • Debugging Models - Troubleshooting problematic models
  • API Reference - Complete runtime system specification
  • The Elements of Statistical Learning - Theoretical foundations of inference algorithms
  • Monte Carlo Statistical Methods - MCMC theory and practice