Domain F — Applied Case Studies & Mission Reasoning

Guidance, navigation, and control — expressed as mission geometry, hardware limits, and estimator robustness.

Domain F.4–F.6 — Guidance, Control & Estimation Case Studies

These case studies move from “orbital intent” into the reality of GNC execution: how the vehicle targets an inertial plane during ascent, how actuators bound achievable attitude performance, and how estimators can fail even when the equations are correct.

How to use this page

Read each case as a reusable reasoning template: identify the mission intent, map it to controlled variables, list the dominant error sources, then define verification that proves “mission achieved” (not only “orbit achieved”).

F.4 — Launch-to-Orbit Mission Geometry (Timing + Plane Targeting)

Guidance & trajectory targeting case study

Guidance defines the desired orbital plane, but the vehicle must control its ascent trajectory to achieve that plane at injection. In practice, plane targeting is controlled through two coupled channels: in-plane shaping (energy + flight-path angle) and out-of-plane targeting (plane / RAAN / azimuth).

Figure placeholder: inertial target plane (RAAN + inclination) vs Earth-fixed launch site; launch azimuth + yaw steering; injection state vector alignment.
Suggested visual: “Pitch gets you to orbit; yaw + time get you into the plane.”

4. Control Considerations During Ascent (How the vehicle actually “hits the plane”)

Guidance typically outputs a target injection state (or a target orbit set) that implies an inertial plane. The ascent controller must then steer the vehicle so that, at cutoff, the injected position and velocity vectors are consistent with that plane.

Two coupled control channels

  • In-plane shaping (energy + flight path angle): mainly controlled via pitch program and throttle shaping.
  • Out-of-plane targeting (plane / RAAN / azimuth): mainly controlled via yaw steering and launch-time selection.

Mental model

Pitch gets you “to orbit.” Yaw + time get you “into the correct plane.”

4.1 Key controlled variables during ascent

During powered flight, the system works to manage:

  • Launch azimuth: the initial ground-referenced heading that sets the orbital plane.
  • Pitch-over and gravity turn: shape the final inclination and insertion conditions.
  • Terminal conditions at injection:
    • position and velocity vectors
    • flight path angle
    • orbital inclination
    • RAAN consistency with target plane

Even if the orbital altitude requirement is met, a small error in plane orientation can cause long-term LTAN drift (especially for Sun-synchronous missions).

4.2 How pitch and yaw map to orbit elements (conceptual mapping)

Pitch guidance strongly influences:

  • apogee/perigee targeting
  • semi-major axis
  • inclination (indirectly through trajectory shaping)

Yaw steering / heading bias strongly influences:

  • orbital plane orientation (RAAN)
  • cross-range motion
  • out-of-plane velocity components

4.3 Terminal guidance corrections

Most vehicles use a terminal phase (near MECO) where guidance targets precise orbital insertion:

  • compute current dispersions (position/velocity vs target)
  • apply small steering corrections to minimize injection error

Terminal corrections are limited by:

  • remaining burn time
  • maximum gimbal angles / TVC authority
  • structural load limits (bending + aerodynamic constraints)

Why “late fixes” are hard

Out-of-plane errors require generating cross-track velocity. If control authority is small late in ascent, you can meet altitude but miss the plane tolerance band.

4.4 Primary plane-error sources and why they matter

Launch timing error

  • If liftoff is early or late, Earth rotation carries the launch site to a different inertial position, shifting the geometry at injection.
  • This manifests primarily as RAAN error.

Azimuth bias error

  • A small bias in initial heading produces an orbital plane tilt.
  • This cannot be “fixed later” without significant plane-change $\Delta V$.
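As a sense of scale, here is a small sketch of why such errors are expensive, using the impulsive plane-change relation $\Delta v = 2v\sin(\Delta i/2)$; the altitude and error magnitude below are illustrative assumptions, not mission values.

```python
import numpy as np

mu = 398600.4418   # km^3/s^2, Earth gravitational parameter
a = 6878.0         # km, ~500 km altitude circular orbit (illustrative)

v = np.sqrt(mu / a)            # circular orbital speed, km/s
delta_i = np.radians(0.5)      # assumed 0.5 deg residual plane error

# Impulsive plane-change cost: dv = 2 v sin(delta_i / 2)
dv = 2 * v * np.sin(delta_i / 2)
print(f"Plane-change dV for 0.5 deg: {dv*1000:.1f} m/s")
```

Even half a degree of plane error costs tens of m/s, which is why the preferred "fix" is launch timing and azimuth discipline, not propellant.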

Thrust vector misalignment / gimbal bias

  • Adds cross-track velocity components.
  • Produces out-of-plane error that appears as inclination/RAAN bias.

Navigation state bias (INS alignment, GPS bias, time sync)

  • Even a small attitude or time bias can cause wrong steering and wrong injection conditions.

4.5 Plane error sensitivity (expanded interpretation)

These sensitivities are why “plane targeting” is not an afterthought: small early biases can become mission-level failures.

  • 30 seconds launch timing error → RAAN shift
    Because the target plane is inertial but the Earth-fixed site rotates. Small timing offsets change the inertial geometry at injection.
  • 0.05° inclination error → long-term LTAN drift
    Because Sun-synchronous precession depends on $\cos i$. Small errors change the nodal drift rate, so local time drifts over weeks/months.
  • Small MECO timing error → wrong node alignment
    Because the time of equator crossing (and hence local solar time at node) shifts.
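The first sensitivity above can be checked with one line of arithmetic: the inertial plane shift caused by a launch delay is just Earth's rotation rate times the timing error.

```python
import numpy as np

omega_earth = 7.2921159e-5   # rad/s, Earth rotation rate

dt_launch = 30.0             # s, launch timing error from the sensitivity above
dRAAN = np.degrees(omega_earth * dt_launch)
print(f"RAAN shift from 30 s timing error: {dRAAN:.3f} deg")
```

Thirty seconds is already more than a tenth of a degree of RAAN, which sets the scale for how tight launch windows must be.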

Design intuition

In many missions, plane error is “expensive” because it is fixed by plane-change maneuvers. That’s why launch-time discipline and yaw targeting are treated as first-class mission requirements.

5. Verification (How you prove you actually met the mission intent)

Verification is not “the orbit looks correct.” Verification means the orbit remains compliant over time despite perturbations and small errors.

5.1 Post-injection verification workflow

Step A — Reconstruct injected orbit at epoch

  • derive classical orbital elements from injected state
  • compare: $a$, $e$, $i$, $\Omega$ (RAAN), and argument of latitude

Step B — Propagate with appropriate fidelity

Use one or both:

  • $J_2$-perturbed propagation (best for plane drift and LTAN stability reasoning)
  • SGP4-like propagation (best for operational-style time-tagged behavior)

Step C — Extract compliance metrics

Evaluate:

  • Inclination error: $|i-i_{\text{target}}|$
  • RAAN error: $|\Omega-\Omega_{\text{target}}|$
  • Nodal drift rate: compare measured $\Omega(t)$ slope vs expected SSO rate
  • LTAN stability: compute LTAN at node crossings over time

5.2 LTAN verification (what you actually check)

Compute node-crossing events:

  • find ascending node crossing epochs (where latitude changes sign upward)
  • at each node crossing, compute local solar time based on longitude + Sun geometry
  • ensure LTAN remains within tolerance

Recommended check points:

  • Day 0, Day 3, Day 7, Day 14
  • (Optional) check across different solar conditions if doing long-term analysis

Typical requirement form:

  • LTAN in a band (e.g., $18{:}00 \pm 10$ min)
  • and drift rate bounded (e.g., < 1 min/day)
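The LTAN computation itself can be sketched with a simplified mean-Sun model: LTAN in hours is 12 plus the angle from the Sun's right ascension to the node, converted at 15°/hour. Real verification uses a full solar ephemeris; `sun_ra_deg` here is an assumed input, not computed.

```python
def ltan_hours(raan_deg, sun_ra_deg):
    """Local time of ascending node (hours), simplified mean-Sun model.

    raan_deg   : RAAN of the orbit, degrees
    sun_ra_deg : right ascension of the mean Sun, degrees (assumed known)
    """
    return (12.0 + (raan_deg - sun_ra_deg) / 15.0) % 24.0

# Example: dusk-dawn SSO targeting 18:00 LTAN
sun_ra = 10.0            # deg, assumed mean-Sun RA at epoch
raan = sun_ra + 90.0     # node 90 deg ahead of the Sun -> 18:00
print(ltan_hours(raan, sun_ra))  # 18.0
```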

Algorithm — Plane Targeting & LTAN Verification

Objective

Given an injected state vector, verify that the orbit satisfies: correct inclination, correct RAAN, correct nodal precession rate, and stable LTAN.

Step 1 — Convert Injection State to Orbital Elements

  1. Input inertial position r and velocity v
  2. Compute angular momentum vector h = r × v
  3. Inclination: i = acos(h_z / |h|)
  4. Node vector: n = k × h
  5. RAAN: Ω = atan2(n_y, n_x)

Step 2 — Compute Expected SSO Precession Rate

\[ \dot{\Omega} = -\frac{3}{2} J_2 \left(\frac{R_E^2}{a^2(1-e^2)^2}\right) n \cos i \]

Here $n=\sqrt{\mu/a^3}$ is the mean motion (not the node vector of Step 1).

Step 3 — Propagate Orbit with J2 Model

  • Propagate state for 14 days
  • Compute RAAN over time
  • Fit linear slope to extract nodal drift
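The slope extraction in Step 3 can be sketched with `np.polyfit`; here the RAAN history is synthesized from the target SSO rate plus noise, as a stand-in for an actual propagator output.

```python
import numpy as np

rng = np.random.default_rng(0)

true_rate = 0.9856                        # deg/day, target SSO nodal drift
t = np.arange(0.0, 14.0, 0.1)             # days of propagation samples
raan = 30.0 + true_rate * t               # deg, synthetic RAAN history
raan += rng.normal(0.0, 0.005, t.size)    # small sampling noise (assumed)

# Linear fit: slope is the measured nodal drift rate
slope, intercept = np.polyfit(t, raan, 1)
print(f"Fitted nodal drift: {slope:.4f} deg/day")
```

Comparing the fitted slope against the expected SSO rate is the compliance check of Step C in Section 5.1.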

Step 4 — Compute LTAN

  1. Detect ascending node crossings
  2. Convert node longitude to local solar time
  3. Verify LTAN tolerance band

Python Example — Extract Orbital Plane Parameters


import numpy as np

mu = 398600.4418  # km^3/s^2, Earth gravitational parameter
Re = 6378.137     # km, Earth equatorial radius
J2 = 1.08262668e-3

def orbital_elements(r, v):
    """Inclination and RAAN (deg) from an inertial state (km, km/s)."""
    r = np.array(r, dtype=float)
    v = np.array(v, dtype=float)

    h = np.cross(r, v)              # specific angular momentum vector
    h_norm = np.linalg.norm(h)

    i = np.arccos(h[2] / h_norm)    # inclination from the h_z component

    k = np.array([0.0, 0.0, 1.0])
    n = np.cross(k, h)              # node vector points at the ascending node
    RAAN = np.arctan2(n[1], n[0])

    return np.degrees(i), np.degrees(RAAN)

# Example injected state
r0 = [7000, 0, 0]   # km
v0 = [0, 7.5, 1.0]  # km/s

incl, raan = orbital_elements(r0, v0)
print("Inclination:", incl)
print("RAAN:", raan)

Python Example — J2 Nodal Drift


def nodal_precession_rate(a, e, i):
    """J2 secular RAAN drift (rad/s); a in km, i in rad."""
    n = np.sqrt(mu / a**3)   # mean motion, rad/s
    rate = -(3/2) * J2 * (Re**2 / (a**2 * (1 - e**2)**2)) * n * np.cos(i)
    return rate  # rad/s

a = 6878  # km (~500 km altitude, circular)
e = 0
i = np.radians(97.5)

rate = nodal_precession_rate(a, e, i)
print("RAAN drift (deg/day):", np.degrees(rate)*86400)

5.3 What verification proves

If verification passes, it proves:

  • the injected plane is correct at epoch
  • $J_2$ precession matches intended SSO behavior
  • the mission lighting constraint remains stable
  • the launch-time + ascent control solution is actually valid

Meaning of “mission achieved”

Mission achieved = correct plane at epoch + correct drift rate + stable LTAN. Orbit achieved = altitude and speed look okay.

6. What Breaks (Failure modes expanded: how errors show up downstream)

6.1 Launch timing error

Symptom: RAAN offset and wrong LTAN at node crossing.

Consequence: orbit plane is correct shape but wrong lighting schedule → imaging inconsistency.

6.2 Inclination bias

Symptom: nodal drift rate differs from SSO rate.

Consequence: LTAN slowly drifts away from 18:00 → long-term mission degradation.

6.3 Incorrect Earth-rotation / time model

Symptom: systematic plane shift even if guidance “looks correct.”

Consequence: consistent RAAN error → missed plane targeting every time.

6.4 Navigation bias

Symptom: injection state error (velocity direction error).

Consequence: wrong inclination + wrong RAAN simultaneously, difficult to correct.

6.5 Control authority limitation

Symptom: terminal corrections cannot remove dispersions.

Consequence: injection misses tolerance band; mission becomes “orbit achieved but not mission achieved.”

Engineering closure

F.4 is fundamentally a coupling story: time, azimuth, yaw steering, and terminal state alignment. The plane is inertial — the launch site is rotating — and your controller must bridge that mismatch.

F.5 — Attitude Control Case (Pointing / Slew / Actuator Limits)

Control system case study

In real spacecraft, control performance is limited less by math and more by hardware. Reaction wheels, magnetorquers, or thrusters impose hard bounds: maximum torque, momentum storage capacity, duty cycle constraints, and nonlinear saturation behavior.

Figure placeholder: reaction wheel torque limit box; momentum accumulation curve approaching saturation; desaturation via magnetorquers.
Suggested visual: why a “stable” controller can still fail when $|h| \rightarrow h_{\max}$.

4. Actuator Limits (Why “stable” isn’t enough)

Stability is necessary, but not sufficient. Hardware constraints determine whether the commanded control action is physically achievable. When limits are hit, the effective control law becomes nonlinear, and performance can degrade abruptly.

4.1 Reaction wheel torque limit

A wheel can only apply torque up to a maximum:

\[ |\tau| \le \tau_{\max} \]

If the controller demands more torque than available:

  • the actuator saturates
  • the effective control law becomes nonlinear
  • stability margins shrink

4.2 Momentum buildup (the hidden constraint)

Even if torque stays within limits, the wheel stores momentum:

\[ h(t)=\int \tau(t)\,dt \]

If $|h|$ approaches $h_{\max}$:

  • wheel saturation occurs
  • the controller may lose authority
  • pointing error grows

This often happens during:

  • long slews
  • constant disturbance torques (aero drag, SRP)
  • persistent bias (misalignment or offset)
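The buildup arithmetic is worth making explicit: under a roughly constant disturbance torque, stored momentum grows linearly, so time-to-saturation is simply $h_{\max}/\tau_{\text{dist}}$. Numbers below are illustrative assumptions.

```python
h_max = 5.0        # N*m*s, wheel momentum capacity (illustrative)
tau_dist = 2e-4    # N*m, constant disturbance torque (aero/SRP, assumed)

# Linear momentum buildup: h(t) = tau_dist * t
t_sat = h_max / tau_dist   # seconds until the wheel saturates
print(f"Time to saturation: {t_sat/3600:.1f} h")
```

A few hours of quiet operation is all it takes, which is why desaturation must be scheduled, not improvised.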

Hidden failure mechanism

Your controller can look perfect for minutes — then fail suddenly when the wheel hits $h_{\max}$. That’s why momentum management is not optional.

4.3 Why saturation is dangerous

Saturation causes:

  • loss of damping effectiveness
  • integrator wind-up (if PID)
  • slow recovery and overshoot
  • limit cycles (oscillations that never settle)

4.4 Momentum management (desaturation story)

A realistic system must include a momentum management mode:

  • magnetorquer dumping (LEO)
  • thruster desaturation (larger spacecraft)
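A rough magnetorquer-dump sizing sketch, assuming a best-case torque of $mB$; in practice only the component of $\mathbf{m}\times\mathbf{B}$ perpendicular to the local field is usable, so real dumps take longer. All numbers are illustrative.

```python
m = 1.0        # A*m^2, magnetorquer dipole (illustrative small-sat value)
B = 3.0e-5     # T, typical LEO geomagnetic field magnitude

tau_mtq = m * B                 # best-case available torque, N*m
h_to_dump = 0.05                # N*m*s of stored momentum (assumed)
t_dump = h_to_dump / tau_mtq    # idealized dump time, s
print(f"Dump time: {t_dump/60:.0f} min")
```

Magnetorquer torques are tiny, so dumping is slow and must be planned around the field geometry along the orbit.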

Key operational consequence:

A controller that “works perfectly” in simulation may fail in reality because momentum accumulates silently until the wheel hits its limit.

5. Trade Study (Gains vs speed vs hardware limits)

Control tuning is a constrained optimization problem: meet pointing requirement, meet time requirement, and do not violate actuator limits.

5.1 How $K_p$ affects response

Increase $K_p$:

  • faster response (higher $\omega_n$)
  • higher peak torque demand
  • higher momentum accumulation risk

5.2 How $K_d$ affects damping

Increase $K_d$:

  • reduces overshoot
  • reduces oscillation
  • increases settling smoothness

But:

  • too high can slow response
  • increases torque demand during rate correction
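For a single-axis rigid body with inertia $I$ under the PD law $\tau = -K_p\theta - K_d\omega$, the gain trade maps directly onto second-order parameters: $\omega_n=\sqrt{K_p/I}$ and $\zeta = K_d/(2\sqrt{K_p I})$. The inertia and gains below are illustrative assumptions.

```python
import numpy as np

I = 10.0   # kg*m^2, single-axis inertia (illustrative)
Kp = 0.8   # N*m/rad
Kd = 5.0   # N*m*s/rad

wn = np.sqrt(Kp / I)                 # natural frequency: speed of response
zeta = Kd / (2 * np.sqrt(Kp * I))    # damping ratio: overshoot behavior

print(f"wn = {wn:.3f} rad/s, zeta = {zeta:.2f}")
```

Raising $K_p$ raises $\omega_n$ (and peak torque demand); raising $K_d$ raises $\zeta$, which is exactly the trade described above.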

5.3 Practical performance metrics

You do not tune only by “looks stable.” You tune by:

  • Rise time (how fast it turns)
  • Settling time (how fast it becomes usable)
  • Overshoot (risk to payload line-of-sight)
  • Max torque (hardware compliance)
  • Max momentum (avoid saturation)

5.4 A realistic “controller acceptance condition”

A control design is acceptable only if:

  • it meets settling + pointing accuracy
  • AND respects $\tau_{\max}$ and $h_{\max}$
  • AND remains stable under disturbances and model uncertainty

Why multi-case simulation matters

The “best” gains in one scenario can be unsafe in another. Hardware constraints make robustness a requirement, not a luxury.

6. Verification (What you must simulate to trust the controller)

Verification must include both ideal and stressed cases. If you don’t simulate saturation explicitly, you haven’t validated the design.

6.1 Nominal case

Simulate a $30^\circ$ slew step command. Check:

  • settling time $\le 60~\mathrm{s}$
  • final error $\le 0.1^\circ$
  • overshoot within line-of-sight constraints
  • $|\tau| < \tau_{\max}$
  • $|h| < h_{\max}$

6.2 Disturbance case

Add a disturbance torque (constant or sinusoidal):

  • drag torque (LEO)
  • gravity-gradient torque
  • SRP torque

Check:

  • steady-state pointing bias
  • momentum accumulation trend
  • whether controller remains stable without saturating

6.3 Saturation case (must test explicitly)

Force saturation by:

  • reducing $\tau_{\max}$
  • increasing commanded slew
  • adding persistent disturbance

Check:

  • whether attitude diverges
  • whether oscillation appears
  • recovery time after saturation ends

Mission readiness criterion

A controller that does not recover gracefully under saturation is not mission-ready, even if it performs perfectly in the nominal case.

Algorithm — PD Attitude Control with Saturation

State Variables

  • Quaternion error: q_err
  • Angular velocity: ω

Step 1 — Compute Control Torque

\[ \tau = -K_p \theta_{err} - K_d \omega \]

Step 2 — Apply Torque Saturation

Clamp torque: τ = clip(τ, -τ_max, τ_max)

Step 3 — Update Wheel Momentum

\[ h(t+dt) = h(t) + \tau dt \]

Step 4 — Check Momentum Limit

If |h| > h_max → trigger desaturation.

Python Example — PD Controller with Torque Limit


import numpy as np

I = 10.0          # kg*m^2, single-axis inertia (illustrative)
tau_max = 0.1     # N*m
h_max = 5.0       # N*m*s
dt = 0.1          # s

Kp = 0.8
Kd = 0.3

theta_err = 0.5   # rad
omega = 0.1       # rad/s
h = 0.0           # stored wheel momentum, N*m*s

tau_dist = 2e-3   # N*m, constant disturbance torque (drag/SRP-like)

def pd_control(theta_err, omega):
    tau = -Kp*theta_err - Kd*omega
    return np.clip(tau, -tau_max, tau_max)

for k in range(30000):
    tau = pd_control(theta_err, omega)

    # Single-axis rigid-body dynamics under control + disturbance
    omega += (tau + tau_dist) / I * dt
    theta_err += omega * dt

    # Wheel momentum accumulates the commanded torque impulse (Step 3)
    h += tau * dt

    if abs(h) > h_max:
        # With these numbers, saturation triggers near t ~ 2000 s
        print(f"Momentum saturation reached at t = {k*dt:.0f} s")
        break

F.6 — Navigation / Estimation Case (Minimal EKF)

Estimation robustness case study

Estimation is not “EKF equations.” It is a practical balance between: prediction accuracy, measurement availability, noise levels, and bias/drift. Filters often fail due to tuning, dropouts, or model mismatch — not because the math is unknown.

Figure placeholder: covariance growth during star-tracker dropout; residual spikes; divergence example for Q too small.
Suggested visual: “smooth but wrong” vs “noisy but honest” estimator behavior.

4. Sensitivity Study (Expanded: why filters fail in practice)

Treat EKF performance as an operational envelope. You want acceptable peak error, smooth control-usable outputs, and stable recovery after maneuvers and measurement gaps.

Case A — Nominal

High measurement rate + low noise.

Expected behavior:

  • small covariance $P$
  • residuals close to white noise
  • fast convergence after maneuvers
  • stable attitude estimate

Case B — Reduced measurement rate

Lower star-tracker update frequency (e.g., 10 Hz → 1 Hz → 0.2 Hz).

What happens:

  • longer prediction intervals
  • gyro errors integrate longer without correction
  • covariance grows faster
  • corrections become larger and less smooth
  • attitude drift increases during gaps

Operational consequence

Even if average error is acceptable, peak error during dropouts may violate pointing requirements. Mission risk is often driven by peaks, not means.

Case C — Increased measurement noise

Higher $R$.

What happens:

  • filter trusts measurement less
  • Kalman gain decreases
  • estimate follows prediction more strongly
  • steady-state error increases
  • slow recovery after disturbances

Important insight:

A “noisy sensor” does not just make output noisy — it can make the estimator too sluggish to correct bias.

Case D — Gyro bias present (most realistic)

Introduce bias $b$ in gyro:

\[ \omega_{\text{meas}} = \omega_{\text{true}} + b + \text{noise} \]

If bias is not estimated:

  • attitude error grows roughly linearly with time during dropouts
  • even frequent updates may not fully remove accumulated drift

This is why many practical filters augment the state with bias terms.
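A quick sketch of why unestimated bias matters: integrating a biased gyro through a star-tracker dropout accumulates attitude error at roughly $b\,t$. The bias value and dropout length below are assumed for illustration.

```python
import numpy as np

b = np.radians(0.01)   # rad/s, uncompensated gyro bias (0.01 deg/s, assumed)
dt = 0.1               # s, integration step
dropout = 60.0         # s without star-tracker updates

err = 0.0
for _ in range(int(dropout / dt)):
    err += b * dt      # bias integrates directly into attitude error

print(f"Drift after {dropout:.0f} s: {np.degrees(err):.2f} deg")
```

Even a modest bias produces over half a degree in one minute of dropout, which is why bias belongs in the state vector.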

5. What Breaks (Expanded failure logic: the typical EKF disasters)

5.1 $Q$ too small (overconfidence in model)

If process noise $Q$ is too small:

  • covariance shrinks artificially
  • Kalman gain becomes too small
  • filter stops correcting properly
  • residuals grow, then divergence occurs

Symptom: estimate looks smooth but wrong.

5.2 $Q$ too large (underconfidence in model)

If $Q$ is too large:

  • estimate becomes noisy
  • filter chases noise
  • output becomes unstable for control usage

Symptom: estimate responds too much to random measurement fluctuations.

5.3 $R$ mis-specified

If $R$ too small:

  • filter over-trusts measurement
  • high-frequency noise enters state
  • control uses noisy attitude → jitter

If $R$ too large:

  • filter under-trusts measurement
  • drift dominates

5.4 Measurement dropout

During star-tracker dropouts:

  • covariance grows
  • drift increases
  • recovery can be slow

If dropout occurs during a fast slew:

  • filter may temporarily lose lock
  • residuals spike
  • attitude error peaks

5.5 Linearization / model mismatch

Even a correct EKF structure can fail if:

  • dynamics are nonlinear beyond small-angle assumption
  • measurement model does not match reality
  • time synchronization is wrong

Symptom: residuals are not white; they show structured bias.

Estimator reality check

Many EKF failures look like “random weirdness” until you plot residual statistics and covariance behavior. Debugging is largely about recognizing structured symptoms.

6. Verification (Expanded: what proves your estimator is healthy)

Verification requires both accuracy metrics and statistical health checks. A good EKF is not just accurate — it is honest about uncertainty.

6.1 Accuracy metrics

Evaluate:

  • RMS attitude error
  • peak attitude error (important for pointing)
  • convergence time after slews
  • rate estimate accuracy ($\omega$ error)

6.2 Residual analysis (filter consistency)

Residual:

\[ r_k = z_k - H \hat{x}_{k|k-1} \]

Healthy residual properties:

  • mean $\approx 0$
  • approximately white (no strong periodic patterns)
  • variance consistent with $R$ and expected noise

If residuals show drift or correlation, it signals model mismatch or bias.
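The residual health checks can be sketched on a synthetic white sequence (a stand-in for logged innovations); applying the same three statistics to real residuals exposes bias and correlation.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.01                       # expected residual std (from S = HPH^T + R)
r = rng.normal(0.0, sigma, 2000)   # stand-in for a logged residual series

mean = r.mean()                               # healthy: ~0
var_ratio = r.var() / sigma**2                # healthy: ~1 (consistent with S)
rho1 = np.corrcoef(r[:-1], r[1:])[0, 1]       # lag-1 autocorrelation: ~0 if white

print(f"mean={mean:.4f}, var_ratio={var_ratio:.2f}, rho1={rho1:.3f}")
```

A nonzero mean flags bias, a variance ratio far from one flags mis-specified $R$ or $Q$, and large lag-1 correlation flags model mismatch.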

6.3 Covariance consistency

Track:

  • $\mathrm{trace}(P)$ over time
  • whether $P$ grows during dropouts and shrinks after updates
  • whether $P$ matches observed errors (consistency)

6.4 Stress testing (required)

Run the filter under:

  • reduced measurement rate
  • increased noise
  • gyro bias
  • randomized dropout windows

Then verify:

  • estimate does not diverge
  • recovery after dropout occurs within acceptable time
  • peak error stays within mission limit

Engineering lesson (F.6)

EKF trust comes from evidence: residual whiteness, covariance honesty, and recovery behavior under stress — not from having the correct equations in a textbook.

Algorithm — Minimal Attitude EKF

State Vector

x = [attitude error, gyro bias]

Prediction Step

\[ \hat{x}_{k|k-1} = F \hat{x}_{k-1} \]

\[ P_{k|k-1} = F P_{k-1} F^T + Q \]

Update Step

\[ K = P H^T (H P H^T + R)^{-1} \]

\[ \hat{x}_{k|k} = \hat{x}_{k|k-1} + K\,(z_k - H \hat{x}_{k|k-1}) \]

Python Example — Minimal EKF Loop


import numpy as np

dt = 0.1
F = np.eye(2)
H = np.eye(2)

Q = np.diag([1e-6, 1e-8])
R = np.diag([1e-4, 1e-4])

x = np.zeros(2)
P = np.eye(2)

for k in range(100):
    # Prediction
    x = F @ x
    P = F @ P @ F.T + Q

    # Simulated measurement
    z = np.array([0.01, 0.0]) + np.random.normal(0, 0.01, 2)

    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

How F.4–F.6 connect into a single GNC story

These cases form a realistic chain:

  • F.4 (Guidance): defines and targets mission geometry (plane / RAAN / LTAN), with timing and steering constraints.
  • F.5 (Control): executes attitude and pointing goals, but only within actuator torque and momentum limits.
  • F.6 (Navigation): provides the state estimate that both guidance and control rely on — and can fail due to tuning, bias, or dropouts.

Domain F mindset

Mission intent → controlled variables → dominant error sources → verification evidence. That’s how you turn “theory” into flight-ready reasoning.

Continue in Domain F

Next: F.7–F.9 SSA & Tracking →

← Back to Advanced Domains