Domain F — Applied Case Studies & Mission Reasoning

Launch geometry, operational access, and SSA visibility — translated into engineering decisions.

Domain F.10–F.11 — Integrated Mission Walkthroughs

This page is the integration bridge of Domain F. Up to F.9, you built the modules: plane targeting, actuator realism, estimator robustness, event windows, detectability gates, and scalable logic. Here, you connect them into a single mission-level loop — without changing tone, pacing, or style.

How to use this page

Read the integration bridge first, then follow F.10 as a reusable “pipeline template.” Use F.11 as a repeatable interview answer pattern (fast scan + traceable reasoning + measurable outputs).

How F.10–F.11 Extend This Style (Without Changing Your Identity)

Integration bridge: modules → mission reasoning

Domains F.4 through F.9 quietly established something powerful: each technical block — guidance, control, estimation, geometry, detectability, and scaling — is not an isolated discipline. Each block produces outputs that become inputs to the next. Each block constrains the others. And if one block is even slightly inconsistent (frame mismatch, timing drift, unrealistic actuator assumptions), it can silently invalidate the logic downstream.

By the time a reader reaches F.9, they have already learned how mission reasoning should be built:

  • F.4 gave the plane-targeting truth: “Pitch gets you to orbit; yaw + time get you into the plane.”
  • F.5 made hardware real: stability means nothing if actuators saturate.
  • F.6 made state knowledge real: filters fail from tuning and dropouts, not from missing equations.
  • F.7 introduced event logic: geometry produces crossing intervals, not just sampled booleans.
  • F.8 applied operational truth: crossing ≠ detection (illumination + range gates matter).
  • F.9 made scaling real: reject cheaply, refine only inside windows, store intervals not samples.

So what remains is not a new topic. What remains is integration. F.10 and F.11 should not introduce a new visual style, a new reasoning style, or a new tone. They should unify what already exists into a single mission-level loop.

Two rules to preserve your Domain F identity

  1. Do not add complexity. Keep the same pattern: mission question → shared state → event windows → constraints → outputs.
  2. Make the bridge logic explicit. State which outputs of each block become inputs to the next.

F.10 — Integrated Mission Template

End-to-end loop: timing → geometry → detectability → metrics

Purpose

F.10 combines everything built so far into one coherent, end-to-end mission evaluation loop:

Launch timing → Injected orbit → Propagation → Geometry windows → Detectability → Scheduling outputs

Up to now, these have been explored individually. F.10 shows how they interact in sequence, where each stage constrains and shapes the next. This is where Domain F transitions from “technical modules” to “mission reasoning.”

1) Integrated Mission Scenario

Begin with a mission framing that naturally demands cross-domain reasoning.

Example scenario

Deploy into Sun-Synchronous Orbit and evaluate tracking capability over a 48-hour horizon.

This single sentence forces interaction between:

  • Plane targeting (F.4) — correct inclination, RAAN, and LTAN drift logic.
  • Propagation + geometry (F.7) — when crossings exist in time.
  • Detectability gates (F.8) — which crossings are physically usable.
  • Scaling mindset (F.9) — how to run this without brute force.
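
The SSO framing already carries a concrete number: the inclination that makes J2 nodal drift track the Sun. A minimal sketch using the standard two-body + J2 secular drift condition (constants are approximate WGS-84 values; the function name is illustrative):

```python
import math

MU = 398600.4418   # km^3/s^2, Earth's gravitational parameter
RE = 6378.137      # km, Earth equatorial radius
J2 = 1.08263e-3    # Earth oblateness coefficient

def sso_inclination_deg(alt_km: float) -> float:
    """Inclination that makes the J2 nodal drift match the Sun's mean motion
    (~0.9856 deg/day) for a circular orbit at the given altitude."""
    a = RE + alt_km
    n = math.sqrt(MU / a**3)                        # mean motion, rad/s
    raan_dot_ss = 2 * math.pi / (365.2422 * 86400)  # required RAAN drift, rad/s
    cos_i = -raan_dot_ss / (1.5 * n * J2 * (RE / a) ** 2)
    return math.degrees(math.acos(cos_i))

print(round(sso_inclination_deg(700.0), 2))  # ~98.2 deg for a 700 km SSO
```

This is the F.4 link in one line of algebra: pick the altitude, and the plane-targeting intent fixes the inclination.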

2) System Architecture — What Talks to What

A mission is not a single equation. It is a chain of modules passing shared state forward. Each module consumes and transforms state. Each module depends on upstream correctness.

  1. Launch window solver → chooses t0 for target plane/LTAN alignment.
  2. Injection state definition → outputs x_inj at t_inj.
  3. Propagator → produces ephemerides across the horizon.
  4. Geometry engine → converts ephemerides into crossing intervals.
  5. Detectability gates → filters crossings into detectable intervals.
  6. Scheduler/scoring → converts intervals into mission metrics and ranked opportunities.

3) Shared State Definitions — The Glue

Integration only works if shared state objects are defined clearly and consistently. These must be explicit (because ambiguity here breaks everything downstream):

  • t0 — UTC liftoff time
  • t_inj — injection epoch
  • x_inj — injected state vector or orbital elements
  • Frames — ECI/TEME/ECEF stated clearly (never implicit)
  • Crossings = [(t_start, t_end), ...]
  • Detections = [(t_start, t_end), ...]
  • Constraints: sunlit, range, slew, duty cycle, station-night (if ground-based)
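
One way to make these shared objects explicit is a small state container passed between modules. A sketch only; the names, types, and defaults are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass, field

Interval = tuple[float, float]  # (t_start, t_end), seconds past t0

@dataclass
class MissionState:
    """Shared state handed forward through the pipeline modules."""
    t0_utc: str                       # UTC liftoff epoch, e.g. ISO 8601 string
    t_inj: float                      # injection epoch, seconds past t0
    x_inj: tuple                      # injected state vector (r, v) or elements
    frame: str = "ECI"                # frame stated explicitly, never implicit
    crossings: list[Interval] = field(default_factory=list)
    detections: list[Interval] = field(default_factory=list)
    constraints: dict = field(default_factory=dict)  # sunlit, range, slew, duty
```

Because the frame is a declared field rather than an implicit convention, a downstream module can assert on it instead of silently assuming it.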

Why this matters

Frame mismatch alone can invalidate geometry results. Time-tag inconsistency can distort illumination logic. F.10 is “mission reasoning” only if the shared objects are clear enough that downstream modules cannot silently drift.

F.10 Algorithm — Coarse-to-Refine Integrated Evaluation

Reject cheaply → refine inside windows → store intervals

Reuse the core principle from F.7–F.9: reject cheaply, refine only when necessary, and store event intervals instead of dense time-series booleans.

Integrated mission loop (end-to-end)

  1. Choose t0 from launch window logic (plane/LTAN alignment).
  2. Compute injection state at t_inj.
  3. Propagate sensor + object states on a coarse grid (e.g., 60 s).
  4. Apply cheap gates to build candidate windows (relaxed range, relaxed FOV with margin).
  5. Inside candidate windows, resample on a fine grid (e.g., 5–10 s).
  6. Extract crossing intervals (AOS/LOS style boundary detection).
  7. Apply detectability gates only inside crossings (sunlit, strict range, slew/duty checks).
  8. Convert detectable samples → detectable intervals.
  9. Store intervals + mission metrics (not dense time series).
  10. Output ranked opportunities and “constraint dominance” summary.

Pseudocode: interval-first evaluation

# coarse grid scan -> candidate windows -> refine -> interval extraction

t_coarse = grid(t0, horizon=48h, dt=60s)
propagate(sensor, object, t_coarse)

candidate = relaxed_range_gate & relaxed_fov_gate
cand_windows = boolean_to_intervals(t_coarse, candidate)

crossings = []
detections = []

for w in cand_windows:
    t_fine = grid(w.start-pad, w.end+pad, dt=10s)
    propagate(sensor, object, t_fine)

    C = fov_crossing_boolean(t_fine)
    crossings += boolean_to_intervals(t_fine, C)

    D = C & sunlit(t_fine) & strict_range(t_fine) & slew_ok(t_fine) & duty_ok(t_fine)
    detections += boolean_to_intervals(t_fine, D)

metrics = summarize(crossings, detections)
event_table = build_event_table(crossings, detections, metrics)

This preserves the Domain F pattern: cheap rejection → refined windows → stored intervals → mission-facing decisions.
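
The primitive that makes interval-first storage work is the boolean-to-intervals conversion used throughout the pseudocode. A minimal runnable sketch (the name matches the pseudocode; the implementation is one illustrative option):

```python
import numpy as np

def boolean_to_intervals(ts, mask):
    """Convert a boolean time series into [(t_start, t_end), ...] intervals.
    ts: 1-D array of sample times; mask: same-length boolean array."""
    ts = np.asarray(ts, dtype=float)
    m = np.asarray(mask, dtype=bool)
    # Pad with False so rising/falling edges at the ends are still detected.
    edges = np.diff(np.concatenate(([False], m, [False])).astype(int))
    starts = np.where(edges == 1)[0]       # index of first True sample
    ends = np.where(edges == -1)[0] - 1    # index of last True sample
    return [(ts[i], ts[j]) for i, j in zip(starts, ends)]

ts = np.arange(0, 10)                               # 1 s grid
mask = np.array([0, 1, 1, 0, 0, 1, 1, 1, 0, 1], bool)
print(boolean_to_intervals(ts, mask))               # [(1.0, 2.0), (5.0, 7.0), (9.0, 9.0)]
```

Interval endpoints land on grid samples here; AOS/LOS boundary refinement (bisection between the last False and first True sample) sharpens them further.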

Mission-facing outputs

  • Coverage percentage across 48 hours
  • Worst gap duration
  • Total detectable minutes per day
  • Number of crossings vs detections
  • Constraint dominance (illumination vs geometry vs slew)
  • Sensitivity: pointing law change or small maneuver insertion
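
The first few of these outputs fall directly out of the stored intervals. A sketch, assuming detection intervals in seconds past t0 over a fixed horizon (function and key names are illustrative):

```python
def summarize_metrics(detections, horizon_s):
    """Mission-facing summary from detection intervals over [0, horizon_s]."""
    ivs = sorted(detections)
    total = sum(t1 - t0 for t0, t1 in ivs)
    # Gaps: before the first interval, between intervals, after the last one.
    gaps, prev_end = [], 0.0
    for t0, t1 in ivs:
        gaps.append(t0 - prev_end)
        prev_end = t1
    gaps.append(horizon_s - prev_end)
    return {
        "coverage_pct": 100.0 * total / horizon_s,
        "worst_gap_s": max(gaps),
        "detectable_min_per_day": total / 60.0 / (horizon_s / 86400.0),
        "n_detections": len(ivs),
    }

m = summarize_metrics([(3600.0, 4200.0), (50000.0, 52000.0)], 48 * 3600.0)
```

Note that the metrics never touch a dense time series: once the intervals are stored, the summaries are a few arithmetic passes over a short list.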

Deliverable standard

F.10 should feel like a mission review slide: a pipeline, an event table, and decisions — not a wall of time-series plots.

Python Snippet (F.10) — Integrated Pipeline Skeleton

Glue skeleton: readable, modular, extendable

This is intentionally a “glue skeleton,” not a full package. It’s the pipeline shape that matters: coarse scan → candidate windows → refine → interval extraction → metrics.

Open: minimal mission-style function skeleton
import numpy as np

def integrated_mission_eval(cfg):
    """
    cfg contains:
      - t0, horizon_hours
      - injection_state (x_inj) or injection_orbit
      - sensor model: FOV, pointing law
      - constraints: rho_max, sunlit model, duty cycle, slew, keep-out zones
      - dt_coarse, dt_fine
    returns:
      - event_table (intervals)
      - metrics (mission-facing summary)
    """

    # 1) Propagate (coarse grid)
    t_coarse = make_time_grid(cfg["t0"], cfg["horizon_hours"], cfg["dt_coarse"])
    sens_c = propagate_sensor(cfg, t_coarse)
    obj_c  = propagate_object(cfg, t_coarse)

    # 2) Cheap gates -> candidate windows (use relaxed margins)
    Cc = crossing_boolean(obj_c["r"], sens_c["r"], sens_c["b_hat"], cfg["fov_half_deg_margin"])
    Rc, _ = range_gate(obj_c["r"], sens_c["r"], cfg["rho_max_km_margin"])
    cand = Cc & Rc
    windows = boolean_to_intervals(t_coarse["ts"], cand)

    crossings = []
    detections = []

    # 3) Refine inside windows only
    for (ta, tb) in windows:
        t_fine = make_time_grid_window(ta, tb, cfg["dt_fine"], pad_s=cfg.get("pad_s", 30))
        sens_f = propagate_sensor(cfg, t_fine)
        obj_f  = propagate_object(cfg, t_fine)

        # 3a) Geometry intervals
        C = crossing_boolean(obj_f["r"], sens_f["r"], sens_f["b_hat"], cfg["fov_half_deg"])
        crossings += boolean_to_intervals(t_fine["ts"], C)

        # 3b) Detectability gates (run only inside crossing windows)
        R, _ = range_gate(obj_f["r"], sens_f["r"], cfg["rho_max_km"])
        S = sunlit_gate(obj_f["r"], cfg, t_fine)
        A = slew_feasible(sens_f, cfg, t_fine) & duty_cycle_ok(cfg, t_fine)

        D = C & R & S & A
        detections += boolean_to_intervals(t_fine["ts"], D)

    # 4) Summaries
    metrics = summarize_metrics(crossings, detections, cfg)
    event_table = build_event_table(crossings, detections, metrics)

    return event_table, metrics

Keep it “Domain F”: the code should reveal the reasoning chain, not hide it.
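
As one example of a helper kept readable, the FOV gate might be a simple circular-cone test (a sketch; the real pointing law and FOV shape depend on the sensor model):

```python
import numpy as np

def crossing_boolean(r_obj, r_sens, b_hat, fov_half_deg):
    """Boolean per sample: is the object inside the sensor's FOV cone?
    r_obj, r_sens: (N, 3) positions; b_hat: (N, 3) unit boresight vectors."""
    los = r_obj - r_sens                              # line of sight
    los_hat = los / np.linalg.norm(los, axis=1, keepdims=True)
    cos_angle = np.sum(los_hat * b_hat, axis=1)       # dot product per sample
    return cos_angle >= np.cos(np.radians(fov_half_deg))

# Object directly along boresight (inside), then 90 deg off (outside).
r_sens = np.zeros((2, 3))
b_hat = np.array([[1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
r_obj = np.array([[1000.0, 0.0, 0.0], [0.0, 1000.0, 0.0]])
print(crossing_boolean(r_obj, r_sens, b_hat, 10.0))  # [ True False]
```

The vectorized form matters for the coarse scan: one NumPy pass over the whole grid, no per-sample Python loop.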

F.11 — Interview-Level Integrated Case Study

Assumptions → reasoning chain → failure modes → metrics

F.11 demonstrates structured engineering reasoning under constraints. It should read like an interview response: fast scan, traceable logic, measurable conclusion.

1) Problem statement

Interview prompt

You are tasked with evaluating 48-hour tracking performance for a newly deployed SSO satellite. Determine whether mission constraints are satisfied and identify what limits performance most.

2) Assumptions & constraints

Declare assumptions before conclusions — like a real engineering discussion.

  • Orbit: altitude, inclination, target LTAN (SSO intent)
  • Sensor: FOV half-angle, pointing law
  • Range: rho_max (maximum usable range)
  • Illumination: sunlit requirement (or defined lighting threshold)
  • Slew: maximum slew rate / keep-out zones
  • Duty cycle: observation fraction / max tasking density
  • Time resolution: coarse dt + fine dt (coarse→refine)
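
These assumptions can be written down as a single config object, in the shape the F.10 skeleton's cfg consumes. Field names and values here are illustrative placeholders, not a canonical schema:

```python
cfg = {
    # Orbit (SSO intent)
    "alt_km": 700.0, "inc_deg": 98.2, "ltan_hours": 10.5,
    # Sensor
    "fov_half_deg": 5.0, "pointing_law": "nadir",
    # Range gate
    "rho_max_km": 2000.0,
    # Illumination
    "require_sunlit": True,
    # Slew / duty cycle
    "max_slew_deg_s": 1.0, "duty_cycle_max": 0.3,
    # Time resolution (coarse -> refine)
    "dt_coarse_s": 60.0, "dt_fine_s": 10.0,
    "horizon_hours": 48.0,
}
```

Declaring the assumptions as data has the same effect as declaring them aloud in the interview: every downstream conclusion is traceable to a named input.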

3) Reasoning chain (explicit domain links)

  • F.4: injection timing and plane alignment define RAAN/LTAN behavior.
  • F.7: propagation produces geometric crossing intervals.
  • F.8: detectability gates reduce crossings to feasible observations.
  • F.9: coarse→refine + interval storage makes it computationally realistic.

4) Failure modes & mitigations

  • Launch timing error → RAAN/LTAN mismatch → lighting degradation
    Mitigation: tighten launch window selection + time-tag consistency checks.
  • Coarse step misses short crossings → missed windows → wrong coverage conclusion
    Mitigation: refine only inside candidate windows; boundary search on AOS/LOS.
  • Eclipse dominates → detections collapse
    Mitigation: alternate pointing law, add sensor type, or multi-sensor configuration.
  • Slew limits violated → schedule infeasible
    Mitigation: reduce tasking density; prioritize; widen FOV strategy.

5) Metrics summary

  • #crossings, #detections
  • total_detectable_time, worst_gap_duration
  • min_range_during_detections
  • compute_time / throughput

6) Next improvements (fidelity scaling)

  • Use ephemeris-based Sun vector (instead of placeholders).
  • Upgrade eclipse model: cylindrical → conical shadow.
  • Include J2 precession for longer horizons (LTAN drift realism).
  • Add scheduling optimization (priority rules → solver).
  • Introduce probabilistic detection models (SNR/attitude uncertainty).
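
For reference, the cylindrical-shadow baseline being upgraded is only a few lines: the satellite is shadowed when it sits on the anti-Sun side inside a cylinder of Earth's radius. A sketch (the conical model adds umbra/penumbra geometry on top of this):

```python
import numpy as np

RE = 6378.137  # km, Earth equatorial radius

def sunlit_cylindrical(r_sat, s_hat):
    """True if the satellite is sunlit under a cylindrical Earth-shadow model.
    r_sat: (3,) ECI position, km; s_hat: (3,) unit vector toward the Sun."""
    along = np.dot(r_sat, s_hat)            # projection onto the Sun direction
    if along >= 0.0:
        return True                         # sunward side: always lit
    perp = np.linalg.norm(r_sat - along * s_hat)
    return perp > RE                        # outside the shadow cylinder: lit

s_hat = np.array([1.0, 0.0, 0.0])
print(sunlit_cylindrical(np.array([7000.0, 0.0, 0.0]), s_hat))   # True (sunward)
print(sunlit_cylindrical(np.array([-7000.0, 0.0, 0.0]), s_hat))  # False (in shadow)
```

Swapping this function for a conical model changes nothing upstream: the sunlit gate stays a boolean per sample, and the interval machinery is untouched.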

F.11 algorithm (interview checklist)

  1. Write assumptions (orbit, sensor, constraints).
  2. Confirm frames + time tags are consistent.
  3. Build coarse candidate windows (range gate → angle/FOV gate).
  4. Refine inside windows; extract crossing intervals.
  5. Apply detectability gates; extract detection intervals.
  6. Summarize metrics; identify dominant constraint.
  7. Recommend mitigation + next fidelity upgrade.

The Domain F Integration Insight

One sentence that ties F.4 → F.9 together

Keep this as a callout

A perfect orbit can still fail the mission if access windows do not align with operational constraints.

This single sentence ties:

  • F.4 — Plane intent (injection timing + plane alignment)
  • F.7 — Geometry windows (crossing intervals)
  • F.8 — Detectability filtering (illumination, range, constraints)
  • F.9 — Scalable event architecture (coarse→refine, interval storage)

It preserves the Domain F mindset: not isolated equations, not isolated modules — a connected mission reasoning pipeline.

Continue in Domain F

Next: Engineering Thinking Under Constraints →

← Back to Domain F Overview