How F.10–F.11 Extend This Style (Without Changing Your Identity)
Domains F.4 through F.9 quietly established something powerful: each technical block — guidance, control, estimation, geometry, detectability, and scaling — is not an isolated discipline. Each block produces outputs that become inputs to the next. Each block constrains the others. And if one block is even slightly inconsistent (frame mismatch, timing drift, unrealistic actuator assumptions), it can silently invalidate the logic downstream.
By the time a reader reaches F.9, they have already learned how mission reasoning should be built:
- F.4 gave the plane-targeting truth: “Pitch gets you to orbit; yaw + time get you into the plane.”
- F.5 made hardware real: stability means nothing if actuators saturate.
- F.6 made state knowledge real: filters fail from tuning and dropouts, not from missing equations.
- F.7 introduced event logic: geometry produces crossing intervals, not just sampled booleans.
- F.8 applied operational truth: crossing ≠ detection (illumination + range gates matter).
- F.9 made scaling real: reject cheaply, refine only inside windows, store intervals not samples.
So what remains is not a new topic. What remains is integration. F.10 and F.11 should not introduce a new visual style, a new reasoning style, or a new tone. They should unify what already exists into a single mission-level loop.
Two rules to preserve your Domain F identity
1. Do not add complexity; the job of F.10 and F.11 is to make the existing bridge logic explicit.
2. Keep the same pattern throughout: mission question → shared state → event windows → constraints → outputs.
F.10 — Integrated Mission Template
Purpose
F.10 combines everything built so far into one coherent, end-to-end mission evaluation loop: launch window selection, injection state definition, propagation, crossing geometry, detectability gating, and scheduling/scoring.

Up to now, these stages have been explored individually. F.10 shows how they interact in sequence, where each stage constrains and shapes the next. This is where Domain F transitions from “technical modules” to “mission reasoning.”
1) Integrated Mission Scenario
Begin with a mission framing that naturally demands cross-domain reasoning.
Example scenario
Deploy into Sun-Synchronous Orbit and evaluate tracking capability over a 48-hour horizon.
This single sentence forces interaction between:
- Plane targeting (F.4) — correct inclination, RAAN, and LTAN drift logic.
- Propagation + geometry (F.7) — when crossings exist in time.
- Detectability gates (F.8) — which crossings are physically usable.
- Scaling mindset (F.9) — how to run this without brute force.
2) System Architecture — What Talks to What
A mission is not a single equation. It is a chain of modules passing shared state forward. Each module consumes and transforms state. Each module depends on upstream correctness.
- Launch window solver → chooses `t0` for target plane/LTAN alignment.
- Injection state definition → outputs `x_inj` at `t_inj`.
- Propagator → produces ephemerides across the horizon.
- Geometry engine → converts ephemerides into crossing intervals.
- Detectability gates → filters crossings into detectable intervals.
- Scheduler/scoring → converts intervals into mission metrics and ranked opportunities.
3) Shared State Definitions — The Glue
Integration only works if shared state objects are defined clearly and consistently. These must be explicit (because ambiguity here breaks everything downstream):
- `t0` — UTC liftoff time
- `t_inj` — injection epoch
- `x_inj` — injected state vector or orbital elements
- Frames — ECI/TEME/ECEF stated clearly (never implicit)
- `Crossings = [(t_start, t_end), ...]`
- `Detections = [(t_start, t_end), ...]`
- Constraints: sunlit, range, slew, duty cycle, station-night (if ground-based)
Why this matters
Frame mismatch alone can invalidate geometry results. Time-tag inconsistency can distort illumination logic. F.10 is “mission reasoning” only if the shared objects are clear enough that downstream modules cannot silently drift.
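To make the glue concrete, here is a minimal sketch of a shared-state container. The field names (`t0`, `t_inj`, `x_inj`, crossings, detections, frame) come from the list above; the `MissionState` class itself is an illustrative assumption, not a prescribed API:

```python
from dataclasses import dataclass, field

@dataclass
class MissionState:
    """Hypothetical shared-state object passed between pipeline modules."""
    t0: float                 # UTC liftoff time (seconds past a stated epoch)
    t_inj: float              # injection epoch, same time scale as t0
    x_inj: tuple              # injected state vector (r, v) or orbital elements
    frame: str = "ECI"        # frame stated explicitly, never implicit
    crossings: list = field(default_factory=list)   # [(t_start, t_end), ...]
    detections: list = field(default_factory=list)  # [(t_start, t_end), ...]

state = MissionState(t0=0.0, t_inj=540.0,
                     x_inj=(7000.0, 0.0, 0.0, 0.0, 7.55, 0.0))
```

Because every downstream module reads and appends to one object, a frame or time-tag mismatch surfaces at a single interface instead of leaking silently through six modules.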
F.10 Algorithm — Coarse-to-Refine Integrated Evaluation
Reuse the core principle from F.7–F.9: reject cheaply, refine only when necessary, and store event intervals instead of dense time-series booleans.
Integrated mission loop (end-to-end)
- Choose `t0` from launch window logic (plane/LTAN alignment).
- Compute injection state at `t_inj`.
- Propagate sensor + object states on a coarse grid (e.g., 60 s).
- Apply cheap gates to build candidate windows (relaxed range, relaxed FOV with margin).
- Inside candidate windows, resample on a fine grid (e.g., 5–10 s).
- Extract crossing intervals (AOS/LOS style boundary detection).
- Apply detectability gates only inside crossings (sunlit, strict range, slew/duty checks).
- Convert detectable samples → detectable intervals.
- Store intervals + mission metrics (not dense time series).
- Output ranked opportunities and “constraint dominance” summary.
Pseudocode: interval-first evaluation
```
# coarse grid scan -> candidate windows -> refine -> interval extraction
t_coarse = grid(t0, horizon=48h, dt=60s)
propagate(sensor, object, t_coarse)
candidate = relaxed_range_gate & relaxed_fov_gate
cand_windows = boolean_to_intervals(t_coarse, candidate)

crossings = []
detections = []
for w in cand_windows:
    t_fine = grid(w.start - pad, w.end + pad, dt=10s)
    propagate(sensor, object, t_fine)
    C = fov_crossing_boolean(t_fine)
    crossings += boolean_to_intervals(t_fine, C)
    D = C & sunlit(t_fine) & strict_range(t_fine) & slew_ok(t_fine) & duty_ok(t_fine)
    detections += boolean_to_intervals(t_fine, D)

metrics = summarize(crossings, detections)
event_table = build_event_table(crossings, detections, metrics)
```
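The pseudocode leans on `boolean_to_intervals`. A minimal runnable version, assuming a uniform time grid and a plain boolean series, could look like this (an illustrative sketch, not the only correct implementation):

```python
import numpy as np

def boolean_to_intervals(ts, flags):
    """Convert a boolean time series into a list of (t_start, t_end) intervals.

    ts    : 1-D array of time tags (seconds)
    flags : 1-D boolean array, same length as ts
    """
    flags = np.asarray(flags, dtype=bool)
    if not flags.any():
        return []
    # Find rising/falling edges by diffing the False-padded boolean series.
    padded = np.concatenate(([False], flags, [False]))
    edges = np.flatnonzero(padded[1:] != padded[:-1])
    starts, ends = edges[::2], edges[1::2] - 1  # inclusive sample indices
    return [(ts[a], ts[b]) for a, b in zip(starts, ends)]

# Example: two visibility windows on a 10-sample, 10-second grid
ts = np.arange(10) * 10.0
flags = [0, 1, 1, 0, 0, 1, 1, 1, 0, 0]
print(boolean_to_intervals(ts, flags))  # [(10.0, 20.0), (50.0, 70.0)]
```

Interval endpoints here snap to grid samples; a production version would refine each AOS/LOS boundary with a root search between adjacent samples.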
Mission-facing outputs
- Coverage percentage across 48 hours
- Worst gap duration
- Total detectable minutes per day
- Number of crossings vs detections
- Constraint dominance (illumination vs geometry vs slew)
- Sensitivity: pointing law change or small maneuver insertion
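Several of these outputs fall directly out of the interval representation. As a sketch, assuming `detections` is a sorted list of non-overlapping `(t_start, t_end)` tuples in seconds (the helper name and return keys are illustrative):

```python
def interval_metrics(detections, t0, horizon_s):
    """Mission-facing summary computed straight from detection intervals."""
    covered = sum(b - a for a, b in detections)
    coverage_pct = 100.0 * covered / horizon_s
    # Gaps: before the first interval, between intervals, after the last one.
    bounds = [t0] + [t for ab in detections for t in ab] + [t0 + horizon_s]
    gaps = [bounds[i + 1] - bounds[i] for i in range(0, len(bounds) - 1, 2)]
    return {
        "coverage_pct": coverage_pct,
        "worst_gap_s": max(gaps),
        "n_detections": len(detections),
    }

m = interval_metrics([(600.0, 1200.0), (4000.0, 4600.0)],
                     t0=0.0, horizon_s=6000.0)
print(m)  # coverage 20.0 %, worst gap 2800 s, 2 detections
```

This is the payoff of storing intervals instead of dense booleans: coverage and worst-gap metrics reduce to arithmetic on a short list.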
Deliverable standard
F.10 should feel like a mission review slide: a pipeline, an event table, and decisions — not a wall of time-series plots.
Python Snippet (F.10) — Integrated Pipeline Skeleton
This is intentionally a “glue skeleton,” not a full package. It’s the pipeline shape that matters: coarse scan → candidate windows → refine → interval extraction → metrics.
Minimal mission-style function skeleton
```python
import numpy as np

def integrated_mission_eval(cfg):
    """
    cfg contains:
      - t0, horizon_hours
      - injection_state (x_inj) or injection_orbit
      - sensor model: FOV, pointing law
      - constraints: rho_max, sunlit model, duty cycle, slew, keep-out zones
      - dt_coarse, dt_fine
    returns:
      - event_table (intervals)
      - metrics (mission-facing summary)
    """
    # 1) Propagate (coarse grid)
    t_coarse = make_time_grid(cfg["t0"], cfg["horizon_hours"], cfg["dt_coarse"])
    sens_c = propagate_sensor(cfg, t_coarse)
    obj_c = propagate_object(cfg, t_coarse)

    # 2) Cheap gates -> candidate windows (use relaxed margins)
    Cc = crossing_boolean(obj_c["r"], sens_c["r"], sens_c["b_hat"],
                          cfg["fov_half_deg_margin"])
    Rc, _ = range_gate(obj_c["r"], sens_c["r"], cfg["rho_max_km_margin"])
    cand = Cc & Rc
    windows = boolean_to_intervals(t_coarse["ts"], cand)

    crossings = []
    detections = []

    # 3) Refine inside windows only
    for (ta, tb) in windows:
        t_fine = make_time_grid_window(ta, tb, cfg["dt_fine"],
                                       pad_s=cfg.get("pad_s", 30))
        sens_f = propagate_sensor(cfg, t_fine)
        obj_f = propagate_object(cfg, t_fine)

        # 3a) Geometry intervals
        C = crossing_boolean(obj_f["r"], sens_f["r"], sens_f["b_hat"],
                             cfg["fov_half_deg"])
        crossings += boolean_to_intervals(t_fine["ts"], C)

        # 3b) Detectability gates (run only inside crossing windows)
        R, _ = range_gate(obj_f["r"], sens_f["r"], cfg["rho_max_km"])
        S = sunlit_gate(obj_f["r"], cfg, t_fine)
        A = slew_feasible(sens_f, cfg, t_fine) & duty_cycle_ok(cfg, t_fine)
        D = C & R & S & A
        detections += boolean_to_intervals(t_fine["ts"], D)

    # 4) Summaries
    metrics = summarize_metrics(crossings, detections, cfg)
    event_table = build_event_table(crossings, detections, metrics)
    return event_table, metrics
```
F.11 — Interview-Level Integrated Case Study
F.11 demonstrates structured engineering reasoning under constraints. It should read like an interview response: fast scan, traceable logic, measurable conclusion.
1) Problem statement
Interview prompt
You are tasked with evaluating 48-hour tracking performance for a newly deployed SSO satellite. Determine whether mission constraints are satisfied and identify what limits performance most.
2) Assumptions & constraints
Declare assumptions before conclusions — like a real engineering discussion.
| Category | Assumption / Constraint |
|---|---|
| Orbit | Altitude, inclination, target LTAN (SSO intent) |
| Sensor | FOV half-angle, pointing law |
| Range | `rho_max` (maximum usable range) |
| Illumination | Sunlit requirement (or defined lighting threshold) |
| Slew | Maximum slew rate / keep-out zones |
| Duty cycle | Observation fraction / max tasking density |
| Time resolution | Coarse dt + fine dt (coarse→refine) |
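One way to make this table executable is a single configuration dict matching the `cfg` consumed by the F.10 skeleton. Every value below is a placeholder assumption for illustration, not mission data:

```python
# Illustrative configuration; key names mirror the F.10 skeleton's cfg,
# and all numeric values are placeholder assumptions.
cfg = {
    "t0": "2025-01-01T00:00:00Z",  # UTC liftoff time
    "horizon_hours": 48,
    "injection_orbit": {"alt_km": 600.0, "inc_deg": 97.8, "ltan_hours": 10.5},
    "fov_half_deg": 5.0,
    "fov_half_deg_margin": 6.0,    # relaxed FOV for the cheap gate
    "rho_max_km": 2000.0,
    "rho_max_km_margin": 2500.0,   # relaxed range for the cheap gate
    "sunlit_required": True,
    "max_slew_deg_s": 1.0,
    "duty_cycle_max": 0.3,
    "dt_coarse": 60.0,             # seconds (coarse scan)
    "dt_fine": 10.0,               # seconds (refinement inside windows)
}
```

Declaring the relaxed margins alongside the strict limits keeps the cheap-gate / strict-gate pairing visible in one place.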
3) Reasoning chain (explicit domain links)
- F.4: injection timing and plane alignment define RAAN/LTAN behavior.
- F.7: propagation produces geometric crossing intervals.
- F.8: detectability gates reduce crossings to feasible observations.
- F.9: coarse→refine + interval storage makes it computationally realistic.
4) Failure modes & mitigations
- Launch timing error → RAAN/LTAN mismatch → lighting degradation.
  Mitigation: tighten launch window selection + time-tag consistency checks.
- Coarse step misses short crossings → missed windows → wrong coverage conclusion.
  Mitigation: refine only inside candidate windows; boundary search on AOS/LOS.
- Eclipse dominates → detections collapse.
  Mitigation: alternate pointing law, add sensor type, or multi-sensor configuration.
- Slew limits violated → schedule infeasible.
  Mitigation: reduce tasking density; prioritize; widen FOV strategy.
5) Metrics summary
- `#crossings`, `#detections`
- `total_detectable_time`, `worst_gap_duration`
- `min_range_during_detections`
- `compute_time` / throughput
6) Next improvements (fidelity scaling)
- Use ephemeris-based Sun vector (instead of placeholders).
- Upgrade eclipse model: cylindrical → conical shadow.
- Include $J_2$ precession for longer horizons (LTAN drift realism).
- Add scheduling optimization (priority rules → solver).
- Introduce probabilistic detection models (SNR/attitude uncertainty).
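The $J_2$ upgrade is worth a quick numeric sanity check. The standard first-order secular RAAN drift is $\dot{\Omega} = -\tfrac{3}{2} J_2\, n \,(R_E/p)^2 \cos i$ with $p = a(1-e^2)$; for a true SSO it must match the Sun's mean motion, about 0.9856°/day. A minimal sketch (constants are standard Earth values; the function name is illustrative):

```python
import math

MU = 398600.4418   # km^3/s^2, Earth gravitational parameter
RE = 6378.137      # km, Earth equatorial radius
J2 = 1.08263e-3    # Earth oblateness coefficient

def raan_drift_deg_per_day(a_km, e, inc_deg):
    """Secular RAAN drift from J2: dOmega/dt = -1.5 J2 n (RE/p)^2 cos(i)."""
    n = math.sqrt(MU / a_km**3)        # mean motion, rad/s
    p = a_km * (1.0 - e**2)            # semi-latus rectum, km
    d_raan = -1.5 * J2 * n * (RE / p)**2 * math.cos(math.radians(inc_deg))
    return math.degrees(d_raan) * 86400.0  # deg/day

# 600 km circular orbit at the classic SSO inclination (~97.8 deg)
print(raan_drift_deg_per_day(RE + 600.0, 0.0, 97.8))  # ~0.987 deg/day
```

The result sits right at the sun-synchronous target, which is exactly why LTAN drift realism over multi-day horizons requires this term.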
F.11 algorithm (interview checklist)
- Write assumptions (orbit, sensor, constraints).
- Confirm frames + time tags are consistent.
- Build coarse candidate windows (range gate → angle/FOV gate).
- Refine inside windows; extract crossing intervals.
- Apply detectability gates; extract detection intervals.
- Summarize metrics; identify dominant constraint.
- Recommend mitigation + next fidelity upgrade.
The Domain F Integration Insight
Keep this as a callout
A perfect orbit can still fail the mission if access windows do not align with operational constraints.
This single sentence ties:
- F.4 — Plane intent (injection timing + plane alignment)
- F.7 — Geometry windows (crossing intervals)
- F.8 — Detectability filtering (illumination, range, constraints)
- F.9 — Scalable event architecture (coarse→refine, interval storage)
It preserves the Domain F mindset: not isolated equations, not isolated modules — a connected mission reasoning pipeline.