AI-Generated Content — May Contain Errors — Not Independently Fact-Checked — Research Use Only

Assessment Disclosure

This is an open-source strategic intelligence assessment of the ongoing Iran-US conflict, now in its second week. It does not represent official government intelligence analysis or classified assessments. All analytical judgments are derived from publicly available information, institutional research, and established analytical frameworks.

No classified or sensitive information was used in the creation of this assessment. All analytical frameworks, probability methods, and structural approaches are derived from publicly available sources on intelligence analysis methodology, including RAND, CSIS, CFR, Brookings, and other research institutions.

Forward-looking projections, Monte Carlo simulations, and probability estimates represent structured analytical judgments under conditions of extreme uncertainty. They should be treated as frameworks for planning, not predictions. Readers should apply their own judgment and monitor the strategic indicators identified throughout this assessment.

Analytical Framework

Structured Analytic Techniques Employed

This assessment employs several structured analytic techniques (SATs) drawn from the Intelligence Community's standard methodology as described in Richards Heuer's Psychology of Intelligence Analysis and the US Government's Tradecraft Primer: Structured Analytic Techniques for Improving Intelligence Analysis.

Scenario Analysis

Multiple plausible future states are developed and assessed for probability. Each scenario is constructed to be internally consistent and driven by identifiable causal mechanisms. Scenarios are not predictions but structured frameworks for thinking about uncertainty. The goal is to bound the range of plausible outcomes and identify the key variables that differentiate one trajectory from another.

Application in this assessment: Used extensively in the Strategic Forecast section, where 30/60/120-day scenarios are developed for battlefield, economic, and geopolitical trajectories.

Key Assumptions Check (KAC)

Every major analytical judgment rests on assumptions—some explicit, some implicit. The Key Assumptions Check systematically identifies and evaluates the assumptions underlying an analysis, assessing how sensitive the conclusions are to changes in those assumptions.

Key assumptions in this assessment are documented in the Analytical Assumptions section below. Readers should evaluate which assumptions they find most questionable and consider how conclusions would change if those assumptions prove incorrect.

Analysis of Competing Hypotheses (ACH)

For judgments involving adversary intent or capability, multiple competing hypotheses are evaluated against available evidence. This technique guards against confirmation bias by forcing consideration of alternative explanations for observed data.

Application: Used in assessing Iranian leadership intent, proxy force decision-making, and attribution of cyber operations.

Indicators and Warnings (I&W)

For each major risk and scenario, specific observable indicators are identified that would signal movement toward that outcome. These indicators serve as early warning markers for monitoring and forecast revision.

Application: Each section of the assessment includes "Key Indicators to Monitor" lists designed to support ongoing situational awareness.
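Indicator-driven forecast revision can be framed as a Bayes' rule update: observing an indicator with known true- and false-positive rates shifts the estimated probability of a scenario. The sketch below is illustrative only; the rates and prior are placeholder values, not figures drawn from this assessment.

```python
def update_probability(prior: float, p_ind_given_scenario: float,
                       p_ind_given_not: float) -> float:
    """Bayes' rule: revise a scenario probability after observing an indicator.

    prior                 -- current estimate P(scenario)
    p_ind_given_scenario  -- P(indicator observed | scenario materializing)
    p_ind_given_not       -- P(indicator observed | scenario not materializing)
    """
    numerator = p_ind_given_scenario * prior
    evidence = numerator + p_ind_given_not * (1.0 - prior)
    return numerator / evidence

# Illustrative only: a 30% prior, with an indicator three times more likely
# under the scenario than otherwise, rises to roughly 56%.
posterior = update_probability(prior=0.30, p_ind_given_scenario=0.60,
                               p_ind_given_not=0.20)
print(round(posterior, 2))
```

The same arithmetic runs in reverse for a non-observation: an indicator that fails to appear when expected should lower the scenario's probability, which is why each indicator list pairs with explicit monitoring.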

Red Team Analysis

Selected judgments incorporate adversary perspective analysis—attempting to model Iranian, Russian, and Chinese decision-making from their strategic perspective rather than assuming they share US priorities or risk calculus.

Probability Methodology: The Sherman Kent Chart

Probability Language Standards

This assessment follows the probability language conventions established by Sherman Kent for the CIA's Board of National Estimates and subsequently codified in Intelligence Community Directive (ICD) 203. These conventions translate qualitative probability language into quantitative ranges to reduce ambiguity in analytical communication.

Verbal Expression | Probability Range | Usage Context
Almost certainly / Nearly certain | 93–99% | Reserved for judgments with overwhelming evidence and minimal alternative explanations
Very likely / Highly probable | 81–92% | Strong evidence base with limited plausible alternatives
Likely / Probable | 63–80% | Preponderance of evidence supports the judgment
Roughly even chance | 40–62% | Evidence is balanced or insufficient to favor one outcome
Unlikely / Improbable | 20–39% | Evidence weighs against the judgment but it remains plausible
Very unlikely / Highly improbable | 8–19% | Limited evidence supports the judgment; most evidence contradicts it
Almost certainly not / Remote | 1–7% | Near-zero probability but cannot be completely excluded

Important note: Where this assessment uses numerical probability ranges (e.g., "30–40%"), these represent the analyst's best estimate of the probability that a specific event will occur within a stated timeframe. They are subjective probability estimates informed by evidence and structured analysis, not statistical calculations from empirical data. Readers should treat them as calibrated judgments, not precise measurements.
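The verbal bands above amount to a simple lookup from a numerical estimate to estimative language. A minimal sketch, using this assessment's own band edges (which are not the exact ICD 203 ranges):

```python
# Band floors and phrases follow this assessment's probability table.
BANDS = [
    (0.93, "almost certainly"),
    (0.81, "very likely"),
    (0.63, "likely"),
    (0.40, "roughly even chance"),
    (0.20, "unlikely"),
    (0.08, "very unlikely"),
    (0.01, "almost certainly not"),
]

def verbal_expression(p: float) -> str:
    """Translate a subjective probability (0-1, exclusive) into a verbal band."""
    if not 0.0 < p < 1.0:
        # Estimative language deliberately excludes certainty in either direction.
        raise ValueError("probability must be strictly between 0 and 1")
    for floor, phrase in BANDS:
        if p >= floor:
            return phrase
    return "almost certainly not"

print(verbal_expression(0.35))   # "unlikely"
print(verbal_expression(0.85))   # "very likely"
```

A range estimate such as "30–40%" maps cleanly only when both endpoints fall in one band; when a range straddles a band boundary, the verbal expression should follow the analyst's central estimate.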

Confidence Framework

Confidence Level Definitions

Confidence levels in this assessment reflect two factors: (1) the quality and quantity of available evidence, and (2) the degree of agreement among analytical approaches. Confidence is distinct from probability—a judgment can be low-probability but high-confidence (e.g., "we are highly confident that nuclear use is unlikely").

High Confidence

The judgment is based on high-quality information from multiple independent sources. Analytical logic is well-established. Alternative explanations have been considered and found substantially less plausible. Key assumptions are well-supported.

In this assessment: Applied to military capability assessments based on well-documented order-of-battle data, economic projections based on established market mechanisms, and judgments about well-understood adversary behaviors.

Moderate Confidence

The judgment is based on credibly sourced information but with gaps in coverage, limited corroboration, or some analytical uncertainty. Alternative explanations remain plausible. Some key assumptions are reasonable but unverified.

In this assessment: Applied to adversary intent assessments, medium-term forecasts, geopolitical alignment projections, and judgments that depend on leadership decision-making under crisis conditions.

Low Confidence

The judgment is based on fragmentary information, significant intelligence gaps, questionable source reliability, or inherently unpredictable dynamics. Multiple alternative explanations are equally or nearly equally plausible. Key assumptions may be speculative.

In this assessment: Applied to long-range forecasts (120+ days), black swan risk probabilities, assessments of covert programs or hidden capabilities, and judgments about adversary behavior in unprecedented situations.

Scenario Modeling Framework

Methodology for Constructing Scenarios

The scenarios presented in this assessment (particularly in the Strategic Forecast and Black Swan Risks sections) are constructed using a multi-step process:

  1. Identify key drivers: Determine the 3–5 most important variables that will shape the conflict trajectory. For this assessment, the primary drivers are: (a) Iranian military resilience, (b) proxy theater intensity, (c) economic/energy market dynamics, (d) Iranian political succession, and (e) great-power involvement.
  2. Map the possibility space: For each key driver, identify the range of plausible outcomes (e.g., Iranian military: rapid collapse ↔ sustained resistance). The combination of driver states defines the scenario space.
  3. Construct internally consistent scenarios: Select combinations of driver states that are mutually consistent and represent distinct trajectory types. Not all combinations are plausible—for example, rapid Iranian military collapse is inconsistent with sustained proxy theater escalation.
  4. Assign probabilities: Estimate the probability of each scenario using available evidence, historical analogies, and structured judgment. Probabilities should sum to approximately 100% when scenarios are exhaustive and mutually exclusive (they rarely are in practice, so overlap is acknowledged).
  5. Identify indicators: For each scenario, identify observable markers that would increase or decrease confidence that the scenario is materializing. These indicators enable real-time monitoring and forecast revision.
  6. Stress test: Challenge each scenario against historical analogies, adversary perspective analysis, and explicit identification of the assumptions most likely to be wrong.
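Steps 2 through 4 can be sketched mechanically: enumerate the driver-state combinations, filter out internally inconsistent ones, then normalize analyst weights into probabilities. The driver states and equal weights below are illustrative placeholders, not the drivers or probabilities used in this assessment.

```python
from itertools import product

# Step 2 (illustrative): plausible states for three of the key drivers.
drivers = {
    "iran_military": ["rapid collapse", "gradual degradation", "sustained resistance"],
    "proxy_theater": ["low intensity", "sustained escalation"],
    "great_powers":  ["diplomatic only", "material support"],
}

def plausible(state: dict) -> bool:
    """Step 3: drop internally inconsistent combinations, e.g. rapid
    Iranian collapse alongside sustained proxy escalation."""
    return not (state["iran_military"] == "rapid collapse"
                and state["proxy_theater"] == "sustained escalation")

# Map the possibility space, keeping only consistent scenarios.
scenarios = [dict(zip(drivers, combo)) for combo in product(*drivers.values())]
scenarios = [s for s in scenarios if plausible(s)]

# Step 4: assign raw analyst weights (placeholder: equal), then normalize
# so scenario probabilities sum to 100%.
raw = {i: 1.0 for i in range(len(scenarios))}
total = sum(raw.values())
probs = {i: w / total for i, w in raw.items()}
print(len(scenarios), "consistent scenarios")
```

In practice the weights would be elicited judgmentally per scenario, and the normalization is only approximate because, as noted in step 4, real scenario sets are rarely fully exhaustive or mutually exclusive.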

Historical Analogies Used

The following historical cases inform the scenario construction and probability estimation in this assessment. No historical analogy is perfect—each is used selectively for specific analytical dimensions:

  • Gulf War 1991: Coalition air campaign against state military capability, Strait of Hormuz security, Iraqi Scud missile retaliation, coalition management dynamics
  • Iraq War 2003: Regime change operations, post-conflict state collapse dynamics, WMD intelligence uncertainty, coalition political sustainability
  • Libya 2011: Air campaign without ground forces, regime collapse consequences, weapons proliferation from unsecured arsenals
  • Israel-Hezbollah 2006: Rocket warfare dynamics, asymmetric deterrence, civilian infrastructure targeting, information warfare
  • Tanker War 1987–1988: Persian Gulf maritime operations, mine warfare, escort operations, Hormuz transit security, Iran Air 655 accidental engagement
  • 1973 Oil Embargo: Energy supply shock dynamics, economic transmission mechanisms, strategic petroleum reserve utility
  • Soviet-Afghan War 1979–1989: Great-power proxy dynamics, covert arms transfers, regional destabilization
  • Russia-Ukraine 2022–present: Drone warfare evolution, cyber operations in conflict, information warfare, energy market weaponization, sanctions dynamics

Analytical Assumptions

Core Assumptions Underlying This Assessment

All intelligence analysis rests on assumptions. Transparency about those assumptions enables readers to evaluate the analysis and identify where their own judgment may differ. The following core assumptions underpin this assessment:

Assumption 1: No Nuclear Weapons Use

Assumption [Source] Vulnerability: Moderate

All primary scenarios assume no nuclear weapons are used by any party. While nuclear escalation is addressed as a black swan risk, it is not the baseline planning assumption. If this assumption fails, all other forecasts become invalid.

Assumption 2: Rational Actor Decision-Making (Modified)

Assumption [Source] Vulnerability: High

The assessment assumes that all primary state actors (US, Iran, Israel, Russia, China) generally pursue policies consistent with their perceived strategic interests, even if those perceptions differ from objective analysis. This assumption is particularly challenged by Iran's leadership transition—Mojtaba Khamenei was elected Supreme Leader on March 8 under IRGC pressure, and decision-making under existential wartime conditions may not reliably follow rational-actor models. This assumption is modified for non-state actors (Hezbollah, Houthis, PMF), which may operate under different risk calculi.

Assumption 3: No Direct Great-Power Military Confrontation

Assumption [Source] Vulnerability: Low-Moderate

Primary scenarios assume Russia and China maintain their current posture of diplomatic opposition and limited material support without direct military engagement. Deviation from this assumption is addressed in the black swan analysis.

Assumption 4: US Military Operational Capability

Assumption [Source] Vulnerability: Low

The assessment assumes US military forces maintain current readiness levels and are not significantly degraded by a separate crisis (e.g., Taiwan contingency, North Korean provocation, domestic emergency). US precision-strike capability, intelligence superiority, and logistics capacity are assumed to function at or near demonstrated historical levels.

Assumption 5: Proxy Force Autonomy

Assumption [Source] Vulnerability: Moderate

The assessment assumes that Iranian proxy forces (Hezbollah, Houthis, Iraqi PMF) retain significant operational autonomy even if Iranian C2 is degraded. Historical evidence suggests these groups can sustain operations independently for extended periods. If proxy forces prove more dependent on Iranian direction than assessed, their operational tempo may decline faster than projected.

Assumption 6: Information Reliability

Assumption [Source] Vulnerability: High

In any wartime assessment, the analyst must account for deliberate deception, information warfare, and the fog of war. The information environment is actively contested, and adversaries invest significant resources in shaping the information available to analysts. Open-source intelligence is particularly susceptible to influence operations during active conflict. All claims in this assessment are tagged with confidence badges and cross-referenced against multiple independent sources where possible.

Assumption 7: Economic Model Continuity

Assumption [Source] Vulnerability: Moderate

Economic projections (oil prices, market impacts, recession risk) assume that existing economic models and market mechanisms continue to function. Unprecedented disruptions could produce non-linear effects that exceed model predictions. The 2008 financial crisis demonstrated that correlated failures can produce outcomes outside the range of historical experience.

Source Categories

Classification of Sources by Type

Intelligence analysis draws on multiple source categories, each with distinct strengths and limitations. In an official intelligence assessment, sources would be classified and protected. This open-source assessment references the following publicly available source categories:

Open Source Intelligence (OSINT)

  • News reporting from major wire services (Reuters, AP, AFP)
  • Satellite imagery analysis (commercial providers: Maxar, Planet, BlackSky)
  • Social media monitoring and analysis
  • Academic publications and conference proceedings
  • Government publications and official statements
  • Financial market data and economic indicators

Strengths: Broad coverage, timeliness, verifiability. Limitations: Subject to information operations, limited access to classified activities.

Technical Intelligence (TECHINT) Analogues

  • Published technical assessments of weapons systems
  • IAEA inspection reports and safeguards documentation
  • Cybersecurity industry threat intelligence reports
  • Shipping and maritime tracking data (AIS, vessel registries)
  • Energy market and infrastructure technical data

Strengths: Objective, measurable, less subject to deception. Limitations: Technical data requires expert interpretation, may not reveal intent.

Expert Analysis and Think Tank Research

  • Published research from policy research institutions
  • Military journals and professional publications
  • Congressional Research Service reports
  • Former official memoirs and public commentary
  • Academic area studies and regional expertise

Strengths: Deep expertise, analytical rigor, institutional knowledge. Limitations: May reflect institutional biases, often backward-looking rather than predictive.

Historical Case Studies

  • Declassified intelligence assessments from previous conflicts
  • Military after-action reports and lessons-learned documents
  • Academic military history and strategic studies
  • Post-conflict reconstruction analyses

Strengths: Provides tested patterns and precedents. Limitations: No historical situation perfectly maps to the present; false analogies can mislead.

Reference Institutions

Policy Research and Analysis Organizations

The analytical frameworks, data sources, and subject-matter expertise informing this assessment are drawn from or informed by the published work of the following institutions. These organizations are cited for their publicly available research; citation does not imply endorsement of this assessment by any institution.

Military and Security Analysis

  • RAND Corporation — Defense policy research, wargaming methodology, force structure analysis. Particularly relevant: reports on Iran military capabilities, Persian Gulf security, and post-conflict stabilization.
  • International Institute for Strategic Studies (IISS) — Annual Military Balance publication providing comprehensive order-of-battle data. Iran chapter provides foundational force structure assessments used in the Military analysis.
  • Stockholm International Peace Research Institute (SIPRI) — Arms transfer databases, military expenditure data, conflict trend analysis. SIPRI data informs weapons system assessments and arms transfer tracking.
  • Center for Naval Analyses (CNA) — Naval warfare analysis, maritime security assessments, Persian Gulf operational studies.

Policy and Strategy

  • Center for Strategic and International Studies (CSIS) — Middle East program, defense analysis, missile threat assessments. The CSIS Missile Defense Project provides detailed Iranian missile capability data.
  • Council on Foreign Relations (CFR) — Iran policy analysis, crisis management frameworks, alliance dynamics research.
  • Brookings Institution — Middle East policy research, economic analysis, arms control studies. The Brookings Iran project provides foundational policy analysis.
  • Carnegie Endowment for International Peace — Nuclear policy, Iran domestic politics analysis, non-proliferation research.

Economic and Energy Analysis

  • International Monetary Fund (IMF) — Global economic forecasting, financial stability assessments, country economic profiles. IMF models inform the economic impact projections in the Economics section.
  • International Energy Agency (IEA) — Oil market reports, energy security assessments, strategic reserve data. IEA supply/demand data provides the foundation for energy market analysis in the Energy section.
  • US Energy Information Administration (EIA) — Detailed petroleum supply and demand data, infrastructure mapping, pricing analysis.
  • World Bank — Commodity market outlook, development impact assessments, humanitarian response frameworks.

Nuclear and Arms Control

  • International Atomic Energy Agency (IAEA) — Iran safeguards reports, enrichment monitoring data, facility inspection records. IAEA quarterly reports on Iran provide the baseline for nuclear capability assessments.
  • Institute for Science and International Security (ISIS) — Nuclear program technical analysis, enrichment capability modeling, breakout timeline estimates.
  • Nuclear Threat Initiative (NTI) — WMD security assessments, radiological threat analysis.
  • Arms Control Association (ACA) — Treaty compliance analysis, arms control policy frameworks.

Cybersecurity and Technology

  • MITRE ATT&CK Framework — Adversary tactics and techniques taxonomy used to characterize Iranian cyber operations.
  • CrowdStrike / Mandiant / Recorded Future — Threat intelligence on Iranian APT groups (APT33, APT34, APT35, MuddyWater). Attribution methodologies and malware analysis.
  • CISA (Cybersecurity and Infrastructure Security Agency) — Critical infrastructure vulnerability assessments, threat advisories, incident response data.
  • Atlantic Council Digital Forensic Research Lab — Information operations tracking, disinformation campaign analysis, social media manipulation detection.

Regional and Area Studies

  • Middle East Institute (MEI) — Iran domestic politics, regional dynamics, Gulf security analysis.
  • Washington Institute for Near East Policy (WINEP) — Military analysis of Iranian forces, Hezbollah/proxy assessments, Israel security studies.
  • Crisis Group (International Crisis Group) — Conflict analysis, mediation frameworks, humanitarian impact assessments.
  • Chatham House (Royal Institute of International Affairs) — Iran program research, energy security studies, European policy perspectives.

Assessment Badge System

Information Classification Badges

Throughout this assessment, information is tagged with badges indicating its analytical status:

Verified [Source] — Information corroborated by multiple independent sources (news reporting, official records) and/or based on directly observable evidence.

Assumption [Source] — A premise that is assumed to be true for the purpose of the analysis but has not been independently verified. Assumptions are made explicit so readers can evaluate their validity and consider how conclusions would change if the assumption proves incorrect.

Forecast [Source] — A forward-looking analytical judgment about future events. All forecasts carry inherent uncertainty and are accompanied by probability estimates and confidence levels. Forecasts should be treated as structured judgments, not predictions.

Analyst Assessment [Source] — A judgment that reflects the analyst's interpretation of available evidence. These assessments go beyond reporting facts to offer analytical value-added, including causal reasoning, pattern recognition, and probabilistic judgment. They are the analyst's best assessment, not established fact.

Analytical Limitations

Known Limitations and Biases

Intellectual honesty requires acknowledging the limitations of any analytical product. The following limitations apply to this assessment:

Cognitive Biases

  • Anchoring bias: Initial assessments formed in the first days of the conflict may disproportionately influence later analysis, even as the situation evolves. This assessment attempts to mitigate anchoring through regular reassessment of key judgments.
  • Mirror imaging: The tendency to assume adversaries think and decide as we would. Iranian, Russian, and Chinese decision-making may follow cultural, institutional, and strategic logics that differ fundamentally from Western frameworks. Red team analysis is employed to mitigate this bias.
  • Availability bias: Dramatic, recent, or vivid events may receive disproportionate analytical weight. The cyber attack incidents and deepfake examples are particularly susceptible to this bias.
  • Optimism/pessimism bias: Analysts may systematically over- or under-estimate the probability of negative outcomes depending on institutional culture and individual disposition.

Information Environment Limitations

  • Fog of war: Active conflict degrades information quality. Reports from the battlefield are frequently inaccurate, delayed, or deliberately manipulated. This assessment attempts to account for information uncertainty but cannot eliminate it.
  • Adversary deception: Iran, its proxies, and potentially third parties are actively engaged in information operations designed to mislead both the public and analytical communities. Deception-aware analysis is practiced but deception detection is inherently imperfect.
  • Classification constraints: An official government assessment would benefit from classified intelligence sources unavailable in an open-source analysis. Significant intelligence gaps exist regarding Iranian leadership decision-making, covert military capabilities, and cyber operational details.
  • Temporal limitations: This assessment reflects conditions as of the publication date. The situation may have evolved significantly since analysis was completed. All judgments should be evaluated against the most current available information.

Structural Limitations

  • Scenario framing effects: The scenarios presented inevitably constrain imagination about the future. Outcomes that fall between or outside defined scenarios are possible and may be more likely than any single scenario.
  • Quantitative precision illusion: Numerical probability estimates create an impression of precision that may not be warranted. A judgment of "30–40% probability" should be read as "our best estimate is roughly in this range" rather than as a statistically rigorous calculation.
  • Linear extrapolation risk: Many forecasts extrapolate from current trends. Non-linear dynamics—tipping points, cascading failures, emergent phenomena—can produce outcomes that linear models fail to anticipate.

How to Use This Assessment

Guidance for Readers

  • This is not a prediction. The assessment presents structured analytical judgments about a range of possible futures. No single scenario should be treated as "what will happen."
  • Challenge the assumptions. The most valuable use of this assessment is to identify which assumptions you find most questionable and consider how conclusions would change if they fail.
  • Monitor the indicators. The indicators listed throughout the assessment provide a framework for updating judgments as new information becomes available.
  • Consider the confidence levels. High-confidence judgments warrant different planning responses than low-confidence ones. Invest hedging resources proportional to uncertainty.
  • Read the black swans. Low-probability events are not zero-probability events. The Black Swan Risks assessment identifies the contingencies most likely to invalidate primary forecasts.
  • This is open-source analysis, not official intelligence. This assessment uses publicly available information and established analytical frameworks. It does not represent government assessments or classified intelligence products.

Document Version History

Version | Date | Changes
1.0 | March 13, 2026 | Initial publication of complete assessment across all sections