AI Prompt
The AI Prompt feature uses Large Language Models (LLMs) to generate, extend, and explain Latch‑X reliability models from natural‑language descriptions. It accelerates model creation and helps teams understand complex architectures—while keeping results schema‑valid and ready to analyze.
Why use AI Prompt
Key benefits:
- Rapid prototyping: turn plain English into a working YAML model.
- Learning aid: see best‑practice patterns applied to your system.
- Smart extension: add components and dependencies to existing models.
- Built‑in explanations: get plain‑language summaries of model logic and risks.
Core functions
1) Model generation
Create a complete model from a textual description of your system (or extend an existing one).
What it does
- Proposes components with reasonable mttf/mttr or prob values.
- Builds dependencies (do, and, or, n_of_k) using best practices.
- Honors schema rules (one dependency type per target, hours as the time unit, latch rules); see the example below.
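For example, from a prompt like “a load balancer in front of two web servers, either server keeps the site up”, the result might look roughly like the sketch below. This is hypothetical: the per‑component field layout is an assumption for illustration, and the names and values are not real output; see Components and Dependencies for the exact schema.

```yaml
root: website

components:
  website:
    type: logical          # assumed field layout; logical node aggregating the tiers
  web_tier:
    type: logical
  load_balancer:
    type: normal
    mttf: 8760             # illustrative values, in hours
    mttr: 1
  web_1:
    type: normal
    mttf: 4380
    mttr: 4
  web_2:
    type: normal
    mttf: 4380
    mttr: 4

dependencies:
  website:
    and: [load_balancer, web_tier]   # one dependency type per target
  web_tier:
    or: [web_1, web_2]               # either web server keeps the tier up
```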
2) Model explanation
Summarizes an existing model, surfaces critical paths, and highlights likely failure modes.
What it does
- Explains structure and key dependencies.
- Points out critical contributors and outage scenarios.
- Suggests improvements (e.g., add redundancy, fix latch patterns).
AI model options (LLMs)
Availability depends on your workspace configuration and tier.
GPT‑4.1‑mini (default)
Strengths: fast responses, cost‑efficient, excellent for standard architectures.
Best for: iterative development, learning, quick prototypes.
GPT‑5 (advanced)
Strengths: deeper reasoning and richer explanations for complex systems.
Best for: unique/complex architectures, detailed reviews, production designs.
How to use AI Prompt
Access
- Open the BN, MC engine tab.
- Click the AI Prompt tab.
- Choose your AI model (e.g., GPT‑4.1‑mini or GPT‑5).
- Enter your prompt, then click Generate (or Explain for explanations).
Writing effective prompts
Do
- Be specific about tiers, redundancy, and failover.
- Include known availability targets and any RTO constraints.
- Provide realistic MTTF/MTTR or prob when you have them.
- Use technical terms and established patterns.
Avoid
- Vague descriptions (“make it reliable”).
- Missing critical elements (DB, network, power).
- Conflicting requirements (e.g., asking for “no single point of failure” while also specifying a single DB).
- Pure business wording without technical details.
Prompt templates (copy‑paste)
A) New model from scratch
Build a high‑availability web service model.
Requirements:
- Load balancer fronting 3 web servers (active‑active).
- API tier with 2 nodes (active‑active).
- Primary database with hot standby failover (≤ 1 h RTO).
- Redis cache is optional (degrades performance only).
- Global CDN and managed DNS.
- Target availability: 99.99 %.
- Use hours for MTTF/MTTR. Prefer exp distributions.
- Use a latch for DB failover (mttf=0.5 h, max_delay=1 h).
B) Extend an existing model
Extend the current model with monitoring that does not affect availability:
Add:
- monitoring_service (prob=0.998)
- alert_manager (prob=0.999)
- pagerduty_integration (prob=0.9995)
- dashboard (prob=0.995)
Rules:
- System stays UP if monitoring is DOWN.
- Output only the YAML patch (components+dependencies to merge).
- Keep one dependency type per target.
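The returned patch would then be a fragment shaped roughly like this. It is a hypothetical sketch: the per‑component field layout and the `monitoring` wrapper node are assumptions, while the probabilities come from the template above.

```yaml
components:
  monitoring_service:
    type: normal        # assumed layout; probabilities taken from the prompt
    prob: 0.998
  alert_manager:
    type: normal
    prob: 0.999
  pagerduty_integration:
    type: normal
    prob: 0.9995
  dashboard:
    type: normal
    prob: 0.995

dependencies:
  monitoring:
    and: [monitoring_service, alert_manager, pagerduty_integration, dashboard]
  # the system root does not depend on `monitoring`,
  # so the system stays UP even when monitoring is DOWN
```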
C) Explain this model
Explain this model’s architecture, list the three most critical contributors to unavailability, and suggest one improvement per tier.
YAML contract (guardrails the LLM follows)
- Top level: root, components, dependencies (all required).
- Types: normal | logical | latch.
- Normals: either prob or (mttf and mttr), not both. Time unit = hours.
- Latches: either prob or (mttf and max_delay); if mttf is set, max_delay is required; no mttr.
- Distributions (experimental): mttf_dist, mttr_dist in {exp, norm, lognorm} (delta allowed for mttr_dist); norm/lognorm require sigma.
- Dependencies: exactly one of do | and | or | n_of_k per target.
- n_of_k shape:

```yaml
target:
  n_of_k:
    n: <int>
    inputs: [a, b, c]
```
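Putting the rules together, a minimal model that satisfies this contract might look like the sketch below. The dependency shapes follow the contract above; the per‑component field layout is assumed for illustration only.

```yaml
root: service

components:
  service:
    type: logical
  node_a:
    type: normal
    mttf: 8760            # hours; either prob or (mttf and mttr), never both
    mttr: 8
    mttf_dist: exp        # experimental; norm/lognorm would also require sigma
  node_b:
    type: normal
    prob: 0.999
  node_c:
    type: normal
    prob: 0.999

dependencies:
  service:
    n_of_k:               # exactly one dependency type per target
      n: 2
      inputs: [node_a, node_b, node_c]
```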
Best practices
Generation tips
- Start simple → core tiers first; add details iteratively.
- Name risks clearly when using prob (e.g., not_natural_disaster).
- Prefer an or dependency for instant redundancy and a latch for time‑constrained failover (see the sketch below).
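A rough sketch of the two patterns side by side, using the latch values from template A. This is one plausible wiring with an assumed field layout, not the only valid pattern; adapt it to your model.

```yaml
components:
  db_failover:
    type: latch          # the failover mechanism itself, modeled as a latch
    mttf: 0.5            # values as in template A (hours)
    max_delay: 1         # the standby only helps if failover completes within 1 h
  # normal components for cache_1, cache_2, db_primary, db_standby omitted for brevity

dependencies:
  # instant redundancy: either cache node keeps the tier up, no switchover time
  cache_tier:
    or: [cache_1, cache_2]
  # time-constrained failover: the standby path also requires the latch
  db_tier:
    or: [db_primary, db_standby_path]
  db_standby_path:
    and: [db_standby, db_failover]
```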
Validation steps
- Run Validate (Components → Validate).
- Check units (hours), latch max_delay, and distribution sigma.
- Use Sensitivity Explorer to spot logic bugs.
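The checks above map to component fields along these lines (a sketch with an assumed field layout; consult the Components reference for where sigma actually belongs):

```yaml
components:
  failover:
    type: latch
    mttf: 0.5
    max_delay: 1        # required whenever a latch specifies mttf; latches have no mttr
  disk:
    type: normal
    mttf: 43800         # hours, not days or years
    mttr: 6
    mttr_dist: lognorm
    sigma: 0.5          # norm/lognorm distributions require sigma
```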
Limitations
- Very long prompts may exceed the model's context window and be truncated.
- Generated parameters are estimates; adjust to your environment.
- The LLM may not know proprietary or niche tech; provide specifics.
Pro tips
- Iterate: ask the AI for a minimal skeleton, then refine.
- Constrain output: ask for “valid YAML only”.
- Patch workflows: request only the changed components/dependencies.
- Review diffs before running analysis.
Common issues & solutions
Model generation
- Too simple / missing pieces → give a bullet list of required components and relationships.
- Unrealistic values → specify targets (e.g., target_availability: 0.9999) or concrete MTTF/MTTR.
- Wrong dependencies → say “requires”, “active‑active”, “quorum 2‑of‑3”, “time‑constrained failover via latch”.
Generation failures
- Errors / empty output → simplify prompt; remove exotic symbols; switch model (mini ↔ advanced).
- Slow / timeouts → break a large prompt into smaller steps.
Explanation quality
- Too generic → provide the YAML and ask targeted questions (“Which 3 nodes drive unavailability?”).
Privacy & safety
- Don’t include secrets in prompts (passwords, private keys, tenant IDs).
- Review generated content before sharing externally.
- Follow your organization’s data‑handling policies.
Next steps
- → Model Management — access the AI Prompt tab
- → Components — types and parameters
- → Dependencies — modeling logic
- → Run analysis — analyze AI‑generated models