RAG Answer Generation
Embed query, retrieve context, generate answer, validate grounding.
5 nodes · 5 edges

Embed User Query (api)
Convert natural language query to vector embedding via embedding model.
  → sequential → Vector Search

Vector Search (db)
Search vector store for top-k relevant document chunks.
  → sequential → Generate Answer

Generate Answer (agent)
Produce answer grounded in retrieved context passages.
  → sequential → Hallucination Check

Hallucination Check (agent)
Verify every claim in the answer traces back to a retrieved chunk.
  → conditional → Return Response
  → fallback → Generate Answer (regenerate with stricter prompt)

Return Response (api)
Deliver validated answer to user with source citations.
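The five nodes and the conditional/fallback edges can be sketched as a short driver loop. This is a minimal illustration, not an OSOP runtime: the embedding model, vector store, and LLM calls are stubbed out, and every function name here is hypothetical.

```python
# Hypothetical sketch of the flow above; real deployments would call an
# embedding API, a vector database, and an LLM in place of these stubs.

def embed_query(query: str) -> list[float]:
    # Stand-in for an embedding-model call (node: Embed User Query).
    return [float(ord(c) % 7) for c in query]

def vector_search(embedding: list[float], store: dict[str, str], k: int = 2) -> list[str]:
    # Stand-in top-k retrieval; a real store ranks by vector similarity.
    return list(store.values())[:k]

def generate_answer(query: str, chunks: list[str], strict: bool = False) -> str:
    # Stand-in LLM call; `strict` mimics "regenerate with stricter prompt".
    return " ".join(chunks[:1]) if strict else " ".join(chunks)

def is_grounded(answer: str, chunks: list[str]) -> bool:
    # Naive grounding check: every sentence must appear in some chunk.
    return all(any(s in c for c in chunks) for s in answer.split(". ") if s)

def run_pipeline(query: str, store: dict[str, str], max_retries: int = 1) -> str:
    emb = embed_query(query)                 # Embed User Query
    chunks = vector_search(emb, store)       # Vector Search
    answer = generate_answer(query, chunks)  # Generate Answer
    for _ in range(max_retries):             # fallback edge: validate -> generate
        if is_grounded(answer, chunks):      # Hallucination Check
            break
        answer = generate_answer(query, chunks, strict=True)
    return answer                            # Return Response

store = {"doc1": "RAG grounds answers in retrieved text",
         "doc2": "Embeddings map text to vectors"}
print(run_pipeline("What is RAG?", store))
# → RAG grounds answers in retrieved text
```

Note how the fallback edge maps to the retry loop: a failed grounding check routes control back to generation rather than forward to delivery, exactly as the `validate → generate` edge specifies.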
uc-rag-pipeline.osop.yaml
osop_version: "1.0"
id: "rag-pipeline"
name: "RAG Answer Generation"
description: "Embed query, retrieve context, generate answer, validate grounding."
nodes:
  - id: "embed_query"
    type: "api"
    name: "Embed User Query"
    description: "Convert natural language query to vector embedding via embedding model."
  - id: "retrieve"
    type: "db"
    name: "Vector Search"
    description: "Search vector store for top-k relevant document chunks."
    timeout_sec: 5
  - id: "generate"
    type: "agent"
    subtype: "llm"
    name: "Generate Answer"
    description: "Produce answer grounded in retrieved context passages."
    security:
      risk_level: "low"
  - id: "validate"
    type: "agent"
    subtype: "llm"
    name: "Hallucination Check"
    description: "Verify every claim in the answer traces back to a retrieved chunk."
  - id: "deliver"
    type: "api"
    name: "Return Response"
    description: "Deliver validated answer to user with source citations."
edges:
  - from: "embed_query"
    to: "retrieve"
    mode: "sequential"
  - from: "retrieve"
    to: "generate"
    mode: "sequential"
  - from: "generate"
    to: "validate"
    mode: "sequential"
  - from: "validate"
    to: "deliver"
    mode: "conditional"
    when: "validation.grounded == true"
  - from: "validate"
    to: "generate"
    mode: "fallback"
    label: "Regenerate with stricter prompt"
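A spec like this is easy to lint before execution: every edge endpoint should name a declared node, and every conditional edge should carry a `when` expression. The sketch below assumes the YAML has already been parsed into a Python dict (as `yaml.safe_load` would produce); the `lint` function itself is a hypothetical helper, not part of any OSOP tooling.

```python
# Hypothetical consistency check over a parsed spec. The dict literal
# mirrors the node/edge structure of the YAML above (trimmed to the
# fields the check inspects).

spec = {
    "nodes": [
        {"id": "embed_query", "type": "api"},
        {"id": "retrieve", "type": "db"},
        {"id": "generate", "type": "agent"},
        {"id": "validate", "type": "agent"},
        {"id": "deliver", "type": "api"},
    ],
    "edges": [
        {"from": "embed_query", "to": "retrieve", "mode": "sequential"},
        {"from": "retrieve", "to": "generate", "mode": "sequential"},
        {"from": "generate", "to": "validate", "mode": "sequential"},
        {"from": "validate", "to": "deliver", "mode": "conditional",
         "when": "validation.grounded == true"},
        {"from": "validate", "to": "generate", "mode": "fallback"},
    ],
}

def lint(spec: dict) -> list[str]:
    # Collect declared node ids, then flag dangling edge endpoints
    # and conditional edges that lack a guard expression.
    ids = {n["id"] for n in spec["nodes"]}
    errors = []
    for e in spec["edges"]:
        for end in ("from", "to"):
            if e[end] not in ids:
                errors.append(f"edge references unknown node {e[end]!r}")
        if e["mode"] == "conditional" and "when" not in e:
            errors.append(f"conditional edge {e['from']}->{e['to']} missing 'when'")
    return errors

print(lint(spec))
# → []
```

An empty result means the graph is internally consistent; a renamed node or a guard-less conditional edge would surface here before the pipeline ever runs.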