Execution Phases of Use Case Tool

Detailed breakdown of the execution phases in the Qubitz AI multi-agent research pipeline.

Execution Phases

Phase 1: Initialization (0:00 - 0:15)

The system establishes connectivity and plans the research strategy.

[SYSTEM]               → Connecting to use case tool...
[RESEARCHER: PLANNING] → Planning the research strategy and subtasks...
[SYSTEM]               → Establishing secure connection...

Phase 2: Initial Research (0:15 - 2:00)

The research agent (AGENT: RESEARCH) begins broad research based on the submitted business context. It crawls the company website, analyzes publicly available information, and gathers market intelligence. The Connection Log shows elapsed-time counters (10s to 120s) as the research progresses.

[AGENT: RESEARCH] → Conducting initial research... (10s elapsed)
[STATUS]          → Research in progress...
[AGENT: RESEARCH] → Conducting initial research... (120s elapsed)

Phase 3: Subtopic Research (2:00 - 4:00)

The research agent decomposes the broad analysis into specific subtopics and investigates each in parallel. The Connection Log displays a raw JSON-formatted agent message with the research agent's identifier, followed by messages for each subtopic.

Typical subtopics observed for a cloud services company:

  • Company's Market Positioning in the cloud/AI ecosystem
  • Priority AI Automation Use Cases with quantified ROI potential
  • Platform Enhancement Opportunities with competitive differentiation
  • Implementation Framework and Guidelines for enterprise adoption
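The raw JSON-formatted agent message mentioned above might look like the following. This is a sketch only: the field names (`agent_id`, `type`, `subtopics`) are assumptions, since the actual message schema is not documented here.

```python
import json

# Hypothetical shape of the raw JSON agent message shown in the
# Connection Log -- field names are illustrative, not the actual schema.
message = {
    "agent_id": "research-agent-01",  # research agent's identifier (assumed field name)
    "type": "subtopic_plan",
    "subtopics": [
        "Company's Market Positioning in the cloud/AI ecosystem",
        "Priority AI Automation Use Cases with quantified ROI potential",
        "Platform Enhancement Opportunities with competitive differentiation",
        "Implementation Framework and Guidelines for enterprise adoption",
    ],
}

print(json.dumps(message, indent=2))
```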

Phase 4: Report Writing (4:00 - 6:30)

Two specialized writer agents work in parallel to generate comprehensive reports from the research data.

Steps:

  1. Research data is processed and structured
  2. Two writer agents activate simultaneously:
    • AGENT: STRUCTURED_WRITER -- Generates AI Discovery Report
    • AGENT: WRITER -- Composes Deep Research Report
  3. Each agent produces distinct content types with specific formatting
  4. Elapsed time counters track writing progress (12s to 96s)
  5. Character counts are tracked internally
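The parallel-writer flow above can be sketched with standard Python concurrency. The two writer functions are placeholders for the LLM-driven agents, and the internal character-count tracking (step 5) is modeled as a simple dictionary; none of this reflects the pipeline's actual API.

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder writer functions standing in for the two writer agents;
# the real agents generate full reports from the research data.
def structured_writer(research_data: str) -> str:
    return "AI Discovery Report\n" + research_data  # structured template output

def narrative_writer(research_data: str) -> str:
    return "Deep Research Report\n" + research_data  # narrative output

research_data = "...research findings..."

# Step 2: both writer agents activate simultaneously.
with ThreadPoolExecutor(max_workers=2) as pool:
    structured_future = pool.submit(structured_writer, research_data)
    narrative_future = pool.submit(narrative_writer, research_data)
    structured_report = structured_future.result()
    narrative_report = narrative_future.result()

# Step 5: character counts are tracked internally.
counts = {"structured": len(structured_report), "standard": len(narrative_report)}
```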

AGENT: STRUCTURED_WRITER Output

  • Format: Structured template with sections, tables, matrices
  • Content: Decision matrices, architecture diagrams, scoring tables, roadmap phases
  • Length: ~24,087 characters
  • Elapsed Time: 12s to 96s
[AGENT: STRUCTURED_WRITER] → Generating AI Discovery Report... (12s)
[AGENT: STRUCTURED_WRITER] → Writing decision matrices and scoring tables... (48s)
[AGENT: STRUCTURED_WRITER] → Finalizing structured report sections... (96s)

AGENT: WRITER Output

  • Format: Narrative document with sections, citations, data tables
  • Content: Competitive positioning, industry opportunities, platform strategy, strategic roadmap
  • Length: ~55,867 characters
  • Elapsed Time: ~50s
[AGENT: WRITER] → Composing Deep Research Report... (15s)
[AGENT: WRITER] → Writing research narrative and analysis... (35s)
[AGENT: WRITER] → Finalizing report sections and citations... (50s)

Phase 5: Publishing & Review (6:30 - 7:00)

The dual publisher agent publishes both reports and confirms completion with exact character counts.

Steps:

  1. AGENT: DUAL_PUBLISHER receives completed reports from both writers
  2. Reports are validated for completeness and format
  3. Exact character counts are calculated and logged
  4. Reports are published to the artifact system
  5. System transitions to use case generation mode
[AGENT: DUAL_PUBLISHER] → Publishing AI Discovery Report...
[AGENT: DUAL_PUBLISHER] → Publishing Deep Research Report...
[AGENT: DUAL_PUBLISHER] → Reports published (standard: 55,867 chars, structured: 24,087 chars)
[STATUS]                → Multi-agent research completed — preparing for use case generation
[STEP: REVIEW]          → Initiating use case generation from research data...
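Steps 2 and 3 above (validation and exact character counts) can be sketched as follows. The function name and validation rule are illustrative, not the actual pipeline API; only the log-line format mirrors the transcript.

```python
# Minimal sketch of the publish step: validate both reports, then log
# the exact character counts in the format shown in the Connection Log.
def publish_reports(standard_report: str, structured_report: str) -> str:
    for name, report in [("standard", standard_report), ("structured", structured_report)]:
        if not report.strip():
            raise ValueError(f"{name} report is empty -- refusing to publish")
    return (
        f"[AGENT: DUAL_PUBLISHER] → Reports published "
        f"(standard: {len(standard_report):,} chars, "
        f"structured: {len(structured_report):,} chars)"
    )

# Dummy payloads sized to match the counts reported in this run.
line = publish_reports("n" * 55867, "s" * 24087)
```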

Phase 6: Use Case Generation (7:00 - 8:00)

The system streams real-time use case generation based on the completed research.

Steps:

  1. Research data is processed for use case extraction
  2. AI identifies priority automation opportunities
  3. Each use case is generated with detailed metadata
  4. ROI calculations are performed for each use case
  5. Alignment scores are calculated based on business goals
  6. Use cases are prioritized by impact and feasibility
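Step 6 (prioritization by impact and feasibility) might be implemented roughly as below. The weighting scheme and the numeric complexity scale are assumptions; the actual scoring model is internal to the pipeline.

```python
# Illustrative priority ordering: higher business value first, then higher
# alignment score, then lower technical complexity (more feasible).
IMPACT_RANK = {"High": 3, "Medium": 2, "Low": 1}

def priority_key(use_case: dict) -> tuple:
    impact = IMPACT_RANK[use_case["business_value"]]
    feasibility = -use_case["technical_complexity"]  # assumed numeric complexity scale
    return (impact, use_case["alignment_score"], feasibility)

use_cases = [
    {"title": "A", "business_value": "Medium", "alignment_score": 90, "technical_complexity": 2},
    {"title": "B", "business_value": "High", "alignment_score": 75, "technical_complexity": 3},
    {"title": "C", "business_value": "High", "alignment_score": 85, "technical_complexity": 1},
]
ranked = sorted(use_cases, key=priority_key, reverse=True)
```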

Real-Time Display

  • Counter Badge: Increments live ("3 use cases generated" to "6 use cases generated" to "12 use cases generated")
  • JSON Preview: Displays each use case as it's created
  • "View Analysis..." Button: Appears when generation completes

Each Use Case Includes

  Field                      Description
  Title                      Descriptive name (e.g., "Automated Security Threat Detection")
  Description                Detailed explanation of the use case
  estimated_roi              Quantified ROI percentage or dollar value
  alignment_score            0-100 score based on strategic fit
  implementation_timeline    Estimated months to deploy
  business_value             High / Medium / Low classification
  technical_complexity       Complexity rating
  required_capabilities      List of needed technologies and skills
[STATUS] → Generating use cases... (3 generated)
[STATUS] → Generating use cases... (6 generated)
[STATUS] → Generating use cases... (9 generated)
[STATUS] → Generating use cases... (12 generated)
[STATUS] → Use case generation complete

Typical output: 10-12 use cases with full metadata.
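A single generated use case might look like the record below. The field names follow the table above; every value is hypothetical and shown only to illustrate the shape of the output.

```python
# Illustrative use case record -- values are hypothetical examples.
use_case = {
    "title": "Automated Security Threat Detection",
    "description": "Continuously monitor cloud workloads and flag anomalous activity.",
    "estimated_roi": "35% reduction in incident response cost",  # assumed value
    "alignment_score": 88,           # 0-100 score based on strategic fit
    "implementation_timeline": 6,    # estimated months to deploy
    "business_value": "High",        # High / Medium / Low
    "technical_complexity": "Medium",
    "required_capabilities": ["SIEM integration", "anomaly detection models"],
}
```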

Phase 7: Results Dashboard

The analysis modal closes and the Results Dashboard loads with the completed project.

Steps:

  1. Analysis modal transitions to success state
  2. Green checkmark icon appears
  3. Status updates: "Saving Your Project..." to "Project Saved"
  4. Auto-generated Project ID is created and displayed
  5. Project ID becomes editable in input field
  6. Four report tabs appear below success indicator
  7. Use case grid populates with all generated use cases

Success State Components

  Component       Details
  Success Icon    Green checkmark
  Status Text     "Saving Your Project..." to "Project Saved"
  Project ID      Auto-generated format: company-slug-numeric-id (e.g., cloud202-897)
  Edit Field      Editable input for renaming the project
  Save Button     "Saving..." spinner to "Saved" confirmation
  Helper Text     "You can add more use cases once the Project starts."
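The company-slug-numeric-id format (e.g., cloud202-897) could be generated as sketched below. The real generator is server-side; the slugification rules and the three-digit suffix are assumptions inferred from the example.

```python
import random
import re

# Sketch of the Project ID format: company slug + numeric suffix.
# Slug rules and suffix range are assumptions, not the actual algorithm.
def generate_project_id(company_name: str) -> str:
    slug = re.sub(r"[^a-z0-9]+", "-", company_name.lower()).strip("-")
    return f"{slug}-{random.randint(100, 999)}"

project_id = generate_project_id("Cloud202")
```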

Report Tabs

  Tab                      Icon               Action                         Destination
  Start again              Refresh (green)    Reset form for new analysis    Use Case Tool form (blank)
  AI Use Case Discovery    Document           Open AI Discovery Report       Artifact viewer with PDF + AI chat
  Deep Research            Document           Open Deep Research Report      Artifact viewer with PDF + AI chat
  Use Case List            Document           Open Use Case List             Artifact viewer with PDF + AI chat

Use Case Grid

  • Layout: Two-column card grid
  • Card Content: Title, truncated description, impact badge
  • Impact Badges:
    • High Impact (red badge)
    • Medium Impact (yellow badge)
    • Low Impact (blue badge, if applicable)
  • Interaction: Click any card to view full use case details
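The badge colors above suggest a direct mapping from a use case's business_value to its impact badge. A minimal sketch, assuming that one-to-one mapping:

```python
# Assumed one-to-one mapping from business_value to badge color,
# based on the badge list above.
BADGE_COLOR = {"High": "red", "Medium": "yellow", "Low": "blue"}

def impact_badge(use_case: dict) -> str:
    value = use_case["business_value"]
    return f"{value} Impact ({BADGE_COLOR[value]} badge)"
```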

Connection Log Format

Each log entry follows the pattern: [AGENT_TYPE: SUBTYPE] → Message... (elapsed time)
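That pattern can be captured with a regular expression. This grammar is inferred from the sample log lines in this document, not from a formal specification, so treat it as a sketch.

```python
import re

# Regex sketch of the Connection Log entry format, inferred from the
# sample lines in this document (subtype and elapsed time are optional).
LOG_LINE = re.compile(
    r"^\[(?P<agent>[A-Z_]+)(?::\s*(?P<subtype>[A-Z_]+))?\]\s*→\s*"
    r"(?P<message>.*?)(?:\s*\((?P<elapsed>\d+)s(?:\s+elapsed)?\))?$"
)

entry = LOG_LINE.match("[AGENT: RESEARCH] → Conducting initial research... (10s elapsed)")
```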

Color coding:

  Agent Type                           Color
  SYSTEM                               White
  RESEARCHER / AGENT: RESEARCH         Blue
  AGENT: STRUCTURED_WRITER / WRITER    Purple
  AGENT: DUAL_PUBLISHER                Green
  STEP: REVIEW                         Yellow
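In a terminal, the color scheme above maps naturally onto standard ANSI escape codes. This rendering is hypothetical; the document does not say how the Connection Log is actually displayed.

```python
# Hypothetical terminal rendering of the color scheme above,
# using standard ANSI SGR escape codes.
ANSI = {"White": "37", "Blue": "34", "Purple": "35", "Green": "32", "Yellow": "33"}

AGENT_COLOR = {
    "SYSTEM": "White",
    "RESEARCHER": "Blue",
    "AGENT: RESEARCH": "Blue",
    "AGENT: STRUCTURED_WRITER": "Purple",
    "AGENT: WRITER": "Purple",
    "AGENT: DUAL_PUBLISHER": "Green",
    "STEP: REVIEW": "Yellow",
}

def colorize(agent: str, message: str) -> str:
    color = AGENT_COLOR.get(agent, "White")  # default to White for unknown agents
    return f"\033[{ANSI[color]}m[{agent}] → {message}\033[0m"
```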

Error Handling & Recovery

If the pipeline encounters an error during execution:

  • The Connection Log displays the error message
  • The timer pauses
  • The Cancel button remains available to abort
  • If using "Run in Background," the project is saved with partial results where possible

To retry after an error, use the "Start again" tab on the Results Dashboard to re-submit the form.