Glossary
Key terms and definitions used across the Qubitz AI system.
| Term | Definition |
|---|---|
| Alignment Score | AI-calculated score (0-1) measuring how well a use case aligns with the company's stated strategic intent |
| App Deployer Agent | Agent that handles the end-to-end deployment pipeline -- provisions infrastructure, builds and pushes containers, and reports deployment status |
| Application Management | Post-deployment dashboard with tabs for Overview, Agents, App Config, Observability, FinOps, and Settings |
| Artifacts | Generated documents and deliverables stored in the project's Artifacts panel |
| Best Fit | Green badge auto-assigned to the use case with the highest combined rating and sentiment score |
| Connection Log | Real-time scrollable log in the analysis modal showing each agent's actions during pipeline execution |
| Control Hub | The workspace where generated architecture specs are refined, tested, and deployed -- accessed via the Design App button |
| Deep Analysis | The full multi-agent research pipeline triggered by the "Deep Analysis" button |
| Design App | Button on the Jamming Board that opens Control Hub for a selected use case |
| Design Doc Agent | Agent that generates a comprehensive High-Level Design (HLD) document for deployed applications -- covers 13 sections, including overview, agent system, tools, knowledge base, deployment, monitoring, security, performance, responsible AI, roadmap, and support, plus 4 appendices. Outputs PDF to S3 |
| DevGuide | Agent that auto-generates developer documentation for deployed projects |
| Dual Publisher | Agent that publishes both report formats (standard Deep Research + structured Discovery) |
| Eraser API | Third-party diagramming service used to render AWS architecture diagrams |
| Feature Engineering Agent | Agent in Control Hub that handles feature extraction and data ingestion planning for the generated architecture |
| FinOps | Financial operations tab in Application Management -- tracks cost and usage of deployed AI applications |
| Impact Badge | Color-coded classification (High Impact = red, Medium Impact = yellow) assigned to each generated use case |
| Jamming Board | Visual board displaying generated use cases as cards with ratings, impact badges, and alignment scores |
| Observability Agent | Agent in Control Hub that sets up monitoring, tracing, and alerting for deployed applications |
| Pipeline | The multi-agent orchestration system that processes business context through sequential phases |
| PPTX Agent | Agent that generates the Executive Presentation artifact -- a PowerPoint deck summarizing use case findings for stakeholders |
| Priority Score | Calculated score in the AI Discovery Report combining Business Value, Feasibility, Risk, and other dimensions |
| Project ID | Auto-generated identifier for each saved analysis project (format: company-slug-numeric-id) |
| Qubitz API Gateway | FastAPI-based WebSocket proxy that authenticates API key requests and routes them to the correct AgentCore runtime |
| SOW (Statement of Work) | Auto-generated PDF document with project scope, effort breakdown, and e-signature block |
| SOW Agent | Agent that auto-generates the Statement of Work PDF -- runs 11 parallel Bedrock calls covering scope, effort, delivery approach, and e-signature block |
| Structured Writer | Agent specialized in generating the templated AI Discovery Report with decision matrices and scoring |
| Testbed | Sandbox environment within Control Hub for testing generated architecture before deployment |
| UCT (Use Case Tool) | The main analysis tool where users input business context and receive AI-generated use cases |
| WAFR | Well-Architected Framework Review -- shorthand for the AWS compliance review process, automated in Qubitz by the WAFR AI Agent |
| WAFR AI Agent | Agent that runs an automated AWS Well-Architected Framework Review against a deployed application's infrastructure -- uses a two-pass AI inference system to map detected AWS services to best practices, then generates a compliance report uploaded to S3 |
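
The Project ID entry above documents the identifier shape (`company-slug-numeric-id`). As a hedged illustration only, the sketch below shows one plausible way such an identifier could be built; the function name, slugging rules, and in-memory counter are assumptions, not the actual Qubitz implementation.

```python
import itertools
import re

# Hypothetical counter standing in for a persistent ID sequence; the
# glossary does not specify where the numeric part actually comes from.
_next_id = itertools.count(1)

def make_project_id(company_name: str) -> str:
    """Build an ID in the documented shape: company-slug-numeric-id."""
    # Lowercase the name and collapse runs of non-alphanumerics into
    # single hyphens to form the company slug.
    slug = re.sub(r"[^a-z0-9]+", "-", company_name.lower()).strip("-")
    return f"{slug}-{next(_next_id)}"

print(make_project_id("Acme Corp"))  # e.g. "acme-corp-1"
```

The slug normalization here is a common convention (lowercase, hyphen-separated); the real system may slugify company names differently.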