AI Command Center - User Guide
Monitor and manage your AI Use Cases from a centralized dashboard. Track performance, control costs, and troubleshoot AI implementations in your Salesforce environment.
What you can do:
- Create and manage AI Use Cases
- Monitor execution logs and performance metrics
- Track costs and optimize spending
- Review chat conversations for conversational agents
Getting Started
Accessing the AI Command Center
Navigate to AI Command Center from the main menu. You'll see the dashboard with all your AI Use Cases displayed as cards. Each card shows key metrics like execution count, success rate, and total cost.
Understanding the Dashboard
The main dashboard displays:
- Summary Statistics: Totals for use cases, executions, and cost, plus the average success rate, shown at the top of the page
- Date Range Filter: Controls data visibility for specific time periods
- Search Box: Find specific use cases by name
- Sort Options: Order by name, executions, cost, or creation date
- Use Case Cards: Individual cards showing metrics for each AI implementation
Use Case Types
There are two types of AI implementations you can manage:
- SimpleT AI Component: Single-turn AI operations that process input and provide immediate output (e.g., translation, classification, content generation)
- Chat Agent: Multi-turn conversational AI that maintains context across multiple messages (e.g., customer support bots, virtual assistants)
Creating Use Cases
Step-by-Step Use Case Creation
Step 1: Click the Create Use Case button in the top-right corner of the dashboard.
Step 2: Fill out the creation form with these required details:
- Name: Enter a descriptive name that clearly identifies the purpose (e.g., “Customer Support Email Classifier”, “Sales Lead Chat Assistant”)
- Description: Provide a detailed explanation of what this use case accomplishes
- Type: Select either “SimpleT AI Component” for single operations or “Chat Agent” for conversations
- Status: Choose “Active” to enable immediate execution or “Inactive” for testing
Step 3: Configure tracking connections:
- Use Case API Names: Add API endpoint identifiers that will be associated with this use case. When these APIs are called, execution logs will automatically attach to this use case for tracking.
- Active Prompt Builders: Select which prompt templates from the Prompt Builder will be connected. Executions using these prompts will be tracked under this use case.
Step 4: Set cost parameters (optional):
- Fixed Cost: Enter any operational costs beyond token usage (e.g., infrastructure, licensing)
- Cost per Execution: Set a baseline cost if not using token-based pricing
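If you set both parameters, they combine additively with token-based spend. As a rough illustration, here is a minimal Python sketch of that arithmetic (the function name, parameter names, and the formula itself are assumptions for illustration, not the product's documented calculation):

```python
def estimated_total_cost(executions: int,
                         token_cost_total: float,
                         fixed_cost: float = 0.0,
                         cost_per_execution: float = 0.0) -> float:
    """Combine token-based spend with the two optional cost parameters.

    fixed_cost         -- operational costs beyond token usage (infrastructure, licensing)
    cost_per_execution -- baseline charge applied to every execution
    """
    return fixed_cost + executions * cost_per_execution + token_cost_total

# Example: 500 executions, $12.40 of token usage, $30 licensing,
# $0.01 baseline per execution.
print(estimated_total_cost(500, 12.40, fixed_cost=30.0, cost_per_execution=0.01))
# -> 47.4
```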
How Execution Logs Connect
Once a use case is created, execution logs automatically attach to it based on:
- API Name Matching: When any of your specified Use Case API Names are called in the system, those execution logs are linked to this use case
- Prompt Builder Connections: When any of your connected Active Prompt Builders are executed, those logs are automatically tracked
- Real-time Tracking: New executions appear immediately in your use case metrics and detailed logs
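Conceptually, attachment works like a lookup from each incoming log's identifiers to the use case that registered them. The sketch below models that with plain Python dictionaries; the field names (api_name, prompt_builder_id, and so on) are illustrative assumptions, not the platform's actual schema:

```python
# Hypothetical use case configuration -- identifiers registered at creation time.
use_case = {
    "name": "Customer Support Email Classifier",
    "api_names": {"classify_email", "classify_email_v2"},
    "prompt_builder_ids": {"pb-0042"},
}

def attaches_to(log: dict, use_case: dict) -> bool:
    """A log links to the use case if it matches a registered API name
    or a connected Active Prompt Builder."""
    return (log.get("api_name") in use_case["api_names"]
            or log.get("prompt_builder_id") in use_case["prompt_builder_ids"])

log = {"api_name": "classify_email", "status": "success", "tokens": 812}
print(attaches_to(log, use_case))  # -> True: tracked under this use case
```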
Updating Use Cases
Modifying Existing Use Cases
- Click anywhere on a use case card to open the detail view
- Click the “Edit Use Case” button in the top-right
- Access the full editing form with all configuration options
- Save your changes and return to the dashboard
What You Can Update
- Basic Information: Name, description, and status
- API Connections: Add or remove Use Case API Names for tracking
- Prompt Builder Links: Connect or disconnect Active Prompt Builders
- Cost Settings: Modify fixed costs and operational parameters
- Activation Status: Enable or disable the use case
Impact of Updates
When you update a use case:
- Historical Data: Past execution logs remain unchanged
- Future Tracking: New API Names and Prompt Builders take effect immediately
- Metrics Recalculation: Dashboard metrics update in real-time
- Log Association: New executions will be tracked based on updated connections
Execution Logs Analysis
Accessing Execution Logs
Navigate to the Execution Logs tab to view comprehensive execution data. This is your primary tool for debugging, performance monitoring, and understanding AI behavior patterns.
Filtering and Search Capabilities
The execution logs interface provides powerful filtering options:
- Date Range Filter: Select specific time periods (last 24 hours, 7 days, 30 days, or custom range)
- Status Filter: View only successful executions, failures, or pending operations
- Use Case Filter: Focus on specific use cases or view all executions
- User Filter: Filter by Salesforce user ID or username to see individual user patterns
- Organization Filter: View executions by Salesforce organization (useful for multi-org setups)
- AI Model Filter: Filter by specific AI models (GPT-4, Claude, etc.)
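These filters compose: a log appears in the results only if it satisfies every filter you have set. A minimal sketch of that logic over an in-memory list of logs (the field names are assumed for illustration):

```python
from datetime import datetime, timedelta

now = datetime.now()
all_logs = [
    {"timestamp": now - timedelta(hours=3),  "status": "failure",
     "use_case": "Email Classifier", "model": "GPT-4"},
    {"timestamp": now - timedelta(days=10), "status": "success",
     "use_case": "Email Classifier", "model": "Claude"},
]

def filter_logs(logs, since=None, status=None, use_case=None, model=None):
    """Return logs matching every filter that is set (None means 'any')."""
    return [log for log in logs
            if (since is None or log["timestamp"] >= since)
            and (status is None or log["status"] == status)
            and (use_case is None or log["use_case"] == use_case)
            and (model is None or log["model"] == model)]

# Failures from the last 7 days for one use case -> only the first log matches.
print(filter_logs(all_logs, since=now - timedelta(days=7),
                  status="failure", use_case="Email Classifier"))
```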
Chat Sessions Management
Understanding Chat Sessions
For Chat Agent use cases, the system tracks complete conversation threads. Navigate to Chat Sessions to analyze multi-turn conversations and user interactions.
Session Organization
Chat sessions are organized hierarchically:
- Session Groups: Conversations grouped by user and time period
- Individual Sessions: Complete conversation threads with start and end times
- Message Threads: Individual messages within each session
- Context Preservation: How context carries forward between messages
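One way to picture the hierarchy is as nested records, with each session owning its ordered thread of messages. The structure below is purely illustrative; it is not the platform's actual schema:

```python
# Illustrative shape of a chat session record -- an assumption, not the real schema.
session = {
    "session_id": "sess-001",
    "user": "alice@example.com",
    "started_at": "2024-05-02T09:14:00Z",
    "ended_at":   "2024-05-02T09:21:30Z",
    "messages": [  # ordered thread; earlier turns provide context for later ones
        {"role": "user",      "text": "Where is my order?"},
        {"role": "assistant", "text": "Could you share the order number?"},
        {"role": "user",      "text": "It's 48213."},
    ],
}
```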
Session Details and Analytics
Each chat session provides comprehensive information:
Session Overview:
- Duration: Total conversation length from start to finish
- Message Count: Number of exchanges between user and AI
- Completion Status: Whether the conversation reached a natural conclusion
- User Satisfaction: Feedback provided at the end of the session
Conversation Flow Analysis:
- Turn-by-Turn Breakdown: Each user message and AI response pair
- Response Quality: Relevance and helpfulness of each AI response
- Context Maintenance: How well the AI maintained conversation context
- Topic Progression: How the conversation evolved over time
Performance Metrics per Session:
- Average Response Time: How quickly the AI responded to each message
- Total Token Usage: Combined tokens across all messages in the session
- Cost per Session: Complete cost for the entire conversation
- Resolution Rate: Whether user questions were successfully answered
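These per-session figures are straightforward aggregates over the session's messages. A sketch of the arithmetic, assuming each message record carries a response time, token count, and cost (illustrative field names):

```python
session_messages = [
    {"response_time_s": 1.2, "tokens": 340, "cost": 0.0068},
    {"response_time_s": 0.9, "tokens": 280, "cost": 0.0056},
    {"response_time_s": 1.5, "tokens": 410, "cost": 0.0082},
]

message_count     = len(session_messages)
avg_response_time = sum(m["response_time_s"] for m in session_messages) / message_count
total_tokens      = sum(m["tokens"] for m in session_messages)
session_cost      = sum(m["cost"] for m in session_messages)

print(message_count, round(avg_response_time, 2), total_tokens, round(session_cost, 4))
# -> 3 1.2 1030 0.0206
```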
Performance Metrics & Cost Tracking
Comprehensive Performance Dashboard
The performance overview provides real-time insights into your AI operations with detailed metrics and trends.
Success Rate Analysis
Monitor AI performance with detailed success metrics:
- Overall Success Rate: Percentage of successful executions across all use cases
- Success Rate by Use Case: Individual performance for each AI implementation
- Success Rate Trends: Performance changes over time periods
- Error Rate Analysis: Common failure patterns and their frequencies
- Model Comparison: Success rates across different AI models
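At bottom, the overall and per-use-case success rates are simple ratios over execution logs, expressed as percentages. A sketch under the same assumed log schema as earlier:

```python
from collections import defaultdict

def success_rates(logs):
    """Return (overall_rate, per_use_case_rates) as percentages."""
    totals, successes = defaultdict(int), defaultdict(int)
    for log in logs:
        totals[log["use_case"]] += 1
        successes[log["use_case"]] += log["status"] == "success"
    per_use_case = {uc: 100.0 * successes[uc] / totals[uc] for uc in totals}
    overall = 100.0 * sum(successes.values()) / sum(totals.values())
    return overall, per_use_case

logs = [{"use_case": "Classifier", "status": "success"},
        {"use_case": "Classifier", "status": "failure"},
        {"use_case": "Chat Assistant", "status": "success"}]
print(success_rates(logs))
# -> (66.66..., {'Classifier': 50.0, 'Chat Assistant': 100.0})
```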
Response Time Monitoring
- Average Response Times: Mean processing time across all executions
- Response Time Distribution: Histogram showing response time patterns
- Peak Performance Times: When the system performs fastest/slowest
- Timeout Analysis: Executions that failed due to timeouts
- Model Performance Comparison: Response times by AI model type
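Averages can hide slow outliers, which is why the distribution and timeout views matter. A small sketch using Python's statistics module shows how a single slow execution pulls the mean up only modestly but sets the tail entirely (the timings are made up):

```python
import statistics

response_times = [0.8, 1.1, 0.9, 1.3, 6.2, 1.0, 0.7, 1.2]  # seconds per execution

mean = statistics.mean(response_times)
p95  = statistics.quantiles(response_times, n=20)[-1]  # 95th-percentile cut point

print(f"mean={mean:.2f}s  p95={p95:.2f}s")
# The single 6.2 s outlier raises the mean somewhat but dominates the p95,
# so timeout analysis should look at the tail, not the average.
```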
Detailed Cost Tracking
Comprehensive cost analysis helps optimize spending and budget planning:
Cost Breakdown Components:
- Token-based Costs: Variable costs based on input/output token usage
- Fixed Operational Costs: Predetermined costs per execution or use case
- Model-specific Pricing: Different rates for GPT-4, Claude, and other models
- Peak Usage Charges: Additional costs during high-demand periods
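Token-based cost is typically computed from separate input and output rates per model. The sketch below shows that arithmetic; the model names and per-1K-token rates are placeholders, not real prices for any provider:

```python
# Placeholder per-1K-token rates -- NOT real prices for any provider or model.
RATES = {
    "model-a": {"input": 0.0030, "output": 0.0060},
    "model-b": {"input": 0.0008, "output": 0.0024},
}

def token_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Variable cost of one execution from its input/output token counts."""
    r = RATES[model]
    return (input_tokens / 1000) * r["input"] + (output_tokens / 1000) * r["output"]

# One execution: 1,200 prompt tokens in, 400 completion tokens out.
print(round(token_cost("model-a", 1200, 400), 4))  # -> 0.006
```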
Cost Analysis Views:
- Total Costs: Complete spending across all use cases and time periods
- Cost per Use Case: Individual spending for each AI implementation
- Cost per User: Spending patterns by Salesforce user
- Cost per Execution: Average cost efficiency across different operations
- Daily/Monthly Trends: Spending patterns over time for budget planning
Cost Optimization Insights:
- Most Expensive Use Cases: Which implementations consume the most budget
- Token Efficiency: Cost per token across different prompt designs
- Model Cost Comparison: Relative costs of different AI models for similar tasks
- Usage Optimization: Recommendations for reducing costs while maintaining quality
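“Most Expensive Use Cases” is, in effect, a descending sort on total spend per use case. A one-liner sketch over assumed per-use-case totals (the names and figures are illustrative):

```python
# Assumed totals per use case -- illustrative figures only.
use_case_costs = {"Email Classifier": 47.40, "Chat Assistant": 128.15, "Translator": 9.80}

for name, cost in sorted(use_case_costs.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: ${cost:.2f}")
# Chat Assistant: $128.15
# Email Classifier: $47.40
# Translator: $9.80
```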
User Feedback Analytics
- Feedback Distribution: Ratio of positive to negative feedback
- Feedback Trends: User satisfaction changes over time
- Common Complaints: Most frequent negative feedback themes
- Improvement Areas: Specific use cases needing attention
- User Engagement: How often users provide feedback
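The feedback distribution reduces to counting labels. A sketch over an assumed list of thumbs-up/thumbs-down style entries (the label values are illustrative):

```python
feedback = ["positive", "positive", "negative", "positive"]

positive = feedback.count("positive")
negative = feedback.count("negative")
print(f"{positive}:{negative} positive-to-negative "
      f"({100 * positive / len(feedback):.0f}% positive)")
# -> 3:1 positive-to-negative (75% positive)
```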
Getting Help
- Technical issues: Contact your system administrator
- AI performance: Work with the Prompt Builder team
- Usage questions: Contact your team lead or trainer