
Corporate AI Platform Theme API

The Corporate AI Platform theme provides RESTful API endpoints for real-time AI system monitoring and analytics integration.

Overview

This theme extends the LLM Platform Manager with corporate-specific endpoints that provide:

  • Real-time AI health monitoring via /api/llm/health
  • Granular service status via /admin/llm/status/{type}
  • Live performance metrics via /admin/llm/metrics/live
  • Usage analytics collection via /llm-ui/analytics
  • MCP server monitoring via /admin/mcp/server/count

Authentication

All endpoints use Drupal's built-in authentication:

  • Session-based: For web interface (recommended)
  • JWT tokens: For programmatic access (see the request sketch after this list)
  • API keys: For service-to-service communication
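
For programmatic access, the same Authorization header shown in the curl examples under Development Testing can be used from JavaScript. A minimal sketch, assuming a Bearer scheme and a token obtained elsewhere (token issuance is outside the scope of this page):

// Minimal authenticated request using a JWT bearer token. Token issuance is
// outside the scope of this page; YOUR_JWT_TOKEN is a placeholder.
async function fetchWithJwt(endpoint, token) {
  const response = await fetch(endpoint, {
    headers: { 'Authorization': `Bearer ${token}` }
  });
  if (!response.ok) {
    throw new Error(`Request failed with HTTP ${response.status}`);
  }
  return response.json();
}

// fetchWithJwt('/admin/llm/metrics/live', 'YOUR_JWT_TOKEN').then(console.log);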

Required Permissions

Endpoint                | Permission Required
------------------------|--------------------
/api/llm/health         | administer llm
/admin/llm/status/*     | administer llm
/admin/llm/metrics/live | administer llm
/llm-ui/analytics       | access content
/admin/mcp/server/count | administer llm

Endpoints

GET /api/llm/health

Returns comprehensive health status for all AI services.

Response Example:

{
  "overallScore": 96,
  "systemStatus": {
    "level": "healthy",
    "message": "All Systems Operational"
  },
  "services": [
    {
      "name": "Ollama",
      "status": "healthy",
      "statusText": "Online",
      "responseTime": 45
    },
    {
      "name": "OpenAI",
      "status": "healthy",
      "statusText": "Connected",
      "responseTime": 120
    }
  ],
  "timestamp": "2025-01-20T15:30:00Z"
}
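
The services array in this response can be inspected client-side to surface individual providers. A brief sketch using only the field names shown above:

// List any services that are not reporting healthy, using the response shape
// documented above.
async function findUnhealthyServices() {
  const response = await fetch('/api/llm/health');
  const data = await response.json();
  return data.services
    .filter(service => service.status !== 'healthy')
    .map(service => `${service.name}: ${service.statusText}`);
}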

GET /admin/llm/status/{status_type}

Returns detailed status for a specific service type.

Parameters:

  • status_type: One of general, ollama, openai, anthropic, vector_db, system

Response Example:

{
  "name": "Ollama",
  "status": "healthy",
  "statusText": "Online",
  "responseTime": 45,
  "lastCheck": "2025-01-20T15:30:00Z",
  "metadata": {
    "version": "0.1.17",
    "models_loaded": 3,
    "memory_usage": "2.1GB"
  }
}
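
A short sketch that requests a single service type and reads the documented metadata fields. It assumes an authenticated session with the administer llm permission, per the permissions table above:

// Fetch detailed status for a single service type (here: ollama) and read the
// documented metadata fields. Requires the administer llm permission.
async function fetchServiceStatus(statusType) {
  const response = await fetch(`/admin/llm/status/${statusType}`);
  if (!response.ok) {
    throw new Error(`Status check failed with HTTP ${response.status}`);
  }
  const status = await response.json();
  console.log(`${status.name} is ${status.statusText} (${status.responseTime} ms)`);
  console.log(`Models loaded: ${status.metadata.models_loaded}`);
  return status;
}

// fetchServiceStatus('ollama');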

GET /admin/llm/metrics/live

Returns real-time performance metrics.

Response Example:

{
  "totalRequests": 1247,
  "totalTokens": 892453,
  "averageResponseTime": 1.2,
  "modelsActive": 3,
  "uptime": "7d 12h 34m",
  "timestamp": "2025-01-20T15:30:00Z"
}
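
These metrics lend themselves to periodic polling. A sketch that refreshes on an interval and hands each payload to a render callback (the 60-second interval is an arbitrary choice, not a documented requirement):

// Poll live metrics on an interval and hand each payload to a render callback.
// The 60-second interval is an arbitrary choice, not a documented requirement.
function pollLiveMetrics(onUpdate, intervalMs = 60000) {
  const fetchMetrics = async () => {
    const response = await fetch('/admin/llm/metrics/live');
    if (response.ok) {
      onUpdate(await response.json());
    }
  };
  fetchMetrics();
  return setInterval(fetchMetrics, intervalMs);
}

// pollLiveMetrics(metrics => {
//   console.log(`${metrics.totalRequests} requests, avg ${metrics.averageResponseTime}s`);
// });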

POST /llm-ui/analytics

Submits usage analytics from the corporate theme interface.

Request Body:

{
  "event": "ai_status_check",
  "component": "corporate-dashboard",
  "user_agent": "Mozilla/5.0...",
  "timestamp": "2025-01-20T15:30:00Z",
  "metadata": {
    "theme": "corporate_ai_platform",
    "version": "1.0.0"
  }
}

Response:

{
  "success": true,
  "message": "Analytics recorded",
  "id": "analytics_12345"
}

GET /admin/mcp/server/count

Returns the count of Model Context Protocol (MCP) servers.

Response Example:

{
  "total": 5,
  "active": 4,
  "inactive": 1,
  "lastUpdated": "2025-01-20T15:30:00Z"
}
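
A small sketch that renders these counts in the UI, using only the fields shown above:

// Render the MCP server counts, e.g. "4/5 MCP servers active".
async function renderMcpServerCount(element) {
  const response = await fetch('/admin/mcp/server/count');
  const counts = await response.json();
  element.textContent = `${counts.active}/${counts.total} MCP servers active`;
}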

JavaScript Integration

The corporate theme's JavaScript automatically uses these endpoints:

// AI status monitoring (every 30 seconds)
function updateAIStatus(element) {
  fetch('/api/llm/health')
    .then(response => response.json())
    .then(data => {
      const status = data.systemStatus.level === 'healthy' ? 'online' : 'offline';
      element.dataset.aiStatus = status;
      // Update UI indicators
    });
}

// Analytics tracking
function trackUsage(event, component) {
  fetch('/llm-ui/analytics', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      event: event,
      component: component,
      timestamp: new Date().toISOString(),
      metadata: {
        theme: 'corporate_ai_platform',
        version: '1.0.0'
      }
    })
  });
}

Rate Limits

Endpoint Category       | Limit
------------------------|---------------------
Health endpoints        | 120 requests/minute
Analytics endpoints     | 60 requests/minute
Configuration endpoints | 30 requests/minute
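
The table above does not state how a rejected request is signalled. The sketch below assumes the conventional HTTP 429 status and retries once after a short delay; treat that status code as an assumption rather than a guarantee.

// Retry a request once when it appears to be rate limited.
// ASSUMPTION: the platform answers rate-limited requests with HTTP 429;
// the documentation above does not confirm this status code.
async function fetchWithRetry(endpoint, retryDelayMs = 5000) {
  let response = await fetch(endpoint);
  if (response.status === 429) {
    await new Promise(resolve => setTimeout(resolve, retryDelayMs));
    response = await fetch(endpoint);
  }
  return response;
}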

Error Handling

All endpoints return standardized error responses:

{
  "error": "ERROR_CODE",
  "message": "Human-readable error message",
  "timestamp": "2025-01-20T15:30:00Z"
}

Common error codes (handled in the sketch after this list):

  • UNAUTHORIZED (401): Authentication required
  • FORBIDDEN (403): Insufficient permissions
  • NOT_FOUND (404): Endpoint or resource not found
  • BAD_REQUEST (400): Invalid request parameters
  • INTERNAL_SERVER_ERROR (500): Server error
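
A sketch that maps the standardized error envelope onto the codes listed above (it assumes error responses carry the JSON body shown in this section):

// Parse the standardized error envelope documented above and react to the
// listed error codes.
async function fetchJsonOrThrow(endpoint) {
  const response = await fetch(endpoint);
  if (response.ok) {
    return response.json();
  }
  const body = await response.json();
  switch (body.error) {
    case 'UNAUTHORIZED':
    case 'FORBIDDEN':
      throw new Error(`Access problem: ${body.message}`);
    case 'BAD_REQUEST':
    case 'NOT_FOUND':
      throw new Error(`Request problem: ${body.message}`);
    default:
      throw new Error(`Server error: ${body.message}`);
  }
}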

Caching

Response caching is implemented per endpoint:

  • Health endpoints: 30-second cache
  • Metrics endpoints: 60-second cache
  • Status endpoints: 120-second cache

Cache headers are included in responses:

Cache-Control: public, max-age=30
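
Clients can honour the advertised max-age with a small in-memory cache. A sketch, falling back to the 30-second health-endpoint value when no max-age directive is present:

// Cache GET responses in memory for the max-age advertised by the server.
const responseCache = new Map();

async function cachedFetch(endpoint) {
  const cached = responseCache.get(endpoint);
  if (cached && Date.now() < cached.expiresAt) {
    return cached.data;
  }
  const response = await fetch(endpoint);
  const data = await response.json();
  // Fall back to 30 seconds (the health endpoint default) if no max-age is present.
  const match = (response.headers.get('Cache-Control') || '').match(/max-age=(\d+)/);
  const maxAgeMs = (match ? parseInt(match[1], 10) : 30) * 1000;
  responseCache.set(endpoint, { data, expiresAt: Date.now() + maxAgeMs });
  return data;
}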

Schema Validation

All requests and responses are validated against the OpenAPI 3.0.3 schema:

  • Request validation: Ensures required fields and correct data types
  • Response validation: Guarantees consistent API responses
  • Schema documentation: Available at /openapi.yaml (fetched in the sketch after this list)
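
The published schema can also be pulled into client tooling. A sketch that lists the documented paths, assuming a Node 18+ environment with the js-yaml package installed (neither is mandated by this page):

// Fetch the published OpenAPI schema and list the documented paths.
// ASSUMPTION: Node 18+ (global fetch) and the js-yaml package; neither is
// mandated by this page.
const yaml = require('js-yaml');

async function listDocumentedPaths(baseUrl) {
  const response = await fetch(`${baseUrl}/openapi.yaml`);
  const schema = yaml.load(await response.text());
  return Object.keys(schema.paths || {});
}

// listDocumentedPaths('http://localhost:8080').then(console.log);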

Development Testing

Test endpoints locally with DDEV:

# Start DDEV environment
ddev start

# Test health endpoint
curl http://localhost:8080/api/llm/health

# Test with authentication
curl -H "Authorization: Bearer YOUR_JWT_TOKEN" \
  http://localhost:8080/admin/llm/metrics/live

# Submit analytics
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"event":"test","timestamp":"2025-01-20T15:30:00Z"}' \
  http://localhost:8080/llm-ui/analytics

Monitoring

The theme provides built-in monitoring capabilities:

  • Request logging: All API calls are logged with timestamps
  • Performance tracking: Response times and error rates
  • Health alerts: Automatic notifications for service issues
  • Usage analytics: Built-in analytics collection and reporting

Security

Security measures implemented:

  • CSRF protection: All POST endpoints require CSRF tokens (see the sketch after this list)
  • Input validation: Strict validation of all input parameters
  • SQL injection prevention: Parameterized queries only
  • XSS protection: Output sanitization
  • Rate limiting: Per-endpoint request limits
  • Audit logging: Complete request/response audit trail
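
For the CSRF requirement on POST endpoints, Drupal core's /session/token route is the usual way to obtain a token for the X-CSRF-Token header. A sketch, assuming that route is enabled on the site:

// Attach a CSRF token to a POST request, as required by the CSRF protection above.
// ASSUMPTION: Drupal core's /session/token route is available to the current session.
async function postWithCsrf(endpoint, payload) {
  const token = await fetch('/session/token').then(res => res.text());
  return fetch(endpoint, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-CSRF-Token': token
    },
    body: JSON.stringify(payload)
  });
}

// postWithCsrf('/llm-ui/analytics', { event: 'ai_status_check', component: 'corporate-dashboard' });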

Integration Examples

React Component Integration

import { useEffect, useState } from 'react';

function AIHealthWidget() {
  const [health, setHealth] = useState(null);

  useEffect(() => {
    const fetchHealth = async () => {
      const response = await fetch('/api/llm/health');
      const data = await response.json();
      setHealth(data);
    };

    fetchHealth();
    const interval = setInterval(fetchHealth, 30000);
    return () => clearInterval(interval);
  }, []);

  return (
    <div className="ai-health-widget">
      <h3>AI System Health</h3>
      {health && (
        <div className={`status status--${health.systemStatus.level}`}>
          Score: {health.overallScore}/100
          <br />
          Status: {health.systemStatus.message}
        </div>
      )}
    </div>
  );
}

Drupal Service Integration

<?php

namespace Drupal\your_module\Service;

use Drupal\Core\Http\ClientFactory;

class AIHealthService {

  protected $httpClient;

  public function __construct(ClientFactory $http_client) {
    // A base URI is required so the relative endpoint paths resolve.
    // http://localhost:8080 matches the DDEV examples above; adjust it for
    // your environment.
    $this->httpClient = $http_client->fromOptions([
      'base_uri' => 'http://localhost:8080',
    ]);
  }

  public function getHealthStatus() {
    try {
      $response = $this->httpClient->get('/api/llm/health');
      return json_decode($response->getBody(), TRUE);
    }
    catch (\Exception $e) {
      \Drupal::logger('ai_health')->error('Health check failed: @error', [
        '@error' => $e->getMessage(),
      ]);
      return FALSE;
    }
  }

}

See Also