Python SDK Reference¶
The HG Content Generation System provides comprehensive Python APIs for content generation, LLM integration, and content processing. This reference covers all available Python components.
Overview¶
The Python SDK consists of three main modules:
- Content Production Module (CPM) - Core content generation service
- External API - Public-facing REST API for external integrations
- Instructions Module (IM) - Prompt generation and optimization service
Installation¶
To use the Python APIs, first install the project's dependencies (for example, pip install -r requirements.txt from the repository root; the exact requirements file name may differ in your checkout).
For development, also install the test and tooling dependencies (for example, from a separate dev requirements file, if the project provides one).
Quick Start¶
Using the LLM Client Library¶
The LLM client provides a unified interface for multiple AI providers:
```python
from apps.cpm.llm_client import LLMClientFactory, LLMConfig, LLMProvider

# Create a client
config = LLMConfig(
    provider=LLMProvider.OPENAI,
    model="gpt-4o-mini",
    temperature=0.7,
)
client = LLMClientFactory.create_client(config)

# Generate content
response = await client.generate("Write about renewable energy")
print(response.content)
print(f"Cost: ${response.estimated_cost:.4f}")
```
Using the External API Models¶
For external API integration, use the Pydantic models:
```python
from apps.external.models import SEOArticleRequest

# Create a request
request = SEOArticleRequest(
    type="seo_article",
    businessId="biz_123",
    keywords=["solar energy", "renewable power"],
    targetLength=800,
    tone="professional",
    llmProvider="openai",
)

# Validate and serialize
print(request.json())
```
Content Processing¶
Process and analyze generated content:
```python
from apps.cpm.content_processor import process_content

# Analyze content
result = process_content(
    content="Your generated article content here...",
    content_type="blog",
    keywords=["renewable energy", "solar"],
)

print(f"Title: {result.title}")
print(f"Word Count: {result.word_count}")
print(f"Reading Time: {result.reading_time_minutes} minutes")
print(f"SEO Analysis: {result.seo_notes}")
```
API Reference¶
Content Production Module (CPM)¶
The CPM provides the core content generation capabilities with multi-LLM support.
Key Components¶
- FastAPI Application - REST API endpoints
- LLM Client Library - Multi-provider LLM integration
- Content Processor - Content analysis and optimization
Supported LLM Providers¶
| Provider | Models | API Key Required |
|---|---|---|
| OpenAI | GPT-4, GPT-4o, GPT-4o-mini | OPENAI_API_KEY |
| Anthropic | Claude 3.5 Sonnet, Claude 3 Haiku | ANTHROPIC_API_KEY |
| Google | Gemini 2.0 Flash, Gemini 1.5 Pro | GOOGLE_API_KEY |
| Groq | Llama 3.1, Mixtral | GROQ_API_KEY |
| Ollama | Local models | None (local) |
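Since each hosted provider reads its key from the environment variable listed above (Ollama runs locally and needs none), a startup check along these lines can fail fast before any generation call. The helper below is an illustrative sketch, not part of the SDK; only the variable names come from the table:

```python
import os

# Env var required by each hosted provider (from the table above);
# Ollama is local and needs no key, so it is intentionally absent.
PROVIDER_ENV_VARS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "google": "GOOGLE_API_KEY",
    "groq": "GROQ_API_KEY",
}

def missing_keys(providers):
    """Return the providers whose required API key env var is not set."""
    return [
        p for p in providers
        if p in PROVIDER_ENV_VARS and not os.environ.get(PROVIDER_ENV_VARS[p])
    ]
```

Running this at service startup (or before constructing clients) surfaces configuration problems early instead of at the first failed request.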
External API¶
The External API provides a public-facing interface for third-party integrations.
Key Components¶
- API Application - Public REST endpoints
- Request/Response Models - Pydantic data models
Supported Content Types¶
- SEO Articles - Blog posts with keyword optimization
- Hyperlocal Content - Location-based content for local businesses
- Google Business Profile Posts - Social media posts for GBP
Instructions Module (IM)¶
The Instructions Module handles prompt generation and optimization (implementation in progress).
Authentication¶
CPM Authentication¶
The CPM uses API key authentication with client validation:
```python
# Headers for API requests
headers = {
    "Authorization": "Bearer your_api_key_here",
    "Content-Type": "application/json",
}
```
External API Authentication¶
The External API uses a similar authentication scheme:
```python
import requests

response = requests.post(
    "https://api.hgcontent.com/api/v1/content/generate",
    headers={
        "Authorization": "Bearer your_external_api_key",
        "Content-Type": "application/json",
    },
    json={
        "type": "seo_article",
        "businessId": "your_business_id",
        "keywords": ["your", "keywords"],
    },
)
```
Error Handling¶
All APIs use structured error responses:
```python
try:
    response = await client.generate(prompt)
except Exception as e:
    print(f"Generation failed: {e}")
    # Handle specific error types
```
Common error types:

- ValidationError - Invalid request parameters
- AuthenticationError - Invalid API key or permissions
- ServiceUnavailableError - Temporary service issues
- RateLimitError - API quota exceeded
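Rate-limit errors are usually transient, so a retry with exponential backoff is a common pattern. The exact import path of the SDK's exception classes is not shown here, so the sketch below defines a placeholder RateLimitError purely for illustration; the retry logic is the point:

```python
import asyncio

class RateLimitError(Exception):
    """Placeholder for the SDK's rate-limit error (illustrative only)."""

async def generate_with_retry(client, prompt, retries=3, base_delay=1.0):
    """Retry generation with exponential backoff on rate-limit errors."""
    for attempt in range(retries):
        try:
            return await client.generate(prompt)
        except RateLimitError:
            if attempt == retries - 1:
                raise  # out of attempts: surface the error to the caller
            await asyncio.sleep(base_delay * 2 ** attempt)
```

Validation and authentication errors, by contrast, will not succeed on retry and should be surfaced immediately.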
Configuration¶
Environment Variables¶
Set up your environment with the required API keys:
```bash
# Required for LLM providers
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GOOGLE_API_KEY="AI..."
export GROQ_API_KEY="gsk_..."

# Database connection (Supabase)
export DATABASE_URL="postgresql://..."
export SUPABASE_URL="https://..."
export SUPABASE_KEY="eyJ..."

# Redis for caching (optional)
export REDIS_URL="redis://localhost:6379"
```
Model Defaults¶
Each provider has optimized default models:
```python
from apps.cpm.llm_client import DEFAULT_MODELS, LLMProvider

defaults = {
    LLMProvider.OPENAI: "gpt-4o-mini",
    LLMProvider.ANTHROPIC: "claude-3-5-sonnet-20241022",
    LLMProvider.GOOGLE: "gemini-2.0-flash-exp",
    LLMProvider.GROQ: "llama-3.1-8b-instant",
}
```
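As a sketch of how a factory might fall back to these defaults when no model is given, the self-contained helper below uses plain string keys mirroring the enum values above; it is illustrative, not SDK code:

```python
# String-keyed mirror of the provider defaults shown above (illustrative).
DEFAULT_MODELS = {
    "openai": "gpt-4o-mini",
    "anthropic": "claude-3-5-sonnet-20241022",
    "google": "gemini-2.0-flash-exp",
    "groq": "llama-3.1-8b-instant",
}

def resolve_model(provider, model=None):
    """Use the explicit model if given, else fall back to the provider default."""
    if model:
        return model
    try:
        return DEFAULT_MODELS[provider]
    except KeyError:
        raise ValueError(f"No default model for provider {provider!r}")
```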
Performance and Scaling¶
Async/Await Support¶
All APIs are designed for async operation:
```python
import asyncio

async def generate_multiple():
    tasks = [
        client1.generate("Topic 1"),
        client2.generate("Topic 2"),
        client3.generate("Topic 3"),
    ]
    results = await asyncio.gather(*tasks)
    return results
```
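Note that with a plain gather, one failed task aborts the whole batch; passing return_exceptions=True lets the remaining tasks complete and returns the exceptions in place. The sketch below is self-contained, with a stub coroutine standing in for a real client call:

```python
import asyncio

async def fake_generate(topic):
    """Stub standing in for client.generate() (illustrative only)."""
    if topic == "bad":
        raise ValueError("generation failed")
    return f"article about {topic}"

async def generate_all(topics):
    results = await asyncio.gather(
        *(fake_generate(t) for t in topics),
        return_exceptions=True,  # failures come back as exception objects
    )
    # Separate successes from failures instead of aborting everything.
    ok = [r for r in results if not isinstance(r, Exception)]
    failed = [t for t, r in zip(topics, results) if isinstance(r, Exception)]
    return ok, failed
```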
Cost Optimization¶
The LLM client provides cost estimation:
```python
# Compare costs across providers
providers = ["openai", "anthropic", "groq"]
costs = []
for provider in providers:
    client = LLMClientFactory.create_from_string(provider)
    response = await client.generate(prompt)
    costs.append((provider, response.estimated_cost))

# Choose the most cost-effective option
cheapest = min(costs, key=lambda x: x[1])
print(f"Cheapest option: {cheapest[0]} at ${cheapest[1]:.4f}")
```
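Estimates of this kind are typically derived from token counts and per-token rates. As a back-of-envelope illustration (the rates below are placeholders, not current pricing, and this is not the SDK's actual formula):

```python
# Illustrative (prompt, completion) rates in USD per million tokens.
# These numbers are placeholders, not real provider pricing.
RATES = {"openai": (0.15, 0.60), "groq": (0.05, 0.08)}

def estimate_cost(provider, prompt_tokens, completion_tokens):
    """Rough cost estimate from token counts and per-million-token rates."""
    in_rate, out_rate = RATES[provider]
    return (prompt_tokens * in_rate + completion_tokens * out_rate) / 1_000_000
```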
Examples¶
Full Content Generation Workflow¶
```python
import asyncio
from apps.cpm.llm_client import LLMClientFactory
from apps.cpm.content_processor import process_content

async def full_workflow():
    # 1. Generate content
    client = LLMClientFactory.create_from_string(
        "openai",
        model="gpt-4o-mini",
    )
    response = await client.generate(
        "Write a 500-word article about renewable energy benefits"
    )

    # 2. Process and analyze
    analysis = process_content(
        content=response.content,
        content_type="blog",
        keywords=["renewable energy", "solar", "wind", "benefits"],
    )

    # 3. Return structured result
    return {
        "content": response.content,
        "title": analysis.title,
        "word_count": analysis.word_count,
        "reading_time": analysis.reading_time_minutes,
        "seo_analysis": analysis.seo_notes,
        "cost": response.estimated_cost,
        "provider": response.provider,
        "model": response.model,
    }

# Run the workflow
result = asyncio.run(full_workflow())
print(result)
```
Batch Processing¶
```python
async def batch_generate(topics, provider="openai"):
    client = LLMClientFactory.create_from_string(provider)

    tasks = []
    for topic in topics:
        tasks.append(client.generate(f"Write about {topic}"))
    responses = await asyncio.gather(*tasks)

    results = []
    for i, response in enumerate(responses):
        analysis = process_content(
            content=response.content,
            content_type="blog",
            keywords=[topics[i]],
        )
        results.append({
            "topic": topics[i],
            "title": analysis.title,
            "word_count": analysis.word_count,
            "cost": response.estimated_cost,
        })
    return results

# Generate content for multiple topics
topics = ["renewable energy", "electric vehicles", "sustainable farming"]
results = asyncio.run(batch_generate(topics))
```
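For large topic lists, launching every request at once can trip provider rate limits; an asyncio.Semaphore caps the number in flight. The sketch below is self-contained, with a stub coroutine standing in for a real client call:

```python
import asyncio

async def fake_generate(topic):
    """Stub standing in for client.generate() (illustrative only)."""
    await asyncio.sleep(0)
    return f"article about {topic}"

async def bounded_batch(topics, max_concurrent=5):
    """Generate for every topic, with at most max_concurrent in flight."""
    sem = asyncio.Semaphore(max_concurrent)

    async def one(topic):
        async with sem:  # blocks while max_concurrent tasks are running
            return await fake_generate(topic)

    # gather preserves input order regardless of completion order.
    return await asyncio.gather(*(one(t) for t in topics))
```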
Support and Documentation¶
- API Documentation: Available at the /docs endpoint for each service
- OpenAPI Spec: Generated automatically by FastAPI
- Source Code: Available in the respective module directories
- Tests: Comprehensive test suites in tests/ directories
For issues and support, refer to the project documentation and test files for usage examples.