Getting Started with Maticlib
Welcome to Maticlib! This guide will help you get started with building intelligent AI agents using our simple and powerful library.
Quick Start
Maticlib provides easy-to-use clients for multiple LLM providers. Let's start with a simple example.
Google Gemini Client
Basic Usage
Here's how to make your first request with Google Gemini:
from maticlib.llm.google_genai import GoogleGenAIClient
# Initialize with API key
client = GoogleGenAIClient(api_key="YOUR_GOOGLE_API_KEY")
# Or use environment variable GOOGLE_API_KEY
client = GoogleGenAIClient()
# Make a request
response = client.complete("Hello! Tell me about Python")
print(response.content)
With Custom Configuration
You can customize the client with various options:
from maticlib.llm.google_genai import GoogleGenAIClient
client = GoogleGenAIClient(
    model="gemini-2.5-flash",  # Choose your model
    api_key="YOUR_API_KEY",
    thinking_budget=0,         # Thinking budget (0 disables extended reasoning; raise it to enable)
    verbose=True,              # Enable detailed logging
    return_raw=False           # Get a Pydantic model response
)
response = client.complete("Explain quantum computing")
print(response.content)
# Access response metadata
print(f"Total tokens: {response.total_tokens}")
print(f"Prompt tokens: {response.prompt_tokens}")
print(f"Completion tokens: {response.completion_tokens}")
Multi-turn Conversations
Maticlib makes it easy to maintain conversation context:
from maticlib.llm.google_genai import GoogleGenAIClient
from maticlib.messages import HumanMessage, AIMessage, SystemMessage
client = GoogleGenAIClient(api_key="YOUR_API_KEY")
# Option 1: Using message objects
conversation = [
    SystemMessage("You are a helpful Python tutor"),
    HumanMessage("What are list comprehensions?"),
    AIMessage("List comprehensions are a concise way to create lists..."),
    HumanMessage("Can you show me an example?")
]
response = client.complete(conversation)
print(response.content)
# Option 2: Using dictionaries
messages = [
    {"role": "system", "content": "You are a helpful assistant"},
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hi! How can I help you?"},
    {"role": "user", "content": "What's the weather like?"}
]
response = client.complete(messages)
print(response.content)
Asynchronous Requests
For better performance with multiple requests:
import asyncio
from maticlib.llm.google_genai import GoogleGenAIClient
async def main():
    client = GoogleGenAIClient(api_key="YOUR_API_KEY")

    # Make multiple async requests
    tasks = [
        client.async_complete("Tell me a joke"),
        client.async_complete("What is AI?"),
        client.async_complete("Explain Python decorators")
    ]
    responses = await asyncio.gather(*tasks)

    for i, response in enumerate(responses, 1):
        print(f"\nResponse {i}:")
        print(response.content)

asyncio.run(main())
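When sending many requests at once, providers often enforce rate limits. A common pattern is to bound the number of in-flight requests with an asyncio.Semaphore. The sketch below uses a stand-in fake_complete coroutine (a hypothetical placeholder, not part of Maticlib) where you would call client.async_complete:

```python
import asyncio

# Hypothetical stand-in for client.async_complete, so the sketch is runnable.
async def fake_complete(prompt):
    await asyncio.sleep(0.01)
    return f"echo: {prompt}"

async def bounded_complete(sem, prompt):
    # The semaphore blocks here once the concurrency limit is reached.
    async with sem:
        return await fake_complete(prompt)

async def main():
    sem = asyncio.Semaphore(2)  # at most 2 requests in flight at once
    prompts = [f"q{i}" for i in range(5)]
    return await asyncio.gather(*(bounded_complete(sem, p) for p in prompts))

results = asyncio.run(main())
print(results)
```

asyncio.gather preserves input order, so results line up with the prompts even though requests overlap in time.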
Mistral AI Client
Basic Usage
Using Mistral AI is just as simple:
from maticlib.llm.mistral import MistralClient
# Initialize with API key
client = MistralClient(api_key="YOUR_MISTRAL_API_KEY")
# Or use environment variable MISTRAL_API_KEY
client = MistralClient()
# Make a request
response = client.complete("What is the best French cheese?")
print(response.content)
Different Models
Mistral offers various models for different use cases:
from maticlib.llm.mistral import MistralClient
# Use different Mistral models
client = MistralClient(
    model="mistral-large-latest",  # or "mistral-medium-latest", "mistral-small-latest"
    api_key="YOUR_API_KEY",
    verbose=True
)
response = client.complete("Write a short poem about coding")
print(response.content)
Multi-turn with Mistral
from maticlib.llm.mistral import MistralClient
client = MistralClient(api_key="YOUR_API_KEY")
messages = [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Bonjour! How can I assist you?"},
    {"role": "user", "content": "Tell me about the Eiffel Tower"}
]
response = client.complete(messages)
print(response.content)
Working with Messages
Maticlib provides convenient message classes:
from maticlib.messages import SystemMessage, HumanMessage, AIMessage
# Create messages
system_msg = SystemMessage("You are a helpful assistant")
print(system_msg.content) # Access content
print(system_msg.message_type) # MessageType enum
print(system_msg.message_type.name) # "SYSTEM"
human_msg = HumanMessage("Hello!")
print(human_msg.message_type.value) # "user"
ai_msg = AIMessage("Hi there!")
print(ai_msg.message_type.name) # "AI"
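The two conversation formats shown earlier (message objects and role/content dicts) map onto each other directly via message_type.value. The sketch below uses simplified stand-in Message and MessageType classes that mirror the attributes shown above; they are hypothetical, not Maticlib's actual implementation:

```python
from enum import Enum

# Simplified stand-ins mirroring the attributes shown above.
class MessageType(Enum):
    SYSTEM = "system"
    HUMAN = "user"
    AI = "assistant"

class Message:
    def __init__(self, content, message_type):
        self.content = content
        self.message_type = message_type

def to_dicts(messages):
    """Convert message objects into the role/content dicts from Option 2."""
    return [{"role": m.message_type.value, "content": m.content} for m in messages]

conversation = [
    Message("You are a helpful assistant", MessageType.SYSTEM),
    Message("Hello!", MessageType.HUMAN),
    Message("Hi! How can I help you?", MessageType.AI),
]
print(to_dicts(conversation))
```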
Error Handling
Always handle errors gracefully in production:
from maticlib.llm.google_genai import GoogleGenAIClient
try:
    client = GoogleGenAIClient(api_key="YOUR_API_KEY")
    response = client.complete("Your prompt")
    print(response.content)
except ValueError as e:
    print(f"Configuration error: {e}")
except Exception as e:
    print(f"Unexpected error: {e}")
Environment Setup
For better security, use environment variables for API keys:
# Linux/Mac
export GOOGLE_API_KEY="your-google-api-key"
export MISTRAL_API_KEY="your-mistral-api-key"
# Windows PowerShell
$env:GOOGLE_API_KEY="your-google-api-key"
$env:MISTRAL_API_KEY="your-mistral-api-key"
# Windows CMD
set GOOGLE_API_KEY=your-google-api-key
set MISTRAL_API_KEY=your-mistral-api-key
Then use clients without passing keys:
from maticlib.llm.google_genai import GoogleGenAIClient
from maticlib.llm.mistral import MistralClient
# API keys loaded from environment
google_client = GoogleGenAIClient()
mistral_client = MistralClient()
response = google_client.complete("Hello!")
print(response.content)
For extra convenience, you can also store your API keys in a .env file and load them using python-dotenv.
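If you prefer not to add a dependency, a minimal .env loader takes only a few lines of standard-library Python. This is a simplified sketch of what python-dotenv's load_dotenv() does; the real library also handles quoting, `export` prefixes, and variable interpolation:

```python
import os

def load_env_file(path=".env"):
    """Minimal .env loader: KEY=VALUE lines, '#' comments, no quoting rules."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # setdefault: variables already set in the environment win
            os.environ.setdefault(key.strip(), value.strip())

# With a .env file containing GOOGLE_API_KEY=your-google-api-key,
# load_env_file() makes it available via os.environ["GOOGLE_API_KEY"].
```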
Next Steps
- Explore the Google GenAI Client Documentation for advanced features
- Learn about Mistral Client configuration options
- Build complex workflows with MaticGraph
- Understand Message Types for better conversation handling
Common Patterns
Retry Logic
import time
from maticlib.llm.google_genai import GoogleGenAIClient
client = GoogleGenAIClient(api_key="YOUR_API_KEY")
def complete_with_retry(prompt, max_retries=3):
    for attempt in range(max_retries):
        try:
            response = client.complete(prompt)
            return response
        except Exception as e:
            if attempt == max_retries - 1:
                raise
            time.sleep(2 ** attempt)  # Exponential backoff
response = complete_with_retry("Tell me a story")
print(response.content)
Streaming Responses (Coming Soon)
Best Practices
- Use environment variables for API keys instead of hardcoding
- Enable verbose mode during development for debugging
- Handle errors gracefully with try-except blocks
- Use async methods for better performance with multiple requests
- Set appropriate timeouts for production environments
- Monitor token usage to control costs
- Implement retry logic for production reliability
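Several of these practices can be combined in a small wrapper. The sketch below tracks cumulative token usage across calls to help monitor costs; the TokenTracker class and the stub client are hypothetical, but the response attribute used (total_tokens) matches the metadata shown earlier:

```python
from types import SimpleNamespace

class TokenTracker:
    """Accumulate token usage across responses to help monitor costs."""
    def __init__(self):
        self.total = 0

    def record(self, response):
        self.total += response.total_tokens
        return response

# Hypothetical stub standing in for a real Maticlib client call.
def stub_complete(prompt):
    return SimpleNamespace(content=f"answer to: {prompt}", total_tokens=42)

tracker = TokenTracker()
for prompt in ["Hello!", "What is AI?"]:
    response = tracker.record(stub_complete(prompt))
    print(response.content)

print(f"Total tokens so far: {tracker.total}")
```

With a real client you would wrap each client.complete() call in tracker.record() and alert or stop once the total crosses a budget you set.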
Need Help?
If you encounter any issues or have questions:
- Check the GitHub Issues
- Read the full API Documentation
- Email us at arvohsoft@gmail.com