Modules Overview
Maticlib is designed with a layered architecture, moving from basic message types to complex graph orchestration. This guide provides a detailed technical reference for each component, including import paths, parameter definitions, and practical implementation examples.
LLM Messages (maticlib.messages)
Unified message containers that ensure consistent data formatting across all supported LLM providers.
Core Message Types
| Class | Role Alias | Purpose |
|---|---|---|
| `SystemMessage` | `system`, `developer` | High-level instructions that define the model's behavior. |
| `HumanMessage` | `user`, `human` | Direct input from the end user. |
| `AIMessage` | `assistant`, `model`, `ai` | Responses generated by the LLM or simulated history. |
Code Example
```python
from maticlib.messages import HumanMessage, AIMessage, SystemMessage

# Creating a conversation history
history = [
    SystemMessage(content="You are a helpful coding assistant."),
    HumanMessage(content="How do I use MaticGraph?"),
    AIMessage(content="You can initialize it via `MaticGraph()`."),
]
```
Parameters
- `content` (str): The text payload of the message.
- `message_type` (MessageType): An internal enum defining the sender role.
LLM Clients (maticlib.llm)
A universal interface for interacting with diverse LLM providers while maintaining common request/response schemas.
Initialization
Each client is provider-specific but follows a shared constructor pattern.
```python
from maticlib.llm.openai import OpenAIClient
from maticlib.llm.google_genai import GoogleGenAIClient
from maticlib.llm.mistral import MistralClient

# OpenAI (Responses API)
oa_client = OpenAIClient(model="gpt-4o", api_key="sk-...", verbose=True)

# Google Gemini
gemini_client = GoogleGenAIClient(model="gemini-2.5-flash", thinking_budget=0)

# Mistral AI
mistral_client = MistralClient(model="mistral-large-latest")
```
Universal Methods
- `complete(input, response_model=None)`: Synchronous completion.
  - Input: `str` or `List[BaseMessage]`.
  - Output: `LLMResponseBase` (see below).
- `async_complete(input, response_model=None)`: Asynchronous completion.
- `get_text_response(response)`: Helper to extract the primary text string from a response object (see the sketch below).
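A minimal sketch of the two input forms, using only the methods listed above; the client and model name follow the Mistral constructor example from the previous section.

```python
from maticlib.llm.mistral import MistralClient
from maticlib.messages import HumanMessage, SystemMessage

client = MistralClient(model="mistral-large-latest")

# Plain-string input
response = client.complete(input="Summarize MaticGraph in one sentence.")

# Equivalent message-list input
response = client.complete(input=[
    SystemMessage(content="Answer in a single sentence."),
    HumanMessage(content="What is MaticGraph?"),
])

# Extract the primary text from the returned LLMResponseBase
print(client.get_text_response(response))
```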
The Response Object (LLMResponseBase)
All clients return a standardized response object with these key fields:
- `content` (str): The primary generated text.
- `total_tokens` (int): Total count of prompt + completion tokens.
- `finish_reason` (str): Why the generation stopped (e.g., `stop`, `length`).
- `parsed_output` (Any): Contains the validated Pydantic model if a `response_model` was used.
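Assuming a `client` from the sketch above, inspecting these fields looks like this:

```python
response = client.complete(input="Hello!")

print(response.content)        # primary generated text
print(response.total_tokens)   # prompt + completion token count
print(response.finish_reason)  # e.g., "stop" or "length"
```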
Response Parsers (maticlib.core.parsers)
Advanced utilities to force LLMs into generating structured, machine-readable data.
Available Parsers
- `PydanticResponseParser` (recommended): Validates against a Pydantic `BaseModel`.
- `JSONResponseParser`: Extracts a raw Python dictionary from JSON blocks.
- `XMLResponseParser`: Extracts data from flat XML tags.
Integration Workflow
You don't need to call the parser manually. Simply pass your model to the client.
```python
from pydantic import BaseModel
from maticlib.llm.openai import OpenAIClient

class Sentiment(BaseModel):
    score: float
    label: str

client = OpenAIClient()

# Maticlib automatically injects formatting instructions and parses the result
response = client.complete(
    input="Analyze this: I love structural documentation!",
    response_model=Sentiment,
)

# Access the validated Pydantic object directly
sentiment = response.parsed_output
print(f"{sentiment.label}: {sentiment.score}")
```
Tools (maticlib.tools)
The Tools module enables LLMs to execute external Python functions by automatically generating the necessary JSON Schemas for provider-specific "Function Calling" features.
Imports
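The decorator used throughout this section lives at the package root, as the guided example below also shows:

```python
from maticlib.tools import tool
```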
How it Works
Maticlib uses introspection to minimize boilerplate. When you apply the `@tool` decorator, it inspects your function to build a metadata object:
| Component | Source | Outcome |
|---|---|---|
| Name | `func.__name__` | Identified as the tool name for the LLM. |
| Description | docstring | Provided as instructions to the LLM on when to use the tool. |
| Parameters | type hints | Converted to JSON Schema types (`string`, `integer`, etc.). |
| Required fields | signature | Parameters without default values are marked as required. |
Guided Example
This example demonstrates defining a tool with specific type hints and using it with an asynchronous client.
```python
import asyncio

from maticlib.tools import tool
from maticlib.llm.openai import OpenAIClient

# 1. Define your tool with type hints and a clear docstring
@tool
def calculate_itinerary(destination: str, days: int = 3):
    """
    Generates a travel itinerary for a specific destination and duration.
    """
    return f"A {days}-day trip to {destination} is planned."

# 2. Use it with any client
client = OpenAIClient()

async def main():
    response = await client.async_complete(
        input="I want a 5-day trip to Tokyo",
        tools=[calculate_itinerary],
    )

    # 3. Check for tool requests
    if response.tool_calls:
        for call in response.tool_calls:
            print(f"Model wants to call: {call['function']['name']}")
            print(f"With arguments: {call['function']['arguments']}")

asyncio.run(main())

# The extracted schema is stored on the function itself
print(calculate_itinerary.matic_tool_metadata)
```
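The example stops at inspecting the tool request; a common follow-up inside `main()` is to execute the requested tool yourself. This is a sketch only, assuming the decorated function remains directly callable and that `arguments` arrives as a JSON string (both typical for tool-calling APIs, neither confirmed above):

```python
import json

# Continuation of the main() body above: dispatch the requested call locally
for call in response.tool_calls:
    if call["function"]["name"] == "calculate_itinerary":
        # Decode the model-provided arguments and run the Python function
        args = json.loads(call["function"]["arguments"])
        print(calculate_itinerary(**args))  # "A 5-day trip to Tokyo is planned."
```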
Provider Support
The native tool calling system is optimized for these provider-specific schemas:
| Provider | Schema / Feature | Implementation Details |
|---|---|---|
| OpenAI | `/v1/responses` | Uses the modern Responses API with `tool_calls`. |
| Google Gemini | `function_declarations` | Maps Python signatures to Gemini's native declarations. |
| Mistral AI | `tools` array | Implements the core tool schema for chat completions. |
MaticGraph (maticlib.graph)
A high-performance orchestration engine for building complex AI workflows as directed graphs.
The Architecture
```python
from pydantic import BaseModel
from maticlib.graph import MaticGraph

# Example state schema; the fields mirror the node and run() call below
class MyStateModel(BaseModel):
    input: str = ""
    data: str = ""

# Initialize with an optional state schema (a Pydantic model is recommended)
graph = MaticGraph(stateful=True, state_schema=MyStateModel)

# 1. Add nodes (pure Python functions)
def my_node(state: MyStateModel):
    return {"data": "processed"}

graph.add_node("PROCESS", my_node)

# 2. Define edges (routing)
graph.add_edge("START", "PROCESS")
graph.set_entry("START")

# 3. Execution
final_state = graph.run(initial_state={"input": "data"}, verbose=True)
```
Advanced Routing
- `parallel_group(from_node, parallel_nodes, join_node)`: Execute multiple nodes concurrently.
- `add_conditional_edge(from_node, condition_func, routes)`: Route dynamically based on code logic (see the sketch below).
- `when(from_node, **routes)`: Simple routing based on a `next` field in the state.
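A sketch of the first two primitives, continuing the graph built above. The signatures come from the list; the assumption (not confirmed here) is that `condition_func` receives the current state and returns a key from the `routes` mapping, and the node names `SUMMARIZE`, `RESPOND`, `FETCH_A`, `FETCH_B`, and `MERGE` are hypothetical.

```python
# Dynamic routing: pick the next node based on the current state
def pick_route(state: MyStateModel) -> str:
    # Assumed contract: the return value selects a key in `routes`
    return "LONG" if len(state.data) > 100 else "SHORT"

graph.add_conditional_edge(
    "PROCESS",
    pick_route,
    routes={"LONG": "SUMMARIZE", "SHORT": "RESPOND"},
)

# Fan out to two nodes concurrently, then join at a single node
graph.parallel_group("START", ["FETCH_A", "FETCH_B"], "MERGE")
```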