Python SDK

Official Python client for InferiaLLM

Installation

pip install inferiallm
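
To confirm the install put the CLI on your PATH, print its usage (the -h flag from the usage line below):

# Should print the usage shown in the CLI Reference below
inferiallm -h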

CLI Reference

The inferiallm package comes with a built-in CLI for managing the entire platform.

usage: inferiallm [-h] {init,start} ...

Commands

inferiallm init

Initializes the database schema, creates roles, and sets up the initial admin user.

  • Usage: inferiallm init
  • Requires: A valid DATABASE_URL in .env (see the sketch below)
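
A minimal .env sketch; the Postgres-style connection string is hypothetical, so substitute your own credentials and database:

# .env — hypothetical values, replace with your own
DATABASE_URL=postgresql://user:password@localhost:5432/inferia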

inferiallm start

Starts all services (Inference, Filtration, Orchestration gateways + Dashboard).

  • Usage: inferiallm start [service]
  • Service argument (optional):
    • (none): Starts all services
    • orchestration: Starts only the Orchestration Gateway + Worker + DePIN Sidecar
    • inference: Starts only the Inference Gateway
    • filtration: Starts only the Filtration Gateway

Examples:

# Start all services
inferiallm start

# Start specific services
inferiallm start orchestration
inferiallm start inference
inferiallm start filtration

Service Ports

Service                  Default Port
Filtration Gateway       8000
Inference Gateway        8001
Orchestration Gateway    8080
Dashboard                3001
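
Because the Inference Gateway speaks the OpenAI wire format (next section), a quick way to check that it is up on its default port is to list models through the OpenAI SDK. A minimal sketch, assuming your deployment implements the standard /v1/models endpoint:

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8001/v1",  # Inference Gateway default port
    api_key="sk-inferia-..."              # Your API key from Dashboard
)

# Iterates the OpenAI-style model list; each entry has an .id field
for model in client.models.list():
    print(model.id)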

Using the OpenAI SDK

Since InferiaLLM provides an OpenAI-compatible API, you can use the standard OpenAI Python SDK (pip install openai):

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8001/v1",
    api_key="sk-inferia-..."  # Your API key from Dashboard
)

response = client.chat.completions.create(
    model="my-deployment-name",  # Use your Deployment Name
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)
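
If the gateway is unreachable or the key is rejected, the OpenAI SDK raises its standard exception classes; nothing here is Inferia-specific. A minimal sketch:

import openai

try:
    response = client.chat.completions.create(
        model="my-deployment-name",
        messages=[{"role": "user", "content": "Hello!"}],
    )
except openai.APIConnectionError:
    # base_url is unreachable, e.g. the Inference Gateway is not running
    print("Could not reach the Inference Gateway")
except openai.AuthenticationError:
    # The sk-inferia-... key was rejected; check the Dashboard
    print("Invalid API key")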

Streaming

response = client.chat.completions.create(
    model="my-deployment-name",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True
)

for chunk in response:
    # Each chunk carries an incremental delta of the reply
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
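
A variant of the same loop that also keeps the complete text (a stream can only be consumed once, so collect while printing):

parts = []
for chunk in response:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
        parts.append(delta)

story = "".join(parts)  # The full streamed reply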
