# Python SDK

The official Python client for the GeoInfer API, published to PyPI. Supports Python 3.9+, sync and async usage, and all five file input forms.
## Installation

```bash
pip install geoinfer
```

## Quick start
```python
from geoinfer import GeoInfer

client = GeoInfer(api_key="geo_...")

# List models your key has access to
for m in client.predictions.models():
    print(m.model_id, m.model_type, f"{m.credits_per_use} credit(s)")

# Predict — returns geographic clusters
result = client.predictions.predict("photo.jpg", model_id="global_v0_1")
top = result.prediction.clusters[0]
print(top.location.name, top.center.latitude, top.center.longitude)
```

## Async support
```python
import asyncio
from geoinfer import AsyncGeoInfer

async def main():
    async with AsyncGeoInfer(api_key="geo_...") as client:
        result = await client.predictions.predict("photo.jpg", model_id="global_v0_1")
        top = result.prediction.clusters[0]
        print(top.location.name, top.center.latitude, top.center.longitude)

asyncio.run(main())
```

## File input forms
`predict()` accepts five forms:

```python
from pathlib import Path

# URL — fetched client-side and uploaded as bytes
client.predictions.predict("https://example.com/photo.jpg", model_id="global_v0_1")

# Path string
client.predictions.predict("photo.jpg", model_id="global_v0_1")

# pathlib.Path
client.predictions.predict(Path("photo.jpg"), model_id="global_v0_1")

# Raw bytes
with open("photo.jpg", "rb") as f:
    client.predictions.predict(f.read(), model_id="global_v0_1")

# Binary file-like object (streamed)
with open("photo.jpg", "rb") as f:
    client.predictions.predict(f, model_id="global_v0_1")
```

Maximum file size: 10 MB.
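Internally, every form reduces to bytes before upload, with the 10 MB cap enforced on the result. A minimal sketch of that dispatch, assuming a hypothetical `read_input` helper that is not part of the SDK (URL fetching is elided here):

```python
import io
from pathlib import Path

MAX_BYTES = 10 * 1024 * 1024  # 10 MB limit from the docs above

def read_input(source) -> bytes:
    """Illustrative only: normalize the supported input forms to bytes."""
    if isinstance(source, bytes):
        data = source
    elif isinstance(source, str) and source.startswith(("http://", "https://")):
        raise NotImplementedError("URL fetching is omitted from this sketch")
    elif isinstance(source, (str, Path)):
        data = Path(source).read_bytes()  # path string or pathlib.Path
    elif hasattr(source, "read"):
        data = source.read()  # binary file-like object
    else:
        raise TypeError(f"unsupported input form: {type(source)!r}")
    if len(data) > MAX_BYTES:
        raise ValueError("file exceeds the 10 MB limit")
    return data
```

For example, `read_input(b"...")` returns the bytes unchanged, while `read_input(io.BytesIO(b"..."))` drains the stream.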
## Check credits
```python
summary = client.credits.summary()
print(summary.summary.total_available)  # total credits available
print(summary.subscription.remaining)   # subscription allowance remaining
print(summary.summary.topup_credits)    # one-time top-up balance
```

Credits consumed are also reported on every prediction response:

```python
result = client.predictions.predict("photo.jpg", model_id="global_v0_1")
print(f"Used {result.credits_consumed} credit(s). ID: {result.prediction_id}")
```

## Rate limits
Rate-limit metadata is available on every prediction response:
```python
rl = result.rate_limit
print(f"{rl.remaining} requests remaining, resets in {rl.reset}s")
```

When the limit is exceeded, `RateLimitError` is raised with a `retry_after` attribute:

```python
import time
from geoinfer import RateLimitError

try:
    result = client.predictions.predict("photo.jpg", model_id="global_v0_1")
except RateLimitError as e:
    if e.retry_after:
        time.sleep(e.retry_after)
```

## Error handling
All exceptions inherit from `GeoInferError`.

| Exception | HTTP | When raised |
|---|---|---|
| `AuthenticationError` | 401 | Missing or invalid API key |
| `InsufficientCreditsError` | 402 | No credits remaining |
| `ForbiddenError` | 403 | Access denied |
| `FileTooLargeError` | 413 | Image > 10 MB |
| `InvalidFileTypeError` | 422 | Unsupported format |
| `RateLimitError` | 429 | Rate limit exceeded |
| `InvalidModelError` | — | `model_id` not in your account's model list |
| `APIError` | other | Unexpected non-2xx response |
```python
import time

from geoinfer import (
    GeoInferError,
    AuthenticationError,
    InsufficientCreditsError,
    InvalidModelError,
    RateLimitError,
)

try:
    result = client.predictions.predict("photo.jpg", model_id="global_v0_1")
except InvalidModelError as e:
    print(e.message)
except RateLimitError as e:
    if e.retry_after:
        time.sleep(e.retry_after)
except InsufficientCreditsError:
    print("Top up at geoinfer.com")
except AuthenticationError:
    print("Check your API key at geoinfer.com/api")
except GeoInferError as e:
    print(e.message_code, e.message)
```

## Context managers
```python
# Sync
with GeoInfer(api_key="geo_...") as client:
    result = client.predictions.predict("photo.jpg", model_id="global_v0_1")

# Async (inside an async function)
async with AsyncGeoInfer(api_key="geo_...") as client:
    result = await client.predictions.predict("photo.jpg", model_id="global_v0_1")
```

## Configuration
| Parameter | Default | Description |
|---|---|---|
| `api_key` | required | Your GeoInfer API key |
| `base_url` | `https://api.geoinfer.com` | Override the API base URL |
| `timeout` | `60` | HTTP timeout in seconds |
| `model_cache_ttl` | `300` | Seconds to cache the model list |
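Putting the table together, a fully configured client might look like the sketch below; only `api_key` is required, and the other arguments simply restate the defaults above:

```python
from geoinfer import GeoInfer

client = GeoInfer(
    api_key="geo_...",                    # required
    base_url="https://api.geoinfer.com",  # default; override for proxies or testing
    timeout=60,                           # HTTP timeout in seconds
    model_cache_ttl=300,                  # cache the model list for five minutes
)
```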