Managed hosting from £15/mo — early adopters get preferential rates

Hard Budget Enforcement
For Any AI Tool.

One misconfigured agent ran GPT-4 in a loop for 72 hours. The bill: $3,600. Fencepost would have stopped it at $20.

Helicone logs it. Langfuse graphs it. Fencepost blocks the call.

Works with your stack

Claude Code | Cursor | Aider | Continue | Cline | OpenAI SDK | Anthropic SDK | OpenClaw | LangChain | CrewAI
Running in production on our own platforms — 15 MCP servers, 8 agents, real workloads. We built Fencepost because we needed it, and now we're sharing it with the world.
fencepost — works with any AI tool
# Point any AI tool at Fencepost in one line:
$ export ANTHROPIC_BASE_URL=https://your-vps:3001/proxy/anthropic
$ export OPENAI_BASE_URL=https://your-vps:3001/proxy/openai
# Then use Claude Code, Cursor, Aider as normal...
$ claude "refactor auth module"
{"totalCalls": 47, "totalCostUsd": 3.21, "budgetRemaining": 16.79}
# Budget hit? Fencepost blocks the call.
{"allowed": false, "reason": "Monthly budget exceeded: $20.12 / $20.00"}
^ Your wallet just said "thank you"

Powered by

Hetzner EU | Cloudflare | Stripe | Docker

The $3,600 Problem

One AI agent. One misconfigured loop. GPT-4 running for 72 hours. $3,600 on the invoice. It happens with Claude Code, Cursor, OpenClaw — any tool that makes LLM calls without a hard cap.

$3,600
Largest reported monthly bill

Real users. Real bills. One misconfigured agent ran GPT-4 in a loop for 72 hours. No one noticed until the invoice arrived.

135K
Exposed OpenClaw instances

Security researchers found 135,000+ exposed instances and 341 malicious ClawHub skills. Budget enforcement is part of the hardening story.

85%
AI deployments with no cost visibility

Only 15% of GenAI deployments have LLM observability. Even fewer have enforcement. Most teams are flying blind on API spend.

Enforcement. Not Monitoring.

LLM observability tools are rearview mirrors. Fencepost is a guardrail. Helicone and Langfuse show you what happened after the invoice arrives. Fencepost blocks the API call before the damage is done.

Three Steps. Ten Minutes.

Pick a managed plan and we handle everything. You're protected in minutes.

1

Deploy Fencepost

Pick a managed plan and we spin up a dedicated VPS for you. Pre-configured with Fencepost + OpenClaw. Under 10 minutes.

BYOK — Bring Your Own Key. You pay your LLM provider directly. We never see your keys.
2

Point Your Tools

Set one environment variable and your AI tools route through Fencepost. Works with Claude Code, Cursor, Aider, OpenClaw, or any SDK.

export ANTHROPIC_BASE_URL=https://your-server:3001/proxy/anthropic
3

Set Budgets & Go

Set a monthly budget. Fencepost tracks every penny. Hit your limit? The API call is blocked. Your wallet stays safe.
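The enforcement logic is simple to reason about. Here is a minimal sketch of a hard cap in Python — illustrative only, not Fencepost's actual implementation; the JSON shape mirrors the examples on this page:

```python
# Minimal sketch of a hard budget cap (illustrative; not Fencepost's code).
class BudgetGuard:
    def __init__(self, monthly_cap_usd: float):
        self.cap = monthly_cap_usd
        self.spent = 0.0

    def check(self, estimated_cost_usd: float) -> dict:
        """Block the call outright if it would push spend past the cap."""
        if self.spent + estimated_cost_usd > self.cap:
            return {"allowed": False,
                    "reason": f"Monthly budget exceeded: "
                              f"${self.spent + estimated_cost_usd:.2f} / ${self.cap:.2f}"}
        self.spent += estimated_cost_usd
        return {"allowed": True, "budgetRemaining": round(self.cap - self.spent, 2)}

guard = BudgetGuard(20.00)
print(guard.check(3.21))  # {'allowed': True, 'budgetRemaining': 16.79}
```

The key design point: the check happens before the provider is called, so an over-budget request never leaves your server.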

Works with everything — IDE tools, agent frameworks, chat platforms, custom apps

Hard Caps. Not Soft Alerts.

Fencepost is a lightweight cost-control proxy. It does one job brilliantly: it blocks the call the moment you hit your budget. Works with Claude Code, Cursor, Aider, OpenClaw, or any tool that makes LLM API calls.

🛡️

Hard Budget Cap

Blocks the API call the moment you hit your limit. Not after. Not "alerts you." Blocks. No exceptions, no overruns.

📊

Per-Agent Cost Tracking

See exactly which agent or skill is burning money. Every call logged with model, tokens, cost, and latency. Real-time. Per model.

⚡

Circuit Breaker

Provider down? Fencepost stops hammering it after 3 failures. Auto-recovers after cooldown. No cascading failures.
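The pattern behind this is a classic circuit breaker. A minimal sketch in Python — the 3-failure threshold comes from the description above; the class and method names are illustrative, not Fencepost's actual code:

```python
import time

# Illustrative circuit breaker: trips after 3 consecutive failures and
# recovers after a cooldown (threshold from the description; code is a sketch).
class CircuitBreaker:
    def __init__(self, threshold: int = 3, cooldown_s: float = 30.0):
        self.threshold = threshold
        self.cooldown_s = cooldown_s
        self.failures = 0
        self.opened_at = None

    def allow(self) -> bool:
        if self.opened_at is None:
            return True
        if time.monotonic() - self.opened_at >= self.cooldown_s:
            # Half-open: let one request probe whether the provider recovered.
            self.opened_at = None
            self.failures = 0
            return True
        return False  # circuit open: short-circuit the call

    def record_failure(self) -> None:
        self.failures += 1
        if self.failures >= self.threshold:
            self.opened_at = time.monotonic()

    def record_success(self) -> None:
        self.failures = 0

breaker = CircuitBreaker(cooldown_s=5.0)
for _ in range(3):
    breaker.record_failure()
print(breaker.allow())  # False: the provider gets a 5-second breather
```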

🔌

Universal AI Proxy

Works with any tool that calls an LLM API. Set one env var and your Claude Code, Cursor, Aider, or custom agent routes through Fencepost.

📋

Full Audit Trail

SQLite log of every AI call. Who called what, when, how much. Compliance-ready. Export anytime.
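Because the log is plain SQLite, any report you need is one query away. A sketch with a hypothetical schema — the table and column names here are illustrative, not Fencepost's actual schema:

```python
import sqlite3

# Hypothetical audit-log schema (table and column names are illustrative,
# not Fencepost's actual schema).
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE calls (
    agent TEXT, model TEXT, tokens INTEGER, cost_usd REAL, called_at TEXT)""")
conn.executemany("INSERT INTO calls VALUES (?, ?, ?, ?, ?)", [
    ("coder",   "claude-sonnet", 1200, 0.018, "2025-01-05"),
    ("coder",   "claude-sonnet",  900, 0.012, "2025-01-06"),
    ("support", "gpt-4o",        2000, 0.030, "2025-01-06"),
])

# Per-agent spend rollup: the kind of report a plain SQLite log enables.
for agent, total in conn.execute(
        "SELECT agent, ROUND(SUM(cost_usd), 3) FROM calls "
        "GROUP BY agent ORDER BY agent"):
    print(agent, total)
```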

🔑

BYOK (Your Keys)

Bring your own API keys. We never see them. They stay on your server. You pay your LLM provider directly.

Premium Features — All Managed Plans
🔔
All Plans

Budget Alerts

Get notified at 80%, 90%, and 100% of budget via webhook. Slack, Discord, email — your call. Never be surprised by a bill again.
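The alerting pattern is simple threshold crossing: fire each alert once as spend passes it. A sketch in Python — the 80/90/100% thresholds come from the description above; the webhook POST itself is stubbed out:

```python
# Sketch of threshold alerting: fire each alert once as spend crosses it
# (80/90/100% thresholds from the description; the webhook POST is stubbed).
THRESHOLDS = (0.80, 0.90, 1.00)

def due_alerts(spent_usd: float, cap_usd: float, already_sent: set) -> list:
    ratio = spent_usd / cap_usd
    due = [t for t in THRESHOLDS if ratio >= t and t not in already_sent]
    already_sent.update(due)
    return due  # each entry would trigger one webhook notification

sent = set()
print(due_alerts(16.50, 20.00, sent))  # [0.8]
print(due_alerts(19.00, 20.00, sent))  # [0.9]
```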

📤
All Plans

Usage Export

Download CSV or JSON reports with date-range filtering. Per-agent breakdowns for accounting, invoicing, or chargebacks.

🔄
All Plans

Provider Failover

When OpenAI goes down, Fencepost auto-routes to OpenRouter. Zero-downtime AI calls. Circuit breaker meets smart routing.
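The failover pattern, sketched in Python — provider names come from the text; the call functions are stand-ins that simulate one provider failing:

```python
# Illustrative failover: try providers in order, fall through on failure
# (provider names from the text; the call functions are stand-ins).
def call_with_failover(prompt, providers):
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except RuntimeError as exc:
            errors.append((name, str(exc)))  # remember why this provider failed
    raise RuntimeError(f"all providers failed: {errors}")

def openai_call(prompt):
    raise RuntimeError("503 Service Unavailable")  # simulate an outage

def openrouter_call(prompt):
    return f"echo: {prompt}"  # stand-in for a real completion

provider, reply = call_with_failover(
    "hello", [("openai", openai_call), ("openrouter", openrouter_call)])
print(provider, reply)  # openrouter echo: hello
```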

👥
All Plans

Team Budgets

Group agents under team namespaces with umbrella budget caps. Engineering, marketing, support — each team gets its own spend limit.
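Umbrella caps mean a call must clear two checks: the agent's own budget and its team's. A sketch in Python — the names and numbers are illustrative, not Fencepost's actual data model:

```python
# Sketch of umbrella team caps: a call must fit the agent's own budget
# AND its team's umbrella cap (names and numbers are illustrative).
team_caps  = {"engineering": 100.00}
agent_caps = {"ci-bot": 20.00}
spent      = {"engineering": 98.00, "ci-bot": 4.00}

def allowed(agent: str, team: str, cost_usd: float) -> bool:
    fits_agent = spent[agent] + cost_usd <= agent_caps[agent]
    fits_team  = spent[team] + cost_usd <= team_caps[team]
    return fits_agent and fits_team

# The agent has headroom, but the team umbrella is nearly exhausted:
print(allowed("ci-bot", "engineering", 3.00))  # False
print(allowed("ci-bot", "engineering", 1.00))  # True
```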

Simple Pricing. Hard Enforcement.

We host Fencepost on a dedicated VPS in your chosen region, manage the SSL, handle the updates, and guarantee 99.9% uptime — so you can focus on building agents. BYOK — you bring your own LLM keys, you pay your provider directly.

Launch pricing — early adopters get preferential rates on all future plans

Choose your hosting region

Your data stays in-region. Compliant with GDPR, UK DPA, and local regulations.

Starter
£15/month

2 vCPU • 4 GB RAM • 80 GB SSD

Perfect for personal projects and solo developers with a few agents.

  • OpenClaw + Fencepost pre-configured
  • Hard budget caps & cost tracking
  • Dedicated VPS • HTTPS • BYOK
  • Full audit trail
Recommended
Standard
£25/month

4 vCPU • 8 GB RAM • 160 GB SSD

Run multiple agents across channels. Our recommended plan for most teams.

  • Everything in Starter
  • 5–10 concurrent agents
  • Priority support
  • Daily backups • Custom domain
Pro
£45/month

8 vCPU • 16 GB RAM • 240 GB SSD

For teams running heavy workloads, many agents, or high-volume channels.

  • Everything in Standard
  • 10–25+ concurrent agents
  • Dedicated support • SLA 99.9%
  • Multi-team budget isolation

Need a custom configuration? Submit a request for custom server specs, regions, or integrations. We review each one case by case and use requests to plan which configurations and regions we support next.

All plans include a dedicated VPS provisioned from our recommended infrastructure partners. ScaleRight AI Ltd is not liable for the content hosted on your VPS nor for failures of third-party LLM providers. We guarantee system responsiveness and availability per our standard SLA. No per-seat charges. No hidden fees. You bring your own LLM API keys — you pay your provider directly.

Why pay when you can self-host for free?

Self-hosting isn't free — it costs developer time. Updates, SSL, backups, monitoring. A mid-level engineer costs ~£50/hour. If self-hosting takes even one hour per month, you're spending more than £25/mo.

10 min
Setup to first protected call
0 hrs
Monthly maintenance by you
99.9%
Uptime SLA on managed plans

Refer & Earn — A referral programme is coming soon. Earn Fencepost credits for every customer you refer — use them towards your own usage. Join the waitlist to be notified.

Enforcement Where Others Only Monitor

Observability tools show you the bill. Fencepost stops it before it happens.

Capability                                      Fencepost
Cost tracking                                   ✓
Alerts on overrun                               ✓
Hard budget enforcement                         ✓
Per-agent breakdown                             ✓
Self-hosted option                              ✓
Managed hosting (turnkey)                       ✓
IDE tool support (Claude Code, Cursor, Aider)   ✓
Provider failover                               ✓
Team budget namespaces                          ✓
Managed pricing from £15/mo                     ✓


Portkey's hard budget caps are Enterprise-only (custom pricing). LiteLLM enforces budgets but requires manual config.
Fencepost gives you hard enforcement + IDE tool support + managed hosting — plans from £15/mo.

FAQ

Is this just another monitoring tool?
No. Helicone and Langfuse are observability tools — they log and graph what happened after the fact. Fencepost enforces — it blocks API calls the moment your budget is hit. It's the difference between a fire alarm and a fire extinguisher.
What happens when my budget is exceeded?
Fencepost blocks all AI calls immediately via the before_tool_call hook. Your channels (WhatsApp, Telegram, etc.) stay connected — but AI responses are paused until the next billing cycle or you increase your budget. You get an alert before it happens.
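If you're calling the proxy from your own code, the refusal payload is easy to handle gracefully. A sketch in Python, using the payload shape shown in the example earlier on this page:

```python
import json

# Handling a Fencepost refusal client-side: a sketch using the payload
# shape shown in the example earlier on this page.
raw = '{"allowed": false, "reason": "Monthly budget exceeded: $20.12 / $20.00"}'
decision = json.loads(raw)

if not decision["allowed"]:
    # Pause AI features gracefully instead of crashing the channel.
    print(f"AI responses paused: {decision['reason']}")
```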
Does Fencepost only work with OpenClaw?
No! Fencepost works with any tool that makes LLM API calls. Set ANTHROPIC_BASE_URL or OPENAI_BASE_URL to your Fencepost endpoint and your Claude Code, Cursor, Aider, Continue, or Cline sessions route through it automatically. It also works with LangChain, CrewAI, AutoGen, the OpenAI/Anthropic SDKs, or any custom agent. OpenClaw has a one-click plugin, but Fencepost is truly universal.
What is BYOK (Bring Your Own Key)?
You provide your own API keys from OpenAI, Anthropic, or OpenRouter. Your keys live on your server — we never see or store them. You pay your LLM provider directly. We only charge for hosting.
Where is my data stored?
On a dedicated VPS in your chosen region (EU, UK, US, or APAC). No shared databases, no multi-tenant mixing. Your server = your data. EU hosting is GDPR compliant by architecture.
Who is liable for content on the VPS?
You are responsible for all content hosted on your dedicated VPS, including agent configurations, channel data, and API usage. ScaleRight AI Ltd is not liable for hosted content nor for failures of third-party LLM providers. We guarantee system responsiveness and availability per our standard SLA.
Can I upgrade or downgrade my plan?
Yes. You can switch between Starter, Standard, and Pro at any time. Upgrades take effect immediately (we migrate you to a larger VPS). Downgrades take effect at the next billing cycle. Your data and config are preserved.
What LLM models are supported?
Any model your providers offer. Fencepost proxies to OpenAI, Anthropic, and OpenRouter. OpenRouter alone gives you access to 200+ models including Claude, GPT-4o, Gemini, Llama, Mistral, and more.
How does Portkey compare?
Portkey is an excellent AI gateway, but their hard budget enforcement is gated behind Enterprise pricing (custom/contact sales). Fencepost gives you hard enforcement out of the box — managed plans from £15/mo. No Enterprise-only gate.
How do I use Fencepost with Claude Code or Cursor?
Set one environment variable: export ANTHROPIC_BASE_URL=https://your-server:3001/proxy/anthropic (for Claude Code) or export OPENAI_BASE_URL=https://your-server:3001/proxy/openai (for Cursor/Aider). Then use the tool exactly as before — all calls route through Fencepost for cost tracking and budget enforcement. No code changes needed.
What premium features are included?
All managed plans include: (1) Budget Alerts — webhook notifications at 80/90/100% of budget, (2) Usage Export — CSV/JSON reports with date-range filtering, (3) Provider Failover — auto-route to a backup provider when one goes down, and (4) Team Budgets — group agents under team namespaces with umbrella budget caps. Plus hard budget enforcement, cost tracking, and full audit trail.
Can I request a custom configuration or region?
Yes. If the standard plans or available regions don't fit your needs, submit a custom configuration request via email. We review all requests and use them to plan which configurations and regions to support next. Custom setups are approved on a case-by-case basis.
Can I cancel anytime?
Yes. Monthly plans can be cancelled anytime — no lock-in, no exit fees. Your VPS stays running until the end of the billing period. We can export your data on request.

Don't Be the Next $3,600 Story.

Whether you're running Claude Code, Cursor, Aider, or a fleet of AI agents — you're one misconfigured loop away from an eye-watering bill. Deploy Fencepost today. Sleep tonight.

Early adopters get preferential rates — join now

Start Saving on AI Costs Today

We're offering launch pricing for a limited time only. Once the window closes, prices go up. Early customers get their rate locked in forever — no increases, no surprises.

Founder pricing locked forever

Your launch rate never increases

OpenClaw + Fencepost bundled

23+ channels, budget enforcement, audit trail

Your keys, your server

BYOK model — API keys never leave your VPS

Dedicated infrastructure

No shared hosting — your own VPS in your region

No spam. You'll be the first to hear when we go live. Unsubscribe anytime.