
FastRouter vs. Helicone

Helicone built one of the cleanest LLM observability products in the category. Mintlify acquired it in March 2026 and the team has been clear: maintenance mode, no new features. Here's what to do if you're still on it.

Andrej Gamser
9 Min Read

Acquisition status — March 2026

Mintlify acquired Helicone in March 2026. Founders Justin Torre and Cole Gottdank joined Mintlify. The cloud product is in maintenance mode — security patches, bug fixes, and new model support continue, but no new features. Plan a migration before the gap becomes a problem.

Disclosure. Published by FastRouter. Helicone built a great product and the founders deserve credit for it. This page is meant to help current users plan, not to dunk on the acquisition. Spot something inaccurate? Email us and we'll fix it.

Don't panic. Don't wait.

Helicone still works. The cloud product hasn't been turned off, the OSS repo still accepts PRs, and the gateway primitives — observability, sessions, prompt management, experiments, multi-provider routing, caching — are all intact. The risk is not "today." The risk is the next 12 months: no eval roadmap, no new gateway features, no security work beyond patches, and the Mintlify team is — understandably — focused on Mintlify's product, not Helicone's.

FastRouter covers Helicone's gateway and observability surface and adds the layer that maintenance-mode Helicone won't catch up on: continuous Smart and Automatic Evaluations, GEPA prompt optimization, video evals, MCP credential vaulting, and an active routing roadmap (7 strategies including AI Auto Model Router).

Three reasonable paths from here: migrate to a managed alternative (FastRouter, others), self-host Helicone OSS to keep it running on your own infra, or instrument with OpenTelemetry and stay backend-agnostic. We'll cover all three at the bottom.

What "maintenance mode" actually means here

Mintlify's stated rationale for the acquisition is "AI knowledge infrastructure" — they want to build agents that autonomously maintain and update documentation, and the Helicone gateway/observability primitives are useful inputs. Mintlify was already a Helicone customer pre-acquisition.

What that means for current Helicone users:

  • Cloud product still runs. Free, Pro ($79/mo), Team ($799/mo), and Enterprise tiers continue to be supported. SLAs unchanged.
  • Security patches and new model support continue. The team has committed to keeping the lights on.
  • No new features. No new evaluators, no new gateway capabilities, no new analytics surfaces.
  • OSS repo accepts community PRs. But there's no core team driving feature work.
  • Mintlify-internal use likely deepens. Some Helicone capabilities may end up tightly coupled to Mintlify's agent platform over time.

The pattern is similar to other "acqui-hire-with-a-product" situations: the existing platform doesn't disappear, but it stops being a competitive product. The longer you wait, the more your stack accumulates dependencies on a tool that isn't moving.

Suggested timeline

If you're on Helicone today, the right window for migration planning is 6–12 months. Long enough to do it without panic, short enough that you're not still on a static product when the next round of evals/routing/MCP capabilities ships across the rest of the category.

Feature matrix

Where the two products compare today. ✓ supported, ✗ not supported, ◑ partial, ⏸ frozen (maintenance only).

| Capability | FastRouter (Active) | Helicone (Maintenance) |
| --- | --- | --- |
| Roadmap status | ✓ Active development | ⏸ Maintenance only (Mintlify, March 2026) |
| OpenAI-compatible endpoint | ✓ | ✓ |
| Async / non-proxy mode | ◑ Sidecar logging | ✓ Helicone's signature option |
| Multi-provider routing | ✓ 7 strategies incl. AI Auto Router | ⏸ Provider routing + failover; no new strategies coming |
| AI Auto Model Router | ✓ | ✗ |
| Smart / Automatic Evaluations | ✓ Live production evals | ⏸ LLM-as-judge experiments shipped, no new evaluators |
| GEPA prompt optimization | ✓ | ✗ |
| Video evaluations | ✓ | ✗ |
| Sessions / multi-step traces | ✓ | ✓ Strong implementation |
| Custom properties / metadata | ✓ | ✓ |
| Prompt management / versioning | ✓ | ✓ Prompt versioning, A/B testing |
| Caching | ✓ Semantic + simple | ✓ Cross-provider Redis cache |
| Per-user / per-feature cost tracking | ✓ | ✓ |
| MCP credential vaulting | ✓ | ✗ |
| Workspace governance / RBAC | ✓ | ◑ 5 orgs on Team plan; SOC-2 / HIPAA on Team+ |
| Open source / self-host | ✗ Managed only | ✓ Apache 2.0 on GitHub; Docker, Helm |
| Active community / new features | ✓ | ⏸ Frozen at acquisition |
| Migration assistance | ✓ Hands-on migration support included | ✗ |

The good news: this is the easy part to replace

Helicone's observability layer is genuinely well-designed — sessions, custom properties, per-user/per-feature cost aggregation, request/response tracing across 100+ providers. It's also the part of Helicone that has the most parity in the broader market. FastRouter ships an observability stack at feature parity for almost all of those primitives, and the rest of the category (Langfuse, Braintrust, OpenTelemetry-native backends) covers the same surface.

If your reason for being on Helicone is observability, the migration is structurally low-risk. The data model maps cleanly to FastRouter's traces, sessions, and custom properties; the dashboard surfaces are similar; per-user cost tracking is included; OpenTelemetry export keeps you backend-agnostic if you'd rather stay portable.
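Helicone's custom properties travel as `Helicone-Property-*` request headers, so porting them is mostly a key-renaming exercise. A minimal sketch of that mapping step — the header names follow Helicone's documented convention; the flattened property names on the output side are illustrative, not a specific FastRouter schema:

```python
# Helicone attaches per-request metadata as HTTP headers ("Helicone-Property-*").
# Migrating mostly means lifting those headers into a plain metadata dict.
helicone_headers = {
    "Helicone-Property-Feature": "summarizer",
    "Helicone-Property-Environment": "prod",
    "Helicone-User-Id": "user-123",  # user attribution uses a separate header
}

def to_custom_properties(headers: dict) -> dict:
    """Strip the Helicone-Property- prefix into a plain metadata dict."""
    prefix = "Helicone-Property-"
    return {
        key[len(prefix):].lower(): value
        for key, value in headers.items()
        if key.startswith(prefix)
    }

print(to_custom_properties(helicone_headers))
# {'feature': 'summarizer', 'environment': 'prod'}
```

Whatever destination you pick, keep the property names stable through the move so historical and new dashboards segment on the same keys.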


Helicone built the right primitives. The bad news for Helicone is that those primitives are now commoditized.

Helicone evolved into a gateway. The roadmap stopped there.

Helicone shipped a real Rust-based gateway in late 2025 — provider routing with weighted policies, instant failover on rate limits/timeouts/server errors, Redis-based caching with cross-provider compatibility (cache OpenAI for Anthropic, get up to 95% cost reduction on cache hits), per-user rate limiting, and roughly 10K req/sec capacity per node. As of the acquisition, those capabilities are stable but won't extend further.
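To make "weighted policies with instant failover" concrete, here is an illustrative sketch of the per-request decision such a gateway applies — this is not Helicone's or FastRouter's actual implementation, and the provider names and weights are invented:

```python
import random

# Invented example policy: 70/20/10 traffic split across three providers.
PROVIDERS = [("openai", 0.7), ("anthropic", 0.2), ("groq", 0.1)]

def pick_order(providers, rng=random):
    """Weighted shuffle: sample providers without replacement by weight.
    The first pick is the primary; the rest form the failover order."""
    remaining = list(providers)
    order = []
    while remaining:
        r = rng.uniform(0, sum(w for _, w in remaining))
        for i, (name, w) in enumerate(remaining):
            r -= w
            if r <= 0:
                order.append(name)
                remaining.pop(i)
                break
        else:  # float residue: take the last one
            order.append(remaining.pop()[0])
    return order

def call_with_failover(order, send):
    """Try providers in order; fall through on rate limits / timeouts / 5xx."""
    for name in order:
        try:
            return send(name)
        except Exception:
            continue
    raise RuntimeError("all providers failed")
```

The same shape supports canary releases: give the new provider or model a small weight and watch its share of traffic before promoting it.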

FastRouter's gateway is being actively developed: 7 routing strategies, AI Auto Model Router, MCP credential vaulting, weighted shuffle for canary releases, category-based routing as a first-class strategy. If routing depth and AI-driven model selection are part of your roadmap, you'll outgrow what maintenance-mode Helicone can offer.

The widest gap is here

Helicone's experiments and evaluators were genuinely interesting — LLM-as-judge with side-by-side comparison, dataset-driven testing, prompt versioning paired with experimentation. The team had ambitious roadmap signals before the acquisition. Those signals are now frozen.

FastRouter's eval layer is the part of the product that's moving fastest:

  • Smart Evaluations — AI quality scoring on live production calls, surfacing the best-performing model for your workload automatically.
  • Automatic Evaluations — background sampler that benchmarks competing models on real traffic without your involvement.
  • GEPA — Generative Evolutionary Prompt Architecture iterates across prompt and model combinations toward Pareto-optimal cost/quality.
  • Video evaluations — compare model output on video inputs, a content type no other gateway in the category currently supports.
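To make "Pareto-optimal cost/quality" concrete, here is a toy sketch of extracting a Pareto frontier from scored prompt/model candidates. This is not GEPA's actual algorithm, and the candidates and scores are invented:

```python
def pareto_front(candidates):
    """Keep candidates that no other candidate beats on both axes:
    cost (lower is better) and quality (higher is better)."""
    front = []
    for c in candidates:
        dominated = any(
            o["cost"] <= c["cost"] and o["quality"] >= c["quality"]
            for o in candidates
            if o is not c
        )
        if not dominated:
            front.append(c)
    return front

# Invented numbers: two prompt versions across two models.
candidates = [
    {"name": "gpt-4o + prompt v3",      "cost": 1.00, "quality": 0.92},
    {"name": "gpt-4o-mini + prompt v3", "cost": 0.10, "quality": 0.88},
    {"name": "gpt-4o-mini + prompt v1", "cost": 0.10, "quality": 0.71},
]
```

In this toy data, prompt v1 on the small model is dominated (same cost as v3, lower quality), so only the first two combinations survive — the frontier an optimizer would iterate against.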

If evals were one of the reasons you chose Helicone in the first place, the gap will widen meaningfully over the next 6–12 months across the whole category.

Helicone OSS is a real option — with real costs

Helicone is Apache 2.0 on GitHub. There's a Docker Compose for development, a Helm chart for Kubernetes, and the gateway repo is also open source. Self-hosting Helicone keeps the product running indefinitely on your own infrastructure — patches still come, models still get added, and you're not exposed to the cloud product's roadmap freeze.

The trade-off is the same trade-off as any self-hosted observability platform. You're now operating Postgres, Clickhouse, Redis, container orchestration, log retention, and the upgrade path. None of that is unique to Helicone — it's the carrying cost of any stateful platform — but it's worth pricing honestly. For most teams that originally chose Helicone Cloud to avoid running infrastructure, going OSS is a solution that costs more than it saves.

If self-hosting is what you want anyway (compliance, sovereignty, code visibility), Helicone OSS is a reasonable target. LiteLLM is the more common self-hosted choice for the gateway role; pair with Langfuse for evals.

Three paths off Helicone Cloud

1) Move to a managed alternative — FastRouter

Lowest engineering lift. Both products are OpenAI-compatible: change the base URL, swap the API key, port your custom property names. Sessions, traces, prompts, and per-user cost tracking all map cleanly. FastRouter offers hands-on migration support for production cutovers and can run in passive audit mode against a slice of your traffic for 7 days first, so you have measured numbers before the switch.
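A sketch of what that cutover amounts to for an OpenAI-compatible stack, using only the standard library. The FastRouter base URL and model string below are placeholders, not documented values — use the endpoint and key from your own dashboard:

```python
import json
from urllib import request

# The whole migration for an OpenAI-compatible stack is the base URL and key.
BASE_URL = "https://go.fastrouter.ai/api/v1"  # placeholder; was e.g. a Helicone proxy URL
API_KEY = "sk-fastrouter-..."                 # placeholder; was provider key + Helicone-Auth

def build_chat_request(messages, model="openai/gpt-4o-mini"):
    """Build a standard chat-completions POST against the new base URL."""
    body = json.dumps({"model": model, "messages": messages}).encode()
    return request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )

# Sending is unchanged from any OpenAI-compatible endpoint:
#   with request.urlopen(build_chat_request([...])) as resp:
#       reply = json.load(resp)
```

If you use an SDK, the same change is one constructor argument (the client's base URL) plus the key swap; request and response shapes stay the same.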

2) Self-host Helicone OSS

Highest fidelity to what you have today. Apache 2.0 repo, Docker Compose for dev, Helm chart for production. You inherit the operational burden — Postgres, Clickhouse, Redis, scaling, upgrades — and you're frozen at the OSS feature set, but the product keeps running on your terms. Best fit if you have a strong DevOps function and Helicone-specific workflows you don't want to retrain.

3) Instrument with OpenTelemetry, stay backend-agnostic

Strategically the most defensive option. Wrap your LLM calls with OpenTelemetry GenAI semantic conventions (or use OpenLLMetry / OpenInference), then send to whatever backend you want — Langfuse, FastRouter, your own Clickhouse, multiple at once. You decouple from any single vendor's roadmap. Highest engineering lift up front, but you don't have to do this migration again next time someone gets acquired.
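The GenAI semantic conventions fix the attribute names, which is what makes the backend swappable. A minimal sketch of the attributes recorded for one chat call — the `gen_ai.*` keys are from the OTel GenAI semantic conventions; how you attach them to a span depends on your SDK:

```python
def llm_span_attributes(provider, model, input_tokens, output_tokens):
    """Attribute dict for one LLM call, using OTel GenAI semconv names.
    Any OTel-compatible backend can aggregate on these keys."""
    return {
        "gen_ai.operation.name": "chat",
        "gen_ai.system": provider,            # e.g. "openai", "anthropic"
        "gen_ai.request.model": model,
        "gen_ai.usage.input_tokens": input_tokens,
        "gen_ai.usage.output_tokens": output_tokens,
    }
```

With an OTel SDK you would open a span around the call, set these via `span.set_attributes(...)`, and point the OTLP exporter at whichever backend (or backends) you choose — that exporter endpoint is the only vendor-specific piece.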

Hybrid is fine

You don't have to pick one. A common approach is option 3 (OpenTelemetry instrumentation) + option 1 (FastRouter as the primary backend with the routing/eval layer on top). The OTel layer keeps you portable; FastRouter gives you the active product surface.

Common questions

1) Do I have to migrate?

Not today. Helicone Cloud still runs, the OSS code is still there, and Mintlify is committed to security patches and new model support. The reason to plan a migration now is the trajectory — every quarter you stay on a frozen product, the gap to actively-developed alternatives widens.

2) How long will Helicone Cloud keep running?

Mintlify hasn't announced an end-of-life date, and the public commitment has been to continued operation. We'd plan around a 12–24 month horizon for "still works fine" and a longer horizon for "still competitive." If you have time-sensitive workloads, give yourself a runway before the second half of that range.

3) How hard is the FastRouter migration?

For most stacks, low effort. Both expose OpenAI-compatible endpoints; the bulk of work is endpoint and key changes. Custom property names port directly. Session and trace data models map well. We help with the cutover for production workloads. The 7-day passive audit means you can measure the destination before fully committing.

4) What about Helicone-specific features I rely on?

Almost everything Helicone Cloud ships has parity in FastRouter — sessions, custom properties, per-user cost tracking, prompt management, multi-provider routing, caching. The exceptions are usually edge cases. If you depend on something specific (a particular dashboard surface, a custom integration), tell us and we'll be specific about whether it ports or whether you need a workaround.

5) Should I self-host Helicone instead of migrating?

If your reason to be on Helicone Cloud was avoiding infrastructure, no — self-hosting trades the acquisition risk for a different kind of operational tax. If you'd be comfortable running Postgres + Clickhouse + Redis anyway and you specifically want to stay on Helicone's data model, yes — Apache 2.0 is a real safety net.

6) What about Langfuse?

Langfuse is the most common Helicone migration target for teams whose primary need is observability and evals (not gateway routing). It's open source, OpenTelemetry-friendly, and has a healthy roadmap. If you want a gateway alongside the observability, Langfuse pairs well with FastRouter — observability in Langfuse, routing/governance/evals in FastRouter.

7) Is Helicone's data going anywhere?

Mintlify hasn't announced any data retention changes for the cloud product. Data export is available through the existing API, so we'd recommend exporting your historical traces and sessions on whatever cadence makes sense before doing any migration — independent of which destination you pick.
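A paginated export loop, sketched with only the standard library. The endpoint path and request-body shape below follow Helicone's request-query API as we understand it — treat both as assumptions and verify against the current API reference before relying on this:

```python
import json
from urllib import request

# Assumed endpoint — confirm against Helicone's current API docs.
QUERY_URL = "https://api.helicone.ai/v1/request/query"

def build_page_request(api_key, offset, limit=500):
    """One page of the export; filter/offset/limit shape is an assumption."""
    body = json.dumps({"filter": "all", "offset": offset, "limit": limit})
    return request.Request(
        QUERY_URL,
        data=body.encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

def export_all(api_key, fetch, out_path="helicone_export.jsonl", limit=500):
    """Walk pages until empty, writing one JSON record per line.
    `fetch` sends the request and returns the page's list of records."""
    offset = 0
    with open(out_path, "w") as f:
        while True:
            page = fetch(build_page_request(api_key, offset, limit))
            if not page:
                break
            for record in page:
                f.write(json.dumps(record) + "\n")
            offset += limit
```

JSONL keeps the export destination-agnostic: every candidate backend can ingest it, and you retain a local copy regardless of what happens to the cloud product.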


See the difference on your own traffic

Run the FastRouter audit against your Helicone usage.

Seven days, passive, zero code changes. We'll send back a feature-parity report, a side-by-side cost analysis, and an EU-deployment plan if you need one.

Related Articles

AI Gateway Comparisons

FastRouter vs. OpenRouter

Both put a single API in front of every major LLM provider. Past that, the products diverge — on cost, routing depth, evaluations, and the governance tooling that decides whether you can still use either one at $100K/month in spend.

Ritesh Prasad
11 Min Read · May 9, 2026
AI Gateway Comparisons

FastRouter vs. Langfuse

FastRouter is a gateway. Langfuse is an observability and eval platform. They're not really competing — they're often used together. This page is here to make that decision sharp instead of confusing.

Andrej Gamser
10 Min Read · May 7, 2026
AI Gateway Comparisons

FastRouter vs. Requesty

Both put a single API in front of every major LLM provider. Past that, the products diverge — on cost, routing depth, evaluations, and the governance tooling that decides whether you can still use either one at $100K/month in spend.

Siv Souvam
10 Min Read · May 7, 2026