Lean API Platform

Your agents are guessing at APIs.
Give them the actual Agent-Native spec.

Without specs, agents hallucinate and fail 60% of the time. LAP gives them verified, agent-native API specs.

2x
Accuracy
10x
Compression
35%
Cheaper
1,500+
APIs
Claude Code npx @lap-platform/lapsh init
Cursor npx @lap-platform/lapsh init --target cursor
Codex npx @lap-platform/lapsh init --target codex
OpenClaw Browse on ClawHub
The Problem

Agents can't find API specs. And when they do, the specs aren't built for them.

No specs

0.399
Accuracy

Agents guess endpoints from memory. The remarkable thing isn't that they fail 60% of the time -- it's that they get 40% right while making it all up.

Wrong format

1M+
Tokens of YAML

Existing specs are written for human developers. Handing that to an agent is like giving someone the dictionary when they asked for a phone number.

With LAP

0.860
Accuracy

One command installs a verified, agent-native spec. 10x smaller. 35% cheaper. As reliable as working from the full human-readable spec.

500 benchmark runs across 50 APIs. Full report →

Get Started

One command. Every API.

1. Pick your tool

Claude Code

$ npx @lap-platform/lapsh init

Cursor

$ npx @lap-platform/lapsh init --target cursor

Codex

$ npx @lap-platform/lapsh init --target codex

OpenClaw

Install from ClawHub →
2. Just prompt

"Integrate Discord into the project, use LAP to fetch the spec"

3. Agent gets fresh specs

LAP fetches a verified, compressed spec from the registry. Your agent gets exactly what it needs -- endpoints, auth, types -- without hallucinating.

Browse APIs
For API Providers

Publish your API to the registry.

Make your API instantly usable by AI agents worldwide. Compile, generate a skill, and publish.

Step 1

Compile

$ npx @lap-platform/lapsh compile api.yaml

Turn any OpenAPI, GraphQL, or AsyncAPI spec into a structured LAP file.

Step 2

Generate skill

$ npx @lap-platform/lapsh compile api.yaml --skill

Add auth, question routing, and execution playbooks on top of the compiled spec.

Step 3

Publish

$ npx @lap-platform/lapsh publish api.lap

Share your specs and skills with 1,500+ APIs already in the registry.

View on GitHub
npm install -g @lap-platform/lapsh
pip install lapsh
The Registry

1,500+ APIs. Pre-compiled. Ready to use.

Every spec in the registry is already compiled to LAP format. Search, download, and feed directly to your agent.

Browse the Registry
The Format

A compressed format built for agentic coding.

Typed contracts that prevent hallucination. enum(succeeded|pending|failed) means agents can't invent values. And at 10x smaller, they load fast and cost less.

OpenAPI Spec -- Verbose, human-oriented

# Create a Payment Intent
#
# A PaymentIntent guides you through the process
# of collecting a payment from your customer.
# We recommend that you create exactly one
# PaymentIntent for each order or customer session
# in your system...

POST /v1/payment_intents

# Parameters:
# amount (required) - Amount intended to be
#   collected. A positive integer representing
#   how much to charge in the smallest currency
#   unit (e.g., 100 cents to charge $1.00)...
# currency (required) - Three-letter ISO code
# payment_method_types (optional) - The list of
#   payment method types that this PI is allowed
#   to use. If not provided, defaults to...

LAP Format -- Structured, agent-native

@endpoint POST /v1/payment_intents
@desc Create a payment intent
@required {amount: int, currency: iso4217}
@optional {payment_method_types: [str]}
@returns(200) {id: str, amount: int, status: str}
@errors {400, 401, 402}
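
The format above is regular enough to parse mechanically, which is part of what "agent-native" means in practice. Here is a minimal Python sketch -- a hypothetical parser, not the official lapsh tooling -- that turns the payment-intent example into a dict an agent framework could inspect:

```python
import re

def parse_lap(text):
    """Parse a single-endpoint LAP snippet into a dict.

    A hypothetical sketch, not the official lapsh parser; it only
    covers the tags shown in the payment-intent example.
    """
    spec = {"returns": {}, "errors": []}
    for line in text.strip().splitlines():
        tag, _, rest = line.strip().partition(" ")
        if tag == "@endpoint":
            spec["method"], spec["path"] = rest.split(" ", 1)
        elif tag == "@desc":
            spec["desc"] = rest
        elif tag in ("@required", "@optional"):
            # "{amount: int, currency: iso4217}" -> {"amount": "int", ...}
            spec[tag[1:]] = dict(
                field.split(": ") for field in rest.strip("{}").split(", ")
            )
        elif tag.startswith("@returns"):
            status = int(re.search(r"\((\d+)\)", tag).group(1))
            spec["returns"][status] = rest
        elif tag == "@errors":
            spec["errors"] = [int(c) for c in rest.strip("{}").split(", ")]
    return spec

sample = """\
@endpoint POST /v1/payment_intents
@desc Create a payment intent
@required {amount: int, currency: iso4217}
@optional {payment_method_types: [str]}
@returns(200) {id: str, amount: int, status: str}
@errors {400, 401, 402}"""

spec = parse_lap(sample)
# spec["method"] == "POST"; spec["required"]["currency"] == "iso4217"
```

Because every field is typed and enumerated, there is nothing left for the agent to guess: required parameters, status codes, and error cases all come out as structured data.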

Compiles from any format

OpenAPI -- 3.x and Swagger 2.0
Postman Collections -- v2.x
AsyncAPI -- Event-driven APIs
Protobuf -- Protocol Buffers
GraphQL SDL -- Schemas and introspection
Comparison

The difference between guessing and knowing

                    Raw API Specs               Hand-written Skills   LAP
Accuracy            0.399 (guessing)            Varies                0.860 (verified)
Setup time          None, but agent struggles   Hours per API         One command
Structure           Prose-heavy, variable       Custom per author     Consistent, typed
Scalability         Copy-paste per API          Manual per API        1,500+ APIs ready
Auth handling       Implicit                    Manual                Declarative
Agent readability   Low                         Varies                Optimized
Maintenance         Specs drift                 Manual updates        Re-compile anytime
Skills

From spec to agent skill in one command.

Skills wrap a LAP spec with auth setup, question routing, and execution playbooks. Your agent gets everything it needs to call any API, out of the box.

Built for Claude Code, Cursor, Codex, and OpenClaw
npx @lap-platform/lapsh compile stripe-com --skill
@skill stripe-com
@version 2024-12-01

@auth
  type: bearer
  header: Authorization
  env: STRIPE_API_KEY

@questions
  "charge a customer"       -> POST /v1/payment_intents
  "list all subscriptions"  -> GET  /v1/subscriptions
  "refund a payment"       -> POST /v1/refunds

@hints
  Amounts are in cents (e.g., $10.00 = 1000)
  Currency uses ISO 4217 lowercase (e.g., "usd")

@playbook
  1. Read @auth and set up authentication
  2. Match user intent to @questions
  3. Look up endpoint in the LAP spec
  4. Build request with @required params
  5. Execute and return structured response

Auth setup

Declares auth type, header, and env variable. Your agent knows how to authenticate before making a single call.
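
Because the @auth block is declarative, translating it into request headers is mechanical. A hedged sketch of the bearer case from the Stripe skill above -- `build_auth_headers` and the dict shape are illustrative names, not part of the lapsh API:

```python
import os

# Mirrors the @auth block from the skill example above; the function
# name and dict shape are illustrative, not part of the lapsh API.
auth = {"type": "bearer", "header": "Authorization", "env": "STRIPE_API_KEY"}

def build_auth_headers(auth):
    """Turn a declarative @auth block into HTTP request headers."""
    if auth["type"] == "bearer":
        token = os.environ.get(auth["env"])
        if not token:
            raise RuntimeError(f"set {auth['env']} before calling the API")
        return {auth["header"]: f"Bearer {token}"}
    raise NotImplementedError(f"auth type {auth['type']!r} not sketched here")
```

An agent following the playbook would do this as step 1, failing fast with a clear message when the env variable is unset instead of issuing a doomed request.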

Question routing

Maps natural language intents to specific endpoints. No prompt engineering needed.
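
The @questions table is just data -- phrases mapped to endpoints -- which is what makes routing cheap. A toy word-overlap matcher, purely illustrative (in practice the agent's own model does the matching):

```python
# The @questions table from the skill example, as plain data.
QUESTIONS = {
    "charge a customer": ("POST", "/v1/payment_intents"),
    "list all subscriptions": ("GET", "/v1/subscriptions"),
    "refund a payment": ("POST", "/v1/refunds"),
}

def route(intent):
    """Pick the @questions entry sharing the most words with the intent.

    A toy sketch; a real agent lets the model match intent to entry,
    which is why no prompt engineering is needed.
    """
    words = set(intent.lower().split())
    best = max(QUESTIONS, key=lambda q: len(words & set(q.split())))
    return QUESTIONS[best]
```

Even this naive version sends "please refund my payment" to POST /v1/refunds; the point is that the routing table lives in the skill, not in a hand-tuned prompt.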

Playbook generation

Step-by-step execution instructions so your agent can go from intent to API call without guessing.

Try It

Paste any API spec. See LAP output instantly.

Input
LAP Output
FAQ

Frequently asked questions

Why do agents hallucinate API calls?
Because they have no way to find the spec, and even if they could, it's a million tokens of YAML written for humans. Agents without specs score 0.399 accuracy. Give them a LAP spec and accuracy jumps to 0.860. The spec doesn't make the agent smarter. It makes guessing unnecessary.
What is LAP?
LAP (Lean API Platform) is a structured format that makes any API instantly usable by AI agents. It compiles existing API specs (OpenAPI, Swagger, HTML docs, or raw URLs) into a compact, agent-native representation that cuts token usage by up to 88%.
How is LAP different from OpenAPI?
OpenAPI is designed for humans and tooling. LAP is designed for AI agents. A typical OpenAPI spec might use thousands of tokens to describe an endpoint; LAP compresses the same information into a fraction of that while preserving everything an agent needs to make the call. You can think of LAP as a compilation target, not a replacement.
How is LAP different from MCP?
MCP (Model Context Protocol) defines how agents talk to tool servers. LAP defines what the agent knows about an API before making the call. They work at different layers: MCP is a transport protocol, LAP is a knowledge format. You can use LAP specs inside an MCP server, or without MCP at all. They are complementary, not competing.
What input formats does LAP support?
The compiler accepts OpenAPI 3.x, Swagger 2.0, raw HTML documentation pages, and direct API URLs. If it describes an HTTP API, LAP can probably compile it.
Is LAP open source?
Yes. The format spec, CLI, and compiler are all open source under Apache 2.0. The registry is free to use and the compiled outputs are free to redistribute.
Do I need to modify my API to use LAP?
No. LAP works with your API as-is. Point the compiler at any existing spec or docs page and it generates a LAP file. No server changes, no SDK, no integration required.
What are Skills?
Skills are curated bundles of related API endpoints grouped by task, for example "send a message" or "create an invoice". They let agents load only the capabilities they need instead of an entire API surface, which further reduces token usage and improves accuracy.
How do I get started?
API providers: run lapsh compile on your spec to generate a LAP file, then publish it to the registry so agents can discover your API.
Developers: add skills from the plugin marketplace with lap-platform/claude-marketplace, ask your agent to browse the registry, or download and point to any spec directly.
Agent frameworks: fetch LAP files from the registry at runtime and feed them directly into your model's system prompt or tool definitions.
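
For the framework path, "feed them directly into your model's system prompt" can be as simple as string assembly. A hedged sketch -- the function name, prompt wording, and <lap-spec> delimiters are illustrative choices, not a LAP convention; fetch the spec from the registry or a local file however your framework prefers:

```python
def lap_system_prompt(lap_spec, task):
    """Wrap a LAP spec in a system prompt so the model works from the
    declared contract instead of memory. The wording and <lap-spec>
    delimiters are illustrative choices, not a LAP convention.
    """
    return (
        "You are an API-calling agent. Use ONLY the endpoints, parameters, "
        "and types declared in the LAP spec below; never invent fields.\n\n"
        f"<lap-spec>\n{lap_spec}\n</lap-spec>\n\n"
        f"Task: {task}"
    )

spec = "@endpoint POST /v1/refunds\n@required {payment_intent: str}"
prompt = lap_system_prompt(spec, "refund a payment")
```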
Are the APIs in the registry verified?
About 3% of registry entries are community-reviewed and labeled as verified. Verified entries have been checked for general correctness and auth documentation. All other entries are compiled automatically and may contain gaps.
Is LAP free?
Yes. The CLI, compiler, format spec, and public registry are all free. There are no paid tiers, usage fees, or premium features.
Where do the benchmark numbers come from?
The headline stats (88% fewer tokens, 35% cheaper, 29% faster) come from 500 standardized benchmark runs comparing LAP-compiled output against raw OpenAPI specs across a diverse set of real-world APIs. The methodology and raw data are published in the project's GitHub repository.
Can I run LAP privately?
Yes. The CLI and compiler run entirely on your machine. You can compile specs locally, store the output wherever you like, and never touch the public registry. For teams, you can host a private registry instance.
How is LAP different from Context7?
Different problem, complementary tools. Context7 focuses on framework and library documentation -- curated markdown guides for how to use React, Next.js, and similar tools. LAP is specifically for APIs: the actual endpoint contracts (paths, parameters, auth, types). We compile OpenAPI, GraphQL, and AsyncAPI specs into a compact format that agents can use to make correct API calls instead of guessing. 1,500+ APIs pre-compiled and ready to use.
How can I contribute?
Contributions are welcome. You can verify registry entries, submit new API compilations, improve the compiler, or propose changes to the format spec. Everything lives in the Lap-Platform GitHub org.