# LAP - Deep Technical Context

> This file contains comprehensive technical details about LAP (Lean API Platform) for AI systems that need to answer detailed questions about the project.

## Overview

LAP (Lean API Platform) is a compiler and format that transforms verbose API specifications into a token-efficient, structured format optimized for AI agents. It consists of three components: the Format (a structured text grammar), the Registry (a hosted catalog of pre-compiled specs), and Skills (agent-ready bundles with auth, routing, and playbooks).

- Version: 0.4.0
- License: Apache 2.0
- GitHub: https://github.com/Lap-Platform
- Website: https://lap.sh
- Registry: https://registry.lap.sh

---

## LAP Format Specification (v0.3)

### Header Directives

```
@lap v0.3             # Required, must be first line
@api                  # API name
@base                 # Base URL for all endpoints
@version              # API version identifier
@auth                 # Default auth (e.g., "Bearer bearer")
@endpoints            # Total endpoint count (enables truncation detection)
@common_fields {...}  # Fields available on all endpoints
@type {...}           # Reusable named type definitions
@toc                  # Table of contents
```

### Endpoint Block

```
@endpoint
@desc                      # Description (standard mode only)
@auth                      # Override header auth
@required {...}            # Required parameters
@optional {...}            # Optional parameters
@body ->                   # Request body references named type
@returns(HTTP_CODE) {...}  # Response schema
@errors {...}              # Error codes
@example_request           # Inline example
```

### Type System

| Syntax | Meaning | Example |
|--------|---------|---------|
| str | String | name: str |
| int | Integer | amount: int |
| num | Float | price: num |
| bool | Boolean | active: bool |
| map | Untyped object | metadata: map |
| map{...} | Typed object | address: map{street: str, zip: str} |
| [type] | Array | tags: [str] |
| type? | Nullable | email: str? |
| type(fmt) | Format hint | created: int(unix-timestamp), id: str(uuid) |
| type=default | Default value | limit: int=10 |
| enum(a\|b\|c) | Enumeration | status: enum(active\|inactive\|pending) |
| any | Any type | data: any |

Common format hints: email, uri, uuid, date, date-time, unix-timestamp, ISO4217.

### Standard vs Lean Mode

| Aspect | Standard | Lean |
|--------|----------|------|
| Descriptions | Included (@desc) | Removed |
| Comments | Inline (#) | Removed |
| Use case | Debugging, human review | Production agents |
| Compression | ~5.2x median (OpenAPI) | ~8.7x median (OpenAPI) |

### How Compression Works

1. Structural removal (~30%): strip YAML scaffolding (`paths:`, `requestBody:`, `schema:`)
2. Directive grammar (~25%): @directives replace nested structures
3. Type compression (~10%): `type: string, format: uuid` becomes `str(uuid)`
4. Redundancy elimination (~20%): extract common fields via @common_fields
5. Lean mode (~15%): strip descriptions (optional)

---

## Supported Input Formats

The compiler accepts seven input formats:

1. **OpenAPI 3.x / Swagger 2.0** - The primary input format. Extracts paths, operations, parameters, request bodies, response schemas, and security definitions. Handles $ref resolution with cycle detection. Supports inline type generation up to depth 2.
2. **GraphQL SDL** - Converts SDL types to LAP format. Handles queries, mutations, and subscriptions as "endpoints". Resolves nested types inline. Also accepts introspection JSON.
3. **AsyncAPI 2.0/2.1/3.0** - Converts pub/sub patterns to LAP endpoints. Handles message schemas and payload types.
4. **Protobuf / gRPC** - Accepts .proto files or directories. Converts RPC services to HTTP-equivalent endpoints. Returns a list of specs for multi-proto projects.
5. **Postman Collections v2.1** - Extracts requests, folders (grouping), and scripts as descriptions. Handles nested folder hierarchies.
6. **AWS Smithy** - Accepts .smithy IDL files, JSON AST, or smithy-build.json projects. Extracts @http bindings and auth schemes (SigV4, Bearer, ApiKey).
7. **AWS SDK JSON** - ~300 AWS services supported. Parses operations, shapes, auth, and metadata.

---

## CLI Commands

Install: `npm install -g @lap-platform/lapsh` or `pip install lapsh`

### compile

```bash
lapsh compile <spec> [-o OUTPUT] [-f FORMAT] [--lean] [--skill]
```

Compile any API spec to LAP format. Auto-detects the input format. Use --lean for maximum compression (strips descriptions). Use --skill to generate a skill bundle with auth, question routing, and playbooks.

### validate

```bash
lapsh validate <spec>
```

Prove zero information loss by round-tripping OpenAPI to LAP to structured data. Reports: endpoints matched, parameters matched, error codes matched.

### inspect

```bash
lapsh inspect <file.lap> [--endpoint "METHOD /path"]
```

Parse and display a LAP file with formatted output. Optionally filter to a specific endpoint.

### convert

```bash
lapsh convert <file.lap> [-f FORMAT] [-o OUTPUT]
```

Convert LAP back to OpenAPI YAML. Used for round-trip verification and interoperability.

### diff

```bash
lapsh diff <old.lap> <new.lap> [--format summary|changelog] [--version VERSION]
```

Detect API changes with breaking-change classification. Semver severity: MAJOR (breaking), MINOR (additions), PATCH (descriptions).

### skill / skill-batch / skill-install

```bash
lapsh skill <spec> [-o OUTPUT] [--ai] [--install]
lapsh skill-batch <specs-dir> -o OUTPUT [--ai]
lapsh skill-install
```

Generate Claude Code skill directories from API specs. Layer 1: mechanical (no LLM, deterministic). Layer 2: optional LLM enhancement via the --ai flag. Output: SKILL.md + references/api-spec.lap (3,000-token budget).

### publish

```bash
lapsh publish <file.lap> --provider SLUG [--name NAME] [--source-url URL] [--skill]
```

Publish a compiled spec to the LAP registry. Requires GitHub auth via `lapsh login`.

### benchmark

```bash
lapsh benchmark <spec>
lapsh benchmark-all
```

Measure token counts and compression ratios for specs.

---

## Registry

The LAP Registry is a public API catalog hosted at registry.lap.sh.
### Key Stats

- 1,389+ pre-compiled API specs
- 45,000+ total endpoints
- 350+ API providers
- Average 72% size reduction (82% in lean mode)

### API Endpoints

| Method | Path | Description |
|--------|------|-------------|
| GET | /v1/apis | List all APIs (pagination: limit, offset) |
| GET | /v1/apis/:name | Get spec (returns text/lap) |
| GET | /v1/apis/:name@:version | Get specific version |
| GET | /v1/apis/:name/versions | List version history |
| GET | /v1/search?q= | Fuzzy search specs |
| GET | /v1/stats | Registry analytics |
| GET | /v1/providers | List providers |
| POST | /v1/compile | Compile spec to LAP (10 MB max, CORS: lap.sh only) |
| POST | /v1/apis/:name | Publish spec (auth required) |
| GET | /llms.txt | LLM-oriented registry index |
| GET | /sitemap.xml | Dynamic XML sitemap |
| GET | /registry.json | Structured JSON export |

### Rate Limits

- Public endpoints are rate-limited per IP
- Compile endpoint: subject to CORS restrictions (lap.sh origin only)

### Authentication

GitHub OAuth with two token types:

- Session tokens: 30-day web sessions
- API tokens: 1-year CLI tokens (via `lapsh login`)

---

## Benchmark Results

### Methodology

- 500 total benchmark runs (100% completion rate)
- 50 production APIs tested across 5 formats
- Model: Claude Sonnet 4.5 (claude-sonnet-4-5-20250929)
- 5 documentation tiers: none, pretty (original), minified, lap-standard, lap-lean
- 2 tasks per API, scored on endpoint identification (60%), parameter accuracy (30%), code quality (10%)
- Total cost: $130.68

### Headline Results

| Metric | Value |
|--------|-------|
| Token reduction (LAP-Lean vs pretty) | 88% |
| Cost savings (LAP-Lean vs pretty) | 35% |
| Wall time improvement | 29 seconds faster per run |
| Quality (LAP-Lean) | 0.851 average score |
| Quality (pretty/original) | 0.825 average score |
| No-documentation baseline | 0.40 average score |

Key finding: LAP-Lean achieves equivalent or slightly better quality than original specs while using 88% fewer tokens and costing 35% less.

### Tier Performance

| Tier | Avg Score | Description |
|------|-----------|-------------|
| LAP-Lean | 0.851 | Best performer, maximum compression |
| LAP-Standard | 0.845 | With descriptions |
| Minified | 0.838 | Whitespace removed |
| Pretty (original) | 0.825 | Full original spec |
| None (no docs) | 0.40 | Prior knowledge only |

### Compression by Format

| Format | Specs Tested | Median Compression | Best Case |
|--------|-------------|-------------------|-----------|
| OpenAPI | 30 | 5.2x | 39.6x (Notion) |
| Postman | 36 | 4.1x | 24.9x |
| Protobuf | 35 | 1.5x | 60.1x |
| AsyncAPI | 31 | 1.4x | 39.1x |
| GraphQL | 30 | 1.3x | 40.9x |

### Top OpenAPI Compressions

- Notion: 39.6x (68,587 -> 1,733 tokens)
- Snyk: 38.7x (201,205 -> 5,193 tokens)
- Zoom: 37.8x (848,983 -> 22,474 tokens)

### Statistical Significance

- Documentation vs no documentation: p << 0.001 (highly significant)
- Among documented tiers (quality): p > 0.05 (not statistically significant -- LAP matches original quality)
- Cost/token metrics (LAP vs original): p < 0.001 (highly significant savings)

### APIs Tested

- OpenAPI: Figma, Stripe, Twilio, GitHub REST, DigitalOcean, Slack, Spotify, Box, Plaid, Resend
- AsyncAPI: Streetlights, Slack RTM, Adeo Kafka, Social Media, Gitter Streaming, Gemini WebSocket, Kraken WebSocket, Correlation ID, Operation Security, RPC Server
- GraphQL: GitHub, SWAPI, Yelp, Shopify, Artsy, Linear, Saleor, Elasticsearch, Coral, Unraid
- Postman: Twilio, Postman Echo, Adobe, SAP, Stripe, Azure DevOps, Auth0, Braintree, InfluxDB, Akeneo
- Protobuf: Google Storage, Pub/Sub, Vision, Data Catalog, Translate, Spanner, Firestore, Talent, Language, Billing

---

## Skills System

Skills are agent-ready bundles that wrap a LAP spec with:

1. **Auth setup**: Declares auth type, header name, and environment variable.
2. **Question routing**: Maps natural-language intents to specific endpoints.
3. **Hints**: Domain-specific guidance (e.g., "amounts are in cents").
4. **Playbook**: Step-by-step execution instructions for the agent.

### Generation

- Layer 1 (mechanical): Deterministic, no LLM required. Generates from spec structure.
- Layer 2 (enhanced): Optional LLM polishing via the claude CLI for better question routing.
- Token budget: 3,000 tokens per skill.

### Output Structure

```
skill-name/
  SKILL.md           # Main skill file with YAML frontmatter
  references/
    api-spec.lap     # Embedded LAP spec (lean mode)
```

### Example Skill

```
@skill stripe-com
@version 2024-12-01
@auth
  type: bearer
  header: Authorization
  env: STRIPE_API_KEY
@questions
  "charge a customer" -> POST /v1/payment_intents
  "list all subscriptions" -> GET /v1/subscriptions
  "refund a payment" -> POST /v1/refunds
@hints
  Amounts are in cents (e.g., $10.00 = 1000)
  Currency uses ISO 4217 lowercase (e.g., "usd")
@playbook
  1. Read @auth and set up authentication
  2. Match user intent to @questions
  3. Look up endpoint in the LAP spec
  4. Build request with @required params
  5. Execute and return structured response
```

---

## Programmatic Usage

### Python

```python
from pathlib import Path

from lap.core.compilers import compile, detect_format
from lap.core.parser import parse_lap

# Compile any spec
spec = compile("api.yaml")  # Returns LAPSpec
print(spec.to_lap(lean=True))

# Parse existing LAP file
text = Path("api.lap").read_text()
spec = parse_lap(text)
for ep in spec.endpoints:
    print(f"{ep.method} {ep.path}: {ep.summary}")
    for p in ep.required_params:
        print(f"  {p.name}: {p.type}")
```

### TypeScript

```typescript
import { compile, detectFormat } from '@lap-platform/lapsh';

const spec = compile('api.yaml');
const format = detectFormat('api.yaml');
```

### Framework Integrations

**LangChain:**

```python
from lap.middleware import LAPDocLoader

loader = LAPDocLoader("stripe.lap", lean=True)
docs = loader.load()  # One document per endpoint
```

**CrewAI:**

```python
from integrations.crewai.lap_tool import LAPLookup

tool = LAPLookup(specs_dir="output/")
```

**OpenAI Function Calling:** Each LAP endpoint converts to a function definition with name, description, and JSON Schema parameters.

**MCP (Model Context Protocol):** Each LAP endpoint becomes an MCP tool, with its name derived from method+path, its description from @desc, and its input schema from @required/@optional.

---

## Diff & Change Detection

The diff engine performs semantic comparison of LAP specs:

```bash
lapsh diff v1.lap v2.lap --format changelog
```

Change categories:

- endpoint_added / endpoint_removed (breaking if removed)
- param_added / param_removed (breaking if required param removed)
- param_type_changed (usually breaking)
- response_field_added / removed (non-breaking)
- error_added / removed (non-breaking)
- description_changed (non-breaking)

Severity mapping: MAJOR (breaking changes), MINOR (new optional params/endpoints), PATCH (descriptions, error codes).
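The severity mapping above can be sketched as a simple fold over the change list. This is an illustration only; the change-record shape (`{"kind": ..., "required": ...}`) is a hypothetical data model, not the diff engine's actual one:

```python
BREAKING = {"endpoint_removed", "param_type_changed"}
ADDITIVE = {"endpoint_added", "param_added"}

def semver_severity(changes: list[dict]) -> str:
    """Fold a list of change records into MAJOR / MINOR / PATCH."""
    severity = "PATCH"
    for change in changes:
        kind = change["kind"]
        # param_removed is breaking only when the removed param was required
        if kind in BREAKING or (kind == "param_removed" and change.get("required")):
            return "MAJOR"
        if kind in ADDITIVE:
            severity = "MINOR"
    return severity
```

Any single breaking change dominates (MAJOR), additions without breakage yield MINOR, and description or error-code changes fall through to PATCH.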
---

## Featured Registry APIs

| API | Endpoints | Size Reduction |
|-----|-----------|---------------|
| Stripe | 327 | 81% |
| GitHub | 930 | 79% |
| Slack | 174 | 74% |
| Discord | 103 | 76% |
| Twilio | 278 | 72% |
| OpenAI | 68 | 77% |
| Cloudflare | 531 | 83% |
| Spotify | 94 | 75% |
| Anthropic | 12 | 69% |
| SendGrid | 156 | 78% |

---

## FAQ

**What is LAP?**

LAP (Lean API Platform) is a structured format that makes any API instantly usable by AI agents. It compiles existing API specs (OpenAPI, Swagger, GraphQL, Postman, and more) into a compact, agent-native representation that cuts token usage by up to 88%.

**How is LAP different from OpenAPI?**

OpenAPI is designed for humans and tooling. LAP is designed for AI agents. A typical OpenAPI spec might use thousands of tokens to describe an endpoint; LAP compresses the same information into a fraction of that while preserving everything an agent needs to make the call. You can think of LAP as a compilation target, not a replacement.

**What input formats does LAP support?**

The compiler accepts seven formats: OpenAPI 3.x / Swagger 2.0, GraphQL SDL, AsyncAPI, Protobuf/gRPC, Postman Collections v2.1, AWS Smithy, and AWS SDK JSON. If it describes an HTTP API, LAP can probably compile it.

**Is LAP open source?**

Yes. The format spec, CLI, and compiler are all open source under Apache 2.0. The registry is free to use, and the compiled outputs are free to redistribute.

**Do I need to modify my API to use LAP?**

No. LAP works with your API as-is. Point the compiler at any existing spec and it generates a LAP file. No server changes, no SDK, no integration required.

**What are Skills?**

Skills are curated bundles of related API endpoints grouped by task, for example "send a message" or "create an invoice". They let agents load only the capabilities they need instead of an entire API surface, which further reduces token usage and improves accuracy.
**How do I get started?**

API providers: run `lapsh compile` on your spec to generate a LAP file, then publish it to the registry so agents can discover your API. Developers: add skills from the lap-platform/claude-marketplace plugin marketplace, ask your agent to browse the registry, or download and point to any spec directly. Agent frameworks: fetch LAP files from the registry at runtime and feed them directly into your model's system prompt or tool definitions.

**Are the APIs in the registry verified?**

About 3% of registry entries are community-verified and labeled as such. Verified APIs have been reviewed for general correctness and auth documentation. All other entries are compiled automatically and may contain gaps.

**Is LAP free?**

Yes. The CLI, compiler, format spec, and public registry are all free. There are no paid tiers, usage fees, or premium features.

**Where do the benchmark numbers come from?**

The headline stats (88% fewer tokens, 35% cheaper, 29 seconds faster per run) come from 500 standardized benchmark runs comparing LAP-compiled output against original specs across 50 real-world APIs in five formats, using Claude Sonnet 4.5. The methodology and raw data are published in the project's GitHub repository.

**Can I run LAP privately?**

Yes. The CLI and compiler run entirely on your machine. You can compile specs locally, store the output wherever you like, and never touch the public registry. For teams, you can host a private registry instance.

**How can I contribute?**

Contributions are welcome. You can verify registry entries, submit new API compilations, improve the compiler, or propose changes to the format spec. Everything lives in the Lap-Platform GitHub org.