Fast Apply Models: Deterministic Merges for LLM Edits

Fast Apply is Morph’s merge model: instruction + code + update in, fully merged output out. Part of Morph’s three‑model stack.

Posted by Morph Engineering

1 minute read


Fast Apply is Morph’s deterministic merge model. It turns an LLM’s intent into a clean, merged file output that you can safely apply.

What Are Fast Apply Models?

Traditional LLMs generate edits, but applying them reliably is the hard part. Fast apply models solve the merge step.

The core contract is simple: instruction + code + update → merged output. This keeps your tool deterministic and reviewable.
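
For illustration, the three inputs might look like this; the variable names and the truncation-marker style in the update snippet are examples rather than a fixed schema, so check the docs for the exact prompt format.

// Illustrative inputs for the apply contract.
const instruction = 'Throw if the user is missing in getUser';

const originalCode = `import { db } from './db';

export function getUser(id: string) {
  return db.users.find(u => u.id === id);
}`;

// The update snippet spells out only the changed region; unchanged code can be
// elided with a truncation marker.
const updateSnippet = `// ... existing code ...
export function getUser(id: string) {
  const user = db.users.find(u => u.id === id);
  if (!user) throw new Error('user not found: ' + id);
  return user;
}`;

// instruction + originalCode + updateSnippet → the complete merged file.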

Morph’s stack combines Fast Apply with Embeddings and WarpGrep so the right context is retrieved before the merge.

Technical Architecture

The architecture has four components working in concert.

Input Processing extracts the edit intention and identifies the target scope.

Context Retrieval uses Embeddings + WarpGrep to pull the right code quickly without bloating context.

Merge applies the update snippet deterministically to produce a full, merged output.

Output Validation lets your system diff and audit before writing files.
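
A rough sketch of how the four components compose; the interface and function names here are hypothetical placeholders for your own retrieval, drafting, and apply calls.

// Hypothetical interfaces: each method corresponds to one component above.
interface EditTarget { files: string[]; symbols: string[] }

interface ApplyPipeline {
  parseIntent(instruction: string): Promise<EditTarget>;                         // Input Processing
  retrieveCode(target: EditTarget): Promise<string>;                             // Context Retrieval (Embeddings + WarpGrep)
  draftUpdate(instruction: string, code: string): Promise<string>;               // your agent drafts the update snippet
  fastApply(instruction: string, code: string, update: string): Promise<string>; // Merge (Fast Apply)
  review(before: string, after: string): Promise<boolean>;                       // Output Validation (diff + approval)
}

async function runApply(p: ApplyPipeline, instruction: string): Promise<string | null> {
  const target = await p.parseIntent(instruction);
  const code = await p.retrieveCode(target);
  const update = await p.draftUpdate(instruction, code);
  const merged = await p.fastApply(instruction, code, update);
  return (await p.review(code, merged)) ? merged : null;
}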

API Integration

The API is OpenAI-compatible. Point your existing SDK at Morph's endpoint and switch the model name.

import { OpenAI } from 'openai';

const client = new OpenAI({
  apiKey: process.env.MORPH_API_KEY,
  baseURL: 'https://api.morphllm.com/v1'
});

const response = await client.chat.completions.create({
  model: 'morph-v3-fast',
  messages: [
    {
      role: 'user',
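      // instruction, originalCode, and updateSnippet are the three inputs
      // described above, produced by your agent's planning/edit step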
      content: `<instruction>${instruction}</instruction>
<code>${originalCode}</code>
<update>${updateSnippet}</update>`
    }
  ],
  stream: true
});

The format is simple: include <instruction>, the current <code>, and the <update> snippet. The model returns the complete modified file.

Streaming is supported for real-time feedback on larger files.
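
Since the request above sets stream: true, the SDK returns an async iterable of chunks; one way to assemble and display the merged file as it arrives:

// Accumulate the merged file from streamed chunks (continues the example above).
let merged = '';
for await (const chunk of response) {
  const delta = chunk.choices[0]?.delta?.content ?? '';
  merged += delta;
  process.stdout.write(delta); // live progress while the merge streams in
}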

Performance

Performance in production depends on retrieval quality and how tightly you scope edits. Deterministic merges help keep outputs stable and reviewable.

Reliability: Clean merges reduce retry loops and make diffs easier to approve.

Previewability: Full merged outputs let you show diffs before writing to disk; a short diff-preview sketch follows below.

Scale: Combine Embeddings + WarpGrep to scope edits before applying for large files.
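
On previewability, here is a minimal sketch of that diff step, assuming the jsdiff package (npm's diff); any diff library would do.

import { createTwoFilesPatch } from 'diff'; // jsdiff: npm install diff

// Render a unified diff of Fast Apply's merged output so a reviewer can
// approve the change before anything is written to disk.
function previewMerge(path: string, before: string, merged: string): string {
  return createTwoFilesPatch(path, path, before, merged);
}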

Getting Started

Get an API key from the dashboard and wire the apply call into your tool.

# Install OpenAI SDK
npm install openai

# Set your API key
export MORPH_API_KEY="your-api-key-here"

# Test the API
curl -X POST "https://api.morphllm.com/v1/chat/completions" \
  -H "Authorization: Bearer $MORPH_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "morph-v3-fast",
    "messages": [{
      "role": "user",
      "content": "<code>function greet(name) { return "Hello " + name; }</code>\n<update>Add error handling</update>"
    }]
  }'

The response contains the modified code. For streaming responses, set "stream": true and process chunks as they arrive.

Try Morph Apply in the playground or integrate via API.
