How to turn a website into an MCP server

2026-04-18 · 6 min read

I turned my portfolio into an MCP server. Connect https://mcp.davidloor.com/mcp to Claude Desktop and the agent gets six tools. Search my blog, fetch a full post, list recent posts, read my profile, read my services, or send me a consultation request. No scraping, no hallucinated summaries, just structured access to exactly the content I want agents to see.

Here's how I did it, using my own site as the worked example. You can follow the same path for yours.

Decide what tools to expose

This is the most important step and the one people skip. An MCP server is only as useful as the tools it exposes. Start by listing what an agent would actually want to do with your site.

For my portfolio I landed on six.

  • search_blog(query, limit, locale) finds posts by keyword
  • get_post(slug, locale) returns the full content of one post
  • list_recent_posts(limit, locale) lists the latest
  • get_profile() returns my bio, experience, and links
  • get_services() returns what I offer as a consultant
  • request_consultation(name, email, topic, context) emails me a lead

Three content tools (so the agent can read), two directory tools (so the agent knows who I am and what I do), and one action tool (so the agent can actually do something for the user). That last one is where most of the business value lives. A portfolio that's read-only is a static library. A portfolio that can take a lead is a sales channel.
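Here's a rough sketch of that action tool. The zod schema mirrors the signature above; the delivery endpoint is a placeholder, since how the lead email actually goes out (Workers Email, Resend, a queue) is up to you.

import { z } from "zod";

export const requestConsultationSchema = {
  name: z.string().min(1).max(100),
  email: z.string().email(),
  topic: z.string().min(1).max(200),
  context: z.string().max(2000).optional(),
};

// EMAIL_WEBHOOK_URL is hypothetical: any endpoint that turns a POST
// into an email will do here.
const EMAIL_WEBHOOK_URL = "https://example.com/lead-webhook";

export async function requestConsultation(input) {
  await fetch(EMAIL_WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(input),
  });
  return {
    content: [{ type: "text", text: "Request sent. I'll follow up by email." }],
  };
}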

Pick your runtime

I went with a standalone Cloudflare Worker, separate from the Next.js site that serves davidloor.com. That split makes sense for a few reasons. The MCP server has its own deploy lifecycle, its own traffic and rate-limiting profile, and its own transport semantics (Streamable HTTP with session IDs). Keeping it out of Next.js middleware avoided a lot of integration friction.

Cloudflare's agents package ships an McpAgent class that wraps the official @modelcontextprotocol/sdk and plugs into Durable Objects automatically. The DO handles session state and message ordering, which you'd otherwise have to build yourself.

Scaffold the worker

The worker is roughly 150 lines of actual logic. Here's the shape.

// workers/portfolio-mcp/src/server.ts
import { McpAgent } from "agents/mcp";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
// Tool schemas and handlers live in their own modules (path illustrative)
import { searchBlogSchema, searchBlog } from "./tools/search-blog";

export class PortfolioMCP extends McpAgent {
  server = new McpServer({
    name: "davidloor-portfolio",
    version: "0.1.0",
  });

  async init() {
    this.server.tool(
      "search_blog",
      "Search blog posts by keyword",
      searchBlogSchema,
      searchBlog
    );
    // ...register the rest
  }
}

And the entrypoint wires the MCP routes.

// workers/portfolio-mcp/src/index.ts
import { PortfolioMCP } from "./server";

// Wrangler needs the Durable Object class exported from the entrypoint
export { PortfolioMCP };

export default {
  async fetch(request, env, ctx) {
    const url = new URL(request.url);
    if (url.pathname === "/mcp") {
      return PortfolioMCP.serve("/mcp").fetch(request, env, ctx);
    }
    return new Response("Not found", { status: 404 });
  },
};

Tools themselves are pure functions. search_blog looks like this.

import { z } from "zod";
import { searchPosts } from "../data/posts";

export const searchBlogSchema = {
  query: z.string().min(1).max(200),
  limit: z.number().int().min(1).max(20).default(5),
};

export async function searchBlog(input) {
  const results = searchPosts(input.query).slice(0, input.limit);
  if (results.length === 0) {
    return { content: [{ type: "text", text: "No posts found" }] };
  }
  const body = results.map((p) => `- ${p.title} (${p.date})`).join("\n");
  return { content: [{ type: "text", text: body }] };
}

Reuse the data you already have

My Next.js build already generates app/lib/generated/posts.manifest.json with every post's title, excerpt, tags, translations, and HTML content. I didn't want a second source of truth.

The worker's prebuild hook copies that same file into src/data/posts.manifest.json before each build. The tool code imports it as a regular JSON module. One manifest, two consumers (the site and the MCP server), zero drift.
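The hook itself is just a copy step in the worker's package.json. A sketch; the relative path assumes the worker lives in workers/portfolio-mcp at the repo root.

{
  "scripts": {
    "prebuild": "cp ../../app/lib/generated/posts.manifest.json src/data/posts.manifest.json"
  }
}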

If your site has a content folder, a CMS API, or a database, the pattern is the same. Keep your data layer, wrap it behind tool functions.
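For the manifest case, the whole data layer is a filter over the imported JSON. A minimal sketch, assuming the manifest is an array of posts with title, excerpt, tags, and date fields:

// workers/portfolio-mcp/src/data/posts.ts (sketch)
import posts from "./posts.manifest.json";

export function searchPosts(query) {
  const q = query.toLowerCase();
  return posts.filter(
    (p) =>
      p.title.toLowerCase().includes(q) ||
      p.excerpt.toLowerCase().includes(q) ||
      p.tags.some((t) => t.toLowerCase().includes(q))
  );
}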

The gotcha that will cost you an hour

The MCP SDK uses AJV for schema validation by default. AJV is CommonJS and pulls in JSON files at runtime, which workerd (Cloudflare's runtime) can't load through its module fallback service.

In production, wrangler's esbuild bundles it fine. In tests under @cloudflare/vitest-pool-workers, it fails at import time. Two fixes.

For production, swap the default validator for the CfWorkerJsonSchemaValidator that ships with the SDK. For tests, add a Vite plugin with enforce: "pre" in vitest.config.ts that stubs ajv, ajv-formats, and secure-json-parse. Not pretty, but it keeps real schema validation in production while letting the integration test run against real workerd.
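The test-side stub looks roughly like this. The module list is the three that failed for me; empty stubs are safe because, with the Workers validator swapped in, nothing actually calls AJV at runtime.

// vitest.config.ts (sketch, assumes @cloudflare/vitest-pool-workers is set up)
import { defineConfig } from "vitest/config";

const STUBBED = ["ajv", "ajv-formats", "secure-json-parse"];

export default defineConfig({
  plugins: [
    {
      name: "stub-cjs-schema-deps",
      enforce: "pre", // resolve before other plugins so the real modules never load
      resolveId(id) {
        if (STUBBED.includes(id)) return "\0stub:" + id;
      },
      load(id) {
        if (id.startsWith("\0stub:")) return "export default class {};";
      },
    },
  ],
  // ...pool-workers test config goes here
});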

Add logging from day one

MCP traffic is invisible otherwise. I wrap every tool handler in a logging shim that emits a JSON line per call.

// Wrap a tool handler so every call emits one structured JSON log line
function logged(name, fn) {
  return async (input, extra) => {
    const start = Date.now();
    try {
      const out = await fn(input, extra);
      console.log(JSON.stringify({
        event: "mcp_tool_call",
        tool: name,
        ok: true,
        ms: Date.now() - start,
      }));
      return out;
    } catch (err) {
      console.log(JSON.stringify({ event: "mcp_tool_call", tool: name, ok: false, error: String(err) }));
      throw err;
    }
  };
}

Then npx wrangler tail --env production streams every tool call in real time: which tool, whether it succeeded, and how long it took. You get traffic analytics for free.
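Each call shows up as one line; given the shim above, a successful search would look something like:

{"event":"mcp_tool_call","tool":"search_blog","ok":true,"ms":42}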

Deploy and wire the custom domain

In wrangler.toml add a custom domain under the production env.

[env.production]
routes = [{ pattern = "mcp.davidloor.com", custom_domain = true }]

[[env.production.durable_objects.bindings]]
name = "MCP_OBJECT"
class_name = "PortfolioMCP"

Then npx wrangler deploy --env production ships the worker and provisions the subdomain if your DNS is already on Cloudflare. Custom domain, SSL cert, and route all set up in one shot. Roughly 90 seconds from deploy to live.

Advertise the server

Publish a server card at /.well-known/mcp/server-card.json on your main site so agents and directories can auto-discover it.

{
  "serverInfo": {
    "name": "davidloor-portfolio",
    "version": "0.1.0",
    "homepage": "https://davidloor.com"
  },
  "transport": {
    "type": "streamable-http",
    "url": "https://mcp.davidloor.com/mcp"
  },
  "tools": [
    { "name": "search_blog", "description": "Search blog posts" }
  ]
}

Then add a Connect button on your homepage that deep-links to Claude Desktop. The protocol handler is claude://mcp/install?url=. One click, the user has your server installed.
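A sketch of that button as a plain React anchor (JSX assumed, since the site is Next.js):

const MCP_URL = "https://mcp.davidloor.com/mcp";

export function ConnectButton() {
  return (
    <a href={`claude://mcp/install?url=${encodeURIComponent(MCP_URL)}`}>
      Connect to Claude Desktop
    </a>
  );
}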

Try it

Connect my server to see what it feels like from the other side.

{
  "mcpServers": {
    "davidloor": {
      "type": "http",
      "url": "https://mcp.davidloor.com/mcp"
    }
  }
}

Paste that into Claude Desktop's MCP config, restart, then ask: "Using the davidloor MCP, summarize his recent blog posts and tell me what services he offers." Watch the agent pick the right tools on its own.

Then go build one for your site. The whole worker is about 500 lines including tests. Six tools will cover most content sites. The hard part isn't the code, it's deciding what agents should be able to do on your behalf.
