Writing Control for AI Drafts
PublishReady turns prose into measurable structure: readability formulas, target compliance, AI-sounding marker audits, reference drift, and ranked revision levers. Run it as an MCP server, CLI, or TypeScript engine without sending drafts to a remote model.
Source-first and local-first. Built for agents, editors, and CI-style content gates that need repeatable evidence instead of taste-by-vibe scoring.
Overview
PublishReady is a local-first MCP server and writing-control toolkit for turning AI drafts into measurable, testable, publishable prose.
Give writing agents callable tools for analysis, target checks, template checks, hotspot detection, profile comparison, reference drift, and revision planning.
Run over stdio, Streamable HTTP, CLI, or embedded TypeScript. No external model calls, no API keys, and no draft exfiltration.
Replace fuzzy quality judgments with stable metrics, exact evidence, and revision levers that can be tested and repeated.
Where ordinary writing review breaks
AI-assisted publishing often fails at the last mile because the quality check is vague, subjective, or hidden behind another model.
- Asking an LLM if a draft is good produces inconsistent feedback and makes regression testing nearly impossible.
- Manual editorial review is valuable, but it does not scale as a first-pass gate for many drafts, docs pages, or generated variants.
- Basic word counters miss the structural signals that make prose feel bloated, generic, difficult, or off-brand.
- Black-box AI detectors are not reliable enough to be hard publishing gates.
- Agents need specific next actions, not a general instruction to "make this better."
What that costs
Without deterministic writing control, teams end up spending human attention on problems a tool should catch first.
- AI drafts can ship with stock transitions, generic phrasing, and over-polished assistant residue.
- Long sentences and dense paragraphs survive because no one has a measurable threshold for them.
- Style drift appears across product docs, landing pages, support content, and AI-generated revisions.
- Editors cannot prove whether a revision got clearer, tighter, or closer to the target voice.
The PublishReady System
PublishReady is a layered writing-control stack with MCP tools, a CLI, a core engine, and shared schemas.
Get structural counts, lexical metrics, scannability signals, readability formulas, and formula pressure in one deterministic result.
Check drafts against explicit numeric targets, built-in templates, or reusable reference profiles.
Inventory deterministic AI-sounding prose markers, tracked phrases, stock transitions, and exact match locations.
Rank revision levers so an agent or editor knows which changes will have the highest impact.
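The marker inventory above is deterministic: the same draft always yields the same phrases at the same positions. A minimal sketch of that idea in TypeScript, where the marker list and the result shape are illustrative assumptions rather than PublishReady's actual rule library:

```typescript
// Deterministic phrase inventory: find every exact (case-insensitive) match
// of a tracked marker phrase, with its character offset as evidence.
// The marker list below is illustrative, not PublishReady's built-in set.
interface MarkerHit {
  phrase: string;
  index: number; // character offset of the exact match
}

function auditMarkers(text: string, markers: string[]): MarkerHit[] {
  const hits: MarkerHit[] = [];
  const lower = text.toLowerCase();
  for (const phrase of markers) {
    const needle = phrase.toLowerCase();
    let from = 0;
    let at: number;
    while ((at = lower.indexOf(needle, from)) !== -1) {
      hits.push({ phrase, index: at });
      from = at + needle.length;
    }
  }
  // Sort by position so evidence reads in document order.
  return hits.sort((a, b) => a.index - b.index);
}

const stockTransitions = ["in today's fast-paced", "it's important to note", "delve into"];
const draft = "In today's fast-paced landscape, we delve into writing control.";
const hits = auditMarkers(draft, stockTransitions);
// Two markers survive into this draft, each with an exact location.
```

Because the scan is plain string matching, it can serve as a hard gate in CI: a failed check always points at the same offending phrases.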
Source-first setup
The repository is structured for public packages, but the source workflow is the reliable path today.
git clone https://github.com/veldica/publishready-mcp.git
cd publishready-mcp
npm install
npm run build
node packages/mcp/dist/index.js --transport=http --port=3000
MCP client configuration
Until the npm package is published, point your MCP client directly at the built server entrypoint.
{
  "mcpServers": {
    "publishready": {
      "command": "node",
      "args": [
        "/path/to/publishready-mcp/packages/mcp/dist/index.js"
      ]
    }
  }
}
Once connected, agents can call focused writing-control tools instead of improvising their own review rubric.
{
  "tool": "analyze_against_template",
  "arguments": {
    "text": "In today's fast-paced landscape...",
    "template_id": "technical_docs",
    "options": {
      "include_sentence_details": true
    }
  }
}
What agents get back
PublishReady responses are designed to be used in automated editing loops, not just read by a person once.
- Quality summaries: Fit scores, pass/fail status, violations, and readable explanations.
- Formula detail: Flesch, Gunning Fog, SMOG, consensus grade, and linked contributors when requested.
- Hotspots: Specific sentences and paragraphs that create scannability or complexity problems.
- AI marker evidence: Exact phrases, categories, counts, and marker density for deterministic AI-sounding prose audits.
- Revision levers: Ranked suggestions such as shortening long sentences, replacing difficult words, or reducing abstract wording.
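The formulas named above are standard published readability measures. A minimal TypeScript sketch of two of them, using a rough heuristic syllable counter; PublishReady's own tokenization and counting will differ:

```typescript
// Rough syllable heuristic: count vowel groups, ignore a trailing silent "e".
function countSyllables(word: string): number {
  const w = word.toLowerCase().replace(/[^a-z]/g, "");
  if (w.length <= 3) return 1;
  const groups = w.replace(/e$/, "").match(/[aeiouy]+/g);
  return Math.max(1, groups ? groups.length : 1);
}

// Flesch Reading Ease: 206.835 - 1.015*(words/sentence) - 84.6*(syllables/word).
// Higher scores mean easier text.
function fleschReadingEase(text: string): number {
  const sentences = Math.max(1, (text.match(/[.!?]+/g) || []).length);
  const words = text.split(/\s+/).filter(Boolean);
  const syllables = words.reduce((sum, w) => sum + countSyllables(w), 0);
  return 206.835 - 1.015 * (words.length / sentences) - 84.6 * (syllables / words.length);
}

// Gunning Fog index: 0.4 * (words/sentence + 100 * complexWords/words),
// where "complex" means three or more syllables. Lower is easier.
function gunningFog(text: string): number {
  const sentences = Math.max(1, (text.match(/[.!?]+/g) || []).length);
  const words = text.split(/\s+/).filter(Boolean);
  const complex = words.filter((w) => countSyllables(w) >= 3).length;
  return 0.4 * (words.length / sentences + 100 * (complex / words.length));
}
```

Short, plain sentences score high on Flesch and low on Fog, which is why a consensus grade across several formulas is a more stable gate than any single one.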
Practical agent workflow
Use PublishReady as the editor a writing agent can call after drafting and before publishing.
- Analyze the draft to establish structural, readability, lexical, and scannability baselines.
- Check the draft against a built-in template such as technical docs, landing page conversion, support article, or plain English.
- Audit for AI-sounding markers and tracked phrases that should not survive into final copy.
- Apply the highest-ranked revision levers first.
- Compare the revised version against the original and retest until it meets the target.
node packages/cli/dist/index.js analyze sample.txt
node packages/cli/dist/index.js audit-ai sample.txt
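The compare-and-retest step above can be sketched as a deterministic gate. In this sketch, average sentence length stands in for PublishReady's much richer metrics, and the threshold of 20 words is an arbitrary illustration:

```typescript
// Stand-in metric: average words per sentence. PublishReady's real analysis
// returns many signals; this one is just enough to show the retest loop.
function avgSentenceLength(text: string): number {
  const sentences = text.split(/[.!?]+/).map((s) => s.trim()).filter(Boolean);
  const words = text.split(/\s+/).filter(Boolean).length;
  return words / Math.max(1, sentences.length);
}

// A numeric target makes "got clearer" a provable claim instead of a vibe.
function meetsTarget(text: string, maxAvgWords: number): boolean {
  return avgSentenceLength(text) <= maxAvgWords;
}

const original =
  "This sentence, which keeps adding clauses and qualifiers and asides, runs on " +
  "far longer than a reader of technical documentation should ever be asked to tolerate.";
const revised = "This sentence kept adding clauses. Now it is split. Readers can follow it.";

const before = meetsTarget(original, 20); // fails the gate
const after = meetsTarget(revised, 20);   // passes the same gate
```

Because the original fails and the revision passes the identical check, the improvement can be asserted in a test rather than argued about in review.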
Choosing a writing-control method
PublishReady is for repeatable control loops, not subjective one-off scoring.
- Manual editorial review: Best for taste, judgment, and final polish, but expensive as the first pass for every generated draft.
- Asking an LLM to judge: Flexible but unstable. The same draft can receive different advice, making it weak as a regression gate.
- PublishReady: Deterministic, private, and tool-native. It gives agents measurable gates, exact evidence, and revision actions.
Keep Exploring
Browse the Workflow Library for more guides, comparisons, and integration examples.
Return to the library of product pages, integrations, comparisons, and open-source tools.
Explore the deterministic style-contract and AI-marker library underneath the PublishReady system.
Review the rule-based readability formula library used by deterministic writing analysis pipelines.
Give your agents an editor they can call
PublishReady is built for the last mile between a generated draft and something worth publishing: measurable, private, explainable, and practical.