Analyze Writing Style with Mathematical Precision
Measure the flow and variety of your writing with specific style metrics. Built for AI auditing and content pipelines where quality has to be measured.
Fast, local, and private. Measure 'show vs. tell' and detect the repetitive patterns common in AI-generated drafts.
Overview
Evaluate the 'show, don't tell' quality of your writing with deterministic metrics. No slow, opaque AI models required.
Find flat word patterns in AI-generated text. Use diversity scores and repetition ratios for better audits.
Deterministic logic. No hidden models or API calls. Run the same checks directly on your own machine.
Current workflow
Standard style analysis is often slow, expensive, and based on opaque AI scoring.
- Manual review is slow and difficult to keep consistent across large teams.
- AI models can give different scores for the same text, making it difficult to test.
- Web-based tools add lag and privacy risks to your editorial workflow.
- Raw word counts are biased by text length, producing misleading diversity scores.
- Lack of 'active vs passive' metrics makes it difficult to automate pacing checks.
Where it breaks
These gaps can lead to poor content and high costs for your editorial team.
- You can't scale style checks without more people or higher API costs.
- Keeping a consistent style is difficult across many authors or AI drafts.
- Sending private drafts to third-party services creates security risks.
- Unreliable metrics can't tell the difference between plain writing and weak writing.
The Style Analysis Pipeline
@veldica/prose-analyzer gives you specific linguistic signals designed for speed and privacy.
Calculate scores for vocabulary richness that work for any text length.
Analyze dialogue and sentence flow to distinguish active passages from passive ones.
Find sensory word clusters and abstract concept density to audit vividness.
Get raw signals that map back to your source. No black-box percentage scores.
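The length-robust diversity score mentioned above can be illustrated with a minimal sketch of a moving-average type-token ratio (MATTR). This shows the general technique only; the `mattr` function and its window size are illustrative assumptions, not the package's implementation.

```javascript
// MATTR: average the type-token ratio over a sliding window of fixed size,
// so longer texts are not penalized the way a raw TTR penalizes them.
function mattr(words, windowSize = 5) {
  if (words.length < windowSize) {
    return new Set(words).size / words.length; // fall back to plain TTR
  }
  let total = 0;
  const windows = words.length - windowSize + 1;
  for (let i = 0; i < windows; i++) {
    const window = words.slice(i, i + windowSize);
    total += new Set(window).size / windowSize;
  }
  return total / windows;
}

console.log(mattr(["the", "sun", "was", "a", "bright", "heavy", "disk"]));
```

Because every window is the same size, a 500-word draft and a 50,000-word manuscript are scored on the same footing.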
Verified request
# Install the package
npm install @veldica/prose-analyzer
// Usage in your project
import { tokenize } from '@veldica/prose-tokenizer';
import { analyzeProse } from '@veldica/prose-analyzer';
const text = `The sun was a bright, heavy disk. "It is time," he said.`;
const stats = tokenize(text);
const results = analyzeProse(
stats.words,
stats.sentences,
stats.sentenceWordCounts,
stats.paragraphWordCounts
);
Verified response
The analyzer returns modular metrics grouped into lexical, narrative, and linguistic categories.
{
"lexical": {
"lexical_diversity_mattr": 0.85,
"lexical_density": 0.76,
"repetition_ratio": 0.42
},
"narrative": {
"scene_density_proxy": 0.45,
"sensory_term_density": 0.23,
"dialogue_ratio": 0.15
}
}
Output interpretation
The analyzer provides specific signals that you can use to build custom editorial rules.
- Lexical Density: The ratio of content words to filler words (stopwords).
- MATTR Diversity: A length-aware measure of word variety.
- Scene Density: A proxy for active pacing, based on dialogue and sentence clusters.
- Sensory Clusters: Detect high amounts of sight, sound, and touch words for vivid imagery.
- Local-First: Runs entirely on your machine in milliseconds to keep your data private.
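The lexical-density signal above can be sketched as the share of words that are not stopwords. The tiny `STOPWORDS` set here is an illustrative stand-in; the package's actual stopword list is not documented here.

```javascript
// Lexical density: the share of words that carry content, i.e. are not
// stopwords. The stopword set below is a small illustrative stand-in.
const STOPWORDS = new Set(["the", "a", "an", "is", "was", "it", "of", "and", "to", "he", "she"]);

function lexicalDensity(words) {
  const content = words.filter((w) => !STOPWORDS.has(w.toLowerCase()));
  return content.length / words.length;
}

const words = ["The", "sun", "was", "a", "bright", "heavy", "disk"];
console.log(lexicalDensity(words)); // 4 of the 7 words are content words
```

A higher ratio suggests denser, more information-rich prose; a low ratio often signals padding.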
Practical Usage: Style Auditing
Add prose analysis to your automated QA or creative writing tool.
- Feed tokenized content into the analyzer for a style scan.
- Set internal benchmarks for 'Active Pacing' using the Scene Density proxy.
- Identify 'Flat Sections' that lack sensory words or vocabulary variety.
- Compare AI drafts against human baselines to detect flat writing patterns.
- Give real-time feedback to authors on repetition and wall-of-text density.
import { tokenize } from '@veldica/prose-tokenizer';
import { analyzeProse } from '@veldica/prose-analyzer';

// Tokenize the draft, then run the analyzer on the token streams.
const stats = tokenize(draftText);
const results = analyzeProse(stats.words, stats.sentences, stats.sentenceWordCounts, stats.paragraphWordCounts);

// Flag scenes that lack sensory detail.
if (results.narrative.sensory_term_density < 0.1) {
console.log("Tip: Add more sensory details to make this scene more vivid.");
}
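To compare AI drafts against human baselines, one simple approach is to diff two metric objects and flag any metric with a large gap. The `flagDivergence` helper and the 0.15 tolerance below are hypothetical, not part of the package API:

```javascript
// Flag metrics where a draft diverges from a human baseline by more than a
// tolerance. Works on plain metric objects like the analyzer's output.
function flagDivergence(draft, baseline, tolerance = 0.15) {
  const flags = [];
  for (const key of Object.keys(baseline)) {
    if (Math.abs(draft[key] - baseline[key]) > tolerance) {
      flags.push(key);
    }
  }
  return flags;
}

const humanBaseline = { lexical_diversity_mattr: 0.85, repetition_ratio: 0.42 };
const aiDraft = { lexical_diversity_mattr: 0.6, repetition_ratio: 0.45 };
console.log(flagDivergence(aiDraft, humanBaseline)); // only the MATTR gap exceeds 0.15
```

Because the signals are deterministic, the same draft always produces the same flags, which makes thresholds like this one straightforward to tune and test in CI.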
Choosing a Style Analysis Method
Rule-based analysis provides a repeatable, low-cost alternative to hidden AI scoring.
- Manual review: Good for nuance, but impossible to scale and inconsistent between reviewers.
- AI-model scoring: Capable, but slow, expensive, and opaque; hidden scoring makes testing hard.
- Rule-based analysis (this package): Fast, private, and fully explainable. Provides stable signals that support human judgment.
Keep Exploring
Browse the Workflow Library for more guides, comparisons, and integration examples to continue your evaluation.
See the solutions, comparisons, and integration guides collected in one place.
Review grounded audit, compare, fix-plan, and report excerpts before you wire the API into anything.
Jump from the workflow page into the quickstart, endpoint guides, and full OpenAPI reference.
Measure the texture of your writing
Explore the package on GitHub or install via NPM. Build deterministic editorial tools for review pipelines.