AI Agent Linter Ecosystem: Quality, Cost and Safety Control

Five open-source tools working together: seclint filters content, promptlint routes each request to the right model, archlint checks architecture, costlint tracks spending, and myhome orchestrates the pipeline. All Go, no LLM calls, under 10ms per request.

The Problem

AI agents write code fast, but speed comes at a cost. Literally - tokens cost money. And figuratively - code quality degrades without oversight.

I hit three problems at once:

  1. Architecture degrades - agents write working code that violates layer boundaries, creates circular dependencies and god classes
  2. Costs grow - complex prompts go to Opus at $15/1M tokens when Haiku at $0.80 would suffice
  3. Content isn’t filtered - production prompts contain things that shouldn’t be there

Each problem has its own tool. But when the tools work together, they amplify each other.

The Pipeline

Every request passes through a chain of linters, each responsible for its domain:

```mermaid
graph LR
    A[Prompt] --> B[seclint]
    B -->|safe| C[promptlint]
    B -->|blocked| X[Reject]
    C -->|haiku| D1[Agent: Haiku]
    C -->|sonnet| D2[Agent: Sonnet]
    C -->|opus| D3[Agent: Opus]
    D1 --> E[archlint]
    D2 --> E
    D3 --> E
    E -->|pass| F[costlint]
    E -->|fail| C
    F --> G[Done]
```
  1. seclint - checks content safety (6+/12+/16+/18+ ratings)
  2. promptlint - scores complexity, picks the model
  3. Agent - executes the task on the selected model
  4. archlint - validates the result for architecture violations
  5. costlint - records cost, tracks cache hit rate

If archlint rejects the result, the task escalates to a more powerful model. costlint records the escalation cost.
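The fail-and-escalate edge can be sketched as a small Go loop. This is a toy illustration under stated assumptions: the `escalate` helper, the fixed three-model ladder, and the stubbed archlint verdict are all mine, not part of any tool's API.

```go
package main

import "fmt"

// Models in ascending cost order; escalation moves one step right.
var models = []string{"haiku", "sonnet", "opus"}

// escalate returns the next model up, or ok=false when already at the top.
// Hypothetical helper mirroring the archlint-fail -> promptlint re-route edge.
func escalate(current string) (string, bool) {
	for i, m := range models {
		if m == current && i+1 < len(models) {
			return models[i+1], true
		}
	}
	return current, false
}

func main() {
	model := "haiku"
	for attempt := 1; attempt <= 3; attempt++ {
		pass := attempt == 3 // stand-in for an `archlint scan` exit code
		if pass {
			fmt.Println("accepted on", model)
			return
		}
		next, ok := escalate(model)
		if !ok {
			fmt.Println("escalation exhausted at", model)
			return
		}
		model = next
	}
}
```

Keeping the ladder explicit makes the worst-case cost of a task easy to bound: at most one attempt per tier.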

The Tools

archlint - Architecture Linter

Scans Go projects for structural violations.

```shell
archlint scan --format json ./project/
```

What it finds:

  • Layer violations (handler calls repository directly)
  • God classes (>20 methods or >15 dependencies)
  • Circular dependencies
  • Interface Segregation violations (interfaces >5 methods)

Metrics: fan-out, coupling (Ca/Ce), component and link counts.
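Raw Ca/Ce counts feed a standard derived metric, Robert C. Martin's instability I = Ce / (Ca + Ce). A minimal sketch of that computation, assuming you have the counts; the `instability` function here is illustrative, archlint itself reports the raw numbers:

```go
package main

import "fmt"

// instability computes I = Ce / (Ca + Ce):
// 0 means maximally stable (only incoming dependencies),
// 1 means maximally unstable (only outgoing dependencies).
func instability(ca, ce int) float64 {
	if ca+ce == 0 {
		return 0
	}
	return float64(ce) / float64(ca+ce)
}

func main() {
	// e.g. a package imported by 6 others (Ca=6) that imports 2 (Ce=2)
	fmt.Printf("%.2f\n", instability(6, 2)) // 0.25
}
```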

GitHub: mshogin/archlint

promptlint - Complexity-Based Router

Scores prompt complexity and picks the model. No LLM, pure metrics, <10ms.

```shell
echo "Fix typo in README" | promptlint analyze --output-model
# haiku

echo "Design microservices with CQRS" | promptlint analyze --output-model
# opus
```

Signals: length, sentence count, domain keywords, action type (fix/create/refactor), code presence.
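A toy version of signal-based scoring, to show why no LLM is needed. The weights, keyword lists, and thresholds below are invented for illustration; promptlint's actual scoring is internal to the tool.

```go
package main

import (
	"fmt"
	"strings"
)

// score combines cheap lexical signals into a model choice.
// All weights and keywords are illustrative, not promptlint's real ones.
func score(prompt string) string {
	s := 0
	if len(strings.Fields(prompt)) > 30 {
		s += 2 // long prompts tend to be complex
	}
	lower := strings.ToLower(prompt)
	for _, kw := range []string{"design", "architecture", "cqrs", "microservice", "refactor"} {
		if strings.Contains(lower, kw) {
			s += 2 // domain / action keywords signal hard tasks
		}
	}
	for _, kw := range []string{"fix", "typo", "rename"} {
		if strings.Contains(lower, kw) {
			s-- // trivial-action keywords pull the score down
		}
	}
	switch {
	case s >= 3:
		return "opus"
	case s >= 1:
		return "sonnet"
	default:
		return "haiku"
	}
}

func main() {
	fmt.Println(score("Fix typo in README"))             // haiku
	fmt.Println(score("Design microservices with CQRS")) // opus
}
```

Pure string matching keeps the hot path allocation-light, which is what makes a sub-10ms budget realistic.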

Integrates with ccproxy to route live Claude Code requests through a proxy.

GitHub: mikeshogin/promptlint

costlint - Token Cost Analysis

Tracks spending, analyzes caching, runs A/B tests between models.

```shell
costlint report --source telemetry.jsonl
```
```
Cost Report:
  Total requests: 342
  By model:
    opus:   48 requests, ~$12.60
    sonnet: 195 requests, ~$6.50
    haiku:  99 requests, ~$0.44
  With optimal routing: ~$8.20 (savings: 58%)
```

Cache metrics: hit rate, block reuse, content entropy, Jaccard similarity. A/B testing: 30/30/40 traffic split, per-group cost and quality metrics.
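Jaccard similarity over token sets is one way to estimate how much of a prompt prefix is reusable across requests. A self-contained sketch, with whitespace words standing in for tokens (costlint's actual tokenization may differ):

```go
package main

import (
	"fmt"
	"strings"
)

// jaccard computes |A ∩ B| / |A ∪ B| over lowercase word sets.
// High similarity between consecutive prompts suggests cache block reuse.
func jaccard(a, b string) float64 {
	setA := map[string]bool{}
	for _, w := range strings.Fields(strings.ToLower(a)) {
		setA[w] = true
	}
	setB := map[string]bool{}
	inter := 0
	for _, w := range strings.Fields(strings.ToLower(b)) {
		if setB[w] {
			continue
		}
		setB[w] = true
		if setA[w] {
			inter++
		}
	}
	union := len(setA) + len(setB) - inter
	if union == 0 {
		return 0
	}
	return float64(inter) / float64(union)
}

func main() {
	fmt.Printf("%.2f\n", jaccard("fix the login bug", "fix the logout bug")) // 0.60
}
```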

GitHub: mikeshogin/costlint

seclint / promptsec - Content Filter

Age ratings for prompts: 6+, 12+, 16+, 18+.

```shell
echo "Help with math homework" | seclint rate
# {"rating": "6+", "safe": true}

echo "Explain SQL injection for security course" | seclint rate
# {"rating": "16+", "safe": true, "flags": ["security_tools"]}
```

Considers educational context - explaining SQL injection for a security course gets 16+ instead of 18+.
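The context-aware downgrade can be sketched as two keyword checks: a flagged topic combined with educational framing lands one level lower. The keyword lists and rating rules below are illustrative only, not seclint's actual rule set.

```go
package main

import (
	"fmt"
	"strings"
)

// rate is a toy context-aware rater: flagged topics get 18+ by default,
// but educational framing downgrades them to 16+. Illustrative rules only.
func rate(prompt string) string {
	lower := strings.ToLower(prompt)
	flagged := strings.Contains(lower, "sql injection") ||
		strings.Contains(lower, "exploit")
	educational := strings.Contains(lower, "course") ||
		strings.Contains(lower, "homework") ||
		strings.Contains(lower, "explain")
	switch {
	case flagged && educational:
		return "16+"
	case flagged:
		return "18+"
	default:
		return "6+"
	}
}

func main() {
	fmt.Println(rate("Help with math homework"))                   // 6+
	fmt.Println(rate("Explain SQL injection for security course")) // 16+
}
```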

GitHub: mikeshogin/seclint / mikeshogin/promptsec

Shared Principles

All tools follow the same rules:

  • Go - single stack, single build
  • No LLM - pure metrics, regex, keyword matching. <10ms per request
  • CLI + HTTP - each tool works as a command and as a server
  • JSONL telemetry - unified log format for analysis
  • Pipeline-friendly - exit codes, stdout, pipes

Numbers

On test workload (342 requests over a week):

| Metric | Before | After |
|---|---|---|
| Token spending | $19.54 | $8.20 |
| Architecture violations | 63 | 12 |
| Requests to expensive model | 100% opus | 14% opus |
| Routing latency | - | <10ms |

58% cost savings without quality loss - simple tasks go to Haiku, architecture tasks stay on Opus.
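The savings figure follows directly from the spend numbers in the table:

```go
package main

import "fmt"

func main() {
	before, after := 19.54, 8.20 // weekly spend, from the table above
	fmt.Printf("savings: %.0f%%\n", (before-after)/before*100) // savings: 58%
}
```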

Orchestration

The tools work autonomously, but deliver maximum impact together. For orchestration we use myhome - daemon-based AI agent management with workflow stages and scheduled tasks.

The stack:

  • myhome launches agents
  • promptlint picks the model (pre-route hook)
  • archlint validates output (quality gate stage)
  • costlint tracks cost (telemetry consumer)
  • seclint filters incoming prompts (pre-filter)

Get Started

```shell
# Install all tools
go install github.com/mshogin/archlint@latest
go install github.com/mikeshogin/promptlint/cmd/promptlint@latest
go install github.com/mikeshogin/costlint/cmd/costlint@latest
go install github.com/mikeshogin/seclint/cmd/seclint@latest

# Start routing
promptlint serve 8090 &
seclint serve 8091 &

# Check architecture
archlint scan --format json ./my-project/

# View costs
costlint report --source ~/.promptlint/telemetry.jsonl
```