SERVICES

Five disciplines, one standard.

Every engagement starts with a System Audit — a week of honest analysis, no commitment. The other four are the engagement types that typically follow.

01 / AUDIT

System Audit

A 5-day read of your architecture, code, and ops. Free, no commitment, no pitch deck.

The audit answers one question: is the system under your feet load-bearing, brittle, or somewhere between? I spend five days reading your code, your infra, and your oncall history. I talk to your senior engineers. I leave you with an annotated architecture diagram, a prioritized risk register, and a fixed-price proposal for anything you want me to fix.

If the audit concludes you don't need me — or that you need someone else — I say so and send introductions. About one in four audits ends without a follow-on engagement. That's the point: I only sign on when I can commit to a specific outcome.

Handover anxiety

Nobody on the team knows what lives where.

Latency creep

p99 latency has been climbing for six months.

Compliance incoming

SOC 2 or HIPAA is on the roadmap.

Rewrite debate

Team is arguing about rewrite vs refactor.

02 / BUILD

Custom Software Development

Bespoke platforms architected for your business logic, data gravity, and compliance posture.

This is the bulk of the practice: greenfield builds and feature-complete rewrites, shipped to production under fixed scope. I write the code, own the release pipeline, and pair with your engineers from week one so there's no handover cliff at the end.

The bar is production-grade from commit one: typed, tested, instrumented, documented. I don't ship demos. If the scope can't be shipped to production inside the engagement window, I'll shorten the scope before I'll stretch the timeline.

Zero-to-one velocity

The product is waiting on engineering, not the other way around.

Failed re-platform

Previous vendor shipped a demo, not a system.

Regulated domain

Health, finance, or infra where bugs have teeth.

Scale inflection

Traffic is about to 10x and the current code can't.

03 / CLOUD

Cloud Architecture & Migration

Multi-region, multi-cloud designs with cost controls, observability, and zero-downtime migrations.

Cloud done right is invisible: the system scales, the bill is predictable, the pager is quiet. Cloud done wrong is the leading cause of engineering backlog in companies I audit. I fix the wrong version — replatform, rearchitect, or migrate — and leave your team owning IaC they can actually read.

Preference is AWS and GCP, Terraform + Kubernetes where the complexity earns its keep. Smaller teams ship faster on managed PaaS — I'll recommend that when it's the right answer, even though it's a smaller invoice.

Bill shock

Cloud spend is growing faster than revenue.

Blast radius

One outage takes everything with it.

Migration stalled

Lift-and-shift to the cloud got stuck at 40% two years ago.

Multi-region pain

Latency is a function of where the user lives.

04 / MODERNIZE

Legacy System Modernization

Strangler-fig migrations off brittle monoliths without breaking the business that depends on them.

Most 'legacy' systems aren't bad — they're undocumented, under-instrumented, and owned by people who quit three years ago. Modernization is 20% code and 80% archaeology. I do the archaeology first.

Pattern of choice is the strangler-fig: new system grows around the old, behind feature flags, with safety nets. Nothing gets deleted until its replacement has run in production longer than any known failure window. Boring on purpose.
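In code, the pattern is small: a routing layer that sends flagged traffic to the new path and keeps the legacy path as the fallback. A minimal sketch — `FLAGS`, `legacy_handler`, and `new_handler` are hypothetical names for illustration, not from any real system:

```python
# Hypothetical strangler-fig router. The flag gates traffic per route;
# the legacy path stays live as the safety net until the new path has
# proven itself in production.

FLAGS = {"orders.use_new_service": True}  # illustrative feature flags

def legacy_handler(request):
    # The old, battle-tested code path.
    return {"source": "legacy", "order": request["order_id"]}

def new_handler(request):
    # The replacement growing around the old system.
    return {"source": "new", "order": request["order_id"]}

def route(request):
    """Send traffic to the new path when flagged on; fall back to the
    legacy path if the new one fails. Nothing is deleted until the
    replacement has outlived the known failure window."""
    if FLAGS.get("orders.use_new_service"):
        try:
            return new_handler(request)
        except Exception:
            # Safety net: the old system is still the source of truth.
            return legacy_handler(request)
    return legacy_handler(request)
```

Flipping the flag off reverts the whole route instantly — that reversibility is what makes the migration boring.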

Key-person risk

Only one person understands the old system.

Feature freeze

You can't ship because touching it breaks production.

Audit pressure

Auditors want changes the old system can't make.

Vendor lock-in

You're paying license fees for software you'd rather replace.

05 / AI

AI / ML Integration

Production ML — not demos. Evaluation, guardrails, retrieval, and inference infrastructure built for SLA.

The gap between a working prototype and a production LLM feature is the entire engineering discipline: evals, cost budgets, prompt/version control, observability, retrieval correctness, safety rails. I build the second half, not the first. Your team likely has the prototype already.
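The eval piece is the smallest useful example of that second half: a golden set and a pass rate, so "is it working" has a number. A minimal sketch under stated assumptions — `fake_model` stands in for a real LLM call, and the golden cases are invented for illustration:

```python
# Hypothetical minimal eval harness: score a model function against a
# small golden set and report a pass rate.

GOLDEN = [
    {"prompt": "2+2", "expected": "4"},
    {"prompt": "capital of France", "expected": "paris"},
]

def fake_model(prompt):
    # Stand-in for a real LLM call.
    answers = {"2+2": "4", "capital of France": "Paris"}
    return answers.get(prompt, "")

def pass_rate(model, cases):
    """Fraction of cases where the model output matches the expected
    answer. Case-insensitive exact match here — real evals need richer
    scoring, but the point is having *a* number to track per release."""
    hits = sum(
        model(c["prompt"]).strip().lower() == c["expected"].lower()
        for c in cases
    )
    return hits / len(cases)

score = pass_rate(fake_model, GOLDEN)
```

Run it in CI on every prompt or model change, and a regression becomes a failing number instead of an anecdote.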

If the problem doesn't actually need ML — and many don't — I'll say so and build the deterministic version instead. The fastest path to 'AI-powered' is usually a careful SQL query.

LLM in production

Prototype is great in demo, flaky in prod.

Retrieval rot

RAG quality decays and nobody knows why.

Inference cost

LLM spend is now a top-3 infra line item.

Evaluation gap

You don't have a number that says 'it's working'.

Start here

Not sure where to start?

Most first conversations end with an audit. It's free, and it's the fastest way to find out which of these — if any — is the right engagement for your system.