NVSBL.DEV
An API of pain points for your middleware.
NVSBL.DEV is the public API that powers everything we build. It ingests, normalizes, and clusters customer pain from any source — support tickets, sales calls, NPS, Slack, churn surveys — and exposes it as structured, queryable data. If you're building your own tooling on top of customer signal, start here.
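Here's the shape of a query, as a minimal sketch in Go. The endpoint, parameters, and response fields below are illustrative assumptions, not the published contract; the versioned OpenAPI spec is authoritative.

```go
// Hypothetical client: query clustered pain over REST.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"os"
)

// Cluster mirrors an assumed response shape, for illustration only.
type Cluster struct {
	ID      string   `json:"id"`
	Label   string   `json:"label"`   // human-readable pain summary
	Count   int      `json:"count"`   // records grouped under this pain
	Sources []string `json:"sources"` // e.g. ["zendesk", "gong"]
}

func main() {
	req, err := http.NewRequest("GET",
		"https://api.nvsbl.dev/v1/clusters?since=2024-01-01", nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Authorization", "Bearer "+os.Getenv("NVSBL_API_KEY"))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var clusters []Cluster
	if err := json.NewDecoder(resp.Body).Decode(&clusters); err != nil {
		log.Fatal(err)
	}
	for _, c := range clusters {
		fmt.Printf("%s (%d records from %v)\n", c.Label, c.Count, c.Sources)
	}
}
```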
Discover
We kept building the same ingestion and normalization layer for every client engagement. Different sources — Zendesk, Intercom, Gong, Slack, custom CSVs — but the same problem: get the pain out of the silo and into a shape we could reason about.
After the third time we rebuilt it, we decided to build it once, properly, and expose it as a public API.
- Every company stores customer pain differently, but the underlying structure is the same.
- The normalization step is where most teams lose fidelity — we optimized for preserving source metadata.
- Clustering by underlying pain (not surface keywords) was the insight that made the API useful.
"We don't need another feedback platform. We need the data to be queryable."
— CTO, Series B SaaS
Define
The problem, sharpened
Teams cannot reliably query customer pain across heterogeneous sources without building custom ETL for each one.
Success metric
A developer should be able to connect a new source and query normalized pain data within an hour, not a sprint.
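For a sense of what connecting a source could look like, here's a hedged sketch. The /v1/sources endpoint and payload shape are assumptions for illustration, not the real contract.

```go
// Hypothetical client: register a new source connector.
package main

import (
	"bytes"
	"encoding/json"
	"log"
	"net/http"
	"os"
)

func main() {
	payload, err := json.Marshal(map[string]string{
		"type":           "zendesk",                  // which connector to use
		"credential_ref": "vault://team/zendesk-api", // a reference, never an inline secret
	})
	if err != nil {
		log.Fatal(err)
	}
	req, err := http.NewRequest("POST",
		"https://api.nvsbl.dev/v1/sources", bytes.NewReader(payload))
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Authorization", "Bearer "+os.Getenv("NVSBL_API_KEY"))
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println("source registered:", resp.Status) // then query /v1/clusters
}
```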
Scope boundaries
- We are not a feedback platform. We don't have a dashboard for non-developers.
- We don't store raw PII. We normalize and hash identifiers (see the sketch after this list).
- We don't make product decisions. We surface signal; you decide.
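To make the second boundary concrete, here's a minimal sketch of identifier hashing. HMAC-SHA256 with a per-tenant key is an assumed scheme for illustration, not a description of our production implementation.

```go
// Sketch: hash a customer identifier before storage, so raw PII
// never reaches disk. The keying scheme here is an assumption.
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// hashIdentifier derives a stable, non-reversible ID from a raw
// identifier (email, user ID) using a tenant-scoped secret key.
// The same input always maps to the same hash, so records can
// still be joined per customer without storing the identifier.
func hashIdentifier(tenantKey []byte, raw string) string {
	mac := hmac.New(sha256.New, tenantKey)
	mac.Write([]byte(raw))
	return hex.EncodeToString(mac.Sum(nil))
}

func main() {
	key := []byte("per-tenant-secret") // in practice, pulled from a KMS
	fmt.Println(hashIdentifier(key, "jane@example.com"))
}
```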
Riskiest assumption
That normalization across wildly different source schemas would lose too much signal to be useful. We spent most of our design cycles on the normalization layer.
Design
Topology
Three-layer architecture. Connectors handle source-specific ingestion and emit a common event schema. The Normalization layer enriches events with embeddings and metadata. The Query layer exposes clustered pain via REST and GraphQL endpoints.
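As an illustration of that common event schema, here's a sketch in Go. Field names and types are assumptions; the real contract is the published spec.

```go
// Illustrative sketch of the common event schema connectors emit.
package schema

import "time"

// Event is the normalized record every connector produces, whatever
// the source. Source linkage (type, original ID, timestamp) is always
// retained, and identifiers arrive pre-hashed.
type Event struct {
	SourceType string            // "zendesk", "intercom", "gong", ...
	SourceID   string            // original record ID in the source system
	SourceTime time.Time         // original timestamp in the source system
	AuthorHash string            // hashed identifier; raw PII is never stored
	Text       string            // the pain, as the customer expressed it
	Embedding  []float32         // vector used for pain-level clustering
	Metadata   map[string]string // source-specific fields, preserved verbatim
}
```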
Prompt and policy decisions
- Every normalized record retains a link to its source, including timestamp and original ID.
- Clustering runs continuously; you can query at any time without triggering a batch job.
- Rate limits are generous for authenticated users; we'd rather you build on us than around us.
Eval design
- Connector reliability (uptime, ingestion latency, error rate).
- Normalization fidelity (does the normalized record preserve actionable signal?).
- Cluster coherence (do groupings make sense to a human reviewer?).
Trade-offs documented
- Depth vs. breadth: we chose depth. Six high-fidelity connectors beat twenty shallow ones.
- Real-time vs. batch: we chose continuous clustering. Queries reflect newly ingested data without waiting on a batch window.
Deliver
Stack
Go for the ingestion workers. Postgres for state. Redis for caching. Deployed on Fly.io for low-latency edge presence. OpenAPI spec published and versioned.
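For flavor, a minimal sketch of an ingestion worker's inner loop. The table, columns, and trimmed Event type are assumptions for illustration, not the production schema.

```go
// Sketch of an ingestion worker's inner loop: consume normalized
// events and upsert them into Postgres idempotently.
package ingest

import (
	"context"
	"database/sql"
	"log"
	"time"

	_ "github.com/lib/pq" // Postgres driver, registered for database/sql
)

// Event is a trimmed copy of the schema sketch above.
type Event struct {
	SourceType, SourceID, AuthorHash, Text string
	SourceTime                             time.Time
}

// Run drains the events channel, writing each record once. The
// ON CONFLICT clause makes re-ingesting a source safe to retry.
func Run(ctx context.Context, db *sql.DB, events <-chan Event) {
	for ev := range events {
		_, err := db.ExecContext(ctx,
			`INSERT INTO events (source_type, source_id, source_time, author_hash, text)
			 VALUES ($1, $2, $3, $4, $5)
			 ON CONFLICT (source_type, source_id) DO NOTHING`,
			ev.SourceType, ev.SourceID, ev.SourceTime, ev.AuthorHash, ev.Text)
		if err != nil {
			log.Printf("ingest %s/%s: %v", ev.SourceType, ev.SourceID, err)
		}
	}
}
```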
What we measure in production
- Connector uptime per source.
- p95 query latency.
- Normalization fidelity score (sampled audit).
- Developer time-to-first-query for new integrations.
Have a problem like this we should build for?
If you've got a recurring pain in your operation that you wish were solved off the shelf, we'd like to hear about it.
Talk to us