
DataHarbor Is Now Source-Agnostic

We rebuilt the connector layer so the same no-code Virtual API can govern richer APIs today, and so the same architecture can take on GraphQL, SQL Server, and Snowflake next.

Kevin · Founder

Until now, DataHarbor had one quiet limitation under the hood: it assumed the source was a REST API that answered a straightforward GET request. That covered more ground than you might expect. It was enough to prove the product, ship real value, and help customers put privacy and access controls in front of live APIs without writing middleware. But it also made the product smaller than its ambition.

Real systems do not always expose data through a clean GET /records endpoint. They use POST-based search APIs with JSON request bodies. They require custom headers. They package search logic into structured filters. They expose GraphQL instead of REST. Sometimes the source is not an API at all. It is SQL Server, Snowflake, or another read-oriented data system that teams still need to govern before sharing.

That is why this connector upgrade matters. We rebuilt the source layer so DataHarbor is no longer tied to one request pattern or one protocol assumption. The Virtual API experience stays the same. The policy model stays the same. But underneath, the platform is now being shaped to connect to a much wider class of sources. That may sound like an implementation detail. It is actually the difference between a useful product and a category-defining one.

The old limit was not the policy model

This is exactly where product categories get stuck: the customer-facing story sounds broad, but one hidden assumption in the data plane quietly narrows the real market.

The bottleneck was not the policy model. It was the connector model.

That meant some of the most common enterprise patterns fell outside the product’s comfort zone:

  • POST search endpoints: the request body was part of the query contract, not just the URL.
  • APIs with custom headers: important source credentials and routing data could not be modeled cleanly.
  • GraphQL APIs: the source contract is query + variables, not endpoint + query string.
  • SQL and warehouse reads: the source is not an HTTP resource at all.

We did not want that to happen here.

What we changed

We separated the idea of a Virtual API from the idea of a specific source request pattern.

That is the architectural shift.

A Virtual API still represents the governed interface customers know: the place where privacy controls, access rules, and delivery behavior come together. But the underlying connector no longer has to pretend that every source is the same kind of REST call.

At a high level, the flow now looks more like this:

  1. Connect to a source using the source’s native read pattern.
  2. Normalize the result and apply the same governance controls customers already use.
  3. Deliver the governed result through the Virtual API interface.

That is the part we care about most. The governance layer no longer needs to care whether the upstream source was a GET, a POST, a GraphQL query, or eventually a database connector. It sees normalized data entering one consistent control path.

That lets us expand source support without asking customers to relearn the product each time.
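
To make that separation concrete, here is a minimal sketch of the shape this takes. Treat the source block and its field names as illustrative rather than the exact shipping schema; the point is that the source's native read pattern lives in one place while the governance controls stay exactly where they were:

version: "0.3"
objects:
  customers:
    # Illustrative only: describes how to read from the upstream
    # system using that system's own native pattern.
    source:
      kind: rest
      method: GET
      url: https://api.example.com/records
    # Unchanged: controls never need to know which read pattern
    # produced the data they govern.
    controls:
      - type: redact
        fields: [ssn, internalNotes]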

What customers get immediately

The first and most practical result is simple: more APIs just work.

If a source uses a POST endpoint with a request body, structured filters, or custom headers, that pattern can now be treated as a first-class connector scenario instead of a workaround case.

Consider a very normal upstream search API:

POST /v2/customers/search
x-api-key: sk_live_...
Content-Type: application/json

{
  "filters": {
    "region": "us-east",
    "status": "active"
  },
  "limit": 100,
  "include": ["profile", "usage"]
}

This is a pattern DataHarbor customers kept bringing us. It is also exactly the kind of source that used to sit awkwardly outside a GET-only mental model.

With this upgrade, DataHarbor can sit in front of those richer source contracts while preserving the part customers actually care about: the governed Virtual API they expose downstream.
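
To give a sense of how a source like that can be described, here is the same kind of illustrative source block extended to the POST pattern above. The field names and the {{secrets.partnerApiKey}} reference are simplified placeholders for this sketch, not documented configuration syntax:

source:
  kind: rest
  method: POST
  url: /v2/customers/search
  headers:
    x-api-key: "{{secrets.partnerApiKey}}"   # resolved at request time, never stored inline
  # The request body is part of the query contract, so it is
  # modeled explicitly instead of bolted onto a URL.
  body:
    filters:
      region: us-east
      status: active
    limit: 100
    include: [profile, usage]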

The Virtual API definition itself still looks familiar:

version: "0.3"
objects:
  customers:
    controls:
      - type: redact
        fields: [ssn, internalNotes]
      - type: tokenize
        fields: [email, phone]
      - type: mask
        fields: [accountNumber]

And the served interface still looks like DataHarbor:

curl -H "dataHarbor-api-key: YOUR_API_KEY" \
  https://service.dataharbor.co/fetch/YOUR_VAPI_ID/customers

That continuity matters. Customers should not have to think about connector internals just because their upstream source is more realistic than a toy REST endpoint.

The bigger unlock is what comes next

Supporting richer REST patterns is valuable on its own, but it is not the full story.

The larger unlock is that DataHarbor now has a source model that can stretch beyond one API style entirely. GraphQL, SQL Server, and Snowflake no longer feel like special cases that would need a separate product philosophy. They fit the same connector architecture.

The connector model now looks like this:

  • REST GET endpoints: already supported.
  • REST POST and structured search endpoints: enabled by this connector upgrade.
  • GraphQL APIs: now aligned to the same source abstraction.
  • SQL Server: fits the same read-oriented connector pattern.
  • Snowflake and warehouses: fits the same governed source pattern.
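
To show why GraphQL stops feeling like a special case, here is a forward-looking sketch of a GraphQL source under the same illustrative source block. None of this is shipped configuration yet; it simply shows that query plus variables is just another native read pattern:

source:
  kind: graphql
  url: https://api.example.com/graphql
  # The native contract is a query plus variables,
  # not an endpoint plus a query string.
  query: |
    query Customers($region: String!) {
      customers(region: $region) { id email status }
    }
  variables:
    region: us-east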

That is the moment where the product story becomes more honest and more ambitious at the same time.

DataHarbor stops looking like a tool for one narrow slice of API governance and starts looking like what we always believed it could be: a universal data access control layer.

If a company has data that should be accessed live, governed consistently, and delivered without custom engineering every time, it belongs in the DataHarbor conversation whether it starts as REST, GraphQL, SQL, or warehouse data.

Secrets management had to grow up too

When you broaden the kinds of sources a platform can connect to, you also broaden the kinds of credentials and secrets it has to manage responsibly. That made secret handling a core part of this upgrade, not an afterthought.

Customer API keys and secrets are now stored in Azure Key Vault instead of being kept inline, with intelligent caching around retrieval so reads stay fast.

That matters most when something changes at exactly the wrong moment. If a partner rotates an API key on a Friday afternoon, the connector should not turn into a manual recovery project. The new model gives us a much stronger foundation for secret storage, and it can recover cleanly when those secrets change.
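
In configuration terms, the goal is for a connector to point at a secret by reference instead of embedding it. The shape below is a sketch with invented field names, not the documented schema:

source:
  headers:
    x-api-key:
      keyVault: dataharbor-customer-secrets   # illustrative Azure Key Vault name
      secret: partner-api-key                 # secret name inside the vault
      cacheTtlSeconds: 300                    # cached reads keep request latency low
      refreshOnAuthFailure: true              # re-fetch after a rotation instead of failing hard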

That is part of what source-agnostic really means in practice: not just more connector flexibility, but an operational model sturdy enough to support it.

Why this changes the product category

The shortest version is this: the limitation was never that DataHarbor could only enforce policy on APIs. The limitation was that the connector layer only knew how to read from a narrow kind of source.

Once that assumption starts disappearing, the category expands.

This is how we think about the before and after:

  • Govern simple REST sources → govern read-oriented data sources more broadly.
  • One source assumption → many source contracts under one model.
  • API privacy manager → universal data access control layer.
  • Workarounds for non-GET patterns → first-class connector support for richer sources.

That shift is not just technical. It is commercial.

Most companies do not suffer from a shortage of data. They suffer from a shortage of safe, reusable access paths to that data. Every time a team wants to expose a new source to a partner, a product surface, an internal consumer, or an AI workflow, they face the same set of questions:

  • How do we avoid copying the data again?
  • How do we keep credentials safe?
  • How do we apply the same policy model everywhere?
  • How do we make the served interface easy to use without writing custom gateway code?

That is the real job to be done, and it is the job DataHarbor is built for. This connector upgrade lets us answer it honestly across more than one source pattern, and it moves the product closer to covering a company's full data estate instead of only a slice of it.

Why it matters for AI too

AI systems make this problem more urgent, not less.

An agent does not care whether the useful data lives behind a REST API, a GraphQL endpoint, or a warehouse-backed service. It only cares whether it can access the right governed slice reliably and safely.

That is why we keep coming back to the same architectural principle: one platform, one policy model, many delivery paths.

If the same governed Virtual API can eventually sit in front of many source types and then be delivered through APIs, MCP, or other interfaces, customers stop rebuilding access control every time a new consumption pattern shows up.

Pair this with Adaptive Output Formats, and a governed Virtual API can serve an agent Markdown from a POST-based upstream source without custom glue.
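
Purely as an illustration of that pairing, here is the sketched definition with a hypothetical output block added. Treat the output section as a placeholder for Adaptive Output Formats, not its real configuration:

objects:
  customers:
    source:
      kind: rest
      method: POST              # the richer upstream pattern from this upgrade
      url: /v2/customers/search
    output:
      formats: [json, markdown]  # hypothetical knob: let the consumer pick the shape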

That is a much better posture for the world we are entering.

Try it now

If you are already using DataHarbor, the key idea is reassuringly simple: the Virtual API experience does not need to change just because your upstream source is more complex.

You still start with a governed Virtual API. You still define controls with Data Control. You still expose a purpose-built interface instead of handing out raw source access.

What changes is the universe of systems that can now fit behind that model.

If you have an API that used to be awkward because it required a POST body, custom headers, or more structured request semantics, this is the upgrade to pay attention to. And if your next source is GraphQL, SQL Server, or Snowflake, the architecture is finally pointing in the right direction.

That is the bigger story behind this release.

We are not just making one connector more flexible. We are making DataHarbor more honest about the problem it is here to solve.

Not API privacy in one narrow format.

Governed access to the data companies actually have.


Thanks for reading. If you have questions or want to see this in action, reach out at hello@dataharbor.co.

— Kevin, Founder
