No Middlemen: The Architecture Blockchains Were Built For

Blockchains disintermediated finance. Then we built an entire intermediary industry around accessing the data.

2026-02-02 | Shinzō Team

The Offchain Data Problem

You deploy a contract to a decentralized network secured by thousands of validators. Trustless execution. Cryptographic verification. The whole point.

Then you need users to actually interact with it. Suddenly you're signing up for API keys, configuring RPC endpoints, integrating indexer services, and paying monthly subscriptions to companies that sit between your users and the chain you just deployed to.

The blockchain is trustless. Everything around it isn't.

RPC providers handle the most basic operations. Querying contract state. Checking balances. Submitting transactions. Unless you're running your own node, every request routes through Alchemy, Infura, QuickNode, or similar services. They see every query. They can rate limit you, block you, or go offline. Your "direct" interaction with the blockchain passes through their servers first.

Reading data? You trust they're returning accurate state. Submitting a transaction? You trust they'll actually broadcast it. The blockchain verifies everything cryptographically. The RPC layer verifies nothing.

Indexers transform raw blockchain data into something applications can query efficiently. Token balances, transaction histories, NFT ownership, DeFi positions. You know the drill. Without indexers, answering "what NFTs does this address own?" means processing millions of blocks yourself. With them, you're trusting their processing is correct, their servers stay online, and they index what you need in the format you need it.

The trust surface is larger than with RPC. Indexers don't just relay data. They transform it. Aggregate it. Interpret it. More computation means more places where errors or manipulation can occur. And when their servers go down during a traffic spike, your users stare at loading spinners while the blockchain keeps producing blocks just fine.

Oracles bring external data onchain. Readings from sensors, results from sports leagues, asset data from real-world registries. Smart contracts can't reach outside the blockchain, so oracles bridge the gap. If a sensor reading is manipulated, supply chain contracts execute on false premises. If sports results are falsified, prediction markets pay out incorrectly. If property records are wrong, RWA protocols collateralize against assets that don't exist.

The current oracle model inserts operators between data sources and smart contracts. Oracle nodes fetch data from external systems. IoT platforms aggregate sensor readings. Each intermediary is a point where data can be delayed, manipulated, or fabricated. You're not trusting the original source. You're trusting everyone between the source and your contract.

RPC, indexing, oracles. Different services, different providers, different pricing pages. Same underlying problem.

The Common Flaw

These layers look different on the surface. RPC is about node access. Indexing is about data transformation. Oracles are about external information.

But strip away the product positioning and they share the same gap:

None of them provide cryptographic proof of provenance.

Query an RPC provider, you get a response. Nothing in that response lets you verify it reflects actual chain state. The provider could be wrong, compromised, or lying. You'd never know from the data alone.

Query an indexer, you get transformed data. Nothing lets you verify the transformation was performed correctly on actual blockchain events. Did they process all relevant transactions? Compute accurate results? You're taking their word for it.

Consume oracle data, you get external information. Nothing lets you verify the data actually came from the source it claims to represent. Did the oracle operator substitute something? Delay the update? Fabricate the reading? The data doesn't tell you.

Blockchain consensus solved this. Every state transition comes with cryptographic proof. You don't trust validators to be honest. You verify the math. Trust replaced by verification.

The offchain data layer never adopted this model. It inherited the trust assumptions of Web2 infrastructure. A trustless foundation supporting a trust-dependent access layer.

You can verify your smart contract executed correctly. You can't verify the data your application displays to users actually came from that contract.

The Architecture That Should Exist

The solution isn't making intermediaries more trustworthy. It's making them unnecessary.

Attach cryptographic proofs at the source. Prove provenance and integrity. Let the data consumer verify directly.

For RPC, this means querying blockchain state through peer-to-peer networks with responses that include state proofs. Light client technology already enables this. You verify proofs without running a full node. The blockchain itself attests to accuracy.
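In code, the verification step is small. A minimal sketch, assuming a simplified binary Merkle tree over SHA-256 (real chains use Merkle-Patricia tries and chain-specific hashing, and the `encodeAccount` helper in the closing comment is hypothetical):

```typescript
import { createHash } from "crypto";

// Hash helper for a simplified binary Merkle tree.
const sha256 = (data: Buffer): Buffer =>
  createHash("sha256").update(data).digest();

interface ProofStep {
  sibling: Buffer;        // hash of the sibling node along the path
  siblingOnLeft: boolean; // whether the sibling sits left of our path
}

// Verify that `leaf` is committed to by `stateRoot` via `proof`.
// The state root comes from a block header the light client has
// already verified against the chain's consensus rules.
function verifyStateProof(
  leaf: Buffer,
  proof: ProofStep[],
  stateRoot: Buffer
): boolean {
  let node = sha256(leaf);
  for (const step of proof) {
    node = step.siblingOnLeft
      ? sha256(Buffer.concat([step.sibling, node]))
      : sha256(Buffer.concat([node, step.sibling]));
  }
  return node.equals(stateRoot);
}

// A balance response is only accepted if its proof checks out:
// const ok = verifyStateProof(encodeAccount(address, balance), proof, header.stateRoot);
```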

For writes, transactions broadcast directly to validator networks through P2P protocols. No single entity decides whether to relay your transaction. The network is the network, not one company's servers pretending to be the network.
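A sketch of what direct broadcast could look like, written against a hypothetical Peer interface (an assumption here; a real client would ride an existing gossip protocol such as libp2p gossipsub):

```typescript
// Hypothetical peer interface; a real client would use a gossip
// protocol for propagation rather than raw per-peer sends.
interface Peer {
  send(topic: string, payload: Uint8Array): Promise<void>;
}

// Broadcast a signed transaction to every connected peer directly.
// No single relay decides whether the transaction propagates:
// it reaches the validator set as long as any peer forwards it.
async function broadcastTransaction(
  peers: Peer[],
  signedTx: Uint8Array
): Promise<number> {
  const results = await Promise.allSettled(
    peers.map((p) => p.send("txs", signedTx))
  );
  // Count successful sends; one honest peer is enough for gossip to spread.
  return results.filter((r) => r.status === "fulfilled").length;
}
```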

For indexing, validators index data at the source. They're already processing every transaction. They maintain canonical state. The indexed data includes proofs linking it to actual blockchain events. Any node can serve it. Any client can verify it.
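A sketch of what a verifiable indexed record could look like, reusing the ProofStep and verifyStateProof helpers from the RPC sketch above. The field names are illustrative, not a real wire format:

```typescript
// Illustrative shape for validator-indexed data.
interface IndexedTransfer {
  token: string;        // contract address
  from: string;
  to: string;
  amount: bigint;
  blockNumber: number;
  txHash: string;       // links the record to the event that produced it
  proof: ProofStep[];   // inclusion proof against a verified block root
}

// Any client can check a record against a verified block header,
// so it doesn't matter which node served it.
function verifyIndexedTransfer(
  record: IndexedTransfer,
  verifiedRoot: Buffer
): boolean {
  // Toy leaf encoding (record minus its proof); a real system
  // would use a canonical binary format.
  const leaf = Buffer.from(
    JSON.stringify({
      ...record,
      proof: undefined,
      amount: record.amount.toString(),
    })
  );
  return verifyStateProof(leaf, record.proof, verifiedRoot);
}
```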

Having separate companies re-process the same data to build indexes is redundant infrastructure. It exists because we never built the right architecture, not because the problem requires it.

For oracles, proofs attach at the data source itself. A sensor reading comes with proof it was produced by that specific device at that specific time. Sports results come with attestation from the league's own systems. Property records and asset data come with proofs from authoritative registries. The data travels from source to smart contract with provenance intact.

The same database infrastructure that stores indexed blockchain data can run on a sensor, a sports league's backend, or a property registry. Proofs attach at the point of origin. The smart contract verifies the data came from sources it trusts. No oracle operator sits in between.
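A minimal sketch of proof attachment at the origin, assuming each device holds an Ed25519 keypair whose public half is registered with the consumer at provisioning time:

```typescript
import { generateKeyPairSync, sign, verify } from "crypto";

// Each device holds a keypair; the public key is registered with
// the consuming contract (or a registry it trusts) at provisioning time.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

interface SignedReading {
  deviceId: string;
  value: number;
  timestamp: number; // unix ms
  signature: Buffer; // Ed25519 over the canonical reading bytes
}

const encodeReading = (deviceId: string, value: number, timestamp: number) =>
  Buffer.from(`${deviceId}|${value}|${timestamp}`);

// Runs on the sensor itself: the proof attaches at the point of origin.
function produceReading(deviceId: string, value: number): SignedReading {
  const timestamp = Date.now();
  const payload = encodeReading(deviceId, value, timestamp);
  return { deviceId, value, timestamp, signature: sign(null, payload, privateKey) };
}

// Runs at the consumer: verification against the registered device key,
// with no aggregator or oracle operator in between.
function verifyReading(r: SignedReading): boolean {
  const payload = encodeReading(r.deviceId, r.value, r.timestamp);
  return verify(null, payload, publicKey, r.signature);
}
```

A falsified reading fails verification before it ever reaches contract logic. The only remaining trust decision is whether to register the device key in the first place.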

The trust question shifts. Instead of "do I trust this oracle network?" it becomes "do I trust this sensor?" or "do I trust this registry?" You're trusting data sources, which you have to do regardless. You're no longer trusting intermediaries, which add nothing but risk.

Direct Connection

Imagine building on an offchain data layer designed this way.

Your users check token balances. The wallet queries validators through a peer-to-peer network. The response includes a proof that this balance reflects actual chain state. No API provider in the middle. The data verifies itself.

Your application shows transaction history. It receives indexed data directly from validators, with proofs linking every record to blockchain events. The data is stored locally on user devices, verified at rest. When users search their history, they're searching their own database. Your server isn't a dependency.
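A few more illustrative lines, continuing the types from the indexing sketch above: a local store can re-verify records on write, so a tampered or corrupted cache is rejected rather than displayed, and every search stays on-device:

```typescript
// Minimal local store sketch: records verify against already-verified
// block roots before they're accepted.
class VerifiedHistory {
  private records: IndexedTransfer[] = [];

  constructor(private verifiedRoots: Map<number, Buffer>) {}

  insert(record: IndexedTransfer): void {
    const root = this.verifiedRoots.get(record.blockNumber);
    if (!root || !verifyIndexedTransfer(record, root)) {
      throw new Error("rejected: record does not verify against chain state");
    }
    this.records.push(record);
  }

  // Searching history is a local operation; no server round-trip.
  search(address: string): IndexedTransfer[] {
    return this.records.filter((r) => r.from === address || r.to === address);
  }
}
```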

Users submit transactions. They broadcast directly to the validator network. No RPC provider decides whether to relay them. The transaction reaches validators or it doesn't, and reaching validators means thousands of independent operators.

Your supply chain application needs IoT data. Each sensor runs lightweight database infrastructure attaching proofs to every reading. Your contract verifies readings came from specific devices at specific times. No IoT platform aggregating and relaying. Measurements flow directly from devices to contracts.

Your RWA protocol tokenizes real-world assets. Property records, commodity data, asset registries attach proofs at the source. Your contract verifies data came from authoritative systems. No oracle network interpreting or relaying. Asset data flows directly from registries to contracts with cryptographic provenance.

Direct connection. Users and contracts connected to data sources without intermediaries in the path. Verification replacing trust. Data integrity from origin to consumption.

No API keys. No rate limits. No "service degradation" emails at 2am. No invoices scaling with your success.

Why Elimination Matters

The standard response to intermediary problems is distributing them. Decentralized RPC networks. Decentralized indexer marketplaces. Decentralized oracle networks.

You've probably evaluated some of these. Maybe integrated them. They're better than single points of failure.

But distributing trust isn't eliminating it. You're still trusting someone. A committee instead of a company. Failure modes change but don't disappear. Extraction economics persist. Trustless protocols still accessed through trusted infrastructure.

The better question: why do these intermediaries need to exist?

RPC providers exist because running nodes is hard and finding public ones requires known endpoints. Peer-to-peer networks and light client proofs solve both without intermediaries.

Indexers exist because blockchain data formats optimize for consensus, not queries. Validator-embedded indexing solves this at the source, with proofs, without external parties reconstructing what validators already computed.

Oracles exist because smart contracts can't access external data directly. But the solution isn't intermediaries fetching and relaying. It's proof infrastructure at the data source, so external data arrives with the same provenance guarantees as blockchain data.

Every intermediary exists because of a technical problem with a technical solution that doesn't require intermediaries. Those solutions are harder to build. That's why we have the architecture we have. Harder doesn't mean impossible.

Scaling Without Chokepoints

Intermediaries are chokepoints. Chokepoints limit scale.

You've seen this play out. Your application grows, your infrastructure costs grow faster. You're paying per query, per request, per compute unit. Success gets taxed. The intermediaries extracting from your growth aren't providing proportionally more value. They're just positioned to collect.

Scale to a billion users while maintaining chokepoints, and those chokepoints must handle a billion users. The provider economics don't support it. The reliability math doesn't work. You're building on infrastructure that breaks under the load you're trying to achieve.

Edge-first architecture scales differently. Data lives on user devices. Verification happens locally. Peer-to-peer networks handle distribution. No chokepoints means no artificial scaling limits. A billion users means a billion nodes, each independently capable, collectively more resilient than any centralized alternative.

This is how the internet was supposed to work before platforms captured it. End-to-end connectivity. Decentralized protocols. Intelligence at the edge. Blockchain technology is a chance to build it right.

But only if the offchain data layer matches what the protocol layer provides. Trustless all the way down. No intermediaries extracting from the middle.

The infrastructure to build this way exists. The question is whether we use it.


We're building the infrastructure to eliminate blockchain intermediaries.

Follow our progress.

X · Telegram · Discord · GitHub · shinzo.network