Last Updated on February 2, 2024 by Pinax Team
TL;DR: Along with many updates to important repositories, you'll discover what's coming in the new releases of Graph Node and Indexer Service & Agent. The open discussion focuses on the GraphOps team's presentation of File Hosting Service.
Opening remarks
Hello and welcome back to Indexer Office Hours!
Today is January 16, and this is episode 140.
We’re excited to be back for another week of discussing the most important and relevant topics for indexers.
Don’t miss the latest GRTiQ podcast, episode 151 with Maarten Henskens, Head of the Astar Foundation.
Repo watch
The latest updates to important repositories
Execution Layer Clients
- Erigon: New release v2.56.2:
- This release fixes Erigon’s failure to stop gracefully, introduced in v2.56.0.
- Geth: New releases:
- v1.13.10:
- This release is functionally identical to v1.13.9; it only bumps the version number. A bad commit was originally tagged as v1.13.9, and although it was untagged and fixed, some caches (Go's module system, go mod) stored the temporary bad version. Since there is no way to flush the bad version out, it was cleaner to tag the next version instead.
- v1.13.9:
- This release fixes a few issues and enables the Cancun upgrade for the Goerli network at block timestamp 1705473120 (#28719), which is 06:32 UTC on January 17, 2024.
- ⚠️ If you are running Goerli, this is a required update!
- sfeth/fireeth: New release v2.2.0:
- Added support for EIP-4844 (upcoming with activation of Dencun fork), through instrumented go-ethereum nodes with version fh2.4. This adds new fields in the Ethereum Block model, fields that will be non-empty when the Ethereum network you’re pulling has EIP-4844 activated.
- Substreams server:
- Fixed error-passing between tier2 and tier1 (tier1 will not retry sending requests to tier2 that fail deterministically).
- Tier1 will now schedule a single job on tier2, quickly ramping up to the requested number of workers after 4 seconds of delay, to catch early exceptions.
- The error “store became too big” is now considered deterministic and returns code “InvalidArgument”.
- Operators running Goerli chain will need to upgrade to this version, with this geth node release: https://github.com/streamingfast/go-ethereum/releases/tag/geth-v1.13.10-fh2.4.
- Nethermind: New release v1.25.0:
- Goerli Dencun hard fork support
- The Json.NET library was replaced with the System.Text.Json implementation. As a result, memory overhead was drastically reduced, block processing time improved, and JSON-RPC handling sped up in general.
- This version supports the upcoming Goerli Dencun hard fork that is scheduled on Jan. 17, 2024, at 06:32:00 UTC. Please update your node to this version to ensure correct node functionality.
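Several of the releases above pin the Goerli Dencun/Cancun activation to the Unix timestamp 1705473120. As a quick sanity check (a minimal sketch using only Python's standard library), converting that timestamp confirms the January 17, 2024, 06:32 UTC date quoted in the release notes:

```python
from datetime import datetime, timezone

# Goerli Cancun/Dencun activation timestamp, as given in the Geth v1.13.9 release notes
CANCUN_GOERLI_TIMESTAMP = 1705473120

activation = datetime.fromtimestamp(CANCUN_GOERLI_TIMESTAMP, tz=timezone.utc)
print(activation.isoformat())  # 2024-01-17T06:32:00+00:00
```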
Consensus Layer Clients
Information on the different clients
- Prysm: New release v4.2.0:
- This release bundles API changes that require you to upgrade or downgrade in a particular order. If the validator is updated before the beacon node, it will see repeated 404 errors at start-up until the beacon node is updated, because it uses a new API endpoint introduced in v4.2.0.
- 🔼 Upgrading: Upgrade the beacon node, then the validator.
- 🔽 Downgrading: Downgrade the validator to v4.1.1, then downgrade the beacon node.
- This release adds full support for the upcoming Deneb hard fork on Goerli on January 17.
- This release increases the default peer count from 45 to 70, so that nodes running with default peer counts can perform their validator duties as expected. Users who want the old peer count can add the --p2p-max-peers=45 flag.
- NOTE: All operators are strongly recommended to update, as this release has many bug fixes, security patches, and features that will improve the Prysm experience on mainnet.
- Teku: New release 24.1.0:
- Attention: Teku will require around 50 GB of extra storage for blobs, but theoretically blob storage requirements can go up to 103 GB. This extra storage space WILL NOT grow above this limit over time.
- Fix incompatibility between Teku validator client and Lighthouse beacon nodes.
- Fix a block publishing endpoints issue where a 202 status code could be returned even though the block hadn't been broadcast.
- NOTE: This is a required update for anyone running Goerli nodes as it contains the configuration required for the Deneb upgrade in Goerli. It is an optional update for anyone else. This version also has some bug fixes.
- Lighthouse: New release v4.6.0-rc.0:
- All Goerli users must update their nodes to v4.6.0-rc.0 by Jan. 17, 2024, 06:32:00 UTC.
- Some extensive changes have been made to the networking components in this release. Focus has been on several performance and structural changes to the gossipsub protocol and discovery mechanism.
- NOTE: ⚠️ You should not run this pre-release with mainnet validators. It is, however, required for Goerli validators. ⚠️
Graph Stack
Check out this highlight clip, where Derek from Data Nexus explains the concept of “automated pruning” through indexerHints, a feature of the new Graph Node release.
- Graph Node: New release v0.34.0-rc.0:
- This update introduces the ability for subgraph authors to specify indexerHints with a prune field in their manifest, indicating the desired extent of historical block data retention. This feature enables graph-node to automatically prune subgraphs when the stored history exceeds the specified limit, significantly improving query performance. This automated process eliminates the need for manual action by indexers for each subgraph. Indexers can also override user-set historyBlocks with the environment variable GRAPH_HISTORY_BLOCKS_OVERRIDE.
- Introducing initial Starknet support for graph-node, expanding indexing capabilities to the Starknet ecosystem.
- This update adds the endBlock field for dataSources in subgraph manifest. By setting an endBlock, subgraph authors can define the exact block at which a data source will cease processing, ensuring no further triggers are processed beyond this point.
- Updated GraphiQL query interface of graph-node to version 2.
- A new guide has been added to graph-node documentation, explaining how to scale graph-node installations using sharding with multiple Postgres instances.
- The subgraphFeatures endpoint in graph-node has been updated to load features from subgraphs prior to their deployment.
- Resolved an issue when rewinding data sources across multiple blocks. In rare cases, when a subgraph had been rewound by multiple blocks, data sources ‘from the future’ could have been left behind. This release adds a database migration that fixes that. With very unlucky timing, this migration might miss some subgraphs, which will later lead to the error: assertion failed: self.hosts.last().and_then(|h| h.creation_block_number()) <= data_source.creation_block(). Should that happen, the migration script should be rerun against the affected shard.
- Fixed a bug in graphman’s index creation to ensure entire String and Bytes columns are indexed rather than just their prefixes, resulting in optimized query performance and accuracy.
- A new graphman deploy command has been introduced, simplifying the process of deploying subgraphs to graph-node.
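As a rough sketch of how the new manifest fields described above might fit together (the exact schema is defined by the Graph Node release, and the contract name, address, and block numbers here are placeholders), a subgraph manifest using indexerHints and endBlock could look like:

```yaml
specVersion: 1.0.0
indexerHints:
  prune: 100000            # keep roughly the last 100,000 blocks of history
dataSources:
  - kind: ethereum/contract
    name: ExampleContract  # hypothetical data source
    network: mainnet
    source:
      address: "0x0000000000000000000000000000000000000000"  # placeholder address
      startBlock: 18000000
      endBlock: 19000000   # stop processing triggers at this block
```

An indexer who needs a different retention window can still override the author's setting with the GRAPH_HISTORY_BLOCKS_OVERRIDE environment variable, as noted in the release notes above.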
- Indexer Service & Agent: New release v0.21.1 :
- indexer-agent: Fix bug in stake usage summary
- indexer-agent: Support subgraphs syncing sepolia and arbitrum-sepolia
- indexer-agent: Improve robustness of DAI contract calls
- indexer-agent: add tests for allocation decision consolidation
- indexer-agent: Improve batch action preparation
- indexer-service: Fix allocation monitor query
- indexer-service: Various robustness improvements
- indexer-service: Improve validation of operator wallet
- indexer-service: Add graph-node version endpoint from status api
- indexer-common: Reduce stringency of action validation when adding to queue
- NOTE! The version of indexer components was bumped from v0.20.15 to v0.20.16, but no code has changed. The release notes are the same as before; the bump simply encourages users to pick v0.20.16 over v0.20.15.
Graph Orchestration Tooling
Join us every other Wednesday at 5 pm UTC for Launchpad Office Hours and get the latest updates on running Launchpad.
The next one is on January 31. Bring all your questions!
Blockchains Operator Upgrade Calendar
The Blockchains Operator Upgrade Calendar is your one-stop solution for tracking hard fork updates and scheduled maintenance for various protocols within The Graph ecosystem.
Simplify your upgrade process and never miss a deadline again.
Protocol watch
The latest updates on important changes to the protocol
Forum Governance
- Add support for Scroll Mainnet
- Add Support for Chiado chain (Gnosis chain testnet)
- Add Support for BEVM Canary
Forum Research
Contracts Repository
- chore: add changeset tool #919 (merged)
- test: dont use logging for tests #920 (merged)
- chore: add changeset tool #918 (closed)
- feat(gre): several improvements #917 (merged)
- chore: use gre task for hardhat tasks #915 (merged)
- chore: remove cli from protocol contracts #914 (merged)
- test: consolidate unit/e2e test code #908 (merged)
- test: refactor folder structure and minor tweaks #904 (merged)
- feat(gre): add convenience gre task creation method #724 (closed)
- fix: read config without metadata on verifyAll #912 (merged)
- fix: override type in CLI to allow wrapping contract calls #911 (merged)
- fix: remove remaining usage of the removed GraphGovernance contract #910 (merged)
- fix: send to l2 task incorrectly setting sender #922 (merged)
- chore: deploy GGPs 31, 34 and 35 to testnet #921 (merged)
- feat: subgraph availability manager contract #882 (open)
- chore: add new implementation addresses on staging for GGPs 31, 34, 35 #916 (merged)
- chore(deps): bump follow-redirects from 1.15.3 to 1.15.4 #909 (merged)
Open discussion
- The GraphOps team shares updates on File Hosting Service (previously File Data Service).
- Hope presents the file hosting service, explaining what it is, who its target users are, and how it can reduce redundant work and bootstrapping costs. Hope also talks about how data producers and data consumers benefit from the file hosting service and gives examples of data sets and use cases.
- Hope asks for questions.
- Chris asks if subgraph developers will be able to publish manifests to a decentralized group of indexers instead of using IPFS.
- Hope clarifies that potential users besides indexers could be anyone interested in the manifests.
- Chris again mentions abstracting the manifest away from IPFS.
- Someone (unidentified) asks about using a different URL for the file data service data source.
- Hope explains that the endpoint might be path-based or different, with pros and cons to each approach.
- Jim asks if snapshots of different node types could be included in the service, mentioning Arbitrum as an example.
- Jim continues, suggesting that indexers could get paid for providing historical data not kept in full nodes.
- Hope agrees and mentions a conversation with Semiotic about EIP-4444, where indexers could keep full history and provide data not kept in full nodes.
- Someone (unidentified) asks about configurable maximum threads/workers to limit resource usage.
- Hope acknowledges the concern and says they will add it to the feature checklist.
Questions and Answers:
- Does this service fall under Gossip Network or can subgraph developers publish manifests to a decentralized group of indexers instead of IPFS?
Answer: Yes, publishing subgraph manifests to decentralized indexers is a potential future use case for this service. Currently, developers rely on hosted gateways like IPFS from Edge & Node, but moving away from that is a possibility.
- Will every file have a corresponding manifest?
Answer: Yes, each file will have a manifest similar to a subgraph manifest, containing metadata and hashes for verification.
- How are micropayments handled?
Answer: Scalar is used for micropayments. The manifest may be abstracted away from IPFS in the future. SHA-256 is used for hashing, and the manifest is minimal for easy matching.
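The answer above mentions SHA-256 hashing with a minimal manifest for matching. As an illustrative sketch only (the actual manifest format was not specified in the discussion; the field names here are hypothetical), a consumer could verify a downloaded file against the hash published in its manifest like this:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Return the hex-encoded SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical manifest entry: the publisher lists the expected hash of each file.
manifest_entry = {
    "name": "snapshot.bin",
    "sha256": sha256_hex(b"example file contents"),
}

# The consumer re-hashes what it downloaded and compares against the manifest.
downloaded = b"example file contents"
if sha256_hex(downloaded) == manifest_entry["sha256"]:
    print("file verified")
else:
    raise ValueError("file failed verification")
```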
- Different URL for file data service data source?
Answer: This is being discussed internally. It might be path-based or different, with pros and cons to each approach. Allocations might point to a file containing a different URL, requiring some discovery changes.
- Can anyone besides indexers use the service, like for node snapshots?
Answer: Yes, wider use cases are considered. Providing snapshots to anyone willing to pay could be valuable for foundations that currently host them for free. The file hosting service could allow indexers to get paid for providing historical data not kept in full nodes (e.g., EIP-4444).