TL;DR: This episode featured a jam-packed open discussion: GRT Sunrise Upgrade Program, Graph Node update, and Building a Decentralized Brain with AI and Crypto.
Opening remarks
Hello everyone and welcome to the latest edition of Indexer Office Hours! Today is May 7, and we’re here for episode 156.
GRTiQ 167
Catch the GRTiQ Podcast with Shermin Voshmgir, author of the best-selling book Token Economy and Founder of Token Kitchen.
Repo watch
The latest updates to important repositories
Execution Layer Clients
- sfeth/fireeth: New releases:
- v2.4.9:
- Fixed a crash when an eth_call batch of length 0 is retried.
- Allow stores to write to stores with out-of-order ordinals (they will be reordered at the end of the module execution for each block).
- v2.4.8:
- Firehose responses (both single block and stream) now include the sf.firehose.v2.BlockMetadata field. This new field contains the chain-agnostic fields held about any block of any chain.
- The fireeth tools download-from-firehose command has been improved to work with the new Firehose sf.firehose.v2.BlockMetadata field; if the server sends this field, the tool will work on any chain.
- Avalanche: New release v1.11.5:
- Fixed increased outbound PeerList messages when specifying custom bootstrap IDs.
- Fixed CPU spike when disconnected from the network during bootstrapping fetching.
- Fixed topological sort in vote calculation.
- Fixed job dependency handling for transitively rejected blocks.
- Prevented creation of unnecessary consensus polls during the issuance of a block.
- Arbitrum-nitro: New release v2.3.4-rc.5:
- Relative to the previous release candidate, this release prepares support for Arbitrum Stylus block validation.
- This PR undoes the configuration changes from release candidate 3 and removes the --node.block-validator.execution-server-config.* config block.
Graph Stack
- Subgraph-radio: New release 1.0.4:
- No longer alpha.
- Fixes db locked error.
Graph Orchestration Tooling
Join us every other Wednesday at 5 PM UTC for Launchpad Office Hours and get the latest updates on running Launchpad.
The next one is on May 22. Bring all your questions!
Protocol watch
The latest updates on important changes to the protocol
Forum Governance
- GRC-002: QoS Oracle V2
- If you have any more feedback, please add it to the Forum post.
Contracts Repository
- feat: remove minimum allocation duration restriction #902 (open)
- fix: horizon dispute manager tests #969 (open)
- chore: SAM deployments #961 (open)
Network Subgraphs
- Analytics subgraph: Performance refactors PR
- Working on performance upgrades to fix this subgraph.
Open discussion
- GRT Sunrise Upgrade Program
- Graph Node Update
- Building a Decentralized Brain with AI and Crypto
GRT Sunrise Upgrade Program
Marian Walter, in BD (business development) and partnerships for Edge & Node, presented the GRT Sunrise Upgrade Program.
The program was launched to ensure everyone upgrades from the hosted service to The Graph Network, celebrate decentralized data, and challenge centralized SaaS monopolies and giants.
This is the Sunbeam phase, part two of the three phases of the Sunrise of decentralized data. The 60-day upgrade window began on April 11 and closes when the hosted service endpoints expire on June 12.
Marian explained he’s been speaking with dApps on the hosted service to ensure they upgrade to the decentralized network. He’s been focused on the larger dApps, talking to them first. They’ve discovered that speaking with individuals differs from speaking with project teams.
A Program for Community Members and Newcomers
This program isn’t for dApps as they already have upgrade grants from The Graph Foundation to support them. Instead, it’s focused on empowering the community by reallocating some grants to individuals.
The goal is to target individual contributors already in the ecosystem and encourage newcomers to join. They want a massive wave of new participants to come in apart from the dApps already supported in the general grants program.
Overview of the Program
The Sunrise Upgrade Program consists of five waves of weekly missions for members of The Graph community and newcomers. The Graph Foundation has allocated 4 million GRT for this program, which is about 1.2 million dollars. Completing and submitting these missions earns people rewards in GRT.
This program consists of missions ranging from very technical to non-technical. You can choose between two streams of missions. The Visionary track is creative and fun. It’s all about education, so it’s a nice on-ramp for people just starting out who want to break into web3 but don’t yet know how.
It starts with missions as simple as retweeting The Graph Sunrise Upgrade Program announcement, interacting on Discord and Telegram, and creating memes or videos, and ramps up to more advanced tasks like hosting X Spaces, writing long-form research content, or writing subgraphs.
There’s also the Builder track, which is all about technical tinkering. Most of the time, it’s for those already in the community. Tasks include upgrading a subgraph, querying existing subgraphs on the network, assisting others with making their first queries on the network, sharing useful example queries, and creating example code that queries the network.
If people are dedicated, they can earn quite decent rewards. To estimate, the more technical tasks in the Builder track range from 60 to 1,500 GRT, and the more educationally focused tasks in the Visionary track range from 40 to 1,000 GRT.
The size of the reward depends on the execution level and how early the submission form arrives. The first mission wave is due May 14. The upgrade window ends on June 12, and this Sunrise Upgrade Program ends on June 20.
GRTiQ Podcast
Listen to this special release of the GRTiQ Podcast with Marian Walter to better understand the program and how it works.
Graph Node Update
Adam Fuller from Edge & Node presented on Graph Node.
A couple of GGPs are being reviewed right now. Graph Node releases came up in the chat earlier.
GGP-039
- Delegating responsibility for new Graph Node versions to the Graph Node maintainers
GGP-040
- New Graph Node version 0.35.x
- Arweave file data sources (uses the default Arweave gateway)
- ENS support
GGP-039
Adam explained that there seems to be agreement to delegate responsibility for the announcement or enablement of new Graph Node versions to the Graph Node maintainers. So, that’s the team working on Graph Node, cutting new releases, and doing integration testing. Hopefully, this will allow them to move a bit faster.
He mentioned they’re also working internally with the Graph Node team on some improvements to the release testing process so they can release more frequently. Alex or others may be here in the future to talk about that, so hopefully, we’ll be moving a bit faster going forward.
GGP-040
This includes a new Graph Node version and enables indexing rewards for Arweave file data sources using the default Arweave gateway. The feature is very similar to IPFS file data sources: it isolates non-deterministic from deterministic syncing, so more subgraphs will be eligible for indexing rewards.
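As a rough sketch of what this looks like on the subgraph side, a file data source is declared as a template in the manifest; the Arweave variant mirrors the IPFS pattern. The names and handler below are hypothetical, and the exact fields should be checked against the official docs once GGP-040 is finalized.

```yaml
# Hypothetical template entry in subgraph.yaml; field names mirror the
# IPFS file data source pattern (kind: file/ipfs -> kind: file/arweave).
templates:
  - name: ArweaveContent
    kind: file/arweave
    mapping:
      apiVersion: 0.0.7
      language: wasm/assemblyscript
      file: ./src/mapping.ts
      handler: handleArweaveFile
      entities:
        - ArweaveFile
```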
Then, the ENS subgraph. We’re chatting with the ENS team about moving to the network, and they’ve essentially got an OG feature within Graph Node, which fetches names by hash and requires a one-off import of data.
So we made some changes a while back to Graph Node to allow this to be synced, sort of deterministically. The timing is pertinent because this is moving to the network as we speak, and will hopefully be enabled this week.
As an indexer, you’d see that when you’re syncing, the subgraph would fail if you didn’t have these tables imported correctly. It should fail with a helpful error to point you to the right place to get the tables, and then you’d be off to the races.
There’ll be more formal announcements and documentation once this is all approved.
Building a Decentralized Brain with AI and Crypto
Matthieu Di Mercurio from StreamingFast presented on building a decentralized brain with AI and Crypto.
He joined to discuss a piece of content StreamingFast has been promoting. They published a blog post about building a decentralized brain with AI and crypto. The idea was to give their vision of where they think the merge of these two technologies is happening or has the potential to happen, as well as explain how The Graph can contribute to this.
There’s also a podcast with GRTiQ on the topic, where Matthieu expands on the blog post and shares more ideas about how The Graph can benefit from AI and help with the decentralization of AI.
There is a lot of hype around what is happening at the intersection of blockchain and AI: people are painting long-term pictures of how blockchain can help with the decentralization of AI, get access to GPUs for cheap, and make models available to everyone.
The Graph has had a few different initiatives to support this over the past year. Some of them are led by Semiotic Labs, making LLM models available to be run by indexers so that people could run inferences on the network.
Semiotic has also built a first version of an agent, like a chat agent, fed by Substreams data so that it has access to on-chain data, can run analysis on that data, and even has access to DEX APIs and can make trades. You could see the full process of going from having a question to analyzing some data and then making a transaction.
Knowledge Graphs
The blog post discusses a new idea: leveraging knowledge graphs as part of an AI agent.
One limitation of AI models is that they don’t have access to live data. It’s very expensive to retrain them with new data, so to give real-time, up-to-date information, we need to feed that information to ChatGPT or a similar tool: you feed the data you’re interested in into the context before sending your question, adding all of that information in.
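The pattern described here, prepending fresh data to the prompt instead of retraining the model, can be sketched in a few lines. The model call itself is stubbed out, since the wiring (ChatGPT’s API, a locally hosted LLM, etc.) is an implementation detail.

```python
def build_prompt(question: str, live_facts: list[str]) -> str:
    """Prepend up-to-date facts to the user's question so a frozen model
    can answer with information it was never trained on."""
    context = "\n".join(f"- {fact}" for fact in live_facts)
    return (
        "Use only the context below to answer.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

# Stub standing in for a real model call (ChatGPT, a local LLM, ...).
def ask_model(prompt: str) -> str:
    return f"(model received a {len(prompt)}-character prompt)"

if __name__ == "__main__":
    facts = ["Active indexers on the network: ...", "Latest block indexed: ..."]
    print(ask_model(build_prompt("How many indexers are active?", facts)))
```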
Geo is a great tool for doing something like this. Think of Geo as a way to create information, curate it, and have people create connections between pieces of knowledge. Now, you have a knowledge graph with potentially all of the world’s information. If you build an AI agent on top of it, you’d be able to leverage all of those connections and all of that content created and curated by humans and make it accessible to an AI agent that can now answer questions based on all of that knowledge and even give you feedback on the content that you’ve created.
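A minimal sketch of the idea (a toy model, not Geo’s actual data representation): store knowledge as subject–predicate–object triples, pull everything directly connected to the entity a question mentions, and hand those facts to the agent as context.

```python
# Toy triple store: (subject, predicate, object). Entries are illustrative.
TRIPLES = [
    ("The Graph", "indexes", "blockchain data"),
    ("Geo", "builds on", "The Graph"),
    ("Geo", "curates", "knowledge graphs"),
]

def neighbors(entity: str) -> list[str]:
    """Return human-readable facts directly connected to an entity."""
    facts = []
    for s, p, o in TRIPLES:
        if entity in (s, o):
            facts.append(f"{s} {p} {o}")
    return facts

# An agent would inject neighbors("Geo") into its prompt before answering
# a question that mentions Geo, following the same context-feeding pattern.
```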
So that’s the main focus of the post, but there’s a lot more that The Graph can bring in terms of where we’re going to see AI agents go in the next year or maybe a couple of years.
AI Agents
Aligning with the idea of being able to index data, all of the work that has been done since the beginning to make blockchain data available is going to be leveraged by foundation model builders and people who want to train models on more data.
You can imagine using on-chain data to train your models and having models that are more specific to the use case or application that you have in mind. Eventually, as we build a world of data services, those agents that are leveraging this data could also interact with other data services.
With this concept of new data services that we’re going to bring to the network, those agents will also be able to interact with those APIs. That means you now have an agent that can take action in the world; it’s not just a ChatGPT that you can talk to, which will reply but can’t really do anything for you, leaving you to go and execute all of the decisions you make yourself.
Thinking about the case of Agentc, the agent that was built by Semiotic, if you were to do this with ChatGPT, you could do your analysis with ChatGPT, but you can’t make a trade directly from their interface.
By having all of those data services interconnected, now you can put your model at the center and allow it to interact with the world for you, and that’s the next step of automation and where I think the world is really going to benefit from AI. Not just reading text and creating content but also being able to build autonomous agents that can interact with other services and that can interact in between them.
The post also touches on this, and I’m personally very excited about it. I’m looking forward to seeing how we can make these things come together.
A few questions were raised:
Q1: As someone not at all educated on the various aspects of AI, I feel things get a bit general regarding implementation. From the indexer’s point of view, what does the MVP look like for this type of service? What type of things should we be educating ourselves on?
- Answer: That’s a really good question because many of the things we’re discussing are long-term, and that’s why they might feel a bit general. We’re not describing a specific implementation. Still, some things will be coming in the shorter term and can be a lot more concrete. For example, being able to run an LLM inference engine as an indexer, taking an open-source LLM like Llama 3, recently released by Meta.
- All of the architecture and the weights are available. I think the best place to find them would be Hugging Face, but they’re probably also available from Meta, which has a repo with all of the information. So, as an indexer, you would be running this model, and people would be able to query it with text since it’s a large language model: it takes text as input and replies with text. The indexer’s role here would be making the GPU or the CPU available to consumers and exposing that service so that people can send requests and get answers as text. That’s the very first component of the more complex agent system I was describing.
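The indexer-side piece, text in and text out, amounts to exposing an HTTP endpoint in front of the model. Below is a minimal stdlib sketch with the model stubbed out; a real deployment would load the Llama 3 weights behind generate() via an inference framework, and add authentication, batching, and rate limiting.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def generate(text: str) -> str:
    """Stub for the actual LLM; a real indexer would replace this with an
    inference call against loaded model weights (e.g. Llama 3)."""
    return f"echo: {text}"

class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body, run inference, and reply with JSON text.
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        reply = generate(body.get("prompt", ""))
        out = json.dumps({"completion": reply}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(out)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), InferenceHandler).serve_forever()
```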
Q2: What sort of timeline are we talking about to start experimenting/deploying… this year, next year… 5 years out?
- Answer: All of the above, I would say. For the LLM services, I don’t want to speak for Semiotic. I don’t know if anyone from Semiotic is here because they’re the ones who are leading the charge around this, but it is coming in the fairly short term. There’s been some experimentation already, but I would say something like six months to a year. I would let Semiotic confirm. I would say having LLM data services would be what I would consider fairly short term, and then as we roll out more data services over the next few years, that’s the bigger vision that I was describing will roll out. So getting there will take time for sure, but we’ve started in that direction.
Q3: Would this model be distributed across multiple indexers or would each indexer maintain their own copy?
- Answer: To be able to serve it, you would have to have your own version of the model. You would need the full model running on your end. I’m not sure it would be possible to distribute it across indexers, but we could imagine a world where, if there are hundreds of different models on the network, different indexers would serve different models, and the network would be able to scale this way.
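The scaling idea in that answer, different indexers serving different models, can be pictured as a registry mapping each model to the indexers that host a full copy, with a gateway picking one per query. The structure and endpoints below are purely illustrative.

```python
import random

# Purely illustrative: model name -> indexer endpoints hosting a full copy.
MODEL_REGISTRY = {
    "llama-3-8b": ["https://indexer-a.example/llm", "https://indexer-b.example/llm"],
    "llama-3-70b": ["https://indexer-c.example/llm"],
}

def pick_indexer(model: str) -> str:
    """Choose one indexer serving the requested model (random for brevity;
    a real gateway would weigh latency, price, and quality of service)."""
    endpoints = MODEL_REGISTRY.get(model)
    if not endpoints:
        raise KeyError(f"no indexer serves {model}")
    return random.choice(endpoints)
```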