This repository demonstrates how to implement AI agents that execute directly on-chain using Arbitrum Stylus. The project was created as part of the ETHGlobal Agentic Ethereum Hackathon, showcasing the potential of running AI algorithms directly within smart contracts.
While most current AI integrations with blockchain perform computation off-chain (typically with large language models) and then act on-chain, this project demonstrates a different approach: the AI agent runs entirely on-chain. This opens up new possibilities for:
- DEX hooks that analyze market conditions in real-time
- Dynamic risk scoring for lending protocols
- Price prediction mechanisms
- Automated decision-making based on on-chain metrics
Our example uses Q-learning, a classic reinforcement learning algorithm, to demonstrate how an AI agent can learn and adapt within the constraints of a smart contract environment.
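At the core of Q-learning is a single update rule applied after every step the agent takes. As a minimal sketch (the function name `q_update` is illustrative, not taken from the contract), with `alpha` as the learning rate and `gamma` as the discount factor:

```rust
/// One Q-learning update:
/// Q(s, a) += alpha * (reward + gamma * max_a' Q(s', a') - Q(s, a))
/// `q_sa` is the current estimate for the (state, action) pair taken,
/// `max_q_next` is the best action value available in the next state.
fn q_update(q_sa: f64, reward: f64, max_q_next: f64, alpha: f64, gamma: f64) -> f64 {
    q_sa + alpha * (reward + gamma * max_q_next - q_sa)
}
```

Because the rule only needs the current table entry and a max over the next state's entries, it maps well onto fixed-size contract storage.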
The smart contract implements a 5x5 maze where:
- A Q-learning algorithm learns the optimal path from start to goal
- The maze solution is visualized as an NFT using SVG
- Each cell shows an arrow pointing to the next best move
- The contract implements the ERC-721 standard for NFT compatibility
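The per-cell arrows can be recovered directly from the learned Q-table by taking the highest-valued action in each cell. The following is a simplified standalone sketch of that idea, not the contract's actual code; the action ordering, helper names, and arrow glyphs are assumptions:

```rust
// Hypothetical sketch: a 5x5 maze with four action values per cell.
// The arrow rendered for a cell is simply the argmax action.
const SIZE: usize = 5;
const ACTIONS: usize = 4; // assumed ordering: 0 = up, 1 = down, 2 = left, 3 = right

/// Index of the highest-valued action for one maze cell.
fn best_action(q: &[[f64; ACTIONS]; SIZE * SIZE], cell: usize) -> usize {
    let values = &q[cell];
    let mut best = 0;
    for a in 1..ACTIONS {
        if values[a] > values[best] {
            best = a;
        }
    }
    best
}

/// Arrow glyph for an action, as it might appear in the SVG.
fn arrow_for(action: usize) -> char {
    match action {
        0 => '↑',
        1 => '↓',
        2 => '←',
        _ => '→',
    }
}
```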
Screenshot of the maze NFT displayed in MetaMask
- First, set up your Rust and Stylus environment by following the Arbitrum Stylus Getting Started Guide
- Clone this repository:

  ```shell
  git clone https://github.com/hammertoe/ArbitrumOnchainAgent
  cd ArbitrumOnchainAgent
  ```
- First, check that your contract compiles correctly:

  ```shell
  cargo stylus check -e https://sepolia-rollup.arbitrum.io/rpc
  ```

- Deploy the contract (replace `${PRIVATE_KEY}` with your private key):

  ```shell
  cargo stylus deploy -e https://sepolia-rollup.arbitrum.io/rpc --private-key ${PRIVATE_KEY} --no-verify
  ```

- Export the ABI for use with Remix:

  ```shell
  cargo stylus export-abi --json --output contract.abi
  ```
Once deployed, you can interact with the contract using Remix:
- Import the ABI into Remix
- Connect to the deployed contract address
- Train the Q-learning algorithm using the `train` function
- Mint your NFT using the `mint` function
- View the learned path in your wallet!
Screenshot of interacting with the contract in Remix
- `mint(address)`: Mint the NFT to a specified address
- `train(episodes, max_steps, epsilon, alpha, gamma)`: Train the Q-learning algorithm
- `token_uri()`: Generate the SVG visualization of the maze
- Standard ERC-721 functions for NFT compatibility
This project uses Q-learning, a reinforcement learning algorithm, to solve a maze puzzle. The agent:
- Learns through trial and error
- Develops an optimal policy for navigating the maze
- Visualizes its learned solution as an NFT
- Runs entirely on-chain using Arbitrum Stylus
This project was developed for a workshop during the ETHGlobal Agentic Ethereum Hackathon, where Arbitrum was a sponsor offering $10,000 in prizes for innovative projects built on Arbitrum. The workshop demonstrates how Arbitrum Stylus enables running sophisticated algorithms, including AI agents, directly on-chain.
Watch the full workshop recording here to learn about:
- The potential of on-chain AI agents
- Implementing reinforcement learning in smart contracts
- Using Arbitrum Stylus for complex on-chain computation
- Creating interactive NFTs that showcase AI behavior
Feel free to submit issues and enhancement requests!
MIT License - See LICENSE file for details
Note: Replace placeholder image paths with actual screenshots once available. Update the repository URLs as needed.