
    RPC Orchestration

    Learn about the available routing methods and how to configure multiple RPC endpoints.

    By leveraging a robust routing mechanism and the ability to set up multiple RPC endpoints, Ironforge ensures seamless handling of service disruptions and minimizes downtime.

    You can find these settings in the RPC Orchestration section:

    Screenshot of RPC Orchestration

    When making changes, make sure you have selected the correct cluster (Mainnet or Devnet).

    Routing Methods

    Ironforge offers two routing methods: Round Robin and Parallel. Each method has its own advantages and considerations.

    Screenshot of RPC Routing

    Round Robin

    The Round Robin routing method follows a sequential order when sending requests to RPC endpoints. When a request is made, Ironforge forwards it to the first available RPC endpoint in the configured list. If the first endpoint responds successfully, Ironforge uses that response. However, if the first endpoint encounters an error or is unavailable, Ironforge automatically moves to the next available endpoint in the sequence. This process continues until a successful response is obtained or all endpoints have been attempted.
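    The failover sequence described above can be sketched as follows. This is an illustration of the logic only: Ironforge performs this routing server-side, so you never implement it yourself. Endpoints are modeled as async functions to keep the control flow easy to follow.

```typescript
// Illustrative sketch of Round Robin failover (Ironforge does this server-side).
type Endpoint<T> = () => Promise<T>;

async function roundRobin<T>(endpoints: Endpoint<T>[]): Promise<T> {
  let lastError: unknown;
  for (const endpoint of endpoints) {
    try {
      // Use the first successful response.
      return await endpoint();
    } catch (err) {
      // On error or unavailability, fall through to the next endpoint in the list.
      lastError = err;
    }
  }
  // All endpoints were attempted without a successful response.
  throw lastError;
}
```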

    Parallel

    The Parallel routing method is designed to maximize performance and responsiveness. When a request is received, Ironforge sends it to all available RPC endpoints simultaneously, uses the response from the first endpoint that responds successfully, and discards the responses from the remaining endpoints. This approach leverages parallel processing to reduce response times. However, it's important to note that the Parallel method may incur higher costs, since every configured endpoint processes each request.

    Which method is better?

    The choice between Round Robin and Parallel routing depends on your application's specific requirements. Round Robin offers simplicity and predictability in load distribution and is well suited to failover logic in case the primary node goes down, while Parallel routing leverages concurrency for faster response times. Whichever method you choose, you can cache frequently accessed data by passing a header with your request, which can both reduce request latency and lower RPC costs.
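    A request with a caching header might be built as follows. The header name "X-Cache" and its value below are placeholders, not Ironforge's actual header; check your dashboard or the Ironforge docs for the real name and accepted values.

```typescript
// Sketch of attaching a cache header to a Solana JSON-RPC request.
// "X-Cache" is a hypothetical placeholder header name, used for illustration only.
interface RpcRequestInit {
  method: "POST";
  headers: Record<string, string>;
  body: string;
}

function buildCachedRpcRequest(rpcMethod: string, params: unknown[]): RpcRequestInit {
  return {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Cache": "true", // placeholder; substitute the actual Ironforge cache header
    },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: rpcMethod, params }),
  };
}

// Usage (endpoint URL comes from your Ironforge dashboard):
// await fetch(IRONFORGE_RPC_URL, buildCachedRpcRequest("getLatestBlockhash", []));
```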

    It's important to consider factors like node performance, latency sensitivity, redundancy, and network bandwidth when deciding. By understanding these strategies and tailoring your approach to your application's needs, you can optimize the performance of your application and, in turn, the end-user experience.

    Configuring RPC Endpoints

    In addition to selecting the desired routing method, Ironforge allows you to set up multiple RPC endpoints with no upper limit. This feature enables you to distribute your application's workload across multiple endpoints, increasing scalability and fault tolerance. By configuring multiple RPC endpoints, you can handle higher traffic volumes and ensure redundancy in case of endpoint failures.
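    Conceptually, the settings on this page cover a cluster, a routing method, and an unbounded endpoint list. The shape below is a hypothetical sketch for illustration; the actual configuration lives in the Ironforge dashboard, not in code you write.

```typescript
// Hypothetical shape of an orchestration configuration (illustrative only;
// the real settings are managed through the Ironforge dashboard).
type Cluster = "mainnet" | "devnet";
type RoutingMethod = "round-robin" | "parallel";

interface OrchestrationConfig {
  cluster: Cluster;
  routing: RoutingMethod;
  endpoints: string[]; // no upper limit on how many endpoints you configure
}

// Example sanity checks a config like this would imply.
function validateConfig(config: OrchestrationConfig): string[] {
  const problems: string[] = [];
  if (config.endpoints.length === 0) {
    problems.push("at least one RPC endpoint is required");
  }
  if (config.endpoints.length < 2) {
    // Redundancy and failover only help once a second endpoint exists.
    problems.push("add a second endpoint to get failover and redundancy");
  }
  return problems;
}
```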

    Screenshot of RPC Endpoints