🤖 opML

For the Versse platform, we use Ora Protocol's opML to run advanced AI models such as Stable Diffusion and Llama2, generating static images when users create scenes, characters, and worlds. OpenAI's Sora will be used to generate video content.

About opML

Introduction

opML (Optimistic Machine Learning), developed by ORA (https://www.ora.io), is a pioneering framework that integrates machine learning with blockchain technology. Using principles similar to optimistic rollups, opML ensures the validity of computations in a decentralized manner. This approach enhances transparency and builds trust in machine learning by enabling on-chain verification of AI computations.

Architecture

opML consists of the following key components:

  1. Fraud Proof Virtual Machine (Off-chain VM): This robust off-chain engine executes machine learning inferences, generating new VM states. When discrepancies occur, the MIPS VM uses a bisection method to identify the precise step where the divergence begins.

  2. opML Smart Contracts (On-chain VM): These contracts verify computational results, ensuring the accuracy of off-chain computations. They allow the execution of a single MIPS instruction, enabling on-chain verification of specific computation steps. This capability is crucial for resolving disputes and maintaining the integrity of off-chain computations (a minimal sketch follows this list).

  3. Fraud Proofs: In case of disputes, fraud proofs generated by the verifier serve as conclusive evidence, highlighting computation discrepancies and aiding resolution through opML smart contracts.
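
To make component 2 concrete, here is a minimal sketch of single-step arbitration over a toy MIPS-like instruction set. The types, opcode, and function names are illustrative assumptions, not ORA's actual on-chain interface.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VMState:
    """Snapshot of the fraud-proof VM that both parties have committed to."""
    pc: int          # program counter
    regs: tuple      # register file

def execute_one_step(state: VMState, instr: tuple) -> VMState:
    """Deterministically apply one toy MIPS-like instruction."""
    op, dst, a, b = instr
    if op == "add":
        regs = list(state.regs)
        regs[dst] = (regs[a] + regs[b]) & 0xFFFFFFFF   # 32-bit wrap-around
        return VMState(pc=state.pc + 4, regs=tuple(regs))
    raise ValueError(f"unsupported opcode: {op}")

def arbitrate_single_step(pre: VMState, instr: tuple, claimed_post: VMState) -> bool:
    """On-chain role: re-execute exactly one disputed step and compare states."""
    return execute_one_step(pre, instr) == claimed_post

# Example: the server claims that after ("add", 0, 1, 2) register 0 holds 12.
pre = VMState(pc=0, regs=(0, 7, 5, 0))
claimed = VMState(pc=4, regs=(12, 7, 5, 0))
assert arbitrate_single_step(pre, ("add", 0, 1, 2), claimed)
```

Because only one instruction is ever re-executed on-chain, the cost of settling a dispute stays small no matter how long the original inference was.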

Verification Game

The verification game involves multiple parties executing the same program and challenging each other to locate disputable steps. The contentious step is sent to the smart contract for verification.
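
A minimal sketch of that bisection, assuming both parties can recompute a state commitment (e.g. a hash) at any step; the function and parameter names are hypothetical.

```python
def find_disputed_step(server_hash, verifier_hash, lo: int, hi: int) -> int:
    """Binary-search the first step whose post-state the two parties dispute.

    `server_hash(i)` and `verifier_hash(i)` return each party's commitment
    to the VM state after step i. Precondition: both agree at `lo` and
    disagree at `hi`. The returned step index is what goes on-chain.
    """
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if server_hash(mid) == verifier_hash(mid):
            lo = mid      # still in agreement up to mid
        else:
            hi = mid      # divergence occurs at or before mid
    return hi             # the single contested step
```

Only that one step is sent to the smart contract for verification, which is what keeps dispute resolution cheap.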

For the system to function correctly, it’s essential to ensure:

  • Deterministic ML Execution: opML guarantees consistent ML execution using fixed-point arithmetic and software-based floating-point operations, eliminating randomness and achieving deterministic outcomes (a fixed-point sketch follows this list).

  • Separate Execution from Proving: opML employs a dual-compilation method: one for optimized native execution and another for fraud-proof VM instructions. This ensures both fast execution and reliable, machine-independent proof.

  • Efficiency of AI Model Inference in VM: Traditional fraud proof systems in optimistic rollups require cross-compiling the entire computation into fraud proof VM instructions, leading to inefficient execution and high memory consumption. opML proposes a multi-phase protocol allowing semi-native execution and lazy loading, significantly speeding up the fraud proof process.
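
To illustrate the deterministic-execution requirement, here is a minimal fixed-point sketch. The Q16.16 format and helper names are our own choices for illustration, not opML's internals.

```python
SCALE = 1 << 16  # Q16.16 fixed point: 16 fractional bits (illustrative choice)

def to_fixed(x: float) -> int:
    """Quantize a float to a fixed-point integer; every node rounds identically."""
    return round(x * SCALE)

def fixed_mul(a: int, b: int) -> int:
    """Multiply two Q16.16 values using only integer arithmetic."""
    return (a * b) // SCALE

def fixed_dot(xs: list, ws: list) -> int:
    """Dot product in fixed point: bit-identical on any CPU, GPU, or in the VM."""
    acc = 0
    for x, w in zip(xs, ws):
        acc += fixed_mul(x, w)
    return acc
```

Because every operation is plain integer arithmetic, the native execution and the fraud-proof VM produce exactly the same bits, which is what makes the challenge game decidable.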

opML Process:

  1. The requester initiates an ML service task.

  2. The server completes the task and commits results on-chain.

  3. The verifier validates the results. If a verifier disputes the results, a verification game (bisection protocol) is initiated with the server to pinpoint the erroneous step.

  4. Arbitration over a single step is conducted via smart contract.
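
Putting the four steps together, here is a highly simplified sketch of the optimistic flow, reusing find_disputed_step from the bisection sketch above. All roles and method names are hypothetical, not ORA's actual APIs.

```python
def opml_task_flow(requester, server, verifier, contract):
    """Optimistic flow: a result stands unless a verifier successfully disputes it."""
    task = requester.submit(contract)                 # 1. requester initiates the task
    result, trace = server.run_inference(task)        # 2. server computes off-chain...
    contract.commit(task, result)                     #    ...and commits the result

    if verifier.agrees_with(task, result):            # 3. verifier recomputes and checks
        return result                                 # no dispute: result becomes final

    # 3. dispute: the bisection game pins down one contested step...
    step = find_disputed_step(server.state_hash, verifier.state_hash,
                              lo=0, hi=len(trace))
    # 4. ...which the smart contract arbitrates in a single on-chain execution.
    return result if contract.arbitrate_step(trace, step) else verifier.result(task)
```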

Multi-phase Verification Game:

This extension of the single-phase verification game optimizes resource use. Traditional single-phase verification cross-compiles the entire ML inference code into fraud-proof VM instructions, which is far less efficient than native execution, and the fraud-proof VM's limited memory restricts the loading of large models.

To address these issues, the multi-phase verification game introduces:

  • Semi-Native Execution: Computation in the VM is only conducted in the final phase, with prior phases utilizing native environments (CPU, GPU, TPU) for computations. This reduces VM reliance, enhancing execution performance to near-native levels.

  • Lazy Loading Design: This technique optimizes memory usage by loading only necessary data keys into the VM memory on-demand, fetching specific data items from external sources as needed, and swapping out data when no longer needed. This approach allows handling large data volumes efficiently without exceeding memory capacity.
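
A minimal sketch of the lazy-loading idea, assuming a hypothetical external fetch function and a least-recently-used swap-out policy:

```python
from collections import OrderedDict

class LazyTensorStore:
    """Keep at most `capacity` tensors in VM memory; fetch others on demand.

    `fetch(key)` pulls a data item (e.g. one layer's weights) from external
    storage; the least-recently-used entry is swapped out when space runs out.
    """
    def __init__(self, fetch, capacity: int):
        self._fetch = fetch
        self._capacity = capacity
        self._cache = OrderedDict()                  # key -> tensor, in LRU order

    def get(self, key):
        if key in self._cache:
            self._cache.move_to_end(key)             # mark as recently used
            return self._cache[key]
        tensor = self._fetch(key)                    # load only when actually needed
        self._cache[key] = tensor
        if len(self._cache) > self._capacity:
            self._cache.popitem(last=False)          # swap out the oldest entry
        return tensor
```

This way the VM only ever holds the data a given step actually touches, rather than the whole model, so large models can be handled without exceeding its memory capacity.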

For a detailed explanation of opML, refer to ORA's research paper.
