RepoAI

RepoAI — The AI-Powered Framework for Blockchain Data Security

Secure. Compliant. Scalable. RepoAI protects your decentralized data infrastructure with cutting-edge AI and cryptographic assurance.

Redefining Blockchain Security with Intelligence

RepoAI delivers an advanced data security framework for Web3 — combining AI, zero-knowledge proofs, and decentralized governance.

Build Trustless Data Systems with RepoAI

From smart contract auditing to zkML-based verification, RepoAI is your AI-first foundation for secure, verifiable blockchain operations.

Advanced Algorithm Structure

We propose a model that combines EfficientLLM with SEKI. Fig. 1 illustrates the overall workflow of EfficientLLM, a pruning-aware pretraining framework designed for building compact, high-performance language models suitable for edge deployment.

The left section defines the pruning space, covering multiple architectural components—including self-attention (K, Q, V, output projections), feed-forward networks (up/down projections), and Transformer stems—allowing flexible and fine-grained structural pruning.

The central module performs global saliency detection, where forward and backward passes are used to compute the importance of each channel. Based on this saliency, the framework dynamically selects less important channels to prune, updating weights either once or over multiple steps.
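The saliency step above can be sketched in miniature. The snippet below is a toy illustration, not EfficientLLM's actual code: it scores each channel with a simple first-order criterion, the sum of |weight × gradient| over the channel's weights, and selects the least salient channels to drop.

```python
# Toy sketch of saliency-driven channel pruning (illustrative only):
# score each channel by sum(|w * g|) over its weights, then drop the
# least important ones.

def channel_saliency(weights, grads):
    """Per-channel importance: sum of |w * g| over the channel's weights."""
    return [sum(abs(w * g) for w, g in zip(ws, gs))
            for ws, gs in zip(weights, grads)]

def prune_channels(weights, grads, n_prune):
    """Return indices of the n_prune least-important channels."""
    scores = channel_saliency(weights, grads)
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    return sorted(order[:n_prune])

# Example: 4 channels with 3 weights each (made-up numbers)
W = [[0.9, -0.8, 0.7], [0.01, 0.02, -0.01], [0.5, 0.4, -0.6], [0.03, -0.02, 0.01]]
G = [[0.1, 0.2, -0.1], [0.5, 0.4, 0.6], [0.2, -0.1, 0.3], [0.1, 0.1, -0.2]]
print(prune_channels(W, G, 2))  # [1, 3]: the channels with the least |w*g| mass
```

A real system would accumulate these scores over forward/backward passes on actual training batches rather than on static arrays.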

The top-right section (a) shows a local second-order weight update, leveraging local Hessian information to perform pruning one channel at a time during training. This approach ensures more stable performance compared to post-training methods like SparseGPT or LLM-Pruner (shown in part b), which prune all channels at once after training.

Finally, part (c) visualizes the concept of continuous parameter dropping, showing how model size gradually decreases as training progresses. Different pruning frequencies (e.g., N=1 vs. N=2) allow control over compression speed and stability.
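Part (c)'s schedule can be illustrated with a small simulation (hypothetical numbers; the real framework removes channels inside the training loop): every N steps a fixed number of channels is dropped until the target size is reached, so N=1 compresses twice as fast as N=2.

```python
def prune_schedule(total_channels, target_channels, drop_per_event, every_n, steps):
    """Simulate continuous parameter dropping: every `every_n` steps,
    remove `drop_per_event` channels until `target_channels` remain.
    Returns the channel count after each training step."""
    sizes, current = [], total_channels
    for step in range(1, steps + 1):
        if step % every_n == 0 and current > target_channels:
            current = max(target_channels, current - drop_per_event)
        sizes.append(current)
    return sizes

# N=1 reaches the target in half the steps of N=2
fast = prune_schedule(1024, 512, 64, every_n=1, steps=10)
slow = prune_schedule(1024, 512, 64, every_n=2, steps=10)
print(fast[-1], slow[-1])  # 512 704
```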

EfficientLLM integrates pruning and training in a unified loop, enabling structure-aware, saliency-driven compression that is architecture-agnostic and efficient for real-world deployment on resource-constrained devices.

Fig. 1 Scalable Pruning-Aware Pretraining for Architecture-Agnostic Edge Language Models

SEKI is a novel large language model (LLM)-based neural architecture search (NAS) method. Inspired by the chain-of-thought (CoT) paradigm in modern LLMs, SEKI operates in two key stages: self-evolution and knowledge inspiration. In the self-evolution stage, the LLM initially lacks sufficient reference examples, so we implement an iterative refinement mechanism that improves architectures based on performance feedback; over time, this process accumulates a repository of high-performance architectures. In the knowledge-inspiration stage, the LLM analyzes common patterns among these architectures to generate new, optimized designs. By combining the two stages, SEKI fully exploits the capacity of LLMs for NAS without requiring any domain-specific data. Experimental results show that SEKI achieves state-of-the-art (SOTA) performance across various datasets and search spaces while requiring only 0.05 GPU-days, outperforming existing methods in both efficiency and accuracy. Furthermore, SEKI demonstrates strong generalization, achieving SOTA-competitive results across multiple tasks.
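The self-evolution loop can be mocked in a few lines of Python. Here `propose()` stands in for the LLM call and `evaluate()` stands in for training a candidate; both are toy stand-ins, not SEKI's actual prompts or reward, but the accumulate-and-refine control flow is the same.

```python
import random

def evaluate(arch):
    """Stand-in for training/validation. A real system trains the
    candidate; this toy fitness just rewards 8 layers and width 256."""
    return -abs(arch["layers"] - 8) - abs(arch["width"] - 256) / 64

def propose(arch, rng):
    """Stand-in for the LLM: mutate the current architecture."""
    new = dict(arch)
    new["layers"] = max(1, arch["layers"] + rng.choice([-1, 1]))
    new["width"] = max(64, arch["width"] + rng.choice([-64, 64]))
    return new

def self_evolve(iterations=50, seed=0):
    rng = random.Random(seed)
    arch = {"layers": 2, "width": 64}
    repository = []  # accumulates (score, arch) pairs, as in stage 1
    for _ in range(iterations):
        score = evaluate(arch)
        repository.append((score, dict(arch)))
        candidate = propose(arch, rng)
        if evaluate(candidate) >= score:  # keep improvements
            arch = candidate
    repository.sort(key=lambda p: p[0], reverse=True)
    return repository[0]  # best (score, arch) found so far

best_score, best_arch = self_evolve()
print(best_score, best_arch)
```

In SEKI the second stage would then prompt the LLM with the top entries of this repository so it can distil their common patterns into fresh designs.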

Fig. 2 Framework of SEKI. SEKI is composed of two stages: self-evolution and knowledge inspiration. In each iteration of the self-evolution, the LLM generates optimization strategies and produces a new, refined architecture by analyzing the current architecture and its performance metrics.

Fig. 3 Prompt framework for Self-Evolution.

Data Encryption Framework

Our laboratory has proposed a large language model framework for blockchain data protection and intelligent data screening.

1/8 ChainLLM-Guard Introduction

A framework that brings LLMs into the blockchain world securely. It filters and analyzes data using large language models — all while protecting user privacy and proving results on-chain.

2/8 Problem & Solution

LLMs are powerful but risky — they can leak sensitive data or hallucinate. ChainLLM-Guard lets LLMs process Web3 data without compromising security or trust.

3/8 Key Features

– On/off-chain data filtering via LLM
– TEE or zkML protected model execution
– End-to-end encryption & access control
– Zero-knowledge proof of inference
– Verifiable outputs on-chain
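The last two features, proof of inference and verifiable outputs, rest on binding an output to its model and input. The sketch below shows only the weakest form of that binding, a plain SHA-256 commitment that a contract could check; a real deployment would replace it with a zkML proof. All identifiers here are illustrative.

```python
import hashlib
import json

def commit_output(model_id, input_hash, output):
    """Bind an inference output to its model and input by hashing the
    triple over a canonical JSON encoding. This digest is what would be
    anchored on-chain (a plain hash commitment, far weaker than a real
    ZK proof of inference)."""
    payload = json.dumps(
        {"model": model_id, "input": input_hash, "output": output},
        sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(payload.encode()).hexdigest()

def verify_output(commitment, model_id, input_hash, output):
    return commit_output(model_id, input_hash, output) == commitment

# Hypothetical model id and input hash, for illustration
c = commit_output("llama-guard-v1", "0xabc", "no risky positions found")
print(verify_output(c, "llama-guard-v1", "0xabc", "no risky positions found"))  # True
print(verify_output(c, "llama-guard-v1", "0xabc", "tampered"))                  # False
```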

4/8 LLM Agent Layer

Built on open-source models such as LLaMA, Mistral, and ChatGLM, the agent layer receives encrypted data and runs reasoning tasks such as:
– "Summarize DAO debates"
– "Flag risky DeFi positions"
– "Audit smart contracts for bugs"

5/8 Privacy Layer Options

Option A: TEE (e.g., Intel SGX, AWS Nitro Enclaves)
– Runs LLM in a secure enclave
– Model never sees raw data
Option B: zkML + FHE
– Fully encrypted inference
– Output comes with a ZK-proof

6/8 Verifiability Layer

– Hashes, Merkle roots, or proofs are stored on-chain
– Smart contracts verify results
– Optional trigger logic: auto-alerts, access control, governance votes
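A minimal sketch of the Merkle-root half of this layer, in Python (illustrative only; the on-chain side would be the same fold written in Solidity): the root is anchored on-chain, and any single result can later be proven against it with a logarithmic-size proof.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root of a binary Merkle tree over leaf hashes (odd node duplicated)."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes from leaf `index` up to the root."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index + 1 if index % 2 == 0 else index - 1
        proof.append((level[sibling], index % 2 == 0))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = h(leaf)
    for sibling, leaf_is_left in proof:
        node = h(node + sibling) if leaf_is_left else h(sibling + node)
    return node == root

records = [b"result-1", b"result-2", b"result-3", b"result-4"]
root = merkle_root(records)      # this digest is what gets anchored on-chain
proof = merkle_proof(records, 2)
print(verify(b"result-3", proof, root))  # True
print(verify(b"forged", proof, root))    # False
```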

7/8 Tech Stack Highlights

– LLMs: LLaMA, Mistral, ChatGLM
– Privacy: SGX / zkML / Lit Protocol
– ZK: zkSync, RiscZero, Aleo
– On-chain: Solidity, Chainlink OCR, EigenLayer

8/8 ChainLLM-Guard Applications

ChainLLM-Guard = LLM intelligence + ZK trust + Web3 openness.
Perfect for:
– DAO tools
– DeFi surveillance
– Smart contract auditing
– Private data markets

🚀 RepoAI Roadmap (2025–2026)

An AI-powered, blockchain-native framework for secure and verifiable data infrastructure.

🟦 Phase 1: MVP Development & Core Encryption Layer

2025Q2

🎯 Goal: Deliver a working prototype for secure data encryption and on-chain anchoring.

  • ✅ Data Encryption SDK: Implement RSA/AES-based client-side encryption with IPFS integration.
  • ✅ Smart Contracts for Data Anchoring: Deploy Solidity contracts to store encrypted data hashes and access policies.
  • ✅ Web3 SDK/API: JavaScript/TypeScript SDK for dApps to encrypt, upload, and query securely.
  • ✅ Basic Role-Based Access Control (RBAC): Allow selected addresses to request decryption access.
  • ✅ Admin Dashboard (Dev UI): Basic frontend for developers to upload, authorize, and audit encrypted data.
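The Phase 1 flow, anchoring a ciphertext hash on-chain and gating decryption through RBAC, can be sketched as follows. This is a Python toy of the control flow only: the real SDK is JavaScript/TypeScript, the registry would be a Solidity contract, encryption (RSA/AES) is stubbed out, and all names and addresses are made up.

```python
import hashlib

class AnchorRegistry:
    """Toy stand-in for the Solidity anchoring contract: maps a content
    hash to its owner and the set of addresses allowed to decrypt."""
    def __init__(self):
        self.records = {}

    def anchor(self, ciphertext: bytes, owner: str) -> str:
        digest = hashlib.sha256(ciphertext).hexdigest()
        self.records[digest] = {"owner": owner, "allowed": {owner}}
        return digest

    def grant(self, digest: str, caller: str, grantee: str) -> None:
        record = self.records[digest]
        if caller != record["owner"]:
            raise PermissionError("only the owner can grant access")
        record["allowed"].add(grantee)

    def can_decrypt(self, digest: str, caller: str) -> bool:
        return caller in self.records.get(digest, {}).get("allowed", set())

registry = AnchorRegistry()
ciphertext = b"\x8f\x02\x9a"  # stand-in; the real SDK encrypts client-side
digest = registry.anchor(ciphertext, owner="0xAlice")
registry.grant(digest, caller="0xAlice", grantee="0xBob")
print(registry.can_decrypt(digest, "0xBob"))  # True
print(registry.can_decrypt(digest, "0xEve"))  # False
```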

🟪 Phase 2: AI & ZK-Powered Verifiable Computation

2025Q3

🎯 Goal: Enable AI-based inference over encrypted data, verifiable by zero-knowledge proofs.

  • 🔄 zkML Integration PoC: Leverage frameworks like EZKL or Giza for lightweight model inference with ZK proofs.
  • 🧠 AI Inference API: Offer RepoAI-powered insights via encrypted model execution.
  • 🔐 ZK Verifier Smart Contract: Deploy Halo2/Groth16 ZK verifier contracts on zkEVM-compatible chains.
  • 🧩 Trusted Execution Environment (TEE) Support: Integrate Intel SGX or AMD SEV to run AI models securely off-chain when ZK is infeasible.

🟩 Phase 3: Protocolization & Decentralization

2025Q4

🎯 Goal: Transform RepoAI into a decentralized, permissionless protocol with token incentives.

  • 🗳️ RepoDAO Governance Launch: Introduce decentralized governance for managing access rules, parameters, and node participation.
  • 💰 Tokenomics & Incentives: Design a token system for model execution, proof validation, and data access rights.
  • 🌐 Validator/Decryptor Node Network: Launch a decentralized node layer to provide AI inference and ZK validation services.
  • 📦 Data NFT Support: Enable tokenization of data assets via Ocean Protocol, with fine-grained access control.

🟨 Phase 4: Ecosystem Expansion

2026Q1

🎯 Goal: Scale to multi-chain environments and enable AI module marketplaces.

  • 🔗 Multi-Chain Deployment: Expand smart contract deployments to Ethereum, Polygon zkEVM, Arbitrum, and Filecoin VM.
  • 🧠 Plugin-based AI Marketplace: Enable third-party developers to offer secure, verifiable AI modules within the RepoAI ecosystem.
  • 🧑‍💼 Enterprise SDK & API Gateway: Provide enterprise-grade APIs and on-premises deployment options for compliance-sensitive use cases.
  • 📊 ZK-Powered Analytics Dashboard: Visualize usage data, security posture, and model audits using zero-knowledge-backed analytics.
  • 🤝 Partnerships & Integration: Integrate with DePIN networks (e.g., io.net, Akash), L2 rollups, and decentralized storage layers.

🧠 Optional Future Add-ons

  • Community Plugin Registry: Open-source ZK/AI plugin registry governed by DAO
  • Staking Mechanism for Proof Validators: Ensures honest behavior in the node network
  • Interoperability with DID/VC: Use decentralized identity and verifiable credentials