
Preventing Exploits

The system implements multiple mechanisms to ensure integrity and prevent malicious behavior among nodes. These strategies are primarily built around a Proof of Stake (PoS) model and random audits, focusing on discouraging incentive-based attacks and ensuring accurate task completion.

Task Validation

Immediate Validation: Every task completed by a node is immediately reviewed by validator nodes. If the task result fails to meet the expected standards (e.g., poor quality or incorrect results), the node is subject to a direct penalty, resulting in the loss of a portion of its stake.

Focus on Individual Task Quality: This process ensures that each specific task is thoroughly verified before payments and rewards are released, maintaining high standards of output.

Random Audits

Periodic and Randomized Evaluations: In addition to task validation, the system performs random audits to review a node's overall performance. These audits aim to detect dishonest behavior over time, potentially uncovering issues that were not identified during individual task validations.

Penalties for Consistent Misbehavior: If an audit reveals that a node has consistently delivered incorrect results, more severe penalties may be applied, such as a larger stake loss or a decrease in the node's reputation score.

The combination of task validation and periodic audits ensures that nodes maintain high standards of quality both in the short term and over the long term. While task validation focuses on the quality of individual tasks, random audits check the integrity of nodes' behavior over time, providing a comprehensive safeguard against potential exploits.

Penalty Mechanism and Incentive Attack Mitigation with Random Audits

If a node repeatedly performs tasks poorly and its rating decreases, it is penalized by losing part of its stake. This approach also defends the system against Incentive Attacks, where nodes attempt to claim rewards without completing their tasks correctly. To prevent these attacks, the system imposes penalties slightly larger than the rewards nodes could potentially earn, ensuring that the expected value of such attacks is at most zero.

The system periodically audits the performance of nodes by randomly selecting other nodes from the network to validate their work. This is done probabilistically based on a security parameter p, which dictates the likelihood of an audit for any given task. The selection of nodes for auditing is random, but the probability can increase if a higher security level is specified.
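The random-audit selection described above can be sketched in a few lines. This is a minimal illustration, not the protocol implementation; the function names, the auditor count `k`, and the example value `p = 0.2` are all assumptions for demonstration.

```python
import random

def should_audit(p_audit: float) -> bool:
    # Bernoulli trial: each completed task is audited with probability
    # p_audit, the security parameter p described above.
    return random.random() < p_audit

def select_auditors(nodes: list[str], worker: str, k: int) -> list[str]:
    # Pick k auditor nodes uniformly at random, excluding the node
    # whose work is being audited.
    candidates = [n for n in nodes if n != worker]
    return random.sample(candidates, k)

# With p = 0.2, roughly one task in five is selected for an audit.
random.seed(0)
audited = sum(should_audit(0.2) for _ in range(10_000))
```

Raising `p_audit` for higher security levels increases how often any given task is checked, at the cost of more validation work across the network.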

Expected Outcome of Incentive Attacks:

The expected reward for a node attempting to perform n tasks incorrectly, considering the audit probability, can be modeled as:

$E(\text{Reward}) = (1 - p_{\text{audit}}) \times n \times R - p_{\text{audit}} \times n \times P$

Where:

$R$ = base reward
$P$ = base penalty
$p_{\text{audit}}$ = probability of being audited

For incentive attacks to be unprofitable, we require:

$E(\text{Reward}) \leq 0$

And for that, the penalty must satisfy the following condition:

$P \geq \frac{1 - p_{\text{audit}}}{p_{\text{audit}}} \times R$

This ensures that the expected value of attempting incorrect tasks is zero or negative, discouraging nodes from attempting incentive attacks.
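The expected-value model above can be checked directly. The sketch below is illustrative (function names and token amounts are assumptions, not protocol values); it encodes the two formulas exactly as stated.

```python
def expected_attack_reward(n: int, reward: float, penalty: float,
                           p_audit: float) -> float:
    # E(Reward) = (1 - p_audit) * n * R - p_audit * n * P
    return (1 - p_audit) * n * reward - p_audit * n * penalty

def minimum_penalty(reward: float, p_audit: float) -> float:
    # Smallest P satisfying E(Reward) <= 0:
    #   P >= (1 - p_audit) / p_audit * R
    return (1 - p_audit) / p_audit * reward

# With a 20% audit probability and a base reward of 10 tokens, the
# penalty must be at least 40 tokens for cheating to break even or worse.
p_min = minimum_penalty(reward=10.0, p_audit=0.2)  # 40.0
```

At exactly the minimum penalty the attacker's expected reward is zero; any larger penalty makes it strictly negative, regardless of how many tasks `n` the attacker attempts.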

Preventing Validator Collusion:

To avoid collusion among validators, multiple validator nodes participate in each validation process. These validator nodes are selected at random to verify the work of a subset of nodes. The probability of validators successfully colluding can be modeled as:

$P_{\text{Collusion}} = \frac{v}{m}$

Where:

$m$ = total number of validators
$v$ = minimum number of validators required to approve an incorrect task

By increasing the number of validators, the probability of collusion decreases, making it harder for validators to successfully approve incorrect tasks.
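The collusion model as stated is simple enough to express directly. This is a sketch of the $v/m$ formula given above, with illustrative values; the function name and example numbers are assumptions.

```python
def collusion_probability(v: int, m: int) -> float:
    # As modeled above: chance that validators successfully collude,
    # where v validators must approve an incorrect task out of m total.
    return v / m

# Growing the validator set while v stays fixed shrinks the collusion risk:
risk_small = collusion_probability(3, 10)   # 0.3
risk_large = collusion_probability(3, 100)  # 0.03
```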


Last updated 8 months ago
