
Training Process


Last updated 7 months ago

Training Process Overview

The training process is essential for adapting models to new voice data and improving the quality of the Text-to-Speech (TTS) output. This process ensures the integration of user-specific voice characteristics while maintaining privacy and efficiency across decentralized nodes.

Step 1: Client Data Submission and Smart Contract Initialization

The client specifies the model characteristics and submits training data (e.g., voice samples). This data is encrypted before submission, ensuring privacy.

The smart contract manages the distribution of training tasks and holds client payment in escrow until successful validation.
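The escrow flow in this step can be sketched as follows. This is a minimal illustration, not the production contract: the class and method names are hypothetical, and the real contract would run on-chain.

```python
import hashlib

class TrainingEscrow:
    """Toy model of Step 1: the contract records the encrypted submission
    and locks the client's payment until validation succeeds."""

    def __init__(self):
        self.tasks = {}

    def submit(self, client: str, encrypted_data: bytes, payment: int) -> str:
        # Derive a task id from the ciphertext; the plaintext never leaves the client.
        task_id = hashlib.sha256(encrypted_data).hexdigest()[:16]
        self.tasks[task_id] = {"client": client, "payment": payment, "released": False}
        return task_id

    def settle(self, task_id: str, validated: bool) -> dict:
        # Funds are released to the nodes only when validators approve the training.
        task = self.tasks[task_id]
        task["released"] = validated
        return task
```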

Step 2: Training Cost Calculation

The system calculates the training cost by selecting one node from the candidate training nodes and one from the candidate validator nodes. The selection balances each node's quality score against its individual cost.

$$\text{Cost}_{\text{Training}} = \min_{i \in N_T} \left( \frac{C_i}{Q_i} \right) + \min_{j \in N_V} \left( \frac{C_j}{Q_j} \right)$$

Where:

* $C_i$ = cost of node $i$
* $Q_i$ = quality score of node $i$
* $N_T$ = subset of candidate training nodes
* $N_V$ = subset of candidate validator nodes
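The cost formula above can be sketched directly in code. This is an illustrative implementation under the assumption that each candidate node is represented as a `(cost, quality_score)` pair:

```python
def training_cost(training_nodes, validator_nodes):
    """Compute Cost_Training = min_i(C_i / Q_i) + min_j(C_j / Q_j).

    Each node is a (cost, quality_score) pair; dividing cost by quality
    favors nodes that are cheap relative to how reliable they are.
    """
    cheapest_trainer = min(c / q for c, q in training_nodes)
    cheapest_validator = min(c / q for c, q in validator_nodes)
    return cheapest_trainer + cheapest_validator
```

For example, `training_cost([(10, 2), (9, 3)], [(4, 1), (6, 3)])` picks the 9-token trainer (ratio 3.0) and the 6-token validator (ratio 2.0), giving a total of 5.0.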

Step 3: Client Authorization and Payment

If the client accepts the cost, they transfer payment tokens to the smart contract.

The client provides temporary decryption keys to selected nodes, allowing them to use the data for this specific training session only.
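A session-scoped key grant of this kind might look like the following sketch. The `SessionKeyGrant` class and its expiry mechanism are assumptions for illustration; the actual network would use proper key wrapping and on-chain revocation rather than a local timer.

```python
import secrets
import time

class SessionKeyGrant:
    """Toy sketch of a temporary decryption grant: a node can fetch the
    key only while the training session (ttl seconds) is still open."""

    def __init__(self, key: bytes, ttl: float):
        self.key = key
        self.expires_at = time.time() + ttl

    def fetch(self) -> bytes:
        # After expiry the key is unrecoverable through this grant,
        # so the data cannot be reused outside this training session.
        if time.time() >= self.expires_at:
            raise PermissionError("training session ended; key revoked")
        return self.key

# The client mints a fresh key per session and hands nodes a grant, not the raw key.
grant = SessionKeyGrant(secrets.token_bytes(32), ttl=3600)
```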

Step 4: Training Process Execution

Training nodes use optimization algorithms to adjust the model weights according to the provided voice samples and details.

During training, nodes perform intermediate evaluations using test samples to ensure the model is converging towards the desired output, sharing only the encrypted evaluation metrics with validators.

Step 5: Model Validation

Validators assess the accuracy and consistency of the updated model by applying it to a set of test inputs and comparing the results against expected outputs.

Validators reach a decision using the following voting mechanism:

$$R_{\text{train}} = \arg\max_{r \in \{0,1\}} \sum_{v \in N_V} \mathbb{1}_{r_v = r}$$

Where:

* $r_v$ = vote from validator $v$
* $R_{\text{train}}$ = final validation result
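The voting rule above is a simple majority over binary ballots, which can be sketched as follows (tie-breaking is left unspecified by the formula, so this sketch does not define it either):

```python
from collections import Counter

def validation_result(votes):
    """R_train = argmax over r in {0, 1} of the number of validators voting r.

    `votes` holds one ballot (0 = invalid, 1 = valid) per validator in N_V;
    the result with the most matching votes wins.
    """
    return Counter(votes).most_common(1)[0][0]
```

For example, `validation_result([1, 1, 0])` returns 1, while `validation_result([0, 1, 0, 0])` returns 0.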

Step 6: Reward and Penalty Distribution

After the validation result is final, rewards and penalties are distributed:

$$\text{Reward}_i = \begin{cases} R \times Q_i, & \text{if training is validated correctly} \\ -P \times Q_i, & \text{if training is invalid} \end{cases}$$

Where:

* $R$ = base reward
* $P$ = base penalty
* $Q_i$ = quality score of node $i$

Nodes that successfully complete the training receive an increase in their reputation score, which improves their probability of being selected for future tasks.
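The reward rule can be sketched as a small function. The default values for $R$ and $P$ below are placeholders, not documented network parameters:

```python
def node_reward(quality_score, validated, base_reward=1.0, base_penalty=1.0):
    """Reward_i = R * Q_i on success, -P * Q_i on failure.

    Scaling both outcomes by Q_i means high-quality nodes earn more per
    task but also stake more on delivering a valid result.
    """
    if validated:
        return base_reward * quality_score
    return -base_penalty * quality_score
```

For example, a node with quality score 2.0 earns `node_reward(2.0, True)` = 2.0 on a validated run and loses `node_reward(2.0, False, base_penalty=0.5)` = -1.0 on an invalid one.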

Step 7: Weights Encryption and Distribution

Once validated, the updated model is distributed across training nodes in the network for future TTS tasks, ensuring consistency and availability.

The updated model is encrypted before being stored and requires client permission for any future usage, ensuring continuous privacy control.
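The permission gate described above can be sketched like this. The class and its in-memory permission set are illustrative assumptions; the real network would enforce access through encryption keys and on-chain authorization.

```python
class EncryptedModelStore:
    """Toy sketch of Step 7 storage: weights stay encrypted at rest, and a
    node can load them only after the owning client grants permission."""

    def __init__(self):
        self.models = {}          # model_id -> encrypted weights blob
        self.permissions = set()  # (model_id, node_id) pairs authorized by the client

    def store(self, model_id: str, encrypted_weights: bytes):
        self.models[model_id] = encrypted_weights

    def grant(self, model_id: str, node_id: str):
        # Only the client would be able to call this in the real system.
        self.permissions.add((model_id, node_id))

    def load(self, model_id: str, node_id: str) -> bytes:
        if (model_id, node_id) not in self.permissions:
            raise PermissionError("client has not authorized this node")
        return self.models[model_id]
```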

Clarification: Processor nodes can use different models to perform the TTS task, so each Training Node must specify which model its weights are compatible with. For efficiency, a single node could serve as both a Training Node and a Processor Node for its own model. However, we believe a modular approach enables greater decentralization and scalability.

Training Process Diagram