Official Rules (Draft: July 18, 2025)

Note: The Official Rules are in draft form and will be updated by July 21, 2025.

Table of Contents

  1. Eligibility and Registration
  2. Codec Specifications
  3. Training and Data Usage
  4. Processing Rules
  5. Submission Process
  6. Evaluation and Winner Selection
  7. Legal and Compliance

1. Eligibility and Registration

  1. Participant Eligibility: This Challenge is open to participants (“Participants” or “you”) who are age 18 years or older at the time of entry, across academia and industry, as well as individual contributors. Participants who are employees or internally contracted vendors of governments and government-affiliated companies or organizations are not eligible for any monetary grants. Participants may enter as an individual or as a team or group. This Challenge is not open to: (1) employees or internally contracted vendors of Cisco or its parent/subsidiaries, agents, and affiliates; (2) the immediate family members or members of the same household of any such employee or vendor; (3) anyone professionally involved in the development or administration of this Challenge; or (4) any employee whose employer’s guidelines or regulations do not allow entry in the Challenge. This Challenge is not open to participants in the province of Quebec in Canada. In addition, residents of Cuba, Iran, Syria, North Korea, Myanmar (formerly Burma), Russia, the occupied regions of Ukraine, and Belarus are not eligible to participate. This Challenge is void in these countries and where otherwise prohibited or restricted by law.

  2. Each team may consist of one or more individuals.

  3. Teams must register between July 10 and September 9, 2025, through the registration link on the challenge website: https://lrac.short.gy/participate#registration.

  4. Each team may submit up to one system per track for the final evaluation.

2. Codec Specifications

Participants must submit systems that meet the following technical requirements:

  1. Sampling rate:

    1. Systems must support input and output at a 24 kHz audio sampling rate.
  2. Bitrate:

    1. Only constant-bitrate systems are permitted; variable or adaptive bitrate coding and entropy-coding optimization are not allowed.

    2. A single system must support both of the following modes:

      • Ultralow bitrate mode: budget up to 1 kbps

      • Low bitrate mode: budget up to 6 kbps

    3. The same decoder must support both modes, as well as a mixture of the two, within a single inference run.

  3. Latency:

    1. The total latency must be equal to or lower than:

      • Track 1 – transparency codecs: 30 ms

      • Track 2 – speech enhancement codecs: 50 ms
        where total latency is defined as:
        total latency = algorithmic latency + buffering latency
        Algorithmic latency is the delay introduced by the processing algorithms and their internal operations, excluding buffering, while buffering latency is the delay resulting from processing audio in fixed-size blocks or frames. (A worked example covering the bitrate, latency, and compute budgets follows this list.)

  4. Compute complexity:

    1. Track 1 – transparency codecs:

      • Total compute ≤ 700 MFLOPS

      • Receive-side compute ≤ 300 MFLOPS

    2. Track 2 – speech enhancement codecs:

      • Total compute ≤ 2600 MFLOPS

      • Receive-side compute ≤ 600 MFLOPS
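
For concreteness, the per-frame bit budget implied by the bitrate caps follows directly from the frame rate: at a 20 ms frame (50 frames per second, chosen here purely as an example), the 1 kbps mode leaves 20 bits per frame and the 6 kbps mode 120 bits per frame. The sketch below is a minimal, unofficial illustration of checking a candidate configuration against the Section 2 limits; the frame size, lookahead, and FLOP counts are hypothetical placeholders, not values mandated by these rules.

```python
# Unofficial sanity-check sketch for the Section 2 limits.
# All configuration values below are hypothetical examples,
# not numbers mandated by the challenge rules.

TRACK_LIMITS = {
    "track1": {"latency_ms": 30.0, "total_mflops": 700.0, "receive_mflops": 300.0},
    "track2": {"latency_ms": 50.0, "total_mflops": 2600.0, "receive_mflops": 600.0},
}

def bits_per_frame(bitrate_bps: float, frame_ms: float) -> float:
    """Constant-bitrate budget available for one coded frame."""
    return bitrate_bps * frame_ms / 1000.0

def total_latency_ms(frame_ms: float, lookahead_ms: float) -> float:
    """total latency = algorithmic latency + buffering latency.
    Here the lookahead stands in for algorithmic latency and the
    frame (block) size for buffering latency."""
    return lookahead_ms + frame_ms

def check(track: str, frame_ms: float, lookahead_ms: float,
          total_mflops: float, receive_mflops: float) -> None:
    lim = TRACK_LIMITS[track]
    assert total_latency_ms(frame_ms, lookahead_ms) <= lim["latency_ms"]
    assert total_mflops <= lim["total_mflops"]
    assert receive_mflops <= lim["receive_mflops"]

# Example: a 20 ms frame with 10 ms lookahead gives 30 ms total latency,
# meeting the Track 1 cap; the per-frame budgets at that frame size are
# 20 bits (1 kbps mode) and 120 bits (6 kbps mode).
check("track1", frame_ms=20.0, lookahead_ms=10.0,
      total_mflops=650.0, receive_mflops=280.0)
print(bits_per_frame(1000, 20.0), bits_per_frame(6000, 20.0))  # 20.0 120.0
```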

3. Training and Data Usage

  1. Training, validation, and hyperparameter tuning must be performed strictly on the designated challenge speech and noise datasets listed on the challenge website: https://lrac.short.gy/datasets.

  2. Use of publicly available pre-trained models (e.g., HuBERT, Wav2Vec) is allowed only if they were publicly available before the challenge start date. These models may be fine-tuned only on the allowed challenge datasets.

  3. Prohibited:

    1. Using any other datasets for training, fine-tuning, or hyperparameter tuning.

    2. Using dev/test data for any purpose other than evaluation (e.g., no training, fine-tuning, or checkpoint selection on dev/test).

    3. Techniques like domain adaptation, self-training, or test-time adaptation involving the dev/test data.

    4. To ensure fair evaluation, models with long past-context receptive fields must not have access to the entire test utterance or to repeated versions of it within their receptive window. Techniques such as concatenating multiple copies of the test utterance, reflective/mirrored padding, or any method that artificially repeats or extends the test input are strictly prohibited. Instead, models should employ neutral padding strategies, such as zero padding, when necessary (illustrated in the sketch below).
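
As an illustration of the permitted neutral padding, the snippet below left-pads a test utterance with zeros to fill a model's past-context receptive field; the receptive-field length and the use of NumPy are assumptions for illustration only, and the prohibited alternatives appear solely as comments.

```python
import numpy as np

def pad_for_receptive_field(x: np.ndarray, receptive_field: int) -> np.ndarray:
    """Left-pad a test utterance with zeros (neutral padding) so a model
    with the given past-context receptive field sees no artificial
    repetitions of the input. `receptive_field` is a hypothetical example."""
    return np.concatenate([np.zeros(receptive_field, dtype=x.dtype), x])

# Allowed:    zero padding, as above.
# Prohibited: np.pad(x, (receptive_field, 0), mode="reflect")  # mirrored padding
#             np.tile(x, 3)  # concatenating copies of the test utterance
```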

4. Processing Rules

  1. Test utterances must be processed in a single pass only (no iterative enhancement).

  2. Any system architecture is allowed (e.g., traditional, neural, hybrid) as long as all challenge rules are met.

  3. Systems may be: (i) end-to-end speech enhancement codecs, (ii) traditional codecs with pre-/post-processing, or (iii) any other valid configuration.

5. Submission Process

  1. Submissions must include:

    1. The complete system audio output on the provided blind (withheld) test set, matching the file and folder structure of the input set provided by the organizers (a sketch of such a structure check appears at the end of this section).

    2. A detailed system description that allows for reproducibility. This should cover:

      • Data curation, augmentation, and splits

      • Model architecture, loss functions

      • Hyperparameter tuning

      • Any pre-trained models used

      • Any training or fine-tuning stages

      • Compute complexity, latency, number of parameters, and the actual bitrate used for each system submitted

  2. Optional but encouraged: public release of model checkpoints and/or source code for reproducibility.

  3. Submission deadline: October 1, 2025

  4. The submission portal will become available after participant registration.

  5. For participation information, please refer to https://lrac.short.gy/participate, or reach out to the challenge organizers via lrac-challenge@cisco.com.
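
The helper below is a hypothetical sketch, not an official validator, of the structure check referenced in item 1 above: it confirms that every audio file in the organizers' input set has a counterpart at the same relative path in the submission output. The directory names and the .wav extension are assumptions for illustration.

```python
from pathlib import Path

def mirrors_input_structure(input_dir: str, output_dir: str) -> bool:
    """Return True if the submission output mirrors the file and folder
    structure of the organizers' input set. The .wav extension is an
    assumption for illustration."""
    in_root, out_root = Path(input_dir), Path(output_dir)
    expected = {p.relative_to(in_root) for p in in_root.rglob("*.wav")}
    produced = {p.relative_to(out_root) for p in out_root.rglob("*.wav")}
    return expected == produced

# Example with hypothetical paths:
# assert mirrors_input_structure("blind_test_in", "blind_test_out")
```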

6. Evaluation and Winner Selection

  1. Submissions will be evaluated using crowdsourced listening tests (refer to crowdsourced evaluation battery table for details: https://lrac.short.gy/evaluation). Performance assessment will focus on the following areas:

    1. Track 1 – transparency codecs:

      • Transparency in clean conditions

      • Robustness in mild noise and reverberation

    2. Track 2 – speech enhancement codecs:

      • Transparency in clean conditions

      • Robustness in noise and reverberation

      • Denoising and dereverberation performance

  2. Entries will be ranked based on weighted results of crowdsourced evaluations (refer to crowdsourced evaluation battery table for details: https://lrac.short.gy/evaluation). The winning team will be the one with the top-ranked entry.

  3. The entries will also be evaluated using objective metrics:

    1. Metric selection: to be advised.

    2. On open validation set: throughout the challenge runtime

      • Based on a withheld subset from the training datasets: synthetic examples, reference signal available.

    3. On blind test set: at challenge conclusion

      • Same test set as used in the crowdsourced listening tests: real-world recordings, no reference.

Note: the objective metric results will be provided strictly as advisory and will not be used to determine the final ranking or winner selection. They will further serve to give the research community insight into which metrics (and under which conditions) are effective predictors of subjective impressions.

  4. The final results, including ranking and winners, will be announced on October 14, 2025.

7. Legal and Compliance

  1. All submissions must be original work of the participants and must include all necessary consents and licenses. Submissions must not violate the privacy, intellectual property, or other legal rights of any individual or entity.

  2. Cisco retains the right to use, review, assess, and analyze all submitted data, including audio files, for any purpose, at any time, without requiring additional permissions. Cisco does not claim ownership of your models but reserves the right to use submitted test data and results for evaluation, publication, and analysis purposes.

  3. The organizers reserve the right to update or clarify rules at any time. Registered participants will be notified of any major changes.
