jefffffff9 committed on
Commit cc82bd8 · Parent(s): d0e28fa
Remove stale Qwen references from minimal-baseline docstrings
Runtime defaults already point to CohereLabs/aya-expanse-32b; only docstrings / usage examples still named Qwen. Cosmetic cleanup only.
- app_minimal.py +2 -2
- src/llm/minimal_client.py +1 -1
app_minimal.py
CHANGED
@@ -1,6 +1,6 @@
 """Minimal baseline Gradio entry point for the Month 1-3 rebuild.
 
-Wires the simplest possible slice: Whisper (zero-shot) ->
+Wires the simplest possible slice: Whisper (zero-shot) -> Aya-Expanse -> MMS-TTS.
 No LoRA adapters, no memory loop, no speaker ID, no voice cloning, no IoT,
 no phrase matcher. Used for field testing and building a real-user eval set.
 
@@ -10,7 +10,7 @@ Run locally:
 HF_TOKEN=hf_xxx python app_minimal.py
 
 Environment variables (all optional except HF_TOKEN, which is needed for the
-
+HF Serverless LLM call):
 HF_TOKEN — HuggingFace token with read access
 LLM_MODEL_ID — default "CohereLabs/aya-expanse-32b"
 (23-language multilingual, strong African-language coverage)
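The environment-variable contract documented in the docstring above can be sketched as a small config loader. This is an illustrative sketch, not repo code: the `load_minimal_config` helper, its error message, and the `env` override parameter are all assumptions; only the variable names and the default model ID come from the docstring.

```python
import os

# Default documented in the app_minimal.py docstring.
DEFAULT_LLM_MODEL_ID = "CohereLabs/aya-expanse-32b"


def load_minimal_config(env=None):
    """Read the baseline app's settings from environment variables.

    HF_TOKEN is required (it backs the HF Serverless LLM call);
    LLM_MODEL_ID falls back to the documented default.
    Hypothetical helper for illustration -- not part of the repo.
    """
    env = os.environ if env is None else env
    token = env.get("HF_TOKEN")
    if not token:
        raise RuntimeError("HF_TOKEN is required for the HF Serverless LLM call")
    return {
        "hf_token": token,
        "llm_model_id": env.get("LLM_MODEL_ID", DEFAULT_LLM_MODEL_ID),
    }
```

With only `HF_TOKEN=hf_xxx` set, this yields the Aya-Expanse default; setting `LLM_MODEL_ID` overrides it without a code change.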
src/llm/minimal_client.py
CHANGED
@@ -126,7 +126,7 @@ class MinimalClient:
 """Dialect-anchored plain-text LLM client over HF Serverless Inference.
 
 Usage:
-    client = MinimalClient(model_id="
+    client = MinimalClient(model_id="CohereLabs/aya-expanse-32b", hf_token=TOK)
     reply = client.chat("Good morning", target_lang="bam")
     # → "I ni sɔgɔma. I ka kɛnɛ wa?"
 """
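A client matching the usage shown in the docstring could be sketched as below. This is an approximation under stated assumptions, not the repo's implementation: the dialect-anchoring system-prompt wording, the `LANGUAGE_NAMES` map, and the use of `huggingface_hub.InferenceClient.chat_completion` are all assumptions; only the constructor/`chat` signatures come from the docstring.

```python
# Illustrative sketch of a MinimalClient-style wrapper; prompt wording
# and the language-name map are assumptions, not repo code.
LANGUAGE_NAMES = {"bam": "Bambara", "wol": "Wolof", "hau": "Hausa"}


class MinimalClient:
    """Plain-text chat client over HF Serverless Inference (sketch)."""

    def __init__(self, model_id, hf_token):
        self.model_id = model_id
        self._hf_token = hf_token

    def _system_prompt(self, target_lang):
        # Anchor the model to the target language/dialect; wording is
        # an assumption for illustration.
        name = LANGUAGE_NAMES.get(target_lang, target_lang)
        return f"Reply only in {name} ({target_lang}), in plain text with no markup."

    def chat(self, text, target_lang):
        # Imported lazily so the prompt logic stays testable offline.
        from huggingface_hub import InferenceClient

        client = InferenceClient(model=self.model_id, token=self._hf_token)
        resp = client.chat_completion(
            messages=[
                {"role": "system", "content": self._system_prompt(target_lang)},
                {"role": "user", "content": text},
            ],
            max_tokens=256,
        )
        return resp.choices[0].message.content
```

Keeping the dialect anchor in a single system message (rather than rewriting the user turn) is one simple way to satisfy "dialect-anchored" while leaving the user's text untouched.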