adaamko committed
Commit 0a80056 · verified · 1 Parent(s): 8f5689d

Add related work context

Files changed (1): README.md +3 -1
README.md CHANGED
````diff
@@ -33,7 +33,9 @@ A tool output pruner for coding agents. When an agent runs a tool (pytest, grep,
 Tool output (500 lines) → Squeez → Relevant lines (30 lines) → Agent context
 ```
 
-This model is [Qwen 3.5 2B](https://huggingface.co/Qwen/Qwen3.5-2B) fine-tuned to extract verbatim relevant lines from tool output given a task-specific query. It's trained on real software engineering tool output from SWE-bench (test logs, grep results, build errors, git diffs, stack traces, etc.), not generic text.
+Existing context pruning tools ([SWE-Pruner](https://github.com/Ayanami1314/swe-pruner), [Zilliz Semantic Highlight](https://huggingface.co/zilliz/semantic-highlight-bilingual-v1), [Provence](https://arxiv.org/abs/2501.16214)) are built for source code or document paragraphs. They don't handle the mixed, unstructured format of tool output (stack traces interleaved with passing tests, grep matches with context lines, build logs with timestamps).
+
+This model is [Qwen 3.5 2B](https://huggingface.co/Qwen/Qwen3.5-2B) fine-tuned to extract verbatim relevant lines from tool output given a task-specific query. It's trained specifically on 14 types of tool output from real SWE-bench workflows.
 
 - 2B parameters, runs on a single GPU, serves via vLLM
 - Outperforms Qwen 3.5 35B A3B zero-shot by +13% Span F1
````
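The README describes the pruner as a model served via vLLM that takes a task-specific query plus raw tool output and returns the relevant lines verbatim. A minimal sketch of packaging such a request for an OpenAI-compatible endpoint is below; note that the prompt wording, the served model name, and the `build_prune_request` helper are all illustrative assumptions, not the model's documented interface.

```python
def build_prune_request(query: str, tool_output: str,
                        model: str = "Qwen/Qwen3.5-2B") -> dict:
    """Package a task query plus raw tool output as a chat-completion
    request body for an OpenAI-compatible server (e.g. vLLM).

    The prompt format here is a hypothetical example; the fine-tuned
    pruner may expect a different instruction template.
    """
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": (
                    f"Query: {query}\n\n"
                    f"Tool output:\n{tool_output}\n\n"
                    "Return only the verbatim lines relevant to the query."
                ),
            }
        ],
        # Deterministic decoding: extraction should not be creative.
        "temperature": 0.0,
    }


# Example: prune a (fabricated) pytest failure log down to what matters.
req = build_prune_request(
    query="why does test_login fail?",
    tool_output="PASSED tests/test_home.py\nFAILED tests/test_login.py - AssertionError",
)
```

The request dict can then be POSTed to the server's `/v1/chat/completions` route with any HTTP client; keeping the builder pure makes the prompt construction easy to unit-test without a running server.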