Ashish Tanwer
ashishtanwer
AI & ML interests
None yet
Recent Activity
- liked a Space 1 day ago: ling99/OCRBench-v2-leaderboard
- liked a Space 6 days ago: ACE-Step/Ace-Step-v1.5
- liked a dataset 6 days ago: FutureMa/EvasionBench
Organizations
RAG
DataLabelling
LLM
- AnyCoder
  Running • 3.15k • Generate code for web and app projects with AI
- Qwen2.5 Coder Artifacts
  Running • Featured • 273 • Generate and preview code from your app description
- QwQ-32B-Preview
  Running • Featured • 923
- Open LLM Leaderboard
  Running on CPU Upgrade • 13.9k • Track, rank and evaluate open LLMs and chatbots
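The Spaces above can also be located programmatically; a minimal sketch using the huggingface_hub client, searching by the display names listed here (exact repo ids are not given in this collection, so matches are approximate):

from huggingface_hub import HfApi
from itertools import islice

api = HfApi()
for name in ["AnyCoder", "Qwen2.5 Coder Artifacts", "Open LLM Leaderboard"]:
    # search by display name; results may include unrelated Spaces with similar names
    for space in islice(api.list_spaces(search=name), 3):
        print(name, "->", space.id)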
Evals
ClassicalML
Papers and resources for Classical ML
InfraML
Agents
Transformer
- sentence-transformers/all-mpnet-base-v2
  Sentence Similarity • 0.1B • Updated • 24.4M • 1.25k
- Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
  Paper • 1910.10683 • Published • 16
- google-t5/t5-base
  Translation • Updated • 2.29M • 765
- Attention Is All You Need
  Paper • 1706.03762 • Published • 112
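A minimal usage sketch for the sentence-similarity model in this collection, using the sentence-transformers library; the example sentences are placeholders, not part of the collection:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

sentences = [
    "How do I fine-tune a language model?",
    "What is the best way to adapt an LLM to my data?",
]

# encode both sentences into dense vectors and compare with cosine similarity
embeddings = model.encode(sentences, convert_to_tensor=True)
score = util.cos_sim(embeddings[0], embeddings[1])
print(f"cosine similarity: {score.item():.3f}")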
DataCleaning
Dataset
- The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora with Web Data, and Web Data Only
  Paper • 2306.01116 • Published • 43
- HuggingFaceFW/fineweb
  Viewer • Updated • 52.5B • 194k • 2.67k
- tiiuae/falcon-refinedweb
  Viewer • Updated • 968M • 13.4k • 893
- LLaMA: Open and Efficient Foundation Language Models
  Paper • 2302.13971 • Published • 20
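A sketch of streaming a few records from the HuggingFaceFW/fineweb dataset listed above with the datasets library; streaming avoids downloading the full corpus, and the "text" and "url" field names are assumptions about the schema:

from datasets import load_dataset
from itertools import islice

# stream instead of downloading the multi-terabyte corpus
ds = load_dataset("HuggingFaceFW/fineweb", split="train", streaming=True)

for record in islice(ds, 3):
    # "url" and "text" are assumed field names; inspect record.keys() to confirm
    print(record.get("url"), len(record.get("text", "")))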
Training
- Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
  Paper • 1910.10683 • Published • 16
- AutoTrain: No-code training for state-of-the-art models
  Paper • 2410.15735 • Published • 59
- LoRA Land: 310 Fine-tuned LLMs that Rival GPT-4, A Technical Report
  Paper • 2405.00732 • Published • 122
- LoRA: Low-Rank Adaptation of Large Language Models
  Paper • 2106.09685 • Published • 58
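A minimal sketch of applying LoRA (paper 2106.09685 above) with the peft library; the base model (gpt2) and the target_modules choice are illustrative, not prescribed by this collection:

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder base model

lora_cfg = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling applied to the update
    lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2 attention projection; varies by architecture
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only the LoRA adapter weights are trainable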
Diffusion
DataCrawling