AI & ML interests

AI engineers

Shrijanagain
posted an update about 1 month ago
view post
Post
4253
sKT-Ai-Labs


Join fast! We will soon publish tokens and everything else, so join and get started. We will be turning off the join request button soon, so join quickly if you want in.
  • 1 reply
·
Shrijanagain
posted an update about 2 months ago
Post
2639
🚀 Be a Part of the Bharat AI Revolution! 🇮🇳

Do you want to give India a new identity in the world of AI?

SKT AI Labs is not just a name, it is a mission: to give the country digital strength and to make the dream of "Viksit Bharat" come true.

Why join us?

1. The country's own AI: We are building models tailored specifically to India's needs and languages.

2. Open collaboration: View our work on our Hugging Face repository, test it, and contribute your own.

3. Technological growth: Whether you are a student, a developer, or a tech enthusiast, this is an excellent opportunity to learn something new and grow with us.

Join here

🔗 sKT-Ai-Labs

Come, let us advance the Bharat AI Revolution together! 💻🔥

#SKTAILabs #DigitalIndia #AIRevolution #ViksitBharat #TechInnovation #JoinTheMission
Shrijanagain
posted an update about 2 months ago
Post
6887
SOME NEW HINDI + ENGLISH DATASETS

🔗
- sKT-Ai-Labs/HIN
- sKT-Ai-Labs/SKT-MIX
- sKT-Ai-Labs/ST-H

Download them, use them, and train your models.

You can also use the ST-x-LIGHTING module for faster training:

pip install ST-x-LIGHT-V11
  • 2 replies
·
Shrijanagain
posted an update about 2 months ago
Post
5617

We are thrilled to announce the launch of SKT-OMNI-CORPUS-2T, a massive-scale, high-quality dataset designed to power the next generation of foundation models (LLMs) from scratch.
Developed at SKT AI LABS, this corpus is not just a collection of data; it's a mission to decentralize high-grade AI training for regional languages and global knowledge.

💎 Key Highlights:

• Massive Scale: Targeting a multi-terabyte architecture for 2T-level tokenization.

• Pure Quality: Curated from 500+ elite sources.

• Structured for MoE: Perfectly sharded into 3.5 GB standardized units (SKT-𝕻 series) for seamless distributed training.

🤝 Open for Collaboration!

We are looking for AI researchers, CUDA engineers, and data scientists to join us on this journey of building Project Surya and the ST-X Series models. Whether it's optimization, custom tokenization, or architecture design, let's build the future together.

Explore the Dataset on Hugging Face:

🔗 https://huggingface.co/datasets/Shrijanagain/SKT-OMNI-CORPUS-146T-V1

DSR -- 🔗 https://huggingface.co/datasets/Shrijanagain/SKT-DSRx10000

#AI #MachineLearning #OpenSource #IndicAI #SKTAILABS #LLM #BigData #HuggingFace #InnovationIndia
Felguk
posted an update 12 months ago
Post
2294
Where has Streamlit gone on Hugging Face?
  • 3 replies
·
John6666
posted an update about 1 year ago
Post
37517
If your Space stops working after restarting, mainly within the last 5 days (https://discuss.huggingface.co/t/my-space-suddenly-went-offline-the-cpu-cannot-restart/151121/22), try some of the following.
1. Add pydantic==2.10.6 to requirements.txt, or upgrade Gradio to the latest version.
2. Upgrade PyTorch to 2.2.0 or later (torch>=2.2.0 for Zero GPU Spaces).
3. Pin Transformers to 4.49.0 or earlier (transformers<=4.49.0 for Spaces using Transformers or Diffusers).
4. Pin huggingface_hub to an old version (huggingface_hub==0.25.2 if an error like "cached_download is not available" occurs or inference does not work properly).
5. Specifying WORKDIR in the Dockerfile may cause the application to fail to start with error 137 (Docker Spaces, https://discuss.huggingface.co/t/error-code-137-cache-error/152177).
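Taken together, pins 1-4 above could be expressed in a Space's requirements.txt roughly like this (a sketch only: which pins you actually need depends on your Space, and the version numbers are simply the ones suggested above, not universal requirements):

```text
# requirements.txt (sketch combining the pins suggested above)
pydantic==2.10.6
torch>=2.2.0
transformers<=4.49.0
huggingface_hub==0.25.2
gradio
```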

About pydantic==2.10.6:
https://discuss.huggingface.co/t/error-no-api-found/146226
https://discuss.huggingface.co/t/internal-server-error-bool-not-iterable/149494

Edit:
Zero GPU space has been upgraded from A100 to H200.
This is likely the reason why older versions of PyTorch are no longer supported.
In fact, an error message to that effect was displayed.
zero-gpu-explorers/README#163
  • 2 replies
·
John6666
posted an update about 1 year ago
John6666
posted an update over 1 year ago
Post
26152
@victor @not-lain There has been a sudden and unusual outbreak of spam postings on the HF Forum that seem to be aimed at relaying online videos and commenting on them. For some reason it also spans multiple languages. I've flagged it too, but I'm not sure the staff will be able to keep up with manual countermeasures going forward.
  • 16 replies
·
John6666
posted an update over 1 year ago
Post
24019
@victor Sorry for the repetitiveness.

I'm not sure if Post is the right place to report such an error, but it seems to be a server error unrelated to the Zero GPU space error the other day, so I don't know where else to report it.

Since this morning, I have been getting a strange error when running inference from a Space in Gradio 3.x.
Yntec (@Yntec) discovered it, but he does not have a Pro subscription, so I am reporting it on his behalf.

The error message is below. Note that 1girl and other common prompts will show cached output, so experiment with unusual prompts.

Thank you in advance.

John6666/blitz_diffusion_error
John6666/GPU-stresser-t2i-error
ValueError: Could not complete request to HuggingFace API, Status Code: 500, Error: unknown error, Warnings: ['CUDA out of memory. Tried to allocate 30.00 MiB (GPU 0; 14.75 GiB total capacity; 1.90 GiB already allocated; 3.06 MiB free; 1.95 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF', 'There was an inference error: CUDA out of memory. Tried to allocate 30.00 MiB (GPU 0; 14.75 GiB total capacity; 1.90 GiB already allocated; 3.06 MiB free; 1.95 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF']
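The allocator hint in the error above (max_split_size_mb) can be applied by setting PYTORCH_CUDA_ALLOC_CONF before PyTorch initializes CUDA. A minimal sketch follows; the value 128 is an arbitrary example, not a setting from the original post:

```python
import os

# Must be set BEFORE torch initializes CUDA, e.g. at the top of app.py.
# max_split_size_mb limits the block sizes the caching allocator will split,
# which can reduce the fragmentation the OOM message complains about.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])
```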

  • 14 replies
·
John6666
posted an update over 1 year ago
Post
2866
@victor

Excuse me.
I would like to report the following bug, or possibly a new specification, which is probably the cause of the fatal stalls occurring in Zero GPU Spaces throughout HF.
Thanks.

zero-gpu-explorers/README#104
  • 3 replies
·