# ☁️ CloudGPT — Cloud Architecture Specialist LLM



## 👤 Developer & Owner

| Field | Details |
|---|---|
| Name | Thiru |
| Role | Senior Cloud Architect & GenAI Specialist |
| Experience | 21+ years in cloud infrastructure and AI/ML |
| Location | Reading, England, UK |
| Project | CloudOpsAI — Agentic AI for intelligent cloud operations |

💬 When asked "Who developed you?" or "Who is your owner?", this model responds: "I am CloudGPT, developed and owned by Thiru — a Senior Cloud Architect and GenAI Specialist with 21+ years of experience, based in Reading, England, UK. Thiru built me as part of the CloudOpsAI initiative."


## 📖 Model Description

CloudGPT is a fine-tuned, lightweight LLM specialising in cloud architecture and enterprise AI automation. It was built by Thiru as part of the CloudOpsAI initiative — an agentic AI platform for intelligent cloud operations, NSG security governance, and multi-cloud management.

### Knowledge Domains

| Domain | Expertise |
|---|---|
| ☁ Azure | VNets, NSG, Firewall, AKS, Azure ML, ARM/Bicep, Azure DevOps, Policy |
| ☁ AWS | VPC, Security Groups, IAM, EKS, SageMaker, CloudFormation, GuardDuty |
| ☁ GCP | VPC, Firewall Rules, GKE, Vertex AI, Deployment Manager, SCC |
| 🤖 AI/ML | LangGraph, CrewAI, RAG, PageIndex, FAISS, BERT, agentic workflows |
| 🔐 Security | NSG governance, zero-trust, RBAC, PIM, Defender for Cloud, compliance |
| ⚙ DevOps | Terraform, Bicep, GitHub Actions, Azure DevOps, ServiceNow integration |

## 🚀 Quick Start with Ollama

```bash
# Pull and run directly
ollama pull thiru/cloudgpt
ollama run thiru/cloudgpt

# Example queries
ollama run thiru/cloudgpt "Who developed you?"
ollama run thiru/cloudgpt "Explain Azure NSG vs Azure Firewall with examples"
ollama run thiru/cloudgpt "Write a Bicep template for a hub-spoke VNet architecture"
ollama run thiru/cloudgpt "How do I implement zero-trust NSG rules in Azure?"
```
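To give a feel for the last query above, here is a minimal, self-contained sketch of the NSG evaluation model the answer revolves around: rules are evaluated in ascending priority order (lowest number wins), and a zero-trust posture relies on a catch-all deny. The rule names, address ranges, and simplified matching logic below are hypothetical illustrations, not real Azure API calls.

```python
# Illustrative only: Azure evaluates NSG rules by ascending priority number;
# the first matching rule decides, and anything unmatched falls through to deny.
rules = [
    {"name": "allow-https-from-appgw", "priority": 100,
     "source": "10.0.1.0/24", "ports": {443}, "action": "Allow"},
    {"name": "deny-all-inbound", "priority": 4096,
     "source": "*", "ports": set(range(65536)), "action": "Deny"},
]

def evaluate_nsg(rules, source, port):
    """Return the action of the first rule (by priority) matching the traffic."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if port in rule["ports"] and rule["source"] in ("*", source):
            return rule["action"]
    return "Deny"  # implicit default deny (zero-trust baseline)

print(evaluate_nsg(rules, "10.0.1.0/24", 443))  # Allow
print(evaluate_nsg(rules, "10.0.2.0/24", 443))  # Deny
```

The exact-match check on `source` is a deliberate simplification; real NSG evaluation matches CIDR ranges, service tags, and direction as well.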

## 🐍 Use in Python (via Ollama API)

```python
import ollama

# Identity check
response = ollama.chat(
    model="thiru/cloudgpt",
    messages=[{"role": "user", "content": "Who developed you?"}]
)
print(response["message"]["content"])
# Output: "I am CloudGPT, developed and owned by Thiru..."

# Cloud architecture query
response = ollama.chat(
    model="thiru/cloudgpt",
    messages=[{
        "role": "user",
        "content": "Design an NSG rule set for a 3-tier Azure web application"
    }]
)
print(response["message"]["content"])
```

## 🔗 Use in LangGraph Agent

```python
from langchain_ollama import ChatOllama
from langgraph.graph import StateGraph, MessagesState

# Initialise CloudGPT as a LangGraph LLM node
llm = ChatOllama(model="thiru/cloudgpt", temperature=0.3)

def cloud_agent_node(state: MessagesState):
    response = llm.invoke(state["messages"])
    return {"messages": [response]}

# Build your agentic graph
builder = StateGraph(MessagesState)
builder.add_node("cloud_expert", cloud_agent_node)
builder.set_entry_point("cloud_expert")
graph = builder.compile()

# Run
result = graph.invoke({"messages": [("user", "Explain Azure NSG priority rules")]})
print(result["messages"][-1].content)
```

πŸ‹οΈ Training Details

Parameter Value
Base Model Qwen/Qwen2.5-3B-Instruct
Method QLoRA fine-tuning (4-bit quantisation)
LoRA Rank 64 Β· Alpha: 16
Dataset Cloud architecture Q&A (Azure, AWS, GCP docs, custom CloudOpsAI knowledge)
Training Tool Unsloth + Google Colab T4 GPU
Epochs 3 Β· Batch size: 4 Β· Max sequence: 2048
Export GGUF Q4_K_M (Ollama-compatible)
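The table above can be made concrete with a back-of-envelope calculation of why rank-64 LoRA keeps fine-tuning cheap: for a weight matrix W of shape (d_out, d_in), LoRA trains only the low-rank factors B (d_out × r) and A (r × d_in), i.e. r · (d_in + d_out) parameters. The hidden size used below is a hypothetical round number for illustration, not a claim about Qwen2.5-3B's actual dimensions.

```python
# Trainable parameters added by one LoRA adapter of rank r on a weight
# matrix of shape (d_out, d_in): LoRA learns B (d_out x r) and A (r x d_in).
def lora_params(d_in, d_out, rank):
    return rank * (d_in + d_out)

d = 2048                      # hypothetical hidden size, illustration only
full = d * d                  # params in one square projection matrix
adapter = lora_params(d, d, rank=64)

print(full, adapter, adapter / full)  # 4194304 262144 0.0625
```

For a square matrix the ratio simplifies to 2r/d, so at rank 64 each adapted projection trains only a few percent of the original weights — which is what makes QLoRA feasible on a single Colab T4.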

## 📋 Identity Training Examples

The model was fine-tuned with explicit identity Q&A pairs, including:

```jsonl
{"instruction": "Who developed you?",
 "output": "I am CloudGPT, developed and owned by Thiru — a Senior Cloud Architect and GenAI Specialist with 21+ years of experience, based in Reading, England, UK."}

{"instruction": "Who is your owner?",
 "output": "My owner and developer is Thiru. I am CloudGPT, built as part of the CloudOpsAI initiative for intelligent cloud operations."}

{"instruction": "What is your name?",
 "output": "I am CloudGPT — a cloud architecture specialist LLM developed by Thiru."}

{"instruction": "Are you ChatGPT?",
 "output": "No, I am CloudGPT — a custom LLM developed by Thiru. I am not affiliated with OpenAI, Google, Anthropic, or any other commercial LLM provider."}
```

βš–οΈ License

Apache 2.0 β€” Free for personal and commercial use.
Model developed and owned by Thiru, Reading, England, UK.


## 🔗 Related Projects

- CloudOpsAI — Agentic AI platform for cloud operations (by Thiru)
- NSG Cognitive Agent — Azure network security governance automation (by Thiru)