

Models


Wiro AI

  • Social Media & Viral
  • Ecommerce
  • Human Resources
  • Fast Inference

Video Generation

  • Text to Video
  • Image to Video
  • Speech to Video
  • Talking Head

Image Generation

  • Text to Image
  • Image to Image
  • Image Editing

Audio & Speech

  • Text to Speech
  • Speech to Text
  • Text to Music
  • Text to Song
  • Image to Song
  • Video to Music
  • Voice Clone

LLM & Chat

  • Chat
  • LLM
  • RAG
  • Reasoning

Partners

  • Wiro
  • OpenAI
  • Google
  • ByteDance
  • Runway
  • Elevenlabs
  • Black Forest Labs
  • Kling-AI
  • MiniMax
  • PixVerse

Chat

wiro/rag-chat-github
Instantly retrieve and analyze content from any GitHub repository. Select your LLM model, extract relevant information from codebases or documentation, and generate context-aware responses with ease!

wiro/rag-chat-youtube
Extract insights directly from YouTube videos by simply providing a URL. Choose your LLM model, access video transcripts or summaries, and create contextually rich conversations effortlessly!

wiro/rag-chat-website
Instantly retrieve and analyze content from any website URL. Select your LLM model, fetch key information from the page, and generate context-aware responses with ease!

wiro/rag-chat
Wiro LLM Conversational RAG Tool – a unified interface for seamless RAG-based conversations. Select your preferred LLM model, retrieve relevant data from external sources, and generate context-aware responses effortlessly!

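The rag-chat tools above all follow the same retrieve-then-generate pattern: fetch a source (a repository, a transcript, or a web page), pull out the passages most relevant to the question, and pass them to the chosen LLM as grounding context. The sketch below is a minimal, generic illustration of that pattern, not Wiro's implementation or API; retrieval here is naive keyword overlap, and call_llm is a hypothetical placeholder for whichever chat model you select.

    # Minimal sketch of a retrieve-then-generate (RAG) chat turn.
    # Illustrative only: call_llm is a hypothetical placeholder, and
    # retrieval is naive keyword overlap rather than embeddings.
    def call_llm(model: str, prompt: str) -> str:
        raise NotImplementedError("send the prompt to your chosen chat model here")

    def retrieve(chunks: list[str], question: str, k: int = 3) -> list[str]:
        # Rank chunks by how many question words they share, keep the top k.
        q_words = set(question.lower().split())
        ranked = sorted(chunks,
                        key=lambda c: len(q_words & set(c.lower().split())),
                        reverse=True)
        return ranked[:k]

    def rag_chat(document: str, question: str, model: str) -> str:
        # Split the fetched source into fixed-size chunks, pick the most
        # relevant ones, and ask the model to answer from that context only.
        chunks = [document[i:i + 1000] for i in range(0, len(document), 1000)]
        context = "\n\n".join(retrieve(chunks, question))
        prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
        return call_llm(model, prompt)

In practice the retrieval step would typically use embeddings and a vector index rather than keyword overlap, but the shape of the loop is the same.
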
wiro/chat
Wiro's conversational LLM chat script: runs every LLM through one unified tool.

wiro/wiroai-turkish-llm-9b
Meet Wiro AI's Turkish Large Language Model (LLM): a robust language model with enhanced support for the Turkish language and culture.

openai/gpt-oss-20b
OpenAI GPT-OSS-20B model.

Qwen/Qwen3-30B-A3B
Qwen3-30B-A3B model.

Qwen/Qwen3-30B-A3B-Thinking-2507
Qwen3-30B-A3B-Thinking-2507 model.

Qwen/Qwen3-32B
Qwen3-32B model.

Qwen/Qwen3-Coder-30B-A3B-Instruct
Qwen3-Coder-30B-A3B-Instruct model.

AlicanKiraz0/SenecaLLM-x-QwQ-32B-Q8_Max-Version
A mixed-dataset security LLM covering Information Security v1.5, Incident Response v1.3.1, Threat Hunting v1.3.2, Ethical Exploit Development v2.0, Purple Team Tactics v1.3, and Reverse Engineering v2.0.

Qwen/Qwen2.5-14B-Instruct
Qwen2.5-14B-Instruct is a large language model by Alibaba’s Qwen team, optimized for instruction-following tasks with 14 billion parameters. It offers strong reasoning, multilingual capabilities, and efficient performance, making it suitable for chatbots, content creation, and various AI-driven applications.

Qwen/Qwen2.5-32B-Instruct
Qwen2.5-32B-Instruct is a powerful large language model developed by Alibaba’s Qwen team, designed for instruction-following tasks with enhanced reasoning and natural language understanding capabilities. Optimized for efficiency and accuracy, it supports multi-turn conversations and complex queries, making it suitable for applications such as chatbots, content generation, and AI assistants.

deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
DeepSeek-R1-Distill-Qwen-32B is a distilled version of the DeepSeek-R1 model with 32 billion parameters, leveraging the Qwen architecture to deliver high-quality language understanding and generation. Through knowledge distillation, it retains the strengths of larger models while offering improved efficiency and reduced computational requirements. This model is ideal for large-scale AI applications that demand robust performance with optimized resource utilization.

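Several of the cards above and below describe the DeepSeek-R1-Distill models as products of knowledge distillation. As a quick, generic illustration of that idea (not DeepSeek's actual training recipe), the snippet below shows the standard soft-target distillation loss, in which a smaller student model is trained to match a larger teacher's softened output distribution.

    # Generic soft-target knowledge-distillation loss (illustrative only;
    # not DeepSeek's actual training code). The student is trained to match
    # the teacher's softened output distribution via KL divergence.
    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits: torch.Tensor,
                          teacher_logits: torch.Tensor,
                          temperature: float = 2.0) -> torch.Tensor:
        # Soften both distributions with the temperature, then measure how
        # far the student's distribution is from the teacher's.
        student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
        teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
        # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
        return F.kl_div(student_log_probs, teacher_probs,
                        reduction="batchmean") * temperature ** 2

The temperature controls how much of the teacher's probability mass over non-top tokens the student sees; higher temperatures expose more of the teacher's "dark knowledge" about plausible alternatives.
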
deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
DeepSeek-R1-Distill-Qwen-14B is a distilled version of the DeepSeek-R1 model, featuring 14 billion parameters. Built on the Qwen architecture, it utilizes advanced knowledge distillation techniques to achieve a balance between high performance and computational efficiency. This model is well-suited for a wide range of natural language processing tasks, providing accurate and context-aware responses while optimizing resource consumption for deployment in production environments.

deepseek-ai/DeepSeek-R1-Distill-Llama-8B
DeepSeek-R1-Distill-Llama-8B is a distilled version of the DeepSeek-R1 model based on the LLaMA architecture, featuring 8 billion parameters. It offers a balance between performance and efficiency by leveraging knowledge distillation techniques to reduce computational costs while maintaining high-quality language processing capabilities. This model is ideal for applications that require powerful text generation and understanding with optimized resource usage.

deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
DeepSeek-R1-Distill-Qwen-7B is a distilled version of the DeepSeek-R1 model with 7 billion parameters, designed to provide high-quality language understanding while optimizing efficiency. Leveraging advanced knowledge distillation techniques, it retains the core capabilities of larger models with improved speed and lower resource consumption. This model is well-suited for tasks requiring robust natural language processing while maintaining cost-effective deployment.

deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
DeepSeek-R1-Distill-Qwen-1.5B is a distilled version of the DeepSeek-R1 model, designed to offer a balance between efficiency and performance. With 1.5 billion parameters, it leverages knowledge distillation techniques to retain the capabilities of larger models while optimizing for faster inference and reduced resource consumption. Ideal for applications requiring high-quality language understanding with lower computational overhead.

utter-project/EuroLLM-9B-Instruct
utter-project/EuroLLM-9B-Instruct is a 9 billion parameter multilingual AI model developed to understand and generate text across all European Union languages and additional relevant languages. It has been trained on a vast dataset of 4 trillion tokens and further fine-tuned on EuroBlocks to improve its performance in instruction-following and machine translation tasks.

utter-project/EuroLLM-1.7B-Instruct
utter-project/EuroLLM-1.7B-Instruct is a 1.7 billion parameter multilingual AI model designed to understand and generate text in all European Union languages and additional relevant languages. It has been trained on 4 trillion tokens from diverse sources and further instruction-tuned on EuroBlocks to enhance its capabilities in general instruction-following and machine translation.

m42-health/Llama3-Med42-8B
m42-health/Llama3-Med42-8B is an open-access clinical large language model fine-tuned by M42, based on LLaMA-3, with 8 billion parameters. It is designed to provide high-quality, reliable answers to medical queries, enhancing access to medical knowledge.

meta-llama/Llama-3.2-3B-Instruct
The Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text-only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks.

meta-llama/CodeLlama-34b-Instruct-hf
meta-llama/CodeLlama-34B-Instruct-hf is a 34 billion parameter instruction-tuned AI model developed by Meta, designed to provide advanced code generation, completion, and understanding capabilities across various programming languages, offering high accuracy and efficiency for complex coding tasks.

meta-llama/CodeLlama-7b-Instruct-hf
meta-llama/CodeLlama-7B-Instruct-hf is a 7 billion parameter instruction-tuned AI model developed by Meta, designed to assist with code generation, completion, and understanding across multiple programming languages with high efficiency and accuracy.

internlm/internlm3-8b-instruct
InternLM3-8B-Instruct is an open-source 8-billion-parameter instruction-tuned model designed for general-purpose usage and advanced reasoning.

CohereForAI/aya-expanse-8b
Aya Expanse 8B is an open-weight research release of a model with highly advanced multilingual capabilities. It focuses on pairing a highly performant pre-trained Command family of models with the result of a year’s dedicated research from Cohere For AI, including data arbitrage, multilingual preference training, safety tuning, and model merging. The result is a powerful multilingual large language model.

mistralai/Mistral-Nemo-Instruct-2407
The Mistral-Nemo-Instruct-2407 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-Nemo-Base-2407. Trained jointly by Mistral AI and NVIDIA, it significantly outperforms existing models smaller or similar in size.

HuggingFaceTB/SmolLM2-1.7B-Instruct
HuggingFaceTB/SmolLM2-1.7B-Instruct is a 1.7 billion parameter instruction-tuned AI language model designed to provide efficient and accurate responses for a wide range of tasks while maintaining a lightweight and accessible architecture.

mistralai/Mathstral-7B-v0.1
mistralai/Mathstral-7B-v0.1 is a 7 billion parameter AI model optimized for mathematical reasoning and problem-solving, designed to deliver precise and efficient solutions across a variety of mathematical domains.