Validated model

One Click Deploy supports models based on specific model architectures. This means that a fine-tuned model can also be deployed, as long as it uses one of the supported architectures listed below.
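If you are unsure which architecture a fine-tuned checkpoint uses, you can read the `architectures` field of its Hugging Face config. Here is a minimal sketch using `transformers`; note that the set of accepted architecture strings below is an assumption inferred from the model families on this page, not an official list:

```python
from transformers import AutoConfig

# Assumed mapping of the families on this page to their
# transformers architecture names -- verify against your deployment.
SUPPORTED_ARCHITECTURES = {
    "LlamaForCausalLM",
    "MistralForCausalLM",
    "Qwen2ForCausalLM",   # Qwen1.5 / CodeQwen1.5 / Qwen2 checkpoints
    "GemmaForCausalLM",
    "Gemma2ForCausalLM",
}

def is_deployable(model_id: str) -> bool:
    """True if the checkpoint declares a supported architecture."""
    config = AutoConfig.from_pretrained(model_id)
    return any(arch in SUPPORTED_ARCHITECTURES for arch in config.architectures or [])

print(is_deployable("Qwen/Qwen2-7B-Instruct"))  # True
```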

Llama

  • meta-llama/Meta-Llama-3.1-8B-Instruct
  • scb10x/llama-3-typhoon-v1.5x-8b-instruct
  • scb10x/llama-3-typhoon-v1.5-8b-instruct

Mistral

  • mistralai/Mistral-7B-Instruct-v0.2

Qwen, Qwen2

  • SeaLLMs/SeaLLMs-v3-7B-Chat
  • SeaLLMs/SeaLLMs-v3-1.5B-Chat
  • Qwen/Qwen2-7B-Instruct
  • Qwen/CodeQwen1.5-7B-Chat (FIM Supported)
  • Qwen/CodeQwen1.5-7B (FIM Supported)

Gemma, Gemma2

  • SeaLLMs/SeaLLM-7B-v2.5
  • google/gemma-2-2b-it
  • google/gemma-2-9b-it
  • google/gemma-2-27b-it
  • google/codegemma-1.1-2b (FIM Supported)
  • google/codegemma-7b (FIM Supported)
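Models marked (FIM Supported) were trained for fill-in-the-middle completion: you send the code before and after a gap, wrapped in the model's FIM sentinel tokens, and the model generates the missing middle. Below is a minimal sketch against a deployed instance's OpenAI-compatible completions endpoint, using CodeGemma's published sentinels; the base URL and API key are placeholders, and other FIM models (e.g. CodeQwen1.5) use their own sentinel tokens, so check your model's tokenizer config.

```python
from openai import OpenAI

# Placeholder endpoint and key: substitute the values shown on your
# instance detail page after deploying a FIM-capable model.
client = OpenAI(base_url="https://<your-instance>/v1", api_key="<your-api-key>")

prefix = "def average(xs):\n    "
suffix = "\n    return total / len(xs)\n"

# CodeGemma FIM format: <|fim_prefix|> ... <|fim_suffix|> ... <|fim_middle|>
# The model continues from <|fim_middle|> with the code for the gap.
prompt = f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"

response = client.completions.create(
    model="google/codegemma-7b",
    prompt=prompt,
    max_tokens=64,
    stop=["<|file_separator|>"],  # CodeGemma emits this when the infill is done
)
print(response.choices[0].text)  # e.g. "total = sum(xs)"
```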