FloatPrompt

Create, run, and share prompts with your colleagues

Last updated 9 months ago

For developers and non-developers alike, FloatPrompt offers a user-friendly environment to create, test, and refine prompts using few-shot techniques. This platform allows you to experiment with various models and share your results effortlessly.

  • Free Models: Test your prompts using our provided models, including "SeaLLM-7b-v3" and "Eidy" (a specialized medical AI model).

  • OpenAI Integration: Input your OpenAI API key to access models like GPT-4o and GPT-4o mini.

  • Collaborative Sharing: Easily share your prompts with colleagues via generated public URLs.

  • Customizable Parameters: Adjust model settings for optimal results.

How to use

  1. Navigate to the FloatPrompt section.

  2. Set the system prompt to define the AI's behavior.

  3. Enter your user prompt or question.

  4. Add multiple message pairs (assistant and user prompts) as needed.

  5. Adjust model settings:

    • Select your preferred model (default: SeaLLM-7b-v3)

    • Set the temperature (default: 0.5)

    • Define max tokens (default: 512)

  6. Click "Run" to see the model's response.

  7. Use the "Share" feature to generate a public URL for your prompt.

  8. Copy the link and share it with others.
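The steps above (a system prompt, optional few-shot message pairs, a final user prompt, and the model settings) correspond to a standard chat-completions-style request. A minimal sketch in Python of how such a request could be assembled; the helper name and the exact wire format used by Float16's backend are assumptions, while the defaults mirror the FloatPrompt UI:

```python
def build_prompt_payload(system_prompt, pairs, question,
                         model="SeaLLM-7b-v3", temperature=0.5, max_tokens=512):
    """Assemble a chat-completions-style payload from a system prompt,
    few-shot (user, assistant) example pairs, and the final user question.
    Defaults match the FloatPrompt UI; the payload shape is an assumption."""
    messages = [{"role": "system", "content": system_prompt}]
    for user_msg, assistant_msg in pairs:  # few-shot demonstrations (step 4)
        messages.append({"role": "user", "content": user_msg})
        messages.append({"role": "assistant", "content": assistant_msg})
    messages.append({"role": "user", "content": question})  # your prompt (step 3)
    return {
        "model": model,            # UI default: SeaLLM-7b-v3
        "messages": messages,
        "temperature": temperature,  # UI default: 0.5
        "max_tokens": max_tokens,    # UI default: 512
    }

payload = build_prompt_payload(
    system_prompt="You are a helpful assistant.",
    pairs=[("What is 2+2?", "4")],
    question="What is 3+3?",
)
```

Each "message pair" added in the UI becomes one user/assistant exchange in the `messages` list, which is how few-shot prompting steers the model's responses.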

To access and start using FloatPrompt, please visit the Prompt - Float16 page.

Please Note:

Prompts are not automatically saved. To preserve your work, generate a share link and save it for future modifications.
