
Demonstration


Last updated 9 months ago

Intro

LLMs can follow demonstrations given purely as examples, without any explicit instructions or conditions. We simply provide X as the input and Y as the output.

Given enough demonstration pairs, the LLM learns the pattern and predicts Y for a new, unseen X.

How it works

LLMs have an ability called "in-context learning."

In-context learning means an LLM can pick up a pattern from examples placed directly in the prompt, known as few-shot (or many-shot) examples.

If you have experience training machine learning models, you will notice this concept is similar to preparing a dataset for supervised learning: the example pairs play the role of x_train and y_train.

Rice => noun
Eat => verb
Sleep => verb
Food => 
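The demonstration above can be assembled programmatically. Below is a minimal sketch (the function name `build_few_shot_prompt` is illustrative, not part of any Float16 API) that turns a list of (input, output) pairs into a few-shot prompt, leaving the final line open for the model to complete:

```python
def build_few_shot_prompt(pairs, query):
    # Each demonstration pair becomes one "input => output" line.
    lines = [f"{x} => {y}" for x, y in pairs]
    # The query is left open so the model predicts the missing output.
    lines.append(f"{query} => ")
    return "\n".join(lines)

demos = [("Rice", "noun"), ("Eat", "verb"), ("Sleep", "verb")]
print(build_few_shot_prompt(demos, "Food"))
# Rice => noun
# Eat => verb
# Sleep => verb
# Food =>
```

Adding or swapping pairs changes the pattern the model infers, with no other code changes needed.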

A software developer could consider this ability a new kind of programming capability. In a traditional programming language, you must write explicit conditions to transform the input. With in-context learning, you instead prepare example pairs of input and output for the LLM, and it infers the transformation; you no longer need to write the conditions yourself.
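With a chat-style, OpenAI-compatible endpoint, the same idea is expressed by encoding each demonstration pair as a prior user/assistant turn. A sketch (the helper name `few_shot_messages` is an assumption for illustration, not a library function):

```python
def few_shot_messages(pairs, query, system=None):
    """Encode (input, output) demonstration pairs as prior chat turns."""
    messages = []
    if system:
        messages.append({"role": "system", "content": system})
    for x, y in pairs:
        # Each pair becomes one user turn (X) answered by one assistant turn (Y).
        messages.append({"role": "user", "content": x})
        messages.append({"role": "assistant", "content": y})
    # The real query is the final, unanswered user turn.
    messages.append({"role": "user", "content": query})
    return messages

msgs = few_shot_messages(
    [("Rice", "noun"), ("Eat", "verb"), ("Sleep", "verb")],
    "Food",
    system="Answer with the part of speech only.",
)
```

The resulting list can be passed as the `messages` parameter of any OpenAI-compatible chat completions call.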

Prompt examples

  • Basic Prompt Demonstration #2 (FloatPrompt): Part of speech
  • Basic Prompt Demonstration #3 (FloatPrompt): Arrange text by demonstration
  • Basic Prompt Demonstration #4 (FloatPrompt): Information extraction
  • Basic Prompt Demonstration #1 (FloatPrompt): Machine translation