🔨 Demonstration

Intro

LLMs can follow demonstrations given purely by example, without any explicit instructions or conditions. It is like providing X as input and Y as output.

If we supply enough demonstration pairs, the LLM picks up the pattern and tries to predict Y for a new incoming X.

How does it work?

LLMs have an ability called "in-context learning."

In-context learning means the model learns from examples included directly in the prompt, commonly called few-shot (or many-shot) examples.

If you have some experience with training machine learning models, you will notice this concept is similar to preparing a dataset for supervised learning by providing x_train and y_train.

Rice => noun
Eat => verb
Sleep => verb
Food => 
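
Here is a minimal sketch, in Python, of how such a demonstration prompt can be assembled from input/output pairs. The `call_llm` function is a hypothetical placeholder for whichever LLM client you actually use; it is not a real API.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM client call (e.g. an HTTP request to your provider)."""
    raise NotImplementedError("Replace this stub with your actual LLM client.")

def build_few_shot_prompt(examples, new_input):
    """Turn (input, output) demonstration pairs plus a new input into one prompt string."""
    lines = [f"{x} => {y}" for x, y in examples]
    lines.append(f"{new_input} => ")  # leave the answer blank for the model to complete
    return "\n".join(lines)

examples = [("Rice", "noun"), ("Eat", "verb"), ("Sleep", "verb")]
prompt = build_few_shot_prompt(examples, "Food")
print(prompt)
# completion = call_llm(prompt)  # expected completion: "noun"
```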

A software developer might think of this ability as a new kind of programming capability. In a traditional programming language, you have to write conditions to transform the input. With in-context learning, you simply prepare example pairs of input and output for the LLM; you no longer need to write those conditions yourself, as in the sketch below.
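
As a rough illustration of that difference, the sketch below contrasts a hand-written rule with the demonstration-based approach. The word list and the `llm` callable are assumptions made up for this example, not part of any real library.

```python
# Traditional approach: the developer encodes the mapping as explicit conditions.
def word_type_with_rules(word: str) -> str:
    verbs = {"eat", "sleep", "run"}  # hand-maintained rule set
    return "verb" if word.lower() in verbs else "noun"

# Demonstration approach: the developer only supplies example pairs in the prompt.
DEMONSTRATIONS = "Rice => noun\nEat => verb\nSleep => verb\n"

def word_type_with_demonstrations(word: str, llm) -> str:
    prompt = DEMONSTRATIONS + f"{word} => "
    return llm(prompt)  # `llm` is any callable that sends the prompt to a model

print(word_type_with_rules("Food"))  # -> "noun"
```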

Prompt example
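
As one illustration, reusing the part-of-speech demonstration from above, the full prompt and the completion the model is expected to produce look like this:

Prompt:
Rice => noun
Eat => verb
Sleep => verb
Food => 

Expected completion:
noun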
