# LLM Dynamic Batching

This tutorial guides you through deploying a FastAPI application with dynamic batching using Float16's deployment mode.

{% hint style="info" %}

* Float16 CLI installed
* Logged in to your Float16 account
* VSCode or another preferred text editor recommended
  {% endhint %}

## What is dynamic batching?

Deploying AI endpoints, also known as online serving, is crucial but challenging, because it is hard to keep GPU memory (VRAM) utilized efficiently.

During online serving, several techniques can improve GPU utilization and increase throughput.

One of the best-known techniques is dynamic batching.

Dynamic batching maximizes GPU utilization and helps mitigate the memory-bound nature of LLM inference.

It packs requests that arrive within a specific time window, such as 1 or 2 seconds, into the same batch and runs inference on them simultaneously.

This speeds up overall inference, at the cost of a slight increase in per-request latency.
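Conceptually, a batching window works like this: the first request to arrive opens a batch, and anything arriving within the window joins it. A minimal `asyncio` sketch, illustrative only (timings are shortened for the demo; the deployed server in Step 2 uses its own loop):

```python
import asyncio

async def batcher(queue, window=0.05, idle_timeout=0.2):
    """Group queued requests into batches: the first request to arrive opens
    a batch, and anything arriving within `window` seconds joins it."""
    batches = []
    while True:
        try:
            first = await asyncio.wait_for(queue.get(), timeout=idle_timeout)
        except asyncio.TimeoutError:
            return batches                      # no more traffic: end the demo
        batch = [first]
        loop = asyncio.get_running_loop()
        deadline = loop.time() + window
        while (remaining := deadline - loop.time()) > 0:
            try:
                batch.append(await asyncio.wait_for(queue.get(), timeout=remaining))
            except asyncio.TimeoutError:
                break
        batches.append(batch)

async def demo():
    q = asyncio.Queue()
    for i in range(4):                          # four requests arrive at once
        q.put_nowait(f"req-{i}")
    return await batcher(q)

batches = asyncio.run(demo())
print(batches)                                  # all four share a single batch
```

Because all four requests land inside the same window, they are inferred together as one batch instead of four separate forward passes.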

## Step 1 : Download and Upload the Weights

We use [Typhoon2-8b](https://huggingface.co/scb10x/llama3.1-typhoon2-8b-instruct) (a fine-tuned version of Llama3.1-8b) to demonstrate.

```
huggingface-cli download scb10x/llama3.1-typhoon2-8b-instruct --local-dir ./typhoon2-8b/

float16 storage upload -f ./typhoon2-8b -d weight-llm
```

## Step 2 : Prepare Your Script

<https://github.com/float16-cloud/examples/tree/main/official/deploy/fastapi-dynamic-batching-typhoon2-8b>

(server.py)

```python
import os
import time
from typing import Optional
import uuid 
from fastapi import FastAPI
from fastapi.responses import JSONResponse
from pydantic import BaseModel
from transformers import AutoModelForCausalLM, AutoTokenizer
import uvicorn
import asyncio

start_load = time.time()  # track how long the model takes to load
model_name = "../weight-llm/typhoon2-8b"  # matches the upload destination from Step 1
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name, padding_side="left")  # left padding for decoder-only generation

app = FastAPI()

class ChatRequest(BaseModel):
    messages: str
    max_token: Optional[int] = 512
    

def process_llm(batch_data, batch_id):
    global model
    batch_tokenized = []
    for data in batch_data:
        _text_formatted = [{"role": "user", "content": data}]
        _text_tokenized = tokenizer.apply_chat_template(
            _text_formatted,
            tokenize=False,
            add_generation_prompt=True
        )
        batch_tokenized.append(_text_tokenized)

    model_inputs = tokenizer(batch_tokenized, return_tensors="pt", padding=True, truncation=True).to(model.device)
    generated_ids = model.generate(
        **model_inputs,
        max_new_tokens=512,
        pad_token_id=tokenizer.eos_token_id
    )
    # Strip the prompt tokens so only newly generated tokens are decoded
    generated_ids = [
        output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
    ]

    result_list = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
    result_with_id = dict(zip(batch_id, result_list))
    return result_with_id

class BatchProcessor:
    def __init__(self):
        self.batch = []
        self.batch_id = []
        self.results = {}
        self.lock = asyncio.Lock()
        self.event = asyncio.Event()
        
    async def add_to_batch(self, data, batch_id):
        async with self.lock:
            self.batch.append(data)
            self.batch_id.append(batch_id)

    async def process_batch(self):
        while True:
            await asyncio.sleep(1)  # Wait for 1 second
            async with self.lock:
                current_batch = self.batch.copy()
                current_batch_id = self.batch_id.copy()
                self.batch.clear()
                self.batch_id.clear()

            if current_batch:
                self.results = process_llm(current_batch,current_batch_id)
                self.event.set()
                self.event.clear()

    async def get_result(self, batch_id):
        return self.results[batch_id]

main_batch = BatchProcessor()

@app.post("/chat")
async def chat(text_request: ChatRequest):
    batch_id = uuid.uuid4()
    await main_batch.add_to_batch(text_request.messages, batch_id)
    await main_batch.event.wait()
    result_text = await main_batch.get_result(batch_id)
    return JSONResponse(content={"response": result_text})

async def main():
    asyncio.create_task(main_batch.process_batch())
    config = uvicorn.Config(
        app, host="0.0.0.0", port=int(os.environ["PORT"])
    )
    server = uvicorn.Server(config)
    await server.serve()
```
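The id-to-result bookkeeping in `process_llm` can be exercised without a GPU by stubbing out the model call. A sketch, where `fake_llm` is a hypothetical stand-in for the real generate-and-decode step:

```python
import uuid

def fake_llm(batch_data, batch_ids):
    # Hypothetical stand-in for process_llm: "generate" by echoing the input,
    # then map each request id to its output, as the real function does.
    result_list = [f"echo: {text}" for text in batch_data]
    return dict(zip(batch_ids, result_list))

batch, ids = [], []
for msg in ["Hi !! Who are you ?", "How about you ?"]:
    batch.append(msg)
    ids.append(uuid.uuid4())        # each request gets a unique id

results = fake_llm(batch, ids)
print(results[ids[0]])              # echo: Hi !! Who are you ?
print(results[ids[1]])              # echo: How about you ?
```

This mapping is what lets each `/chat` handler wait on the shared event and then fetch only its own answer from the batch result.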

{% hint style="info" %}

* Ensure the port is read from the environment: `port=int(os.environ["PORT"])`
* Ensure the server is started from an `async def main` function
  {% endhint %}

## Step 3 : Deploy Script

```
float16 deploy server.py
```

After successful deployment, you'll receive:

* Function Endpoint
* Server Endpoint
* API Key

**Example:**

```
Function Endpoint: http://api.float16.cloud/task/run/function/x7x2DFl8zU   
Server Endpoint: http://api.float16.cloud/task/run/server/x7x2DFl8zU       
API Key: float16-r-QoZU7uNlgDIFJ5IMrBtOCjuzVBlC

## curl
curl -X POST "{SERVER-URL}/chat" -H "Authorization: Bearer {FLOAT16-ENDPOINT-TOKEN}" -d '{ "messages": "YOUR_MESSAGE" }'

curl -X POST "http://api.float16.cloud/task/run/server/x7x2DFl8zU/chat" -H "Authorization: Bearer float16-r-QoZU7uNlgDIFJ5IMrBtOCjuzVBlC" -d '{ "messages": "Hi !! Who are you ?" }' &
curl -X POST "http://api.float16.cloud/task/run/server/x7x2DFl8zU/chat" -H "Authorization: Bearer float16-r-QoZU7uNlgDIFJ5IMrBtOCjuzVBlC" -d '{ "messages": "How about you ?" }'
```

To pack requests together, you must use server mode.

Server mode starts the endpoint and keeps it alive for 30 seconds, and you are billed only for those 30 seconds.

You are not charged per request during that active window; the server handles and processes the requests itself.

This makes server mode more cost-effective.
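The two parallel `curl` calls above can also be issued from Python. A sketch using only the standard library; `SERVER_URL` and `API_KEY` are placeholders for the values returned by `float16 deploy`:

```python
import json
import urllib.request
from concurrent.futures import ThreadPoolExecutor

SERVER_URL = "http://api.float16.cloud/task/run/server/<your-id>"  # placeholder
API_KEY = "<your-float16-api-key>"                                 # placeholder

def build_request(message: str) -> urllib.request.Request:
    payload = json.dumps({"messages": message}).encode()
    return urllib.request.Request(
        f"{SERVER_URL}/chat",
        data=payload,
        headers={"Authorization": f"Bearer {API_KEY}",
                 "Content-Type": "application/json"},
        method="POST",
    )

def send(message: str) -> str:
    with urllib.request.urlopen(build_request(message)) as resp:
        return json.loads(resp.read())["response"]

def send_all(messages):
    # Sending in parallel (not sequentially) is what lets the server pack the
    # requests into the same 1-second batching window.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(send, messages))

# send_all(["Hi !! Who are you ?", "How about you ?"])  # requires a live endpoint
```

Sequential calls would each wait out a full batching window, so firing them concurrently is essential to actually benefit from dynamic batching.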

{% hint style="success" %}
Congratulations! You've successfully used server mode for the first time on Float16's serverless GPU platform.
{% endhint %}

## Explore More

Learn how to use Float16 CLI for various use cases in our tutorials.

<table data-view="cards"><thead><tr><th></th><th></th><th data-hidden data-card-target data-type="content-ref"></th></tr></thead><tbody><tr><td><strong>Hello World</strong></td><td>Launch your first serverless GPU function and kickstart your journey.</td><td><a href="hello-world">hello-world</a></td></tr><tr><td><strong>Install new library</strong></td><td>Enhance your toolkit by adding new libraries tailored to your project needs.</td><td><a href="install-new-library">install-new-library</a></td></tr><tr><td><strong>Copy output from remote</strong></td><td>Efficiently transfer computation results from remote to your local storage.</td><td><a href="s3-copy-output-from-remote">s3-copy-output-from-remote</a></td></tr><tr><td><strong>Deploy FastAPI Helloworld</strong></td><td>Quick start to deploy FastAPI without changing the code.</td><td><a href="server-mode">server-mode</a></td></tr><tr><td><strong>Upload and Download via CLI and Website</strong></td><td>Directly upload and download file(s) to the server.</td><td><a href="direct-upload-and-download">direct-upload-and-download</a></td></tr><tr><td><strong>More examples</strong></td><td>Open source from community and Float16 team.</td><td><a href="etc.">etc.</a></td></tr></tbody></table>

Happy coding with Float16 Serverless GPU!
