# Endpoint Specification

### Endpoint

#### Available endpoint

`/{dedicate}/v1/chat/completions`

#### Method

POST

#### Header

`Authorization: Bearer {your-apikey}`

#### Request parameters

<table data-full-width="true"><thead><tr><th width="261.3333333333333">Parameter</th><th width="601">Description</th><th>Required</th></tr></thead><tbody><tr><td>messages</td><td><p>Messages must follow the OpenAI chat API format: an array of objects, each with a <code>role</code> (one of <code>system</code>, <code>user</code>, or <code>assistant</code>) and a <code>content</code> string.</p><p>[<br>{"role": "system", "content": "text"},<br>{"role": "user", "content": "text"}<br>]</p></td><td>Yes</td></tr><tr><td>model</td><td><p>The model must come from a Hugging Face model repository,</p><p>e.g. SeaLLMs/SeaLLMs-v3-1.5B-Chat</p></td><td>Yes</td></tr><tr><td>stream</td><td>Boolean. Defaults to False.</td><td>No</td></tr><tr><td>max_tokens</td><td>Int. Defaults to 1024.</td><td>No</td></tr><tr><td>temperature</td><td>Float. Defaults to 0.7.</td><td>No</td></tr><tr><td>repetition_penalty</td><td>Float. Defaults to 1.0.</td><td>No</td></tr><tr><td>end_id</td><td>Int. Defaults to the eos_token_id from the model repository's config.json.</td><td>No</td></tr><tr><td>top_p</td><td>Float. Defaults to 0.7.</td><td>No</td></tr><tr><td>top_k</td><td>Int. Defaults to 40.</td><td>No</td></tr><tr><td>stop</td><td>Array. Defaults to the eos_token from the model repository's config.json.</td><td>No</td></tr><tr><td>random_seed</td><td>Int. Defaults to 2.</td><td>No</td></tr></tbody></table>
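The chat-completions request above can be sketched with the Python standard library. This is a minimal example, not the only way to call the endpoint; the host `https://example.com`, the deployment name `my-deployment` (standing in for the `{dedicate}` path segment), and the API key are placeholders you must replace with your own values.

```python
import json
import urllib.request

API_BASE = "https://example.com"  # assumption: replace with your dedicated endpoint host
API_KEY = "your-apikey"           # placeholder: your actual API key

# Build the request body in the OpenAI-compatible chat format described above.
payload = {
    "model": "SeaLLMs/SeaLLMs-v3-1.5B-Chat",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "max_tokens": 1024,
    "temperature": 0.7,
    "stream": False,
}

# "my-deployment" stands in for the {dedicate} path segment.
req = urllib.request.Request(
    API_BASE + "/my-deployment/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# Uncomment to actually send the request:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

The body and headers mirror the parameter table above; optional parameters such as `top_p` or `top_k` can be added to `payload` the same way.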

#### Available endpoint

`/{dedicate}/v1/completions`

#### Method

POST

#### Header

`Authorization: Bearer {your-apikey}`

#### Request parameters

<table data-full-width="true"><thead><tr><th width="261.3333333333333">Parameter</th><th width="601">Description</th><th>Required</th></tr></thead><tbody><tr><td>prompt</td><td><p>The prompt is raw text; no chat template is applied to it.</p><p>Use prompt when calling a base model or a coding model. When using a coding model via the Continue.dev extension, the prompt is passed to the endpoint automatically.</p></td><td>Yes</td></tr><tr><td>model</td><td>The model must come from a Hugging Face model repository,<br>e.g. SeaLLMs/SeaLLMs-v3-1.5B-Chat</td><td>Yes</td></tr><tr><td>stream</td><td>Boolean. Defaults to False.</td><td>No</td></tr><tr><td>max_tokens</td><td>Int. Defaults to 1024.</td><td>No</td></tr><tr><td>temperature</td><td>Float. Defaults to 0.7.</td><td>No</td></tr><tr><td>repetition_penalty</td><td>Float. Defaults to 1.0.</td><td>No</td></tr><tr><td>end_id</td><td>Int. Defaults to the eos_token_id from the model repository's config.json.</td><td>No</td></tr><tr><td>top_p</td><td>Float. Defaults to 0.7.</td><td>No</td></tr><tr><td>top_k</td><td>Int. Defaults to 40.</td><td>No</td></tr><tr><td>stop</td><td>Array. Defaults to the eos_token from the model repository's config.json.</td><td>No</td></tr><tr><td>random_seed</td><td>Int. Defaults to 2.</td><td>No</td></tr></tbody></table>
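A raw-prompt request to the completions endpoint can be sketched the same way. As above, the host `https://example.com`, the deployment name `my-deployment` (in place of `{dedicate}`), and the API key are placeholder assumptions; the `prompt` string is an arbitrary example of the kind of raw text a coding model would receive.

```python
import json
import urllib.request

API_BASE = "https://example.com"  # assumption: replace with your dedicated endpoint host
API_KEY = "your-apikey"           # placeholder: your actual API key

# The completions endpoint takes a raw prompt; no chat template is applied,
# so this is the route for base models and coding models.
payload = {
    "model": "SeaLLMs/SeaLLMs-v3-1.5B-Chat",
    "prompt": "def fibonacci(n):",
    "max_tokens": 1024,
    "temperature": 0.7,
    "top_p": 0.7,
    "top_k": 40,
    "stream": False,
}

# "my-deployment" stands in for the {dedicate} path segment.
req = urllib.request.Request(
    API_BASE + "/my-deployment/v1/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# Uncomment to actually send the request:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

Note the only structural difference from the chat endpoint is that `prompt` (a plain string) replaces `messages` (an array of role/content objects).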
