File storage
Concept

Each project has its own individual storage; it is not shared with your other projects.
background
Contains your spot jobs, separated by their task_id.
workspace
Contains your run jobs, with the latest code only.
api-service
Contains your deploy jobs, with the latest code only.
Bandwidth cost
At the moment, bandwidth is free.
File features

Preview file

When a file is smaller than 1 MB, the system automatically generates a preview for you.
For files larger than 1 MB, you need to trigger the preview manually.
Others

Each file can be downloaded via the GUI.
"Copy to" is used to copy a file between your projects.
Absolute path
The file storage absolute path is /apps/
workspace : /apps/workspace/
background : /apps/background/
api-service : /apps/api-service/
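As a sketch, the absolute paths above can be built in code. The helper and the task_id value below are hypothetical, for illustration only:

```python
import os

APPS_ROOT = "/apps"

def storage_path(area, *parts):
    """Join a storage area (workspace, background, api-service) with sub-paths."""
    return os.path.join(APPS_ROOT, area, *parts)

# workspace and api-service hold only the latest code:
print(storage_path("workspace"))                # /apps/workspace
# background output is separated per spot job by its task_id
# ("task-1234" is a made-up example id):
print(storage_path("background", "task-1234"))  # /apps/background/task-1234
```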
Shared model weights
We have pre-downloaded the AI model weights listed in the table below.
(Permissions are read-only.)
If you are looking for a model that is not listed, please let us know via Discord.
LLM
Qwen3-4B
/share_weights/Qwen3-4B-GGUF/
Qwen3-4B-Q4_K_M.gguf
Qwen3-4B-Q8_0.gguf
Qwen3-8B
/share_weights/Qwen3-8B-GGUF/
Qwen3-8B-Q4_K_M.gguf
Qwen3-8B-Q8_0.gguf
Qwen3-14B
/share_weights/Qwen3-14B-GGUF/
Qwen3-14B-Q4_K_M.gguf
Qwen3-14B-Q8_0.gguf
Qwen3-32B
/share_weights/Qwen3-32B-GGUF/
Qwen3-32B-Q4_K_M.gguf
Qwen3-32B-Q8_0.gguf
Qwen3-30B-A3B
/share_weights/Qwen3-30B-A3B-GGUF/
Qwen3-30B-A3B-Q4_K_M.gguf
Qwen3-30B-A3B-Q8_0.gguf
Gemma3-12B
/share_weights/Gemma3-12B-GGUF/
gemma-3-12b-it-q4_0.gguf
mmproj-model-f16-12B.gguf
Gemma3-27B
/share_weights/Gemma3-27B-GGUF/
gemma-3-27b-it-q4_0.gguf
mmproj-model-f16-27B.gguf
typhoon2.1-gemma3-12b
/share_weights/typhoon2.1-gemma3-12b-gguf/
typhoon2.1-gemma3-12b-q4_k_m.gguf
VLM
Qwen2.5-vl-7B
/share_weights/Qwen2.5-VL-7B-Instruct-exl2/
*.safetensors
Qwen2.5-vl-32B
/share_weights/Qwen2.5-VL-32B-Instruct-exl2/
*.safetensors
UI-tars-1.5-7b
/share_weights/UI-TARS-1.5-7B-exl2/
*.safetensors
Embedding
Bge-m3
/share_weights/Bge-m3-GGUF/
bge-m3-Q8_0.gguf
Qwen3-Embedding-0.6B
/share_weights/Qwen3-Embedding-0.6B-GGUF/
Qwen3-Embedding-0.6B-Q8_0.gguf
Qwen3-Embedding-4B
/share_weights/Qwen3-Embedding-4B-GGUF/
Qwen3-Embedding-4B-Q8_0.gguf
Qwen3-Embedding-8B
/share_weights/Qwen3-Embedding-8B-GGUF/
Qwen3-Embedding-8B-Q8_0.gguf