
Direct upload and download

Get Endpoint via Float16


Last updated 1 month ago

This tutorial guides you through uploading and downloading files using Float16's storage. Before you begin, make sure you have:

  • Float16 CLI installed

  • Logged in to your Float16 account

  • VSCode or another preferred text editor (recommended)

Step 1 : Create and start the project

float16 project create --instance h100
float16 project start

You must start the project before using any storage commands; if the project is not started, the upload and download steps below will fail.

Step 2 : Prepare the script

(download-mnist-datasets.py)

import os
from torchvision import datasets, transforms

def download_mnist(data_path):
    if not os.path.exists(data_path):
        os.makedirs(data_path)
    
    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,))
    ])

    # Download training data
    train_dataset = datasets.MNIST(root=data_path, train=True, download=True, transform=transform)
    
    # Download test data
    test_dataset = datasets.MNIST(root=data_path, train=False, download=True, transform=transform)

    print(f"MNIST dataset downloaded and saved to {data_path}")

if __name__ == "__main__":
    data_path = "../mnist-datasets"  # You can change this to your preferred location
    download_mnist(data_path)
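As a side note on the `Normalize((0.1307,), (0.3081,))` transform in the script above: those values are the standard MNIST mean and standard deviation, and each pixel x (already scaled to [0, 1] by `ToTensor`) becomes (x − mean) / std. A minimal pure-Python sketch of that arithmetic, with no torchvision required:

```python
# What transforms.Normalize((0.1307,), (0.3081,)) does to a single pixel:
# each value x in [0, 1] is mapped to (x - mean) / std.
MNIST_MEAN = 0.1307
MNIST_STD = 0.3081

def normalize(pixel: float, mean: float = MNIST_MEAN, std: float = MNIST_STD) -> float:
    return (pixel - mean) / std

# A fully white pixel (1.0) and a fully black pixel (0.0):
print(round(normalize(1.0), 4))  # 2.8215
print(round(normalize(0.0), 4))  # -0.4242
```

Normalizing with the dataset's own statistics centers the inputs near zero, which generally helps training converge.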

Step 3.1 : Upload via CLI

Once the datasets have been downloaded, use this command to upload the datasets directory to a remote path.

float16 storage upload -f ./mnist-datasets -d datasets
  • The storage upload command uses a direct connection between your local machine and the server.

Step 3.2 : Upload via Website

Step 4.1 : Download the file(s) via CLI

float16 storage download -f datasets -d ./local_datasets
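If you prefer to drive these transfers from Python (for example as part of a larger pipeline), the same upload and download commands shown above can be wrapped with `subprocess`. This is only a sketch around the documented CLI invocations; the helper names `storage_cmd` and `run_storage`, and the dry-run fallback when the CLI is not on `PATH`, are illustrative and not part of the Float16 CLI:

```python
import shutil
import subprocess

def storage_cmd(action: str, source: str, dest: str) -> list[str]:
    # Builds the same command line as the CLI steps above, e.g.
    # float16 storage upload -f ./mnist-datasets -d datasets
    return ["float16", "storage", action, "-f", source, "-d", dest]

def run_storage(action: str, source: str, dest: str) -> None:
    cmd = storage_cmd(action, source, dest)
    if shutil.which("float16") is None:
        # CLI not installed here: just show what would run (dry run).
        print("would run:", " ".join(cmd))
        return
    subprocess.run(cmd, check=True)  # raises if the command fails

run_storage("upload", "./mnist-datasets", "datasets")
run_storage("download", "datasets", "./local_datasets")
```

Passing the command as an argument list (rather than a single shell string) avoids shell-quoting issues with paths that contain spaces.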

Step 4.2 : Download the file(s) via Website

Congratulations! You've successfully uploaded and downloaded files on Float16's serverless GPU platform.

Explore More

Learn how to use Float16 CLI for various use cases in our tutorials.

Happy coding with Float16 Serverless GPU!

https://github.com/float16-cloud/examples/tree/main/official/spot/torch-train-and-infernce-mnist

Hello World

Launch your first serverless GPU function and kickstart your journey.

Install new library

Enhance your toolkit by adding new libraries tailored to your project needs.

Copy output from remote

Efficiently transfer computation results from remote to your local storage.

Deploy FastAPI Helloworld

Quick start: deploy FastAPI without changing the code.

Upload and Download via CLI and Website

Directly upload and download file(s) to the server.

More examples

Open source from community and Float16 team.