Introduction

Introduce Float16.cloud

Welcome to Float16.cloud documentation. We are a GPU managed service provider offering API-first solutions and resources for developers, with a focus on those new to the Large Language Model (LLM) industry who may lack sufficient resources for experimentation or implementation.

Our GPU managed service is a cloud-based solution that provides on-demand access to GPU resources. We manage the complexities of GPU infrastructure, including deployment, scaling, and maintenance. This allows our users to concentrate on their primary tasks, such as AI development and LLM applications, without concerns about hardware management.

As of July 2024, we are pleased to introduce our new API platform and "LLM as a service" offering. These API-first services are designed for users who prefer to avoid deployment concerns and long-term commitments. We operate on a pay-per-token model, ensuring cost-effectiveness and flexibility.
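As a rough illustration of what calling a pay-per-token chat API usually looks like, here is a minimal sketch. The base URL, model name, and header format below are placeholders, not Float16's documented values; refer to the quick start for the actual endpoint and authentication details.

```python
import json
import urllib.request

# Placeholder values -- the real base URL, model name, and auth scheme
# are documented in the quick start, not here.
BASE_URL = "https://api.example-llm-service.com/v1/chat/completions"
API_KEY = "YOUR_API_KEY"

def build_request(prompt: str, model: str = "example-model") -> urllib.request.Request:
    """Build an OpenAI-style chat-completion request. With pay-per-token
    billing, cost is metered server-side from the tokens sent and returned."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        BASE_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )

req = build_request("Hello, world!")
# response = urllib.request.urlopen(req)  # uncomment to actually send (network call)
```

Because billing is per token, there is no deployment to provision or tear down: each request is priced independently.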

See our quick start here:

LLM as a service

This documentation will guide you through our services and help you make the most of our platform.

Last Updated

Version 0.3.0 (September 2024)

  • Added new feature "One Click Deployment" (learn more)

How to Get Started?

To begin using our services, you need to sign up for a Float16 App account. For more details on this process, please refer to the Account section in our documentation. Once you have an account, you can start exploring and utilizing our range of services.

For organizations with sensitive data who prefer to host our service on their own infrastructure, we offer a private hosting option. Please contact our team for more information about this solution.

Playground for developers

We offer a convenient Chat Playground for developers who want to test or quickly try out new models without implementing a user interface. This feature allows you to interact with our AI models directly and effortlessly.

Check out the playground:

FloatChat

We are excited to introduce a new playground designed for both developers and non-developers who wish to create and test few-shot prompts. The platform lets users share their prompts with the community, and it offers free access to the SeaLLM-7b-v3 model as well as OpenAI API integration.

FloatPrompt

We offer "Quantize by Float16" for developers who want to compare the inference speeds of leading LLMs: Llama, Gemma, RecurrentGemma, and Mamba. Test various quantization techniques and KV cache settings to optimize your LLM deployments and understand the performance trade-offs.

Quantize by Float16

Support

Our focus is on creating a developer-first community. We are excited to support you and the entire developer community. If you have any questions or issues with deploying, implementing, or launching your AI application, you can contact us or share your problem on Discord. Additionally, if you have any requests or feedback you'd like us to know about, we're eager to hear from you.
