Hosting a Replit Frontend
This guide outlines the steps to deploy a frontend on Replit, using a VALDI machine as the endpoint that handles backend requests.
The Replit frontend is the quickest way to get started: navigate to the VALDI Modular frontend project on Replit, fork the code, and follow the remaining steps below.
Video Walkthrough
Proxy-Backend
- Python with Flask: `main.py` acts as the server, handling the various API routes, forwarding requests to the VALDI backend, and serving the frontend. Although it is written in Python, it is best thought of as part of the frontend rather than the backend: it runs as a lightweight, serverless-style endpoint that simply proxies requests to your real backend on VALDI. A minimal sketch of this pattern follows.
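The sketch below illustrates the proxy pattern, assuming the VALDI backend exposes a matching `/api/chat` route; it is a minimal illustration, not the actual `main.py`.

```python
# Minimal sketch of the proxy pattern (not the actual main.py).
# Assumes the VALDI backend exposes a matching /api/chat route.
import os

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

# The VALDI machine's address, stored as a Replit secret.
VALDI_ENDPOINT = os.environ["VALDI_ENDPOINT"]

@app.route("/")
def index():
    # Serve the main chat interface.
    return app.send_static_file("index.html")

@app.route("/api/chat", methods=["POST"])
def chat():
    # Relay the chat payload to the real backend on VALDI and
    # pass its JSON response back to the browser.
    resp = requests.post(f"{VALDI_ENDPOINT}/api/chat", json=request.get_json())
    return jsonify(resp.json()), resp.status_code

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=81)
```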
API Endpoints
These endpoints interact directly with the backend server hosted on VALDI.
- `/`: Serves the main chat interface.
- `/api/chat`: Handles chat messages sent to different language models.
- `/api/llava`: A specialized chat handler for the LLaVA model that accepts image data alongside text.
- `/txt2img`: Handles text-to-image generation requests.
- `/list-models`: Returns the list of available models installed on the server.
- `/install-model`: Installs a given model.
- `/uninstall-model`: Uninstalls a given model.
- `/install`: Handles initial setup, installing the necessary components.
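Once the server is running, these routes can be exercised with any HTTP client. The snippet below is a hedged example using Python's `requests`; the JSON field names (`model`, `prompt`) and the model name are assumptions about the payload schema, not documented values.

```python
# Hypothetical client calls against a locally running instance.
# The JSON field names ("model", "prompt") are assumptions.
import requests

BASE = "http://localhost:81"

# Ask the proxy which models are installed on the VALDI backend.
models = requests.get(f"{BASE}/list-models").json()
print(models)

# Send a chat message to one of the available language models.
reply = requests.post(
    f"{BASE}/api/chat",
    json={"model": "llama2", "prompt": "Hello! What can you do?"},
)
print(reply.json())
```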
Installation Instructions
You must first run the OllamaAPI on a VALDI machine and complete the setup here.
To get the VALDI Modular LLM Chat Interface running:
- Fork or clone this repository to your local machine or Replit environment.
- Install the required Python modules with `pip install -r requirements.txt`, or run the program on Replit to install dependencies automatically.
- Set the `VALDI_ENDPOINT` variable stored in your Replit environment's secrets to your backend's endpoint.
- Run `main.py` to start the Flask server.
- Access the web interface by opening `localhost:81` (or whatever port you configured) in your web browser, or in Replit's webview.
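After completing these steps, a quick sanity check can confirm that the secret is set and the VALDI machine is reachable. This is a hedged sketch that assumes the backend answers the same `/list-models` route the proxy exposes; run it from the Replit shell.

```python
# Sanity check: confirm VALDI_ENDPOINT is set and the backend responds.
# Assumes the backend mirrors the proxy's /list-models route.
import os

import requests

endpoint = os.environ.get("VALDI_ENDPOINT")
if not endpoint:
    raise SystemExit("VALDI_ENDPOINT is not set in your Replit secrets.")

resp = requests.get(f"{endpoint}/list-models", timeout=10)
print(resp.status_code, resp.text[:200])
```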