
VALDI Modular LLM Chat Interface

This demonstration features a user interface for chatting with a variety of language models in a web browser. It integrates with an Ollama API backend, letting users select among models, send messages, and review responses.
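At its core, each chat turn is a single HTTP request to the backend. The sketch below shows one such turn against Ollama's documented /api/chat endpoint; the base URL and model name are assumptions for illustration, not values taken from this project.

```typescript
// Minimal sketch: send one chat turn to an Ollama backend and read the reply.
// The base URL and model name below are assumptions, not project values.
const OLLAMA_URL = "http://localhost:11434"; // placeholder backend address

interface ChatMessage {
  role: "user" | "assistant" | "system";
  content: string;
}

async function sendChat(model: string, messages: ChatMessage[]): Promise<string> {
  const res = await fetch(`${OLLAMA_URL}/api/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, messages, stream: false }),
  });
  if (!res.ok) throw new Error(`Ollama request failed: ${res.status}`);
  const data = await res.json();
  return data.message.content; // the assistant's reply text
}

// Usage: sendChat("llama2", [{ role: "user", content: "Hello!" }]).then(console.log);
```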

The code can now run directly on VALDI without relying on Replit, or alternatively be hosted on VALDI as a backend endpoint.
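Pointing the frontend at a VALDI-hosted backend is mostly a matter of changing the base URL it targets. A minimal sketch, assuming a purely hypothetical VALDI hostname:

```typescript
// Sketch: point the frontend at a VALDI-hosted backend instead of a local one.
// The hostname below is a hypothetical placeholder, not a real endpoint.
const BACKEND_URL = "https://my-llm-backend.valdi.example"; // assumed VALDI endpoint
// const BACKEND_URL = "http://localhost:11434";            // local Ollama fallback

// Every API call is then issued relative to BACKEND_URL, e.g.:
async function listInstalledModels(): Promise<Response> {
  return fetch(`${BACKEND_URL}/api/tags`); // Ollama's endpoint for installed models
}
```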

Video Walkthrough

Features

  • Dynamic Model Selection: Users can select from a range of pre-installed language models to interact with.
  • Installation Management: Users can install or uninstall models by dragging them between lists (see the model-management sketch after this list).
  • Chat Interface: Users can communicate with the chosen LLM via the interactive chat interface.
  • Text-to-Image Generation: Users can send requests to a Stable Diffusion endpoint to generate images from text.
  • Image Uploads for LLaVA: Users can upload images for LLaVA to interpret (see the image-upload sketch after this list).
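
The model selection and drag-to-install features map naturally onto Ollama's model-management endpoints. A sketch assuming the backend exposes the standard Ollama API (/api/tags to list models, /api/pull to install, /api/delete to remove); the base URL is a placeholder:

```typescript
// Sketch of the model-management calls behind the two drag-and-drop lists.
// Assumes the backend exposes Ollama's standard API; the URL is a placeholder.
const BACKEND_URL = "http://localhost:11434";

// List the models currently installed (populates the "installed" column).
async function listModels(): Promise<string[]> {
  const res = await fetch(`${BACKEND_URL}/api/tags`);
  const data = await res.json();
  return data.models.map((m: { name: string }) => m.name);
}

// Dragging a model into the installed list would trigger a pull.
async function installModel(name: string): Promise<void> {
  await fetch(`${BACKEND_URL}/api/pull`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name, stream: false }),
  });
}

// Dragging it out would trigger a delete.
async function uninstallModel(name: string): Promise<void> {
  await fetch(`${BACKEND_URL}/api/delete`, {
    method: "DELETE",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name }),
  });
}
```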
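For the LLaVA feature, Ollama's chat API accepts base64-encoded images in a message's images field. A sketch of how an uploaded file might be encoded and attached; the helper names and URL are illustrative:

```typescript
// Sketch: attach an uploaded image to a chat message for LLaVA.
// Ollama's /api/chat accepts base64-encoded images in a message's "images" field.

// Encode a File from an <input type="file"> as a raw base64 string.
function fileToBase64(file: File): Promise<string> {
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    // reader.result is a data URL ("data:image/png;base64,..."); strip the prefix.
    reader.onload = () => resolve((reader.result as string).split(",")[1]);
    reader.onerror = reject;
    reader.readAsDataURL(file);
  });
}

async function askLlava(file: File, question: string): Promise<string> {
  const image = await fileToBase64(file);
  const res = await fetch("http://localhost:11434/api/chat", { // placeholder URL
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llava",
      messages: [{ role: "user", content: question, images: [image] }],
      stream: false,
    }),
  });
  const data = await res.json();
  return data.message.content;
}
```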