
View Source Code

Browse the complete example on GitHub

What’s inside?

Combines LFM2.5-Audio-1.5B, in both TTS and STT modes, with LFM2-1.2B-Tool inside a mockup of a car cockpit, letting the user control the car's functions by voice. Everything runs locally and in real time.

Key Components

llama.cpp is used for inference of both models, with a custom runner for the audio model. The car cockpit UI is vanilla JS + HTML + CSS, and it communicates with the backend through messages over a WebSocket, much like a greatly simplified car CAN bus.
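
For orientation, here is a minimal sketch of what one of those WebSocket "CAN bus" messages could look like from a Python client. The port, endpoint, and message fields below are illustrative assumptions, not the demo's actual schema.

    # Illustrative only: the real URL and message schema are defined by the demo's backend.
    import asyncio
    import json

    import websockets  # third-party package: pip install websockets

    async def send_cockpit_command() -> None:
        # Assumed endpoint; substitute whatever the cockpit backend actually listens on.
        async with websockets.connect("ws://localhost:8765") as ws:
            # Assumed schema: one flat JSON command, in the spirit of a single CAN frame.
            await ws.send(json.dumps({"command": "set_temperature", "value": 21}))
            print(await ws.recv())  # any acknowledgement the backend sends back

    if __name__ == "__main__":
        asyncio.run(send_cockpit_command())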

Supported Platforms

The following platforms are currently supported:
  • macOS ARM64
  • Ubuntu ARM64
  • Ubuntu x64

Quick Start

  1. Setup Python environment:
    git clone https://github.com/Liquid4All/cookbook.git
    cd cookbook/examples/audio-car-cockpit
    make setup
    
  2. Optional: if llama-server is already on your PATH, you can symlink it instead of building it:
    ln -s $(which llama-server) llama-server
    
  3. Prepare the audio and tool calling models:
    make LFM2.5-Audio-1.5B-GGUF LFM2-1.2B-Tool-GGUF
    
  4. Launch the demo (an optional sanity check is sketched below the list):
    make -j2 audioserver serve
    
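Once both servers are up, one quick sanity check is to query llama-server's /health endpoint. The sketch below assumes the tool-calling model is exposed by llama-server on its default port 8080; the Makefile may bind a different port.

    # Returns HTTP 200 with a small JSON status once the model has finished loading.
    import json
    import urllib.request

    with urllib.request.urlopen("http://localhost:8080/health", timeout=5) as resp:
        print(resp.status, json.loads(resp.read()))
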

