View Source Code: browse the complete example on GitHub.
What's inside?
Combines LFM2.5-Audio-1.5B in TTS and STT modes with LFM2-1.2B-Tool inside a mockup of a car cockpit, letting the user control the car's functions by voice. Everything runs locally in real time.

Key Components
llama.cpp is used for inference of both models, with a custom runner for the audio model. The car cockpit (UI) is vanilla JS + HTML + CSS, and it communicates with the backend through messages over a websocket, like a greatly simplified car CAN bus.
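To give a flavor of this CAN-bus-style channel, the sketch below sends a hypothetical cockpit message to the backend. The port, the message shape, and the use of `websocat` are all assumptions for illustration, not part of the example itself.

```sh
# Purely illustrative: a hypothetical command message sent over the websocket,
# using the third-party websocat tool; the port and JSON fields are assumptions.
echo '{"signal": "headlights", "value": "on"}' | websocat ws://localhost:8765
```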
Supported Platforms

The following platforms are currently supported:
- macOS ARM64
- Ubuntu ARM64
- Ubuntu x64
Quick Start
- Set up the Python environment:
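  A typical setup might look like the following; the requirements file name is an assumption, so defer to the repository's own instructions.

  ```sh
  python3 -m venv .venv            # create a virtual environment
  source .venv/bin/activate        # activate it
  pip install -r requirements.txt  # hypothetical requirements file
  ```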
- Optional: if you already have llama-server in your PATH, you can symlink it instead of building it:
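  For example, something along these lines; the destination directory is a guess at where the demo looks for the binary, so adjust it to the actual repo layout.

  ```sh
  # Assumes llama-server is already on PATH; the target path is illustrative.
  mkdir -p llama.cpp/build/bin
  ln -s "$(which llama-server)" llama.cpp/build/bin/llama-server
  ```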
- Prepare the audio and tool-calling models:
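  One way to fetch GGUF weights is via the Hugging Face CLI; the repository IDs and target directories below are assumptions, so use whatever the example's scripts actually reference.

  ```sh
  # Hypothetical model repos; replace with the ones the demo expects.
  huggingface-cli download LiquidAI/LFM2.5-Audio-1.5B-GGUF --local-dir models/audio
  huggingface-cli download LiquidAI/LFM2-1.2B-Tool-GGUF --local-dir models/tool
  ```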
- Launch the demo:
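  The entry point below is a placeholder; check the repository for the actual launch script or command.

  ```sh
  python app.py   # hypothetical entry point serving the cockpit UI and backend
  # then open the URL it prints (e.g. http://localhost:8000) in your browser
  ```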