View the source code and browse the complete example on GitHub.
This example is a 100% local meeting summarization tool that runs entirely on your machine thanks to:
- LiquidAI/LFM2-2.6B-Transcript - a small language model specialized in summarizing meeting transcripts.
- llama.cpp - a fast inference engine with minimal setup and state-of-the-art performance on a wide range of hardware, both locally and in the cloud.
Quick Start
Install uv
Installation instructions for uv (macOS/Linux and Windows):
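These are the standard installers documented by Astral; if they have changed, follow the current instructions in the uv documentation.

```bash
# macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh

# Windows (PowerShell)
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
```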
Run the Tool
- Run the tool without cloning the repository using a `uv run` one-liner. The one-liner uses this default transcript file to summarize the meeting.
- If you want to use a different transcript file, pass the `--transcript-file` argument explicitly, either as a local file path or as an HTTP/HTTPS URL, and the tool will automatically download and use it. For example, you can point it at this other file.
- If you want to dig deeper into the code, experiment with it, and modify it, clone the repository and run the summarization CLI from your local checkout. A hypothetical sketch of what these commands look like follows this list.
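The exact commands ship with the example and depend on where the repository and script live; the block below is only a hypothetical sketch of their shape, with `<org>/<repo>`, `summarize.py`, and the transcript URL standing in for the real values.

```bash
# Hypothetical one-liner: run the script straight from its raw GitHub URL
# (uv can run remote scripts that declare their dependencies inline).
uv run https://raw.githubusercontent.com/<org>/<repo>/main/summarize.py

# Hypothetical: the same one-liner with an explicit transcript, local path or URL.
uv run https://raw.githubusercontent.com/<org>/<repo>/main/summarize.py \
  --transcript-file https://example.com/other-transcript.txt

# Hypothetical: clone the repository and run the CLI from the checkout.
git clone https://github.com/<org>/<repo>.git
cd <repo>
uv run summarize.py --transcript-file ./transcript.txt
```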
How does it work?
The CLI uses the llama.cpp Python bindings, which automatically download and build llama.cpp for your platform, so you don't need to set it up yourself. The build is optimized for your hardware, so it runs on your machine without any extra configuration. The CLI then uses the LiquidAI/LFM2-2.6B-Transcript model to summarize the transcript.
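Concretely, the core of the flow can be sketched with the `llama-cpp-python` bindings as below. This is a minimal, assumed sketch rather than the example's actual code: the GGUF repository name, quantization file, context size, and prompt are illustrative placeholders.

```python
from llama_cpp import Llama

# Download a GGUF build of the model from the Hugging Face Hub and load it.
# Repo and file names here are assumptions for illustration only.
llm = Llama.from_pretrained(
    repo_id="LiquidAI/LFM2-2.6B-Transcript-GGUF",  # assumed GGUF repo name
    filename="*Q4_K_M.gguf",                       # assumed quantization
    n_ctx=32768,                                   # context sized for long transcripts
    verbose=False,
)

with open("transcript.txt") as f:
    transcript = f.read()

# Ask the model for a summary through the chat completion API.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "Summarize the following meeting transcript."},
        {"role": "user", "content": transcript},
    ],
)
print(response["choices"][0]["message"]["content"])
```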
Next Steps
- Integrate the CLI into a two-step pipeline that first transcribes audio files into transcripts and then summarizes them.