Improved model summarization performance
Bradley Smith
The AI summary generated from my local notes doesn't appear to reasonably prioritize or retain and improve upon the shorthand notes I capture during the meeting. The output seems to be mostly or entirely based on the transcription, which generally reduces the fidelity of my notes. I suspect part of that is also due to the smaller local LLM model performing the summary.
I'd really love to see the summarization retain and build upon / extend / enhance the notes I jot down, rather than purely reproducing the note from the transcript. This appears to be how Granola generally works, and also the Notes function of OpenWebUI, which I've experimented with as well. I love the offline model concept and portability, but I'm unsure whether I can squeeze good enough note quality out of it, even when using more powerful local models like Gemma 12B. So any work to improve that (more optimized open-source local models, or the ability to connect via API to budget frontier models) would be huge for my use case. Love your work and the project, especially the ability to record audio sources broadly, whether on a call or in live meetings. Nice work!!
Deokhaeng Lee
Hey Bradley, you are definitely right to point that out! We're currently working on something called HyprLLM, a local model trained specifically to generate good meeting summary notes, and it will be available within a week or so. We'd love it if you could try HyprLLM out and let us know what you think.