The AI summary generated using my local notes does not appear to reasonably prioritize, retain, or improve upon the shorthand notes I capture during the meeting. The output seems to be based mostly or entirely on the transcription, which generally reduces the fidelity of my notes. I suspect part of that is due to the smaller local LLM model performing the summary. I would really love to see the summarization retain and build upon / extend / enhance the notes I jot down, rather than purely reproducing them from the transcript. This appears to be how Granola generally works, and also how the Notes function of OpenWebUI (which I have experimented with) behaves.

I love the offline-model concept and portability, but I'm unsure whether I can squeeze good-enough note quality out of it, even when using more powerful local models like Gemma 12B. Any work toward improving that (better-optimized open-source local models, or the ability to connect via API to budget frontier models) would be huge for my use case.

Love your work and project, especially the ability to record audio sources broadly, whether on a call or in live meetings. Nice work!!
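To make the requested behavior concrete, here is a minimal sketch of the "notes-first" prompting idea described above. Everything here is hypothetical (the function name, the prompt wording, and the rule list are my own illustration, not the project's actual API): the point is simply that the user's shorthand notes are placed first and treated as the primary source, with the transcript demoted to supporting context.

```python
def build_notes_first_prompt(shorthand_notes: str, transcript: str) -> str:
    """Compose an LLM prompt that treats the user's shorthand notes as the
    primary source and the transcript as enrichment, not the other way around.
    (Hypothetical sketch -- not the project's real implementation.)"""
    return (
        "You are enhancing a user's meeting notes.\n"
        "Rules:\n"
        "1. Keep every point from MY NOTES; expand abbreviations and shorthand.\n"
        "2. Use the TRANSCRIPT only to add detail or context that the notes touch on.\n"
        "3. Do not drop or reorder the user's points, and do not summarize the\n"
        "   transcript independently of the notes.\n\n"
        f"MY NOTES:\n{shorthand_notes}\n\n"
        f"TRANSCRIPT (supporting context only):\n{transcript}\n"
    )

# Example: the notes stay the anchor; the transcript is clearly subordinate.
prompt = build_notes_first_prompt(
    "budget OK'd, Q3 launch, Sam -> pricing doc",
    "...full meeting transcript text...",
)
```

A smaller local model may still struggle to follow these rules reliably, which is why the option to route this single call to a budget frontier model via API would pair well with this approach.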