(Photo by Felix Eka Putra Kuntjoro on Unsplash)
Background
I recently thought about switching the post-processing model for MacWhisper Dictation from the original google/gemma-3-12b to openai/gpt-oss-20b in LM Studio, but I kept running into an issue where the gpt-oss model returned its reasoning process as part of the dictation output. Here’s the problematic output:
```
We need to correct punctuation: use full-width.
Input: "嗨,我們明天去兒童樂園玩好嗎?"
We replace comma with ,, question mark with ?.
Also add period at end?
The sentence ends with question mark already.
So output: "嗨,我們明天去兒童樂園玩好嗎?"
嗨,我們明天去兒童樂園玩好嗎?
```

Only the last line (roughly, “Hi, shall we go to the children’s amusement park tomorrow?”) is the intended dictation result; everything above it is the model’s reasoning leaking through.
Temporary Solution
I considered using LM Studio’s Structured Output feature, but that didn’t really make sense either: MacWhisper would also have to understand the structured format.
Root Cause Identification
Later, Claude helped me find that LM Studio 0.3.9 started supporting separating the reasoning into a dedicated reasoning_content field.
- Go to LM Studio > App Settings > Developer
- Enable “When applicable, separate reasoning_content and content in API responses”
- With this enabled, API responses put the reasoning process in the reasoning_content field and the main content in the content field
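To make the split concrete, here is a minimal sketch of what a client sees once the setting is on. The response payload below is an assumption based on the setting’s description (LM Studio exposes an OpenAI-compatible chat completions API); the point is simply that a dictation client should read only `content` and ignore `reasoning_content`:

```python
import json

# Simulated chat-completion response from LM Studio's local server with
# "separate reasoning_content and content" enabled. The exact payload shape
# is an assumption modeled on the OpenAI chat completions format.
raw = json.dumps({
    "choices": [{
        "message": {
            "role": "assistant",
            "reasoning_content": "We need to correct punctuation: use full-width...",
            "content": "嗨,我們明天去兒童樂園玩好嗎?",
        }
    }]
})

message = json.loads(raw)["choices"][0]["message"]

# A client like MacWhisper should use only `content`; the chain of thought
# stays in `reasoning_content`, where it can be ignored or logged.
dictation_text = message["content"]
reasoning = message.get("reasoning_content", "")

print(dictation_text)
```

Without the setting, both pieces arrive concatenated in `content`, which is exactly the garbled dictation output shown earlier.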
(The screen recording converted to GIF flickers a bit. Just treat it as a record of the moment; I hope it doesn’t hurt your eyes too much.)
Problem solved, done.