
AI issue with local models and <think> tags


  • I am trying to use a local model (Deepseek-R1) on a local machine running Ollama.  It works fine, except that no matter what I do it includes its reasoning/thinking within <think> tags, and this screws up the classification.  Apparently it is possible to pass a "think" true or false parameter in the API call, but unless I'm missing something, I don't believe that's something we can alter or change.  Any suggestions?  Perhaps it could be changed to just ignore the <think> tags?  This is going to be a problem on any reasoning model, I imagine.
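
    For reference, here is roughly what I had in mind, as a rough sketch only (the "think": false flag only exists in newer Ollama builds, and the regex strip at the end is the "just ignore the tags" fallback):

    import re
    import requests

    OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint

    def classify(prompt: str) -> str:
        # "think": False is only honored by Ollama builds recent enough to know
        # about reasoning models; older builds silently ignore the field.
        resp = requests.post(OLLAMA_URL, json={
            "model": "deepseek-r1",
            "prompt": prompt,
            "stream": False,
            "think": False,
        }, timeout=120)
        resp.raise_for_status()
        text = resp.json()["response"]
        # Fallback: drop any <think>...</think> block the model still emits.
        return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()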



  • Can you provide an example of the prompt you are using along with a response that you get?

    From what I've read, the only way to remove the <think> tags is to switch to a model that does not use reasoning, such as Deepseek V3, but I have not actually tried this.

    If you adjust the prompt to format the response so that it starts with the category, followed by a comma, and then the reasoning, is it able to correctly parse the response? Can you provide an example of the response that is returned?

    Here is an example prompt:

    Analyze this email and classify it based on content. Respond with exactly one of: {classification_labels}, followed by a comma and the explanation.
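
    Parsing is then just a matter of splitting on that first comma. A minimal, untested sketch of what that could look like:

    def parse_classification(text: str) -> tuple[str, str]:
        # Expects "LABEL, explanation"; everything before the first comma is the label.
        label, _, explanation = text.partition(",")
        return label.strip(), explanation.strip()

    # parse_classification("Phishing, sender address does not match the display name")
    # -> ("Phishing", "sender address does not match the display name")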


  • @Arron Hi Arron,

    No amount of prompt engineering could make it work, but I did find a solution: use LM Studio instead of Ollama.  Then it works perfectly.  Also, in case anyone cares, I seem to be having more success with GPT-OSS than DeepSeek R1.
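
    For anyone who wants to sanity-check their LM Studio setup before pointing MDaemon at it, something along these lines should do (assuming LM Studio's OpenAI-compatible server on its default port 1234; the model name is just an example, use whatever you have loaded):

    import requests

    LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"  # LM Studio default

    resp = requests.post(LMSTUDIO_URL, json={
        "model": "gpt-oss-20b",  # example identifier; match the model loaded in LM Studio
        "messages": [{"role": "user", "content": "Reply with the single word: OK"}],
        "temperature": 0,
    })
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])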


  • We will look into getting Deepseek-R1 to work with Ollama.

    Thank you for sharing.


  • I have to say it is a pretty cool feature; it seems by far the best way to spot phishing e-mails.


  • I'm glad to hear you like the feature! It is doing a great job of catching phishing emails for our domain as well.

