25 comments

  • LTL_FTC 4 days ago

    It sounds like you don’t need immediate LLM responses and can batch process your data nightly? Have you considered running a local LLM? You may not need to pay for API calls at all. Today’s local models are quite good. I started off with CPU-only inference and even that was fine for my pipelines.

    • kreetx 4 days ago

      Though I haven't done any extensive testing, I personally could easily get by with current local models. The only reason I don't is that the hosted ones all have free tiers.

    • queenkjuul 4 days ago

      Agreed, I'm pretty amazed at what I'm able to do locally just with an AMD 6700XT and 32GB of RAM. It's slow, but if you've got all night...

    • ok_orco 3 days ago

      I haven't thought about that, but really want to dig in more now. Any places you recommend starting?

      • LTL_FTC a day ago

        I started off using gpt-oss-120b on CPU. It uses about 60-65 GB of memory, but my workstation has 128 GB of RAM. If I had less RAM, I would start with the gpt-oss-20b model and go from there. Look for MoE models, as they are more efficient to run.

        My old Threadripper Pro was seeing about 15 tokens/sec, which was quite acceptable for the background tasks I was running.
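
        If it helps, the calls end up looking roughly like this. This is a minimal sketch that assumes a local server exposing an OpenAI-compatible endpoint (llama.cpp's llama-server and Ollama both do), so the base_url, port, and model name are placeholders for whatever your setup actually runs:

            # Sketch: talking to a locally served gpt-oss model through an
            # OpenAI-compatible endpoint (llama.cpp's llama-server, Ollama, etc.).
            # The base_url, port, and model name are placeholders for your setup.
            from openai import OpenAI

            client = OpenAI(
                base_url="http://localhost:8080/v1",  # your local server, not OpenAI
                api_key="not-needed",                 # most local servers ignore the key
            )

            resp = client.chat.completions.create(
                model="gpt-oss-20b",  # or gpt-oss-120b if you have the RAM
                messages=[
                    {"role": "system", "content": "Classify this feedback into one category."},
                    {"role": "user", "content": "Exports silently fail on files over 50 MB."},
                ],
            )
            print(resp.choices[0].message.content)

        Swapping between the 20b and 120b models is then a one-line change, which makes it easy to find what your hardware can handle.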

    • ydu1a2fovb 3 days ago

      Can you suggest any good LLMs for CPU?

      • LTL_FTC a day ago

        I started off using gpt-oss-120b on CPU. It uses about 60-65 GB of memory, but my workstation has 128 GB of RAM. If I had less RAM, I would start with the gpt-oss-20b model and go from there. Look for MoE models, as they are more efficient to run.

      • R_D_Olivaw 3 days ago

        Following.

        • LTL_FTC a day ago

          I started off using gpt-oss-120b on CPU. It uses about 60-65 GB of memory, but my workstation has 128 GB of RAM. If I had less RAM, I would start with the gpt-oss-20b model and go from there. Look for MoE models, as they are more efficient to run.

  • 44za12 4 days ago

    This is the way. I actually mapped out the decision tree for this exact process and more here:

    https://github.com/NehmeAILabs/llm-sanity-checks

    • homeonthemtn 3 days ago

      That's interesting. Is there any kind of mapping to these respective models somewhere?

      • 44za12 3 days ago

        Yes, I included a 'Model Selection Cheat Sheet' in the README (scroll down a bit).

        I map them by task type:

        Tiny (<3B): Gemma 3 1B (could try 4B as well), Phi-4-mini (good for classification).

        Small (8B-17B): Qwen 3 8B, Llama 4 Scout (good for RAG/extraction).

        Frontier: GPT-5, Llama 4 Maverick, GLM, Kimi.

        Is that what you meant?

  • gandalfar 4 days ago

    Consider using z.ai as a model provider to further lower your costs.

    • DANmode 4 days ago

      Do they, or any other providers, offer any improvement on the often-chronicled variability in quality/effort from the two major services, e.g. during peak hours?

    • tehlike 4 days ago

      This is what I was going to suggest too.

    • viraptor 4 days ago

      Or MiniMax - the M2.1 release didn't make a big splash in the news, but it's really capable.

    • ok_orco 3 days ago

      Will take a look!

  • deepsummer 4 days ago

    As much as I like the Claude models, they are expensive, and I wouldn't use them to process large volumes of data. Gemini 2.5 Flash-Lite is $0.10 per million tokens, and Grok 4.1 Fast is really good at only $0.20 per million. Either will work just as well for most simple tasks.

  • DeathArrow 4 days ago

    You can also try using cheaper models like GLM, DeepSeek, or Qwen, at least partially.

  • toxic72 3 days ago

    Consider this for additional cost savings if local doesn't interest you: https://docs.cloud.google.com/vertex-ai/generative-ai/docs/m...

  • dezgeg 4 days ago

    Are you also adding the proper prompt cache control attributes? I think the Anthropic API still doesn't do it automatically.
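
    For reference, a minimal sketch of what that looks like with the Anthropic Python SDK: you tag the big, reused part of the prompt with a cache_control block so repeated calls can reuse it (the model name and prompt text below are placeholders, and caching only applies once the cached prefix exceeds a minimum token count):

        # Sketch: explicit prompt caching with the Anthropic Python SDK.
        # Model name and prompt text are placeholders.
        import anthropic

        client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

        long_instructions = "...the big, reused classification prompt goes here..."

        resp = client.messages.create(
            model="claude-haiku-4-5",  # placeholder; use whichever model you're on
            max_tokens=256,
            system=[{
                "type": "text",
                "text": long_instructions,
                "cache_control": {"type": "ephemeral"},  # cache this shared prefix
            }],
            messages=[{"role": "user", "content": "Exports silently fail on large files."}],
        )
        print(resp.content[0].text)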

    • ok_orco 2 days ago

      No, I need to look into this!

  • arthurcolle 4 days ago

    Can you discuss the architecture a bit more?

    • ok_orco 4 days ago

      Pretty straightforward. Sources dump into a queue throughout the day, regex filters the obvious junk ("lol", "thanks", bot messages never hit the LLM), then everything gets batched overnight through Anthropic's Batch API for classification. Feedback gets clustered against existing pain points or creates new ones.

      Most of the cost savings came from not sending stuff to the LLM that didn't need to go there, plus the batch API is half the price of real-time calls.
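
      In code it's roughly this shape. This is a simplified sketch, not the production version; the junk patterns, custom_id scheme, and model name are stand-ins, and the batch submission uses Anthropic's Message Batches API:

          # Simplified sketch of the overnight pass: regex-filter obvious junk,
          # then submit the rest as one job to Anthropic's Message Batches API.
          # Patterns, model name, and custom_id scheme are stand-ins.
          import re
          import anthropic

          JUNK = re.compile(r"^\s*(lol|thanks|thank you|\+1)[\s.!]*$", re.IGNORECASE)

          def worth_classifying(text: str) -> bool:
              # Bot-message detection is omitted here; it's another cheap pre-filter.
              return len(text.strip()) > 10 and not JUNK.match(text)

          def submit_nightly_batch(items: list[dict]) -> str:
              """items: [{'id': ..., 'text': ...}, ...] pulled off the day's queue."""
              client = anthropic.Anthropic()
              requests = [
                  {
                      "custom_id": str(item["id"]),
                      "params": {
                          "model": "claude-haiku-4-5",  # stand-in model name
                          "max_tokens": 128,
                          "system": "Classify this feedback into a pain-point category.",
                          "messages": [{"role": "user", "content": item["text"]}],
                      },
                  }
                  for item in items
                  if worth_classifying(item["text"])
              ]
              batch = client.messages.batches.create(requests=requests)
              return batch.id  # poll this id later and fetch results when the batch finishes

      The clustering against existing pain points happens in a separate step once the batch results come back.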