Gemini 3.1 Flash-Lite: Built for intelligence at scale

(blog.google)

57 points | by meetpateltech a day ago

32 comments

  • vlmutolo a day ago

    Lots of comments about the price change, but Artificial Analysis reports that 3.1 Flash-Lite (reasoning) used fewer than half the tokens of 2.5 Flash-Lite (reasoning).

    This will likely bring the cost below 2.5 flash-lite for many tasks (depends on the ratio of input to output tokens).

    That said, AA also reports that 3.1 FL was 20% more expensive to run for their complete Intelligence index benchmark.

    The overall point is that cost is extremely task-dependent. It doesn't work to just compare per-token prices: reasoning can burn enormous numbers of tokens, reasoning-token usage varies by both task and model, and input/output ratios vary by task too.
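    To make that concrete, here is a rough per-task cost sketch. The token counts below are made up for illustration; only the per-million-token prices ($0.10/$0.40 for 2.5 Flash-Lite, $0.25/$1.50 for 3.1 Flash-Lite) come from the thread, and reasoning tokens are assumed to be billed at the output rate.

```python
def task_cost(input_tokens, output_tokens, reasoning_tokens,
              price_in_per_m, price_out_per_m):
    """Dollar cost of one request, with reasoning tokens billed at the output rate."""
    return (input_tokens * price_in_per_m
            + (output_tokens + reasoning_tokens) * price_out_per_m) / 1e6

# Illustrative task: 2k input, 500 output, with 3.1 assumed to use
# half the reasoning tokens of 2.5 (per the Artificial Analysis claim).
old = task_cost(2000, 500, 8000, 0.10, 0.40)   # 2.5 Flash-Lite -> 0.0036
new = task_cost(2000, 500, 4000, 0.25, 1.50)   # 3.1 Flash-Lite -> 0.00725
print(f"2.5 FL: ${old:.5f}  3.1 FL: ${new:.5f}")
```

    With these particular numbers the new model still costs about 2x more per task despite halving the reasoning tokens; shift the input/output ratio or the reasoning budget and the comparison can flip, which is exactly why per-token pricing alone is misleading.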

    • XCSme 19 hours ago

      > 3.1 Flash-Lite (reasoning)

      (reasoning) doesn't say much. Is it low/med/high reasoning? I ran my own benchmarks, and 3.1 Flash-Lite on high costs A LOT: https://aibenchy.com/compare/google-gemini-3-1-flash-lite-pr...

      Do not use 3.1 Flash-Lite with HIGH reasoning: it reasons for almost the max output size, and you can quickly rack up millions of reasoning tokens in just a few requests.

      • vlmutolo 16 hours ago

        Wow, that’s very interesting. I wish more benchmarks were reported along with the total cost of running that benchmark. Dollars per token is kind of useless for the reasons you mentioned.

    • msp26 20 hours ago

      many tasks don't need any reasoning

  • sync a day ago

    Unfortunate, significant price increase for a 'lite' model: $0.25 IN / $1.50 OUT vs. Gemini 2.5 Flash-Lite $0.10 IN / $0.40 OUT.

  • k9294 a day ago

    You can test Gemini 3.1 Lite transcription capabilities in https://ottex.ai — the only dictation app supporting Gemini models with native audio input.

    We benchmarked it for real-life voice-to-text use cases:

                    <10s    10-30s   30s-1m    1-2m    2-3m
      Flash         2548     2732     3177     4583    5961
      Flash Lite    1390     1468     1772     2362    3499
      Faster by    1.83x    1.86x    1.79x   1.94x   1.70x
    
      (latency in ms, median over 5 runs per sample, non-streaming)
    
    Key takeaways:

    - 1.8x faster than Gemini 3 Flash on average

    - ~1.4 sec transcription time for short to medium recordings

    - ~$0.50/mo for heavy users (10h+ transcription)

    - Close to SOTA audio understanding and formatting instruction following

    - Multilingual: one model, 100+ languages

    Gemini is slowly making $15/month voice apps obsolete.

    • simianwords a day ago

      You know what would be great? A light weight wrapper model for voice that can use heavier ones in the background.

      That much is easy, but what if you could also speak to and interrupt the main voice model and keep giving it instructions? Like speaking to customer support, but instead of being put on hold you can ask several questions and get live updates.

      • k9294 a day ago

        It's actually a nice idea: an always-on micro AI agent with voice-to-text capabilities that listens and acts on your behalf.

        Actually, I'm experimenting with this kind of stuff and trying to find a nice UX to make Ottex a voice command center - to trigger AI agents like Claude, open code to work on something, execute simple commands, etc.

    • stri8ted a day ago

      Can you show some WER comparisons against other ASR models? Especially for non-English.

      • k9294 a day ago

        I've been experimenting with Gemini 3.1 Flash Lite and the quality is very good.

        I haven't found official benchmarks yet, but you can find Gemini 3 Flash word error rate benchmarks here: https://artificialanalysis.ai/speech-to-text/models/gemini — they are close to SOTA.

        I speak daily in both English and Russian and have been using Gemini 3 Flash as my main transcription model for a few months. I haven't seen any model that provides better overall quality in terms of understanding, custom dictionary support, instruction following, and formatting. It's the best STT model in my experience. Gemini 3 Flash has somewhat uncomfortable latency though, and Flash Lite is much better in this regard.

  • zacksiri a day ago

    This is going to be a fun one to play with. I've been conducting tests on various models for my agentic workflow.

    I was just wishing they would make a new flash-lite model, these things are so fast. Unfortunately 2.5-flash and therefore 2.5-flash-lite failed some of my agentic workflows.

    If 3.1-flash-lite can do the job, this solves basically all latency issues for agentic workflows.

    I publish my benchmarks here in case anyone is interested:

    https://upmaru.com/llm-tests/simple-tama-agentic-workflow-q1...

    P.S.: The pricing bump is quite significant, but still stomachable if it performs well.

  • sh4jid a day ago

    The Gemini Pro models just don't do it for me. But I still use 2.5 Flash Lite for a lot of my non-coding jobs: super cheap but great performance. I am looking forward to this upgrade!

    • simianwords a day ago

      same - pro is usually a miss for me.

  • rohansood15 a day ago

    For the last two years, startup wisdom has been that models will keep getting cheaper and better. Claude first, and now Gemini, have shown that's not the case.

    We priced an enterprise contract using Flash 1.5 pricing last summer; today that contract would be unit-economics negative if we used Flash 3. Flash 2.5, and now Flash 3.1 Lite, barely break even.

    I predict open-source models and fine-tuning are going to make a real comeback this year for economic reasons.

    • simianwords a day ago

      Not true. You should measure cost as money spent per task. I would argue this lite version is equivalent to the older Flash.

      • rohansood15 a day ago

        Yeah, but there is a whole world of tasks for which Flash 2.5 Lite was sufficiently intelligent. Given Google's deprecation policy, there will soon be no way to get that intelligence at that price.

        • simianwords a day ago

          I hope they release models at every intelligence level, although thinking effort can be a good alternative.

    • xnx a day ago

      > We priced an enterprise contract using Flash 1.5 pricing last summer,

      Interesting. Flash 1.5 was already a year old at that point.

    • dktp a day ago

      Opus 4.5 became significantly cheaper than Opus 4.1

    • typs a day ago

      I mean, the same level of intelligence does get cheaper. People just care about being on the frontier. But if you track a single level of intelligence, the price just drops and drops.

      • rohansood15 a day ago

        What's the cheaper alternative from Gemini for Flash-2.5-lite level intelligence when it gets deprecated on 22nd July 2026?

  • GodelNumbering a day ago

    That's a 150% increase in input cost and a 275% increase in output cost over the same-sized previous-generation model (2.5-flash-lite).
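    A quick arithmetic check of those figures against the per-million-token prices quoted elsewhere in the thread ($0.10/$0.40 for 2.5 Flash-Lite, $0.25/$1.50 for 3.1 Flash-Lite):

```python
def pct_increase(old, new):
    """Percent increase going from an old price to a new price."""
    return (new - old) / old * 100

print(pct_increase(0.10, 0.25))  # input:  150.0...
print(pct_increase(0.40, 1.50))  # output: 275.0...
```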

  • xnx a day ago

    I'm still clinging to gemini-2.0-flash, which I think is free for API use(?!).

  • msp26 21 hours ago

    What the fuck is this price hike? It was such a nice low-end, fast model. Who needs 10 years of reasoning on a model this size??

    I'm gonna switch some workflows to qwen3.5.

    There are a lot of tasks that benefit from just having a mildly capable LLM, and 2.5 Flash Lite worked out of the box for cheap.

    Can we get flash lite lite please?

    Edit: Logan said: "I think open source models like Gemma might be the answer here"

    Implying that they're not interested in serving lower end Gemini models?

    • zzleeper 14 hours ago

      Are there good open models out there that beat gemini 2.5 flash on price? I often run data extraction queries ("here is this article, tell me xyz") with structured output (pydantic) and wasn't aware of any feasible (= supports pydantic) cheap enough soln :/
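      For context, a minimal sketch of the kind of pydantic-backed extraction described above. The schema, field names, and JSON payload are made up; the point is that whatever model you try (via OpenRouter or otherwise), the contract is the same: the model's raw JSON must validate against your schema.

```python
from pydantic import BaseModel

# Hypothetical extraction schema for "here is this article, tell me xyz".
class ArticleFacts(BaseModel):
    title: str
    companies: list[str]
    sentiment: str

# Stand-in for a model's structured-output response.
raw = '{"title": "Gemini 3.1 Flash-Lite", "companies": ["Google"], "sentiment": "mixed"}'

# Validation raises if the model returned malformed or mistyped JSON.
facts = ArticleFacts.model_validate_json(raw)
print(facts.companies)  # ['Google']
```

      Any provider exposing an OpenAI-compatible structured-output or JSON mode can slot into this pattern, which makes side-by-side price testing straightforward.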

      • kristianp 6 hours ago

        You'll have to try out models on your use case. Openrouter makes that easy.

  • guerython a day ago

    [flagged]

    • zacksiri a day ago

      Yes, my workflows use caching intensively. It's the only way to keep things fast / economical.