Ask HN: What Online LLM / Chat do you use?

11 points | by ddxv a day ago

15 comments

  • Kuyawa 9 hours ago

    Deepseek all the way, for both chat and coding. I can't complain, as their pricing is the least expensive of all: I started with just $2, used it all through February, and still have $1.95 left in my balance.

    Working on my first AI app for medical advice, and I've been using it myself extensively, to the point of addiction. It's magical.

    https://mediconsulta.net

  • mongrelion 16 hours ago

    Through my Kagi subscription I get access to quite a few models [1] but I tend to rely on Qwen3 (fast) for quick questions and Qwen3 (reasoning) when I want a more structured approach, for example, when I am researching a topic.

    I have tried the same approach with Kimi K2.5 and GLM 5, but I keep going back to Qwen3.

    I also have access to Perplexity which is quite decent to be honest, but I prefer to keep everything in Kagi.

    1: https://help.kagi.com/kagi/ai/assistant.html#available-llms

  • waynerisner 21 hours ago

    DeepSeek (reasoning) and Gemini (multimodal) have been useful for me — especially when I want stronger pushback or a different angle. What are you hoping to get that you’re not getting from the usual set?

    • ddxv 20 hours ago

      I recently deleted my ChatGPT account and was just looking around at what the other options are. I kinda like fast models, but Grok / Perplexity are behind a perpetual Cloudflare check for me. I'm looking for something that doesn't spend forever reasoning out the answer to some basic quick question.

      It's interesting: as I type that out, it makes me wonder why I don't just go back to the search engine, since its AI summaries have been getting better.

      Finally, I also like the longer reasoning when I have a tough question, and I usually copy-paste it around to various models and compare their responses.

      • waynerisner 11 hours ago

        I’ve had similar friction experiences, especially when reasoning-heavy modes take longer or get retried. That puts me off too.

        On the search engine comparison: do you feel LLMs reduce cognitive load because they maintain context, whereas search requires more manual synthesis?

        Also curious — do you think the frustration is mostly with the model itself, or with the serving/infrastructure layer (Cloudflare, routing, batching, etc.) around it? Both comments seem to point at that layer in different ways.

    • coldtrait 21 hours ago

      Do you customize or personalize these at all? Or use them just out of the box? I use Gemini for image generation and as an assistant through my Android, although not frequently. As a chat assistant, the online Google UI is a turnoff for me personally.

      • waynerisner 11 hours ago

        For tone, I prefer efficient — higher signal-to-noise.

        In custom instructions I’ll sometimes write something like “tell it like it is.” Interestingly, some models then start with “okay, I’ll tell you like it is…” which makes me wonder whether that’s psychological framing, operational behavior, or both.

        Curious why you asked — have you found customization meaningfully changes performance?

  • treesknees 20 hours ago

    I’ve been using the Kagi Assistant with various models. It’s more of a glorified search summarizer but it’s essentially free with my existing subscription.

  • nicbou 15 hours ago

    ChatGPT just because, then Gemini because it loads much faster. I don't have to wait a few seconds for the web UI to be ready. The frequent ChatGPT downtimes are what got me to look for alternatives.

  • muzani 14 hours ago

    Mistral is still a lot of fun, especially with the ChatGPT/Claude voice becoming too common. Openrouter gives you access to it for free, or you can download it to LM Studio.

  • allinonetools_ 21 hours ago

    I tend to rotate between a few depending on the task — some are better at summaries, others handle logic better. One thing that really helps is trying smaller niche models for specific use-cases instead of always defaulting to the big ones.

    • ddxv 20 hours ago

      Smaller niche models are ones you're running yourself locally?

  • Fine-Palp-528 17 hours ago

    Lumo by Proton is pretty performant. Mostly, it has an epic free tier and I dig their stance on privacy-first AI.

  • conception 21 hours ago

    Kimi models are great. Also really good at making clock faces.

  • constantinum 8 hours ago

    Deepseek and Qwen Chat