Show HN: We open sourced Vapi – UI included

(github.com)

8 points | by pritesh1908 4 days ago ago

8 comments

  • KomoD 4 days ago ago

    That is an extremely misleading title because it made it sound like Vapi was open sourced, not that you just made a clone.

    • pritesh1908 4 days ago ago

      Fair point on the title - should have been clearer. Dograh is an open source alternative to Vapi, not a clone though. Vapi/Retell are closed platforms; this is open source infra you self-host and modify. Like saying n8n is a clone of Zapier because they solve the same problem.

      Same category, but fundamentally different model.

  • a6kme 4 days ago ago

    Hello HN. I am Abhishek, one of the creators and maintainers of Dograh - github.com/dograh-hq/dograh

    Please feel free to ask any question you may have or give us feedback on how we can make it better for you.

    Thanks!

  • ursula1112 3 days ago ago

    Nice work, will checkout. What’s the average end-to-end latency per turn with STT + LLM + TTS in your default stack?

    • a6kme 3 days ago ago

      Hello.

      The latency depends on the models you pick for reasoning. If you colocate the models by self-hosting on GPUs, the latency can be as low as 500-600 ms between bot and user turns. With models like Gemini-2.5-flash, the latency is around 800-1000 ms. The latency can be higher with reasoning and larger models, like gpt-4.1.
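      The quoted numbers can be read as a sum of the serial stages that gate when the bot starts speaking: STT finalization, LLM time-to-first-token, and TTS time-to-first-audio, plus network overhead. A minimal sketch of that budget, with illustrative stage times that are assumptions (not measurements from Dograh):

      ```python
      # Hypothetical per-turn latency budget for an STT -> LLM -> TTS voice pipeline.
      # Stage numbers below are illustrative assumptions, not Dograh measurements.

      def turn_latency_ms(stt_ms: float, llm_ttft_ms: float,
                          tts_first_audio_ms: float,
                          network_ms: float = 0.0) -> float:
          """Sum the serial stages between end of user speech and first bot audio:
          speech-to-text finalization, LLM time-to-first-token,
          TTS time-to-first-audio, plus any network round trips."""
          return stt_ms + llm_ttft_ms + tts_first_audio_ms + network_ms

      # Colocated/self-hosted models (negligible network overhead):
      colocated = turn_latency_ms(stt_ms=150, llm_ttft_ms=250, tts_first_audio_ms=150)

      # Hosted API models, with network round trips added:
      hosted = turn_latency_ms(stt_ms=200, llm_ttft_ms=400, tts_first_audio_ms=200,
                               network_ms=100)

      print(colocated)  # 550.0 -> within the quoted 500-600 ms self-hosted range
      print(hosted)     # 900.0 -> within the quoted 800-1000 ms hosted range
      ```

      The point of the sketch: streaming each stage (so the next stage starts on partial output) is what pushes the total toward the low end; a slower time-to-first-token dominates the budget otherwise.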

  • ajabhish 4 days ago ago

    This looks pretty promising. Are you guys focusing on a specific use case, or any voice AI use case in general?

    • a6kme 4 days ago ago

      Thanks for the kind words @ajabhish.

      We are more of a horizontal platform and can support a wide variety of use cases. We serve large BPO call centres on our managed hosted service for both outbound and inbound calls.

      There are also individual builders working on inbound use cases for personal use, or building their businesses on top of Dograh.
