17 comments

  • otekengineering 4 days ago

    This is really cool! Development environments that facilitate low-friction use of many models and providers are awesome.

    Here's something similar that I've been working on - https://omnispect.dev/

    • hivetechs 3 days ago

      Nice, great minds think alike! Not having vendor lock-in and being able to use multiple providers and models gives us the most flexibility. The investment going into the CLI tools and what they can do is also amazing; I love using them.

  • ebbi 4 days ago

    The 'multi-model consensus' feature actually looks very useful! I'm going to give this a go.

    A question on OpenRouter - is it just a place to consolidate the various AI models through one billing platform, or does it do more than that? And are the costs slightly more as they take a cut in between?

    • joshstrange 4 days ago

      > is it just a place to consolidate the various AI models through one billing platform, or does it do more than that

      You can easily switch models, use the cheapest provider (especially for open models), and you don't have to reach certain spend "tiers" to unlock higher rate limits like you might on OpenAI's or Anthropic's direct offerings.

      > And are the costs slightly more as they take a cut in between?

      5% more: you buy credits upfront and pay a 5% fee on top. Aside from that you pay the normal listed prices (which have always matched the direct providers, AFAIK).
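
      To make the model-switching point concrete: OpenRouter exposes an OpenAI-compatible chat completions endpoint, so changing vendors is usually just a different model string. A minimal TypeScript sketch (the model IDs are only examples; check OpenRouter's catalog for current names):

        // npm install openai
        import OpenAI from "openai";

        // Point the standard OpenAI SDK at OpenRouter instead of api.openai.com.
        const client = new OpenAI({
          baseURL: "https://openrouter.ai/api/v1",
          apiKey: process.env.OPENROUTER_API_KEY,
        });

        async function ask(model: string, prompt: string): Promise<string> {
          const res = await client.chat.completions.create({
            model, // e.g. "openai/gpt-4o-mini" or "anthropic/claude-3.5-sonnet"
            messages: [{ role: "user", content: prompt }],
          });
          return res.choices[0]?.message?.content ?? "";
        }

        // Same code path, different vendors: only the model ID changes.
        async function compare(prompt: string) {
          const [a, b] = await Promise.all([
            ask("openai/gpt-4o-mini", prompt),
            ask("anthropic/claude-3.5-sonnet", prompt),
          ]);
          console.log({ a, b });
        }

        compare("Summarize the tradeoffs of SQLite vs Postgres.").catch(console.error);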

      • KronisLV 4 days ago

        Note that you also might need to think a little bit about caching: https://openrouter.ai/docs/guides/best-practices/prompt-cach...

        Depending on how the context grows, it can matter quite a bit!
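
        For Anthropic models you currently have to mark the cache breakpoints yourself (if I'm reading those docs right); roughly this shape, where the big unchanging prefix gets a cache_control marker and only the short user turn varies per request:

          // Sketch of an Anthropic-style cache breakpoint through OpenRouter's
          // OpenAI-compatible endpoint (field names per the prompt-caching docs above).
          const BIG_STATIC_CONTEXT = "...large system prompt / repo docs that never change...";

          async function askWithCache(question: string) {
            const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
              method: "POST",
              headers: {
                Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
                "Content-Type": "application/json",
              },
              body: JSON.stringify({
                model: "anthropic/claude-3.5-sonnet", // example model ID
                messages: [
                  {
                    role: "system",
                    content: [
                      {
                        type: "text",
                        text: BIG_STATIC_CONTEXT,
                        // Everything up to this marker is eligible for the provider's prompt cache.
                        cache_control: { type: "ephemeral" },
                      },
                    ],
                  },
                  // Only this part changes per request, so most input tokens can be cache hits.
                  { role: "user", content: question },
                ],
              }),
            });
            return (await res.json()).choices[0].message.content;
          }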

        • hivetechs 4 days ago

          Great callout! Yes, I've tried to follow these to make Consensus compliant with OpenRouter's prompt-caching best practices.

      • ebbi 4 days ago

        Appreciate the reply mate, thank you.

    • hivetechs 4 days ago

      What's great about OpenRouter is that you have access to all providers and models, and they do the work of standardizing the interface. Our new HiveTechs Consensus IDE configures 8 profiles for your AI conversations, each using its own LLM from OpenRouter, plus unlimited custom profiles: you pick the provider and LLM from a list and name the profile. We also have our own built-in HiveTechs CLI that lets you use any LLM from OpenRouter, with the model list updated daily. So the moment a new model drops, you can test it out without waiting for it to show up in your other favorite apps.
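
      Conceptually a profile is just a name bound to an OpenRouter model ID plus a few generation settings. Purely illustrative shape (not the actual on-disk format, and the model IDs are only examples):

        // Hypothetical profile shape, for illustration only.
        interface ConsensusProfile {
          name: string;          // what you see in the profile picker
          model: string;         // any OpenRouter model ID
          temperature?: number;  // optional per-profile generation settings
        }

        const profiles: ConsensusProfile[] = [
          { name: "Frontend",   model: "google/gemini-2.5-pro" },
          { name: "Backend",    model: "anthropic/claude-3.5-sonnet", temperature: 0.2 },
          { name: "Cheap bulk", model: "meta-llama/llama-3.1-70b-instruct" },
        ];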

  • infinet 4 days ago

    My apologies for the digression, but it reminds me of a post I saw a long time ago where a guy installed every antivirus/antimalware program he could find on a Windows machine. It started an antivirus civil war, and Windows fell into a coma within seconds.

    • exe34 4 days ago

      I haven't admined Windows for 16 years, but back in the day I had this conspiracy theory: you'd install antivirus A and find some viruses, then install antivirus B and find some more, but when you went back to antivirus A, you'd find a couple more - as if the free versions were installing their own viruses. It might just have been because I was using bootleg copies.

  • reallyaaryan 3 days ago

    Which IDE did you use to build this IDE?

  • xnx 4 days ago

    I started using Gemini and see no need for other models.

    • hivetechs 4 days ago

      Hey, I fully understand. A model or CLI like Gemini releases a new version and it feels like a new place to call home. However, in this period of AI growth, each provider's new advancements are a reason to switch: today it's Gemini for you, perhaps next week it's Claude or OpenAI. With HiveTechs Consensus you have every leading provider available at all times, so use the one you love and compare the others any time. You may discover that Gemini excels at frontend while Claude's latest model excels at backend, reducing your development time.

  • vivzkestrel 4 days ago

    Is this a VS Code fork? How compatible are existing VS Code extensions with this? What is your tech stack?

    • hivetechs 4 days ago

      No, this is not a fork; I built it from scratch, and it's not intended to be used with VS Code extensions. It's an Electron app.

        Desktop Framework
        - Electron - Desktop app with main/renderer process architecture
        - TypeScript - Primary language (strict mode)

        Frontend/UI
        - Monaco Editor - VS Code-style code editing
        - HTML/CSS - UI rendering
        - WebSockets - Real-time communication with backend

        Backend Services
        - Node.js - Runtime
        - Express - Memory Service API server
        - SQLite - Local database for memory persistence
        - Cloudflare D1 - Remote sync for memory backup
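
      Roughly how those pieces fit together (a heavily simplified sketch, not the actual Consensus code; the port, message shapes, and the better-sqlite3 binding are illustrative choices):

        // Main-process side: Express "Memory Service" API plus a WebSocket
        // channel for real-time updates to the renderer (the Monaco-based UI).
        import express from "express";
        import Database from "better-sqlite3";
        import { WebSocketServer } from "ws";

        const db = new Database("memory.db"); // local SQLite persistence
        db.exec("CREATE TABLE IF NOT EXISTS memory (id INTEGER PRIMARY KEY, note TEXT)");

        const app = express();
        app.use(express.json());

        // The renderer reads/writes conversation memory over plain HTTP.
        app.post("/memory", (req, res) => {
          const info = db.prepare("INSERT INTO memory (note) VALUES (?)").run(req.body.note);
          res.json({ id: info.lastInsertRowid });
        });

        const server = app.listen(7777); // illustrative local port

        // Real-time events (e.g. streaming model output) get pushed over WebSockets.
        const wss = new WebSocketServer({ server });
        wss.on("connection", (ws) => {
          ws.send(JSON.stringify({ type: "hello", msg: "backend ready" }));
        });
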
      • vivzkestrel 4 days ago

        Interesting, did you ever consider building it with Tauri?

        • hivetechs 3 days ago

          Hey there, I actually started with Tauri but eventually ran into performance issues, and I found Electron more mature and easier to work with, at least for me.