34 comments

  • _bfjg 4 days ago

    Great to see others working on the problem of validating UI.

    We are also building a Web QA agent at https://kodefreeze.com. We are focused on small and medium-sized companies and are offering free usage during our trial period!

    • MrThoughtful 4 days ago

      When running a test all I see is:

          Error loading video
          Please try refreshing the page
      
      No matter how often I refresh, that is all I get.

      Maybe you need more QA?

      When I open my browser console, I see this:

          Capturing error: Error: WHEP request failed: 500 - {"message":"\"message\" is required!","error":"Server Error"}
      • kodefreeze 3 days ago

        Sorry you're running into this error. Are you seeing it on the marketing website, or somewhere in the app?

    • nkmnz 4 days ago

      Will certainly try this out! FYI: the pricing table is difficult to parse when reading it on mobile.

      • kodefreeze 3 days ago

        Thanks for letting us know, we'll fix it.

        Please let us know if you have any feedback!

  • mritchie712 5 days ago

    I've been looking for this exact thing. A couple questions:

    Are your agents good at testing other agents? e.g. I want your agent to ask our agent a few questions and complete a few UI interactions with the results.

    How do you handle testing onboarding flows? e.g. I want your agent to create a new account in our app (https://www.definite.app/) and go through the onboarding flow (e.g. add Stripe and Hubspot as integrations).

    • ttamslam 5 days ago

      > Are your agents good at testing other agents? e.g. I want your agent to ask our agent a few questions and complete a few UI interactions with the results.

      I'd say this is one of our strong suits: chat UIs tend to be easy for browser agents to navigate, and the LLM-as-a-judge offers pretty good feedback on chat quality that can inform later actions. (I'd be remiss not to mention, though, that a good LLM eval framework like Braintrust is probably the best first line of defense.)

      > How do you handle testing onboarding flows?

      We can step through most onboarding flows if you start from a logged-out state and give the agent the context it'll need (e.g. a Stripe test card). That said, setting up integrations that require multi-page hops is still a pain point in our system and leaves a lot to be desired.

      Would love to talk more about your specific case and see if we can help! founders@propolis.tech

    • kodefreeze 3 days ago

      I just ran this test with our web QA agent (kodefreeze.com). It was able to test creating an account until it reached the screen that requires email confirmation.

      Support for receiving email and custom actions is on our roadmap, but we'd love to hear whether getting this far would be valuable to you. The test used the email test@kodefreeze.com.

  • cloudflare728 5 days ago

    Can it find broken UI?

    A human can find and report broken UI easily by using common sense. But even though it's simple for a human, a computer has no common sense - and I say this as a machine learning expert. I tried, and mostly failed, to build a broken-UI detector at my previous company. They had an automated plugin upgrade process that periodically broke the UI.

    I tried to detect it by taking a long screenshot, letting you select an image as the working version, and then diffing the two images. It kind of worked, but not satisfactorily.
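
    In case it's useful to anyone, the approach was roughly the sketch below (a minimal reconstruction using Pillow, not my actual code; the noise and change thresholds are guesses):

        from PIL import Image, ImageChops

        def diff_ratio(baseline_path, current_path):
            """Fraction of pixels that differ between two full-page screenshots."""
            baseline = Image.open(baseline_path).convert("RGB")
            current = Image.open(current_path).convert("RGB")
            if baseline.size != current.size:
                # Page height often drifts a little; normalize before comparing.
                current = current.resize(baseline.size)
            diff = ImageChops.difference(baseline, current)
            # Ignore tiny per-channel deltas (antialiasing, compression noise).
            changed = sum(1 for px in diff.getdata() if max(px) > 16)
            return changed / (baseline.width * baseline.height)

        # Guessed threshold: flag if more than 2% of pixels changed.
        if diff_ratio("working.png", "latest.png") > 0.02:
            print("UI may be broken - flag for human review")

    The catch is that a raw pixel diff flags every legitimate content change too, which is a big part of why it never felt satisfactory.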

    • mpapazian 5 days ago

      The agents can definitely detect when something is off, given they're using VLMs. They don't necessarily compare against previous versions; rather, they have opinionated takes on whether something looks broken or off. So - yes!

  • ttamslam 5 days ago

    Hey I'm Matt! Really excited to answer any questions.

    To elaborate a little bit on the "canary" comment --

    For a while at Airtable I was on the infra team that managed the deploy (basically click run, then sit and triage issues for a day). One of my first contributions on the team was adding a new canary-analysis framework that made it easier to catch bugs and roll them back automatically. Two things always bothered me about the standard canary release process:

    1) It necessarily treats some users as lower value, and thus more acceptable to risk exposing bugs to (this makes sense for things like the free tier, etc., but the more you segment out, the less representative and thus less effective your canary is). When every customer interaction matters (as is the case for so many types of businesses), this approach is harder to justify.

    2) Low-frequency / high-impact bugs are really difficult to catch in canary analysis. While it's easy to write metrics that catch glaring drops or spikes, more subtle high-impact regressions are much harder and often require user reports (which we did not factor into our canary). Example: how do you write a canary metric that auto-rolls-back when an enterprise account owner (a small % of overall users) logs in and a broken modal prevents them from interacting with your website? (There's a sketch below of why rate-based gates miss this.)

    I view what we're building at Propolis as an answer to both. I envision a deploy process (very soon) that lets you roll out to simulated traffic and canary on THAT before you actually hit real users (and then do a traditional staged release, etc.).
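
    To make point 2 concrete: a typical canary gate is just an aggregate rate comparison, something like this minimal sketch (illustrative only - the thresholds and sample counts are made up, not our actual Airtable framework):

        def should_rollback(baseline_errors, baseline_total,
                            canary_errors, canary_total,
                            max_relative_increase=0.5, min_samples=500):
            """Naive canary gate: roll back if the canary error rate regresses."""
            if canary_total < min_samples:
                return False  # not enough canary traffic to judge yet
            baseline_rate = baseline_errors / max(baseline_total, 1)
            canary_rate = canary_errors / max(canary_total, 1)
            return canary_rate > baseline_rate * (1 + max_relative_increase)

        # A few broken enterprise logins out of 100k requests won't trip this gate.
        assert not should_rollback(50, 100_000, 53, 100_000)

    A handful of broken enterprise logins is statistical noise against the baseline rate, so the gate never fires - that's the gap simulated, high-coverage traffic is meant to fill.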

    • bfeynman 5 days ago

      seems like you are misappropriating what canaries are for... they are designed to be lightweight and shallow... hence the name and the whole analogy; canaries were never meant to determine whether a mine was structurally unsafe, etc.

      • svnt 4 days ago

        I don’t see how they have it wrong?

        Canaries are lightweight and shallow once they exist. Building a canary from the ground up is still beyond us, but if you don't want to kill an actual bird, that is pretty much the only way to go.

  • wyatthoran 4 days ago

    Oh this is sick! I've been complaining about this exact problem for years. The "canary without real users" idea is brilliant - way better than just throwing your free-tier users under the bus and hoping for the best.

    The thing that really got me was catching bugs in non-deterministic output. We've been struggling with this on LLM features where traditional assertions just don't work. Having agents actually judge quality instead of looking for exact matches is such an obvious solution in hindsight.
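
    For anyone who hasn't seen the pattern: the judge basically replaces an exact-match assertion with a rubric-graded model call, something like this sketch (using the OpenAI client; the model name and rubric are placeholders, and I have no idea how Propolis actually implements theirs):

        from openai import OpenAI

        client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

        def judge(output: str, rubric: str) -> bool:
            """Grade non-deterministic output against a rubric instead of exact-matching."""
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder model
                messages=[
                    {"role": "system",
                     "content": "You are a strict QA judge. Reply with PASS or FAIL only."},
                    {"role": "user",
                     "content": f"Rubric: {rubric}\n\nOutput under test:\n{output}"},
                ],
            )
            return resp.choices[0].message.content.strip().upper().startswith("PASS")

        # Passes for any phrasing that satisfies the rubric, not one exact string.
        assert judge("Order #1042 confirmed - it ships Tuesday.",
                     "Must confirm the order and state a shipping date")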

    Quick question though - how do you handle auth flows with MFA or OAuth redirects?

  • 8note 5 days ago

    Fraud/abuse/compliance is a good use case for this kind of thing - an abuse vector is kind of like a bug, except that the system does what it's expected to do.

    Testing for abuse stuff I've always found quite difficult, since to work well you need to both create some real resources (so you can delete/clean them up) and create a new test identity, since your abuse detection system should be deny-listing the bad actors it finds. The difficulty is that those sessions probably want to stay open for about a week, so they can process both payments and refunds.

    Can the agents check their email? Other notification methods?

    • ttamslam 5 days ago

      This is interesting. We've shied away a bit from security-ish use cases since they're outside our core competencies. Do you have examples of tools that exist today for catching things like that? Or is it totally ad hoc?

      > Can the agents check their email? Other notification methods?

      Yes to email (for paying customers, agents spin up with unique addresses); no to other notifications, but as soon as a paying customer has a use case for SMS, etc., we'll build it.

      • dfsegoat 5 days ago

        OTP-protected flow verification

    • dfsegoat 5 days ago

      Really good callout re: email and other 'side flows' - hopefully there is integration with something like Mailosaur.

      https://mailosaur.com/email-testing
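
      For reference, fetching an OTP through their Python SDK is roughly the shape below (an untested sketch from memory; the API key, server ID, test address, and the 6-digit code regex are all placeholders):

          import re

          from mailosaur import MailosaurClient
          from mailosaur.models import SearchCriteria

          mailosaur = MailosaurClient("YOUR_API_KEY")        # placeholder key
          criteria = SearchCriteria()
          criteria.sent_to = "agent-1@abc123.mailosaur.net"  # placeholder test inbox

          # Blocks until a matching message arrives (or the wait times out).
          message = mailosaur.messages.get("abc123", criteria)
          match = re.search(r"\b\d{6}\b", message.text.body)  # assumes a 6-digit code
          print(match.group() if match else "no OTP found")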

  • webprofusion 4 days ago

    This is a great idea and definitely useful. It would make particular sense if it could take the build artifacts of recent GitHub Actions runs and test them comprehensively to green-light a release. You can already pretty much do this with the standard agent tools and a set of test prompts, so anything that makes it easier and repeatable is good.

    The pricing sounds quite enterprisey; the risk there is that people will tend toward building their own.

  • plasma 5 days ago

    Neat! How do you handle state changes during tests? For example, in a todo app the agents are (likely) working on the same account in parallel, or on a subsequent run some test data has been left behind, or the data a test expects is no longer set up.

    I'm curious whether you'd also move into API testing using the same discovery/attempt approach.

    • mpapazian 5 days ago

      You're spot on - this is one of our biggest challenges! What we're building to address it includes a memory layer that the agents have access to, so state changes become part of their knowledge and are accounted for while conducting a test.

      They're also smart enough not to be frazzled by things having changed; they still have their objectives and will work out whether the functionality is there or not. The beauty of non-determinism!

  • tarasyarema 5 days ago

    Looks interesting! How would it work in terms of latency? Also, would you be able to run it in CI, or even against ephemeral envs? In my experience it is key to be able to run it on each change.

  • not-chatgpt 5 days ago

    Been looking for a solution exactly like this, but I struggle to see how it is different from spinning up 10 Atlas tabs with a two-sentence prompt.

    • mpapazian 5 days ago

      You could definitely do that and get some good results! But if you want a repeatable process with detailed bug reports (including logs, reasoning, etc.) and a large enough search area with agents that can continuously build an understanding of your app - that's us :)

      Let's chat - founders@propolis.tech

  • ekarabeg 5 days ago

    This seems really interesting. I tried running a swarm on my landing page but didn't get a completion email. I'll try it again, though!

    • mpapazian 5 days ago

      Hi! Looking at your swarm results, you might not have given the swarm login credentials to use, which is why most of the runs are failing out. Please feel free to try again and give them access.

  • orliesaurus 5 days ago

    Does it output playwright scripts?

    • ttamslam 5 days ago

      We use Playwright for interacting with the browser, so while it's not exposed by default, we do support bulk-exporting tests as Playwright scripts - either to move off our platform, or for customers who want to run deterministic versions of the tests on their own infra (you can also run them on ours!).
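
      For a sense of shape, an exported test ends up as a plain Playwright script, roughly like this (illustrative only - the URL, labels, and credentials are made up, not actual output):

          from playwright.sync_api import sync_playwright, expect

          with sync_playwright() as p:
              browser = p.chromium.launch()
              page = browser.new_page()
              page.goto("https://example.com/login")             # made-up URL
              page.get_by_label("Email").fill("qa@example.com")
              page.get_by_label("Password").fill("not-a-real-pw")
              page.get_by_role("button", name="Sign in").click()
              # Assert the flow landed where the agent expected.
              expect(page.get_by_role("heading", name="Dashboard")).to_be_visible()
              browser.close()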

  • mhb 5 days ago

    Good video, but it looks like it plays twice. Should be ~3.5 minutes...

    • mpapazian 5 days ago

      Ah, good catch! Fixed :)

  • rvz 5 days ago

    Looks like a great idea. Does this fully automate QA testing of websites, including removing the human in the loop during testing?

    Once again, great product.

    • mpapazian 5 days ago

      Great question! The swarm takes a first pass to generate tests and can continuously add more as it runs again over time.

      On the off chance it misses specific tests, we have tools that let you build them directly with AI support - either by giving the agents objectives or by dropping in a video of the actions you're taking!

  • GeorgyM 5 days ago

    Sounds interesting. Does this handle mobile as well?

    • mpapazian 5 days ago

      We don't handle mobile yet, but we might get to it at a future date!