9 comments

  • lucb1e 11 hours ago

    The wall of text here, as well as the wall of text on the submission, keeps using the word Tauri without saying what it is. Wikipedia says the Tauri were Crimean settlers. Think I found it now: https://tauri.app

    That page says that "By using the OS’s native web renderer, the size of a Tauri app can be as little as 600KB." Sounds like an alternative to Electron, basically.

    • bryanhogan 11 hours ago

      It's an alternative to Electron and Capacitor[1] now, turning a web app into a more "native" application for both mobile and desktop systems.

      [1]: https://capacitorjs.com/

      • WD-42 11 hours ago

        The only thing that makes it more native than Electron is that it uses the system's webview instead of shipping an entire Chrome/CEF. You write Rust for Tauri's backend, which is nice.
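
        For anyone unfamiliar, the backend side looks roughly like this (a minimal sketch along the lines of Tauri's own quickstart example; the command name is just illustrative):

          // A command the webview frontend can call from JavaScript.
          #[tauri::command]
          fn greet(name: &str) -> String {
              format!("Hello, {}!", name)
          }
          fn main() {
              tauri::Builder::default()
                  // Register the command so the frontend can reach it.
                  .invoke_handler(tauri::generate_handler![greet])
                  .run(tauri::generate_context!())
                  .expect("error while running tauri application");
          }

        The frontend then calls it through the invoke() helper from @tauri-apps/api.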

  • j1elo 11 hours ago

      What's next:
      - WebRTC integration for video calls
      - Built-in barcode/QR code scanning
      - Face detection hooks
    
    Those sound to me like end-application features that should be independent of this project, which is just a HAL for camera access. Conflating the two in the same codebase seems to raise the bar incredibly high for the scope of this one, so I'm not sure how that will work out. WebRTC alone is a very complicated beast, in which camera acquisition is just a very small part.
  • pzo 10 hours ago

    For cross-platform desktop-only use, I think QtMultimedia is still the most feature-rich and the best option.

    If you only need mobile (iOS / Android), then react-native-vision-camera is probably the best bet.

    If you only need simple camera access, then OpenCV.
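
    For reference, "simple camera access" there is on the order of a few lines. A rough sketch with the Rust opencv crate (untested; camera index 0 is just the default device):

      use opencv::{core::Mat, prelude::*, videoio};
      fn main() -> opencv::Result<()> {
          // Open the default camera and grab a single frame.
          let mut cam = videoio::VideoCapture::new(0, videoio::CAP_ANY)?;
          let mut frame = Mat::default();
          cam.read(&mut frame)?;
          println!("captured frame: {}x{}", frame.cols(), frame.rows());
          Ok(())
      }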

  • ge96 13 hours ago

    idk why, but when I see a lot of emojis in readmes I think vibecode

    • foresterre 12 hours ago

      And also a lot of (unordered) lists. However, it only took one more step to verify this: the code is two commits, both of which have "(...) and claude committed" in their commit tag and " Generated with Claude Code" in their commit message. This is not intended as a judgement, more a neutral observation.

      I thought the "demo_crabcamera.py" was funny with respect to vibecoding: it's not a demo (I already found it odd for a Tauri app to be demoed via a Python script); it produces the description text posted by OP.

      On a more serious note, it all looks reasonably complete, like most AI-generated projects, but it also looks like an almost one-shot generated project which hasn't seen enough use to mature. This becomes even more apparent when you look a bit deeper at the code, where there are unfinished methods like:

        pub fn get_device_caps(device_path: &str) -> Result<Vec<String>, CameraError> {
            // This would typically query V4L2 capabilities
            // For now, return common capabilities
            Ok(vec![
                "Video Capture".to_string(),
                "Streaming".to_string(),
                "Extended Controls".to_string(),
            ])
        }
      
      
      The project states it builds on nokhwa for the real camera capture capabilities, but then conditionally includes platform libraries which seem to be used only in tests (meaning they could have been dev-dependencies), at least in the case of v4l, going by GitHub's search within the repo.
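
      For contrast, if the v4l dependency were actually used, the stub above might look something like this (a sketch from memory of the v4l crate's API, so the exact names may need checking; it returns io::Result instead of the repo's CameraError, whose constructors aren't shown here):

        use std::io;
        use v4l::capability::Flags;
        use v4l::Device;
        fn get_device_caps(device_path: &str) -> io::Result<Vec<String>> {
            // Ask the driver for its capability flags instead of returning hardcoded strings.
            let device = Device::with_path(device_path)?;
            let caps = device.query_caps()?;
            let mut out = Vec::new();
            if caps.capabilities.contains(Flags::VIDEO_CAPTURE) {
                out.push("Video Capture".to_string());
            }
            if caps.capabilities.contains(Flags::STREAMING) {
                out.push("Streaming".to_string());
            }
            Ok(out)
        }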

      Perhaps it all works, but it does feel a bit immature, and it does come with the risks of AI-generated code.

    • WD-42 11 hours ago

      The wall of text that doesn’t actually say that much is a dead giveaway.

  • auraham 2 days ago

    Thanks for sharing!