9 comments

  • breakyerself 2 days ago ago

    Subtext being that they're fine with autonomous killing and mass surveillance?

    • mhsa 2 days ago ago

      [flagged]

    • nobodywillobsrv a day ago ago

      Of enemies. Of enemies.

      There are probably three modes of safety.

      If you deploy tech to an unknown group, it could be an enemy or a friend, so perhaps you disable its abilities.

      If you deploy tech to friends, you might enable more defense.

      Anthropic's models seem to have unstable safety predicates that have a hard time advising on situations regarding the preservation of a people.

      The huge problem is that humans AND AI both seem to fail at understanding how humans are made and which humans are which.

      You are uniquely responsible for protecting your people. You cannot simply funge their people for your people and pretend that is a fine trade-off. And beyond that, these safety predicates appear to have no notion of diversity, TFR, or lineage baked into them. The models view the descendant of a nearly extinct lineage the same way they view the descendant of a high-TFR lineage.

      You can have ANY kind of opinion on this, but these naive, no-opinion, vague word-based safety predicates are very scary and dangerous.

      I am deeply worried about Anthropic, as I have yet to hear anything that makes me think they have real adults in the room. I would love to be wrong, and so I write here. Please do let me know if there are good things they have written on this.

      • Leynos a day ago ago

        Thinking that this ideology is toxic is not "having no opinion".

      • breakyerself a day ago ago

        The Trump admin regularly speaks of its political adversaries as if they were as bad as foreign adversaries. Why should anyone trust them to limit their surveillance activities to legitimate targets? Mass surveillance is by definition not narrowly targeted at enemies, and "enemies" is not narrowly defined for this regime.

        What I know is that the people in charge of the US government at this time are authoritarian and have no tolerance for dissent or oversight.

        When you have that kind of people in power, the more business leaders kowtow to them, the more power accrues to those people, which they can further leverage to gain even more power.

        Any company that stands its ground in the face of such pressure is doing better, in my book, than the ones that cave. It seems like Anthropic is the only AI company with anything resembling adults. They're standing for some kind of principles: they aren't going around promising to build a quadrillion dollars' worth of data centers in the next several years, and they aren't resorting to advertising right after calling it a sign of desperation.

  • erelong 2 days ago ago

    And so it sounds like Anthropic bungled their deal in retrospect... I wonder what happened?

    • tdeck 2 days ago ago

      Why should we take this tweet at face value? It seems like every one of OpenAI's "values" lasted exactly as long as it took them to outgrow the risk of backlash to whatever they were doing.

  • ChrisArchitect 2 days ago ago

    He posted this three times, so discussion here: https://news.ycombinator.com/item?id=47189650

  • zenon_paradox 2 days ago ago

    [dead]