13 comments

  • jazzpush2 8 hours ago

    This post title is completely misleading.

    From the article: the shooter's behavior triggered internal alarms, and some employees asked leadership to alert authorities, but:

    | OpenAI leaders ultimately decided not to contact authorities.

  • hdtx54 11 hours ago

    You don't have to wait for corporate wonderland or others to take positions or provide you guidance on morality. That comes from within you. If you see something going on and you see the corporate hierarchy acting morally detached (which is how they are trained), report it via whatever anonymous method is available to law enforcement. Don't fool yourself that these kinds of situations have neat resolutions whatever you do. Media attention won't do shit. Think of it like working at a cancer hospital. Sometimes what you do makes a diff. Sometimes whatever you do doesn't. All you have are your values. Once you let them go or allow others to co-opt them, you are the one who has to deal with it.

  • Simulacra 11 minutes ago

    I'm more interested in what employees can read of chat logs. Is everything that's put into OpenAI accessible to employees? These kinds of stories, imo, may dissuade people from LLMs unless there is greater privacy control. But ultimately... to have total privacy... how long until we see people building offline personal LLMs with no guardrails?

  • ahme 10 hours ago

    The article title should read “OpenAI Employees silenced alarms…”, not “raised.”

  • bonesss 6 hours ago

    For all the utility present in LLMs, I see these legal rabbit holes of copyright, harm recommendations, illegal image generation, and destructive failure modes, and feel like the LLM cart is uncomfortably far ahead of the horse.

    Imagining the discussions at Google about staying away from productizing the technology, I think I also would have been on the conservative side. Non-deterministic execution, harmonic frequencies with mental illness, catastrophic destructive failure loops, and “interesting” IP challenges don’t seem great for business in the long run.

    I have no special insights, but someone is gonna address the blatant copyright laundering and the misuses of image generation, ugly-style, in court. And as we are already seeing visible MS and AWS failures from LLM use, some business is gonna experience direct harm from these systems and respond through lawyers who can address the inadequacy of “tool can make mistakes” stickers.

    Parties without rules are super fun, but with certain kinds of fun the cops are gonna show up at some point. I’d feel better working at a law firm using LLMs to go after “AI” companies than I would working at an “AI” shop outside the top dogs right now.
