OpenAI and other leading labs reinforce AI safety, security and trustworthiness through voluntary commitments.

  • AndrewZabar@lemmy.world · 1 year ago

    Hahaha making voluntary commitments. In other words, unenforceable bullshit. We need legal regulation with massive penalties for violation.

    Commitments. Lol. What a joke.

    This is because corporate America owns the government.

      • AndrewZabar@lemmy.world · 1 year ago (edited)

        Whatever would be appropriate. It’s not something I can come up with on the spur of the moment. My point is just that we have already seen that promises are useless and that they will do whatever they can get away with. Regulation is the only way to keep profiteers in check.

  • Mereo@lemmy.ca · 1 year ago (edited)

    For the fun of it, here’s a summary written by ChatGPT regarding its overlord’s plan for AI governance:

    OpenAI and other leading AI labs, coordinated by the White House, are making voluntary commitments to increase the safety, security, and transparency of AI technologies. These practices are designed to guide AI governance and will remain in effect until related regulations are established.

    Key commitments include:

    • Red-teaming of models to identify potential misuse, societal risks, and security concerns. Companies commit to advancing research in this area and will publicly disclose their red-teaming and safety procedures.
    • Promoting information sharing between companies and governments regarding trust and safety risks, emergent capabilities, and attempts to bypass safeguards.
    • Investment in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights.
    • Encouraging third-party discovery and reporting of issues and vulnerabilities in AI systems.
    • Developing mechanisms to allow users to understand whether audio or visual content is AI-generated, including provenance and/or watermarking systems for AI-generated content (a toy sketch of the provenance idea follows this summary).
    • Reporting model capabilities, limitations, and domains of appropriate and inappropriate use publicly, including discussion of societal risks such as fairness and bias.
    • Prioritizing research on societal risks posed by AI systems, including harmful bias, discrimination, and privacy protection.
    • Commitment to developing frontier AI systems to help address society’s greatest challenges like climate change, cancer detection and prevention, and cyber threats.

    These commitments are applicable to generative models more powerful than the current industry standard models.
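
    To make the provenance bullet above a bit more concrete, here is a minimal sketch in Python. The key, function names, and tagging approach are all invented for this illustration and are not any lab’s actual scheme: the generating service computes an HMAC tag over its output, and anyone holding the key can later check that a piece of content is unmodified output from that service.

    ```python
    # Hypothetical provenance-tag sketch; key and function names are invented
    # for illustration and do not reflect any real lab's watermarking system.
    import hashlib
    import hmac

    SECRET_KEY = b"demo-key"  # assumption: held by the generating service

    def tag_content(content: bytes) -> bytes:
        """Return an HMAC-SHA256 tag shipped alongside generated content."""
        return hmac.new(SECRET_KEY, content, hashlib.sha256).digest()

    def verify_content(content: bytes, tag: bytes) -> bool:
        """True if the tag matches, i.e. the bytes are unmodified output."""
        return hmac.compare_digest(tag_content(content), tag)

    audio = b"...generated audio bytes..."
    tag = tag_content(audio)
    print(verify_content(audio, tag))         # True: intact, tagged output
    print(verify_content(audio + b"!", tag))  # False: altered or untagged
    ```

    Real watermarking embeds the mark in the media itself (e.g. imperceptible perturbations that survive re-encoding), but the tag-then-verify flow is the same basic idea.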