"alignment problem" meaning in All languages combined

See alignment problem on Wiktionary

Proper name [English]

Etymology: Popularized by the 2020 book The Alignment Problem by Brian Christian.

Head templates: {{en-prop}} alignment problem
  1. (artificial intelligence) The problem of how to create a superintelligent artificial intelligence whose values would align with the interests of humankind.
     Wikipedia links: AI alignment#The alignment problem, Brian Christian, The Alignment Problem
     Categories (topical): Artificial intelligence, Ethics

JSON data for alignment problem meaning in All languages combined (3.6 kB download):

{
  "etymology_text": "Popularized by the 2020 book The Alignment Problem by Brian Christian.",
  "head_templates": [
    {
      "args": {},
      "expansion": "alignment problem",
      "name": "en-prop"
    }
  ],
  "lang": "English",
  "lang_code": "en",
  "pos": "name",
  "senses": [
    {
      "categories": [
        {
          "kind": "other",
          "name": "English entries with incorrect language header",
          "parents": [
            "Entries with incorrect language header",
            "Entry maintenance"
          ],
          "source": "w"
        },
        {
          "kind": "other",
          "name": "English entries with topic categories using raw markup",
          "parents": [
            "Entries with topic categories using raw markup",
            "Entry maintenance"
          ],
          "source": "w"
        },
        {
          "kind": "other",
          "name": "English terms with non-redundant non-automated sortkeys",
          "parents": [
            "Terms with non-redundant non-automated sortkeys",
            "Entry maintenance"
          ],
          "source": "w"
        },
        {
          "kind": "topical",
          "langcode": "en",
          "name": "Artificial intelligence",
          "orig": "en:Artificial intelligence",
          "parents": [
            "Computer science",
            "Cybernetics",
            "Computing",
            "Sciences",
            "Applied mathematics",
            "Systems theory",
            "Technology",
            "All topics",
            "Mathematics",
            "Systems",
            "Fundamental",
            "Formal sciences",
            "Interdisciplinary fields",
            "Society"
          ],
          "source": "w"
        },
        {
          "kind": "topical",
          "langcode": "en",
          "name": "Ethics",
          "orig": "en:Ethics",
          "parents": [
            "Philosophy",
            "All topics",
            "Fundamental"
          ],
          "source": "w"
        }
      ],
      "examples": [
        {
          "ref": "2022 March 1, Rob Toews, “7 Must-Read Books About Artificial Intelligence”, in Forbes, New York, N.Y.: Forbes Media, →ISSN, →OCLC, archived from the original on 2022-08-31",
          "text": "As [Brian] Christian notes, the alignment problem bears a real resemblance to parenting: “The story of human civilization has always been about how to instill values in strange, alien, human-level intelligences who will inevitably inherit the reins of society from us—namely, our kids.”",
          "type": "quotation"
        },
        {
          "ref": "2022 December 13, Melanie Mitchell, “What Does It Mean to Align AI With Human Values?”, in Quanta Magazine, New York, N.Y.: Simons Foundation, →ISSN, →OCLC, archived from the original on 2023-03-15",
          "text": "Properly defining and solving the alignment problem won’t be easy; it will require us to develop a broad, scientifically based theory of intelligence.",
          "type": "quotation"
        },
        {
          "ref": "2023 February 27, Derek Thompson, “The AI Disaster Scenario”, in The Atlantic, Washington, D.C.: The Atlantic Monthly Group, →ISSN, →OCLC, archived from the original on 2023-03-22",
          "text": "For years, AI ethicists have worried about this so-called alignment problem. In short: How do we ensure that the AI we build, which might very well be significantly smarter than any person who has ever lived, is aligned with the interests of its creators and of the human race? An unaligned superintelligent AI could be quite a problem.",
          "type": "quotation"
        }
      ],
      "glosses": [
        "The problem of how to create a superintelligent artificial intelligence whose values would align with the interests of humankind."
      ],
      "id": "en-alignment_problem-en-name-1H1l0ilb",
      "links": [
        [
          "artificial intelligence",
          "artificial intelligence"
        ],
        [
          "problem",
          "problem#Noun"
        ],
        [
          "superintelligent",
          "superintelligent#Adjective"
        ],
        [
          "artificial intelligence",
          "artificial intelligence#Noun"
        ],
        [
          "values",
          "value#Noun"
        ],
        [
          "align",
          "align#Verb"
        ],
        [
          "interests",
          "interest#Noun"
        ],
        [
          "humankind",
          "humankind#Noun"
        ]
      ],
      "qualifier": "artificial intelligence",
      "raw_glosses": [
        "(artificial intelligence) The problem of how to create a superintelligent artificial intelligence whose values would align with the interests of humankind."
      ],
      "wikipedia": [
        "AI alignment#The alignment problem",
        "Brian Christian",
        "The Alignment Problem"
      ]
    }
  ],
  "word": "alignment problem"
}
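
The record above follows the Wiktextract entry layout: the top-level fields carry the headword, part of speech, and etymology, while "senses" holds the glosses, quotations, links, and categories. As a rough illustration of how such a record might be consumed, here is a minimal Python sketch; the filename alignment_problem.json is hypothetical, and the sketch assumes the record has been saved as a single JSON object.

import json

# Load one Wiktextract entry saved as a single JSON object.
# "alignment_problem.json" is a placeholder filename.
with open("alignment_problem.json", encoding="utf-8") as f:
    entry = json.load(f)

print(entry["word"], "/", entry["pos"])   # alignment problem / name
print(entry["etymology_text"])

for sense in entry["senses"]:
    # raw_glosses include the "(artificial intelligence)" qualifier prefix;
    # fall back to the plain glosses if they are absent.
    for gloss in sense.get("raw_glosses", sense["glosses"]):
        print("*", gloss)
    # Each example carries the quotation text and its bibliographic "ref".
    for example in sense.get("examples", []):
        print("   ", example["ref"])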

This page is part of the kaikki.org machine-readable All languages combined dictionary. The dictionary is based on structured data extracted on 2024-05-10 from the enwiktionary dump dated 2024-05-02 using wiktextract (a644e18 and edd475d). The data shown on this site has been post-processed: various details (e.g., extra categories) have been removed, some information has been disambiguated, and additional data has been merged from other sources. See the raw data download page for the unprocessed wiktextract data.
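
The unprocessed Wiktextract data is distributed as JSON Lines, with one entry object per line, so it can be scanned without loading the whole file into memory. A minimal Python sketch along those lines follows; the path raw-wiktextract-data.jsonl is illustrative and should be replaced with the actual file downloaded from kaikki.org.

import json

def find_entries(path, word, lang_code="en"):
    # Stream the raw dump line by line; each line is one JSON entry object.
    with open(path, encoding="utf-8") as f:
        for line in f:
            entry = json.loads(line)
            if entry.get("word") == word and entry.get("lang_code") == lang_code:
                yield entry

for entry in find_entries("raw-wiktextract-data.jsonl", "alignment problem"):
    for sense in entry["senses"]:
        print(entry["pos"], "-", "; ".join(sense["glosses"]))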

If you use this data in academic research, please cite Tatu Ylonen: Wiktextract: Wiktionary as Machine-Readable Structured Data, Proceedings of the 13th Conference on Language Resources and Evaluation (LREC), pp. 1317-1325, Marseille, 20-25 June 2022. Linking to the relevant page(s) under https://kaikki.org would also be greatly appreciated.