"alignment problem" meaning in English

Proper name

Etymology: Popularized by the 2020 book The Alignment Problem by Brian Christian.
Head templates: {{en-prop}} alignment problem
  1. (artificial intelligence) The problem of how to create a superintelligent artificial intelligence whose values would align with the interests of humankind.
     Wikipedia links: AI alignment#The alignment problem, Brian Christian, The Alignment Problem
     Categories (topical): Artificial intelligence, Ethics
     Categories (other): English entries with incorrect language header, Entries with translation boxes, Pages with 1 entry, Pages with entries
     Sense id: en-alignment_problem-en-name-1H1l0ilb
{
  "etymology_text": "Popularized by the 2020 book The Alignment Problem by Brian Christian.",
  "head_templates": [
    {
      "args": {},
      "expansion": "alignment problem",
      "name": "en-prop"
    }
  ],
  "lang": "English",
  "lang_code": "en",
  "pos": "name",
  "senses": [
    {
      "categories": [
        {
          "kind": "other",
          "name": "English entries with incorrect language header",
          "parents": [
            "Entries with incorrect language header",
            "Entry maintenance"
          ],
          "source": "w"
        },
        {
          "kind": "other",
          "name": "Entries with translation boxes",
          "parents": [],
          "source": "w"
        },
        {
          "kind": "other",
          "name": "Pages with 1 entry",
          "parents": [],
          "source": "w"
        },
        {
          "kind": "other",
          "name": "Pages with entries",
          "parents": [],
          "source": "w"
        },
        {
          "kind": "topical",
          "langcode": "en",
          "name": "Artificial intelligence",
          "orig": "en:Artificial intelligence",
          "parents": [
            "Computer science",
            "Cybernetics",
            "Computing",
            "Sciences",
            "Applied mathematics",
            "Systems theory",
            "Technology",
            "All topics",
            "Mathematics",
            "Systems",
            "Fundamental",
            "Formal sciences",
            "Interdisciplinary fields",
            "Society"
          ],
          "source": "w"
        },
        {
          "kind": "topical",
          "langcode": "en",
          "name": "Ethics",
          "orig": "en:Ethics",
          "parents": [
            "Philosophy",
            "All topics",
            "Fundamental"
          ],
          "source": "w"
        }
      ],
      "examples": [
        {
          "ref": "2022 March 1, Rob Toews, “7 Must-Read Books About Artificial Intelligence”, in Forbes, New York, N.Y.: Forbes Media, →ISSN, →OCLC, archived from the original on 2022-08-31:",
          "text": "As [Brian] Christian notes, the alignment problem bears a real resemblance to parenting: “The story of human civilization has always been about how to instill values in strange, alien, human-level intelligences who will inevitably inherit the reins of society from us—namely, our kids.”",
          "type": "quote"
        },
        {
          "ref": "2022 December 13, Melanie Mitchell, “What Does It Mean to Align AI With Human Values?”, in Quanta Magazine, New York, N.Y.: Simons Foundation, →ISSN, →OCLC, archived from the original on 2023-03-15:",
          "text": "Properly defining and solving the alignment problem won’t be easy; it will require us to develop a broad, scientifically based theory of intelligence.",
          "type": "quote"
        },
        {
          "ref": "2023 February 27, Derek Thompson, “The AI Disaster Scenario”, in The Atlantic, Washington, D.C.: The Atlantic Monthly Group, →ISSN, →OCLC, archived from the original on 2023-03-22:",
          "text": "For years, AI ethicists have worried about this so-called alignment problem. In short: How do we ensure that the AI we build, which might very well be significantly smarter than any person who has ever lived, is aligned with the interests of its creators and of the human race? An unaligned superintelligent AI could be quite a problem.",
          "type": "quote"
        }
      ],
      "glosses": [
        "The problem of how to create a superintelligent artificial intelligence whose values would align with the interests of humankind."
      ],
      "id": "en-alignment_problem-en-name-1H1l0ilb",
      "links": [
        [
          "artificial intelligence",
          "artificial intelligence"
        ],
        [
          "problem",
          "problem#Noun"
        ],
        [
          "superintelligent",
          "superintelligent#Adjective"
        ],
        [
          "artificial intelligence",
          "artificial intelligence#Noun"
        ],
        [
          "values",
          "value#Noun"
        ],
        [
          "align",
          "align#Verb"
        ],
        [
          "interests",
          "interest#Noun"
        ],
        [
          "humankind",
          "humankind#Noun"
        ]
      ],
      "qualifier": "artificial intelligence",
      "raw_glosses": [
        "(artificial intelligence) The problem of how to create a superintelligent artificial intelligence whose values would align with the interests of humankind."
      ],
      "wikipedia": [
        "AI alignment#The alignment problem",
        "Brian Christian",
        "The Alignment Problem"
      ]
    }
  ],
  "word": "alignment problem"
}
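The object above is a single wiktextract record, stored as one JSON object per line in the JSONL download. As a minimal sketch of reading it back, assuming the record has been saved locally under the hypothetical filename alignment_problem.jsonl, the gloss, qualifier, and example quotations could be extracted roughly like this in Python:

import json

# Hypothetical local copy of the record shown above, one JSON object per line.
PATH = "alignment_problem.jsonl"

with open(PATH, encoding="utf-8") as f:
    for line in f:
        entry = json.loads(line)
        if entry.get("word") != "alignment problem":
            continue
        for sense in entry.get("senses", []):
            qualifier = sense.get("qualifier", "")
            for gloss in sense.get("glosses", []):
                print(f"({qualifier}) {gloss}" if qualifier else gloss)
            for example in sense.get("examples", []):
                print("  quote:", example.get("text", ""))
                print("  ref:  ", example.get("ref", ""))

Only keys that actually appear in the record above ("word", "senses", "qualifier", "glosses", "examples", "text", "ref") are accessed; other fields in the schema are left untouched.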

This page is part of the kaikki.org machine-readable English dictionary. This dictionary is based on structured data extracted on 2024-11-06 from the enwiktionary dump dated 2024-10-02 using wiktextract (fbeafe8 and 7f03c9b). The data shown on this site has been post-processed: various details (e.g., extra categories) have been removed, some information has been disambiguated, and additional data has been merged from other sources. See the raw data download page for the unprocessed wiktextract data.
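The same approach scales to the full English JSONL dump available from the raw data download page. A rough sketch, streaming the file line by line rather than loading it into memory, might look like the following (the dump filename below is a placeholder, not the actual file name on the download page):

import json

# Placeholder path; substitute the actual English dump filename from the download page.
DUMP = "english-wiktextract-dump.jsonl"
TARGET = "alignment problem"

with open(DUMP, encoding="utf-8") as f:
    for line in f:
        entry = json.loads(line)
        if entry.get("word") != TARGET:
            continue
        for sense in entry.get("senses", []):
            print(entry.get("lang"), entry.get("pos"), "->", "; ".join(sense.get("glosses", [])))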

If you use this data in academic research, please cite Tatu Ylonen: Wiktextract: Wiktionary as Machine-Readable Structured Data, Proceedings of the 13th Conference on Language Resources and Evaluation (LREC), pp. 1317-1325, Marseille, 20-25 June 2022. Linking to the relevant page(s) under https://kaikki.org would also be greatly appreciated.