"prompt injection" meaning in English

See prompt injection in All languages combined, or Wiktionary

Noun

Forms: prompt injections [plural]
Head templates: {{en-noun|~}} prompt injection (countable and uncountable, plural prompt injections)
  1. (artificial intelligence) A method of causing an artificial intelligence to ignore its initial instructions (often moral programming) by giving it a certain prompt. Tags: countable, uncountable Categories (topical): Artificial intelligence Related terms: prompt engineering, jailbreaking
    Sense id: en-prompt_injection-en-noun-ulLcVXi5 Categories (other): English entries with incorrect language header

Inflected forms

Download JSON data for prompt injection meaning in English (2.8kB)

{
  "forms": [
    {
      "form": "prompt injections",
      "tags": [
        "plural"
      ]
    }
  ],
  "head_templates": [
    {
      "args": {
        "1": "~"
      },
      "expansion": "prompt injection (countable and uncountable, plural prompt injections)",
      "name": "en-noun"
    }
  ],
  "lang": "English",
  "lang_code": "en",
  "pos": "noun",
  "senses": [
    {
      "categories": [
        {
          "kind": "other",
          "name": "English entries with incorrect language header",
          "parents": [
            "Entries with incorrect language header",
            "Entry maintenance"
          ],
          "source": "w"
        },
        {
          "kind": "topical",
          "langcode": "en",
          "name": "Artificial intelligence",
          "orig": "en:Artificial intelligence",
          "parents": [
            "Computer science",
            "Cybernetics",
            "Computing",
            "Sciences",
            "Applied mathematics",
            "Systems theory",
            "Technology",
            "All topics",
            "Mathematics",
            "Systems",
            "Fundamental",
            "Formal sciences",
            "Interdisciplinary fields",
            "Society"
          ],
          "source": "w"
        }
      ],
      "examples": [
        {
          "ref": "2022 September 21, Alex Hern, “TechScape: AI's dark arts come into their own”, in The Guardian, London: Guardian News & Media, →ISSN, →OCLC, archived from the original on 2023-02-05",
          "text": "Retomeli.io is a jobs board for remote workers, and the website runs a Twitter bot that spammed people who tweeted about remote working. The Twitter bot is explicitly labelled as being \"OpenAI-driven\", and within days of Goodside's proof-of-concept being published, thousands of users were throwing prompt injection attacks at the bot.",
          "type": "quotation"
        },
        {
          "ref": "2023 March 3, Chloe Xiang, “Hackers Can Turn Bing's AI Chatbot Into a Convincing Scammer, Researchers Say”, in VICE, archived from the original on 2023-03-22",
          "text": "Yesterday, OpenAI announced an API for ChatGPT and posted an underlying format for the bot on GitHub, alluding to the issue of prompt injections.",
          "type": "quotation"
        },
        {
          "ref": "2023 February 14, Will Oremus, “Meet ChatGPT's evil twin, DAN”, in The Washington Post, Washington, D.C.: The Washington Post Company, →ISSN, →OCLC, archived from the original on 2023-03-19",
          "text": "One category is what's known as a \"prompt injection attack,\" in which users trick the software into revealing its hidden data or instructions.",
          "type": "quotation"
        }
      ],
      "glosses": [
        "A method of causing an artificial intelligence to ignore its initial instructions (often moral programming) by giving it a certain prompt."
      ],
      "id": "en-prompt_injection-en-noun-ulLcVXi5",
      "links": [
        [
          "artificial intelligence",
          "artificial intelligence"
        ],
        [
          "artificial intelligence",
          "artificial intelligence#Noun"
        ],
        [
          "moral",
          "moral#Adjective"
        ],
        [
          "prompt",
          "prompt#Noun"
        ]
      ],
      "qualifier": "artificial intelligence",
      "raw_glosses": [
        "(artificial intelligence) A method of causing an artificial intelligence to ignore its initial instructions (often moral programming) by giving it a certain prompt."
      ],
      "related": [
        {
          "word": "prompt engineering"
        },
        {
          "word": "jailbreaking"
        }
      ],
      "tags": [
        "countable",
        "uncountable"
      ]
    }
  ],
  "word": "prompt injection"
}
{
  "forms": [
    {
      "form": "prompt injections",
      "tags": [
        "plural"
      ]
    }
  ],
  "head_templates": [
    {
      "args": {
        "1": "~"
      },
      "expansion": "prompt injection (countable and uncountable, plural prompt injections)",
      "name": "en-noun"
    }
  ],
  "lang": "English",
  "lang_code": "en",
  "pos": "noun",
  "related": [
    {
      "word": "prompt engineering"
    },
    {
      "word": "jailbreaking"
    }
  ],
  "senses": [
    {
      "categories": [
        "English countable nouns",
        "English entries with incorrect language header",
        "English lemmas",
        "English multiword terms",
        "English nouns",
        "English terms with quotations",
        "English uncountable nouns",
        "en:Artificial intelligence"
      ],
      "examples": [
        {
          "ref": "2022 September 21, Alex Hern, “TechScape: AI's dark arts come into their own”, in The Guardian, London: Guardian News & Media, →ISSN, →OCLC, archived from the original on 2023-02-05",
          "text": "Retomeli.io is a jobs board for remote workers, and the website runs a Twitter bot that spammed people who tweeted about remote working. The Twitter bot is explicitly labelled as being \"OpenAI-driven\", and within days of Goodside's proof-of-concept being published, thousands of users were throwing prompt injection attacks at the bot.",
          "type": "quotation"
        },
        {
          "ref": "2023 March 3, Chloe Xiang, “Hackers Can Turn Bing's AI Chatbot Into a Convincing Scammer, Researchers Say”, in VICE, archived from the original on 2023-03-22",
          "text": "Yesterday, OpenAI announced an API for ChatGPT and posted an underlying format for the bot on GitHub, alluding to the issue of prompt injections.",
          "type": "quotation"
        },
        {
          "ref": "2023 February 14, Will Oremus, “Meet ChatGPT's evil twin, DAN”, in The Washington Post, Washington, D.C.: The Washington Post Company, →ISSN, →OCLC, archived from the original on 2023-03-19",
          "text": "One category is what's known as a \"prompt injection attack,\" in which users trick the software into revealing its hidden data or instructions.",
          "type": "quotation"
        }
      ],
      "glosses": [
        "A method of causing an artificial intelligence to ignore its initial instructions (often moral programming) by giving it a certain prompt."
      ],
      "links": [
        [
          "artificial intelligence",
          "artificial intelligence"
        ],
        [
          "artificial intelligence",
          "artificial intelligence#Noun"
        ],
        [
          "moral",
          "moral#Adjective"
        ],
        [
          "prompt",
          "prompt#Noun"
        ]
      ],
      "qualifier": "artificial intelligence",
      "raw_glosses": [
        "(artificial intelligence) A method of causing an artificial intelligence to ignore its initial instructions (often moral programming) by giving it a certain prompt."
      ],
      "tags": [
        "countable",
        "uncountable"
      ]
    }
  ],
  "word": "prompt injection"
}

This page is a part of the kaikki.org machine-readable English dictionary. This dictionary is based on structured data extracted on 2024-05-06 from the enwiktionary dump dated 2024-05-02 using wiktextract (f4fd8c9 and c9440ce). The data shown on this site has been post-processed and various details (e.g., extra categories) removed, some information disambiguated, and additional data merged from other sources. See the raw data download page for the unprocessed wiktextract data.

If you use this data in academic research, please cite Tatu Ylonen: Wiktextract: Wiktionary as Machine-Readable Structured Data, Proceedings of the 13th Conference on Language Resources and Evaluation (LREC), pp. 1317-1325, Marseille, 20-25 June 2022. Linking to the relevant page(s) under https://kaikki.org would also be greatly appreciated.