{ "head_templates": [ { "args": {}, "expansion": "SLM", "name": "en-prop" } ], "lang": "English", "lang_code": "en", "pos": "name", "senses": [ { "alt_of": [ { "word": "Sudan Liberation Movement" } ], "categories": [], "glosses": [ "Initialism of Sudan Liberation Movement." ], "id": "en-SLM-en-name-en:Sudan_Liberation_Movement", "links": [ [ "Sudan Liberation Movement", "w:Sudan Liberation Movement" ] ], "senseid": [ "en:Sudan Liberation Movement" ], "synonyms": [ { "word": "SLA" } ], "tags": [ "abbreviation", "alt-of", "initialism" ] } ], "word": "SLM" } { "forms": [ { "form": "SLMs", "tags": [ "plural" ] } ], "head_templates": [ { "args": {}, "expansion": "SLM (plural SLMs)", "name": "en-noun" } ], "lang": "English", "lang_code": "en", "pos": "noun", "senses": [ { "alt_of": [ { "word": "single–longitudinal-mode [laser] or single–longitudinal-mode laser" } ], "categories": [ { "kind": "other", "name": "English links with redundant wikilinks", "parents": [], "source": "w" } ], "glosses": [ "Initialism of single–longitudinal-mode [laser] or single–longitudinal-mode laser." ], "hypernyms": [ { "word": "laser#Noun" } ], "id": "en-SLM-en-noun-en:single_longitudinal_mode_laser", "links": [ [ "single–longitudinal-mode", "w:longitudinal mode" ], [ "single–longitudinal-mode laser", "w:longitudinal mode" ] ], "senseid": [ "en:single longitudinal-mode laser" ], "tags": [ "abbreviation", "alt-of", "initialism" ] }, { "alt_of": [ { "word": "small language model" } ], "categories": [ { "kind": "other", "langcode": "en", "name": "Machine learning", "orig": "en:Machine learning", "parents": [], "source": "w" }, { "_dis": "35 1 64", "kind": "other", "name": "English entries with incorrect language header", "parents": [], "source": "w+disamb" }, { "_dis": "33 4 63", "kind": "other", "name": "English words spelled without vowels", "parents": [], "source": "w+disamb" }, { "_dis": "15 1 83", "kind": "other", "name": "Pages with 1 entry", "parents": [], "source": "w+disamb" }, { "_dis": "28 1 71", "kind": "other", "name": "Pages with entries", "parents": [], "source": "w+disamb" } ], "coordinate_terms": [ { "word": "LLM" }, { "word": "large language model" } ], "examples": [ { "bold_text_offsets": [ [ 516, 520 ] ], "ref": "2025 April 13, Stephen Ornes, “Small Language Models Are the New Rage, Researchers Say. Larger models can pull off a wider variety of feats, but the reduced footprint of smaller models makes them attractive tools”, in Wired, archived from the original on 2025-04-13:", "text": "Large language models work well because they’re so large. The latest models from OpenAI, Meta, and DeepSeek use hundreds of billions of “parameters” […] With more parameters, the models are better able to identify patterns and connections, which in turn makes them more powerful and accurate. But this power comes at a cost […] huge computational resources […] energy hogs […] In response, some researchers are now thinking small. IBM, Google, Microsoft, and OpenAI have all recently released small language models (SLMs) that use a few billion parameters—a fraction of their LLM counterparts. Small models are not used as general-purpose tools like their larger cousins. But they can excel on specific, more narrowly defined tasks, such as summarizing conversations, answering patient questions as a health care chatbot, and gathering data in smart devices. “For a lot of tasks, an 8 billion–parameter model is actually pretty good,” said Zico Kolter, a computer scientist at Carnegie Mellon University. 
They can also run on a laptop or cell phone, instead of a huge data center. (There’s no consensus on the exact definition of “small,” but the new models all max out around 10 billion parameters.) To optimize the training process for these small models, researchers use a few tricks. […]", "type": "quote" } ], "glosses": [ "Initialism of small language model." ], "hypernyms": [ { "word": "LM" }, { "word": "language model" }, { "word": "<" }, { "word": "model#Noun" } ], "id": "en-SLM-en-noun-en:small_language_model", "links": [ [ "machine learning", "machine learning" ], [ "small language model", "small language model#English" ] ], "qualifier": "machine learning", "raw_glosses": [ "(machine learning) Initialism of small language model." ], "senseid": [ "en:small language model" ], "tags": [ "abbreviation", "alt-of", "initialism" ] } ], "word": "SLM" }
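For readers consuming these records programmatically, here is a minimal sketch of pulling the senses out of entries shaped like the ones above. It relies only on fields visible in the records (word, pos, senses, glosses, tags, alt_of); the input file name slm.jsonl is an assumption, standing in for any file holding one such JSON object per line.

import json

# Minimal sketch: read one wiktextract-style record per line (JSONL) and
# print each sense's gloss together with its tags and the expanded form.
# "slm.jsonl" is an assumed file name for records like the SLM entries above.
with open("slm.jsonl", encoding="utf-8") as fh:
    for line in fh:
        entry = json.loads(line)
        word, pos = entry["word"], entry["pos"]
        for sense in entry.get("senses", []):
            glosses = "; ".join(sense.get("glosses", []))
            tags = ", ".join(sense.get("tags", []))
            # "alt_of" records what the abbreviation expands to, when present.
            expansions = ", ".join(a["word"] for a in sense.get("alt_of", []))
            print(f"{word} ({pos}): {glosses} [{tags}] -> {expansions}")

For the SLM records above, this would print one line per sense, e.g. the machine-learning sense with its "abbreviation, alt-of, initialism" tags and the expansion "small language model".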
{ "categories": [ "English countable nouns", "English entries with incorrect language header", "English lemmas", "English nouns", "English proper nouns", "English uncountable nouns", "English words spelled without vowels", "Pages with 1 entry", "Pages with entries" ], "head_templates": [ { "args": {}, "expansion": "SLM", "name": "en-prop" } ], "lang": "English", "lang_code": "en", "pos": "name", "senses": [ { "alt_of": [ { "word": "Sudan Liberation Movement" } ], "categories": [ "English initialisms" ], "glosses": [ "Initialism of Sudan Liberation Movement." ], "links": [ [ "Sudan Liberation Movement", "w:Sudan Liberation Movement" ] ], "senseid": [ "en:Sudan Liberation Movement" ], "synonyms": [ { "word": "SLA" } ], "tags": [ "abbreviation", "alt-of", "initialism" ] } ], "word": "SLM" } { "categories": [ "English countable nouns", "English entries with incorrect language header", "English lemmas", "English nouns", "English proper nouns", "English uncountable nouns", "English words spelled without vowels", "Pages with 1 entry", "Pages with entries" ], "forms": [ { "form": "SLMs", "tags": [ "plural" ] } ], "head_templates": [ { "args": {}, "expansion": "SLM (plural SLMs)", "name": "en-noun" } ], "lang": "English", "lang_code": "en", "pos": "noun", "senses": [ { "alt_of": [ { "word": "single–longitudinal-mode [laser] or single–longitudinal-mode laser" } ], "categories": [ "English initialisms", "English links with redundant wikilinks" ], "glosses": [ "Initialism of single–longitudinal-mode [laser] or single–longitudinal-mode laser." ], "hypernyms": [ { "word": "laser#Noun" } ], "links": [ [ "single–longitudinal-mode", "w:longitudinal mode" ], [ "single–longitudinal-mode laser", "w:longitudinal mode" ] ], "senseid": [ "en:single longitudinal-mode laser" ], "tags": [ "abbreviation", "alt-of", "initialism" ] }, { "alt_of": [ { "word": "small language model" } ], "categories": [ "English initialisms", "English terms with quotations", "en:Machine learning" ], "coordinate_terms": [ { "word": "LLM" }, { "word": "large language model" } ], "examples": [ { "bold_text_offsets": [ [ 516, 520 ] ], "ref": "2025 April 13, Stephen Ornes, “Small Language Models Are the New Rage, Researchers Say. Larger models can pull off a wider variety of feats, but the reduced footprint of smaller models makes them attractive tools”, in Wired, archived from the original on 2025-04-13:", "text": "Large language models work well because they’re so large. The latest models from OpenAI, Meta, and DeepSeek use hundreds of billions of “parameters” […] With more parameters, the models are better able to identify patterns and connections, which in turn makes them more powerful and accurate. But this power comes at a cost […] huge computational resources […] energy hogs […] In response, some researchers are now thinking small. IBM, Google, Microsoft, and OpenAI have all recently released small language models (SLMs) that use a few billion parameters—a fraction of their LLM counterparts. Small models are not used as general-purpose tools like their larger cousins. But they can excel on specific, more narrowly defined tasks, such as summarizing conversations, answering patient questions as a health care chatbot, and gathering data in smart devices. “For a lot of tasks, an 8 billion–parameter model is actually pretty good,” said Zico Kolter, a computer scientist at Carnegie Mellon University. They can also run on a laptop or cell phone, instead of a huge data center. 
(There’s no consensus on the exact definition of “small,” but the new models all max out around 10 billion parameters.) To optimize the training process for these small models, researchers use a few tricks. […]", "type": "quote" } ], "glosses": [ "Initialism of small language model." ], "hypernyms": [ { "word": "LM" }, { "word": "language model" }, { "word": "<" }, { "word": "model#Noun" } ], "links": [ [ "machine learning", "machine learning" ], [ "small language model", "small language model#English" ] ], "qualifier": "machine learning", "raw_glosses": [ "(machine learning) Initialism of small language model." ], "senseid": [ "en:small language model" ], "tags": [ "abbreviation", "alt-of", "initialism" ] } ], "word": "SLM" }
This page is a part of the kaikki.org machine-readable English dictionary. This dictionary is based on structured data extracted on 2025-08-01 from the enwiktionary dump dated 2025-07-20 using wiktextract (ed078bd and 3c020d2). The data shown on this site has been post-processed: various details (e.g., extra categories) have been removed, some information has been disambiguated, and additional data has been merged from other sources. See the raw data download page for the unprocessed wiktextract data.
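As a rough illustration of working with the full dump described above: the data is JSONL, one JSON object per line, so it can be streamed and filtered without loading the whole file into memory. The dump file name below is an assumption; use whatever path the raw data download page actually provides.

import json

# Assumed file name for the full English dictionary dump; adjust to the
# path given on the kaikki.org download page.
DUMP = "kaikki.org-dictionary-English.jsonl"

def entries_for(word, path=DUMP):
    """Stream the JSONL dump and yield every entry whose headword matches."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            entry = json.loads(line)
            if entry.get("word") == word:
                yield entry

# Example: list the part of speech and first gloss of every SLM sense.
for e in entries_for("SLM"):
    print(e["pos"], [s["glosses"][0] for s in e.get("senses", []) if s.get("glosses")])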
If you use this data in academic research, please cite Tatu Ylonen: Wiktextract: Wiktionary as Machine-Readable Structured Data, Proceedings of the 13th Conference on Language Resources and Evaluation (LREC), pp. 1317-1325, Marseille, 20-25 June 2022. Linking to the relevant page(s) under https://kaikki.org would also be greatly appreciated.