Harald Sack

In our #ISE2025 lecture last Wednesday, we learned how n-gram language models use the Markov assumption and maximum likelihood estimation to predict the probability of a word occurring given a specific context (i.e., the preceding n−1 words in the sequence).

#NLP #languagemodels #lecture @fizise @tabea @enorouzi @sourisnumerique @fiz_karlsruhe @KIT_Karlsruhe
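As a minimal sketch of the idea (not from the lecture itself): under the Markov assumption, P(w_i | w_1 … w_{i−1}) ≈ P(w_i | w_{i−n+1} … w_{i−1}), and maximum likelihood estimation reduces to counting, e.g. for a bigram model P(w_i | w_{i−1}) = C(w_{i−1} w_i) / C(w_{i−1}). A toy Python illustration with a hypothetical corpus:

```python
from collections import Counter

# Toy corpus (hypothetical); in practice you'd use a large tokenized text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigrams and unigrams over the corpus.
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def bigram_prob(prev, word):
    """MLE estimate: P(word | prev) = C(prev word) / C(prev)."""
    return bigrams[(prev, word)] / unigrams[prev] if unigrams[prev] else 0.0

# P("cat" | "the") = C("the cat") / C("the") = 2 / 4 = 0.5
print(bigram_prob("the", "cat"))
```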