mastouille.fr is one of the many independent Mastodon servers you can use to participate in the fediverse.
Mastouille is a sustainable, open Mastodon instance hosted in France.

Server statistics:

598
active accounts

#ollama

5 posts · 5 participants · 0 posts today
C++ Wage Slave<p>If you think MP3 sounds good, choose a song you love that has a detailed, spacious sound, and encode it in <a href="https://infosec.space/tags/MP3" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>MP3</span></a> at low bandwidth. Hear the jangly tuning, the compression artifacts, the lack of detail and stability and the claustrophobic sound. Now that you know it's there, you'll detect it even in MP3 samples at higher bitrates.</p><p>This toot is actually about <a href="https://infosec.space/tags/GenerativeAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>GenerativeAI</span></a>. If you can, download <a href="https://infosec.space/tags/Ollama" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Ollama</span></a> and try some small models with no more than, say, 4bn parameters. Ask detailed questions about subjects you understand in depth. Watch the models hallucinate, miss the point, make logical errors and give bad advice. See them get hung up on one specific word and launch off at a tangent. Notice how the tone is always the same, whether they're talking sense or not.</p><p>Once you've seen the problems with small models, you'll spot them even in much larger models. You'll be inoculated against the idea that <a href="https://infosec.space/tags/LLMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMs</span></a> are intelligent, conscious or trustworthy. That, today, is an important life skill.</p>
EventHub<p>Just for fun, we fed a local Large Language Model (LLM) with events from the EventHub via a Retrieval Augmented Generation (RAG) approach. </p><p>The result is a chatbot that you can ask about events through the browser.</p><p>A user interface would be easy to build. </p><p>I don't think anyone really needs this, though. Or do they?</p><p>The source code is here:<br><a href="https://codeberg.org/EventHub/application_chat" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">codeberg.org/EventHub/applicat</span><span class="invisible">ion_chat</span></a></p><p><a href="https://feedbeat.me/tags/ki" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ki</span></a> <a href="https://feedbeat.me/tags/ai" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ai</span></a> <a href="https://feedbeat.me/tags/chat" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>chat</span></a> <a href="https://feedbeat.me/tags/chatbot" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>chatbot</span></a> <a href="https://feedbeat.me/tags/ollama" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ollama</span></a> <a href="https://feedbeat.me/tags/krefeld" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>krefeld</span></a> <a href="https://feedbeat.me/tags/OpenSource" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>OpenSource</span></a></p>
Entité terrestre auto-critique<p>Testing local AI...<br>So far, so good 🤣<br><a href="https://piaille.fr/tags/IA" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>IA</span></a> <a href="https://piaille.fr/tags/Ollama" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Ollama</span></a></p>
Michael Blume<p><span class="h-card" translate="no"><a href="https://chaos.social/@root42" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>root42</span></a></span> </p><p>Yes, everything used to be better... ;-) </p><p>Seriously: here in the <a href="https://sueden.social/tags/Fediversum" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Fediversum</span></a> too, there is a very active scene already trying out local AI applications, for example via <a href="https://sueden.social/tags/Ollama" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Ollama</span></a>. </p><p><span class="h-card" translate="no"><a href="https://friendica.andreaskilgus.de/profile/musenhain" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>musenhain</span></a></span> </p><p><a href="https://ollama.org/de/" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="">ollama.org/de/</span><span class="invisible"></span></a></p>
Markus Eisele<p>Ollama v0.10.0 is here! Major highlights:</p><p>- New native app for macOS &amp; Windows<br>- 2-3x performance boost for Gemma3 models <br>- 10-30% faster multi-GPU performance<br>- Fixed tool calling issues with <a href="https://mastodon.online/tags/Granite3" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Granite3</span></a>.3 &amp; Mistral-Nemo<br>- `ollama ps` now shows context length<br>- WebP image support in OpenAI API</p><p><a href="https://github.com/ollama/ollama/releases/tag/v0.10.0" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">github.com/ollama/ollama/relea</span><span class="invisible">ses/tag/v0.10.0</span></a></p><p><a href="https://mastodon.online/tags/Ollama" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Ollama</span></a> <a href="https://mastodon.online/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://mastodon.online/tags/LocalLLM" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LocalLLM</span></a> <a href="https://mastodon.online/tags/OpenSource" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>OpenSource</span></a></p>
MacGeneration<p>Ollama releases a new Mac app that lets you skip the terminal entirely <a href="http://dlvr.it/TMD2ZY" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">http://</span><span class="">dlvr.it/TMD2ZY</span><span class="invisible"></span></a> <a href="https://social.macg.co/tags/Ollama" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Ollama</span></a> <a href="https://social.macg.co/tags/MacApp" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>MacApp</span></a></p>
Markus Eisele<p>Building a Real-Time Collaborative AI Editor with Quarkus, CRDTs, and Local LLMs<br>Learn how to combine CRDTs, WebSockets, and a local LLM to build a fast, conflict-free collaborative text editor with AI suggestions <br><a href="https://myfear.substack.com/p/real-time-ai-editor-quarkus-crdt-langchain4j" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">myfear.substack.com/p/real-tim</span><span class="invisible">e-ai-editor-quarkus-crdt-langchain4j</span></a><br><a href="https://mastodon.online/tags/Java" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Java</span></a> <a href="https://mastodon.online/tags/Websockets" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Websockets</span></a> <a href="https://mastodon.online/tags/Quarkus" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Quarkus</span></a> <a href="https://mastodon.online/tags/LLM" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLM</span></a> <a href="https://mastodon.online/tags/Ollama" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Ollama</span></a> <a href="https://mastodon.online/tags/LangChain4j" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LangChain4j</span></a></p>
Markus Eisele<p>Fast and Smart: Content Moderation with Java, Bloom Filters, and Local LLMs<br>Build a blazing-fast moderation API in Quarkus that filters content intelligently using n-grams, probabilistic data structures, and llama3.<br><a href="https://myfear.substack.com/p/quarkus-content-moderation-bloom-filter-llm" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">myfear.substack.com/p/quarkus-</span><span class="invisible">content-moderation-bloom-filter-llm</span></a><br><a href="https://mastodon.online/tags/Java" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Java</span></a> <a href="https://mastodon.online/tags/Quarkus" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Quarkus</span></a> <a href="https://mastodon.online/tags/Langchain4j" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Langchain4j</span></a> <a href="https://mastodon.online/tags/ollama" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ollama</span></a> <a href="https://mastodon.online/tags/BloomFilter" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>BloomFilter</span></a></p>
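The moderation pipeline described above pairs n-gram features with a probabilistic data structure so that only suspicious content reaches the LLM. A minimal Python sketch of that idea (the article itself uses Java/Quarkus; the phrase list, sizes, and function names here are hypothetical, not the article's code):

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter: probabilistic set membership with no false negatives."""
    def __init__(self, size=1024, hashes=3):
        self.size = size
        self.hashes = hashes
        self.bits = [False] * size

    def _positions(self, item):
        # Derive several bit positions per item from salted SHA-256 digests.
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

def ngrams(text, n=2):
    """Split text into overlapping word n-grams."""
    words = text.lower().split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

# Seed the filter with flagged phrases (hypothetical block list).
blocked = BloomFilter()
for phrase in ["buy followers", "free crypto"]:
    blocked.add(phrase)

def needs_review(text):
    """Cheap first pass: only texts that hit the filter go on to the LLM."""
    return any(blocked.might_contain(g) for g in ngrams(text, 2))
```

In the architecture the post describes, a gate like `needs_review` would sit in front of the expensive llama3 call, letting the fast path handle the clean majority of content.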
Ivan Todorov<p>Using Home Assistant OS, I wired up Whisper for speech recognition, Piper for voice responses, and Ollama for the LLM brain — all running on my own machines, stitched together with the Wyoming protocol. I could literally talk to my smart home, and it talked back. All offline, all private.</p><p><a href="https://mastodon.social/tags/SelfHosting" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>SelfHosting</span></a> <a href="https://mastodon.social/tags/HomeAssistant" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>HomeAssistant</span></a> <a href="https://mastodon.social/tags/VoiceAssistant" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>VoiceAssistant</span></a> <a href="https://mastodon.social/tags/LocalAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LocalAI</span></a> <a href="https://mastodon.social/tags/Whisper" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Whisper</span></a> <a href="https://mastodon.social/tags/Piper" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Piper</span></a> <a href="https://mastodon.social/tags/Ollama" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Ollama</span></a> <a href="https://mastodon.social/tags/HomeLab" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>HomeLab</span></a> <a href="https://mastodon.social/tags/TechDIY" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>TechDIY</span></a></p>
Ivan Todorov<p>Sometime back, I got obsessed with the idea of running my own voice assistant — fully local, no Google, no Amazon, no "cloud intelligence." Just me, my wife and our server(s)... </p><p>So I built Marvin.</p><p>👇</p><p><a href="https://mastodon.social/tags/SelfHosting" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>SelfHosting</span></a> <a href="https://mastodon.social/tags/HomeAssistant" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>HomeAssistant</span></a> <a href="https://mastodon.social/tags/VoiceAssistant" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>VoiceAssistant</span></a> <a href="https://mastodon.social/tags/LocalAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LocalAI</span></a> <a href="https://mastodon.social/tags/Whisper" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Whisper</span></a> <a href="https://mastodon.social/tags/Piper" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Piper</span></a> <a href="https://mastodon.social/tags/Ollama" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Ollama</span></a> <a href="https://mastodon.social/tags/HomeLab" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>HomeLab</span></a> <a href="https://mastodon.social/tags/TechDIY" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>TechDIY</span></a></p>
Markus Eisele<p>Tracing the Mind of Your AI: Java Observability with Quarkus and LangChain4j<br>Instrument, trace, and monitor your AI-powered Java apps using local LLMs, Ollama, and OpenTelemetry with zero boilerplate. <br><a href="https://myfear.substack.com/p/java-ai-observability-quarkus-langchain4j" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">myfear.substack.com/p/java-ai-</span><span class="invisible">observability-quarkus-langchain4j</span></a><br><a href="https://mastodon.online/tags/Java" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Java</span></a> <a href="https://mastodon.online/tags/LangChain4j" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LangChain4j</span></a> <a href="https://mastodon.online/tags/Ollama" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Ollama</span></a> <a href="https://mastodon.online/tags/OpenTelemetry" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>OpenTelemetry</span></a></p>
Markus Eisele<p>Smart Local AI Routing in Java: Build a Hybrid LLM Gateway with Quarkus and Ollama<br>Use LangChain4j, semantic embeddings, and Quarkus to route prompts to the best local LLM for coding, summarization, or chat <br><a href="https://myfear.substack.com/p/smart-local-llm-routing-quarkus-java-ollama" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">myfear.substack.com/p/smart-lo</span><span class="invisible">cal-llm-routing-quarkus-java-ollama</span></a><br><a href="https://mastodon.online/tags/Java" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Java</span></a> <a href="https://mastodon.online/tags/Quarkus" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Quarkus</span></a> <a href="https://mastodon.online/tags/Ollama" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Ollama</span></a> <a href="https://mastodon.online/tags/Langchain4j" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Langchain4j</span></a> <a href="https://mastodon.online/tags/SemanticRouting" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>SemanticRouting</span></a></p>
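The routing idea above compares a prompt's embedding against one reference description per specialist model. A toy Python sketch of that mechanism, with a bag-of-words counter standing in for a real embedding model and hypothetical model names (the article itself uses LangChain4j embeddings in Java):

```python
import math
from collections import Counter

def embed(text):
    """Toy stand-in for a real embedding model: bag-of-words counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# One reference description per specialist model (hypothetical names).
routes = {
    "codellama": embed("write code function bug fix program python java"),
    "llama3": embed("chat talk answer question explain conversation"),
    "mistral": embed("summarize summary shorten condense article text"),
}

def route(prompt):
    """Send the prompt to the model whose description it most resembles."""
    vec = embed(prompt)
    return max(routes, key=lambda name: cosine(vec, routes[name]))
```

With a real embedding model the reference descriptions would capture semantics rather than surface word overlap, but the routing decision itself stays this simple: embed once, compare, dispatch.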
Magnus Hedemark<p>Now that I've got a <a href="https://pompat.us/tags/gpu" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>GPU</span></a> rig <a href="https://pompat.us/tags/rtx5070ti" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>RTX5070Ti</span></a> my <a href="https://pompat.us/tags/localllm" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LocalLLM</span></a> with <a href="https://pompat.us/tags/ollama" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Ollama</span></a> is going much better than it did with Mac Mini.</p><p>But I've got so much background agentic work going to that GPU now that I sometimes hit contention and feel like I should be thinking about having multiple GPUs around.</p><p>I can quit anytime I want.</p>

Markus Eisele<p>Mastering AI Tool-Calling with Java: Build Your Own Dungeon Master with Quarkus and LangChain4j. Turn a local LLM into a dice-rolling, decision-making RPG game master. Powered by Java, Quarkus, and the magic of LangChain4j. <br><a href="https://myfear.substack.com/p/ai-dungeon-master-quarkus-langchain4j-java" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">myfear.substack.com/p/ai-dunge</span><span class="invisible">on-master-quarkus-langchain4j-java</span></a><br><a href="https://mastodon.online/tags/Java" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Java</span></a> <a href="https://mastodon.online/tags/Quarkus" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Quarkus</span></a> <a href="https://mastodon.online/tags/LangChain4j" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LangChain4j</span></a> <a href="https://mastodon.online/tags/Game" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Game</span></a> <a href="https://mastodon.online/tags/RPG" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>RPG</span></a> <a href="https://mastodon.online/tags/llm" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>llm</span></a> <a href="https://mastodon.online/tags/Ollama" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Ollama</span></a></p>
Evan Hahn<p>In an apocalypse scenario, is it better to have a local LLM or offline Wikipedia? <a href="https://evanhahn.com/local-llms-versus-offline-wikipedia/" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">evanhahn.com/local-llms-versus</span><span class="invisible">-offline-wikipedia/</span></a></p><p><a href="https://bigshoulders.city/tags/llm" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>llm</span></a> <a href="https://bigshoulders.city/tags/LLMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMs</span></a> <a href="https://bigshoulders.city/tags/ai" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ai</span></a> <a href="https://bigshoulders.city/tags/wikipedia" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>wikipedia</span></a> <a href="https://bigshoulders.city/tags/kiwix" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>kiwix</span></a> <a href="https://bigshoulders.city/tags/ollama" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ollama</span></a></p>
Markus Eisele<p>Build a Smart Credit Card Validator with Quarkus, Langchain4j, and Ollama<br>Validate cards like a bank, talk like a human. <br><a href="https://myfear.substack.com/p/smart-credit-card-validator-quarkus-langchain4j-ollama" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">myfear.substack.com/p/smart-cr</span><span class="invisible">edit-card-validator-quarkus-langchain4j-ollama</span></a><br><a href="https://mastodon.online/tags/Java" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Java</span></a> <a href="https://mastodon.online/tags/Langchain4j" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Langchain4j</span></a> <a href="https://mastodon.online/tags/Ollama" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Ollama</span></a> <a href="https://mastodon.online/tags/CreditCardValidator" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>CreditCardValidator</span></a> <a href="https://mastodon.online/tags/AiAgent" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AiAgent</span></a></p>
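"Validate cards like a bank" almost certainly refers to the deterministic side of the validator; the standard deterministic check for card numbers is the Luhn checksum, sketched here in Python (the article's actual implementation is in Java/Quarkus, with the LLM layer handling the human-readable explanation):

```python
def luhn_valid(number: str) -> bool:
    """Luhn checksum: double every second digit from the right, subtract 9
    from any doubled digit above 9, and require the total to be a
    multiple of 10."""
    digits = [int(d) for d in number if d.isdigit()]
    if len(digits) < 2:
        return False
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0
```

This catches single-digit typos and most adjacent transpositions; it says nothing about whether the account exists, which is where the post's "talk like a human" LLM layer would explain the failure to the user.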
Markus Eisele<p>From Black Box to Blueprint: Tracing Every LLM Decision with Quarkus<br>Build trust, traceability, and visual insight into your AI-powered Java apps using LangChain4j, Ollama, and CDI interceptors. <br><a href="https://myfear.substack.com/p/llm-observability-quarkus-langchain4j" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">myfear.substack.com/p/llm-obse</span><span class="invisible">rvability-quarkus-langchain4j</span></a><br><a href="https://mastodon.online/tags/Java" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Java</span></a> <a href="https://mastodon.online/tags/Quarkus" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Quarkus</span></a> <a href="https://mastodon.online/tags/LLM" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLM</span></a> <a href="https://mastodon.online/tags/Observability" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Observability</span></a> <a href="https://mastodon.online/tags/Traces" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Traces</span></a> <a href="https://mastodon.online/tags/CDI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>CDI</span></a> <a href="https://mastodon.online/tags/Ollama" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Ollama</span></a> <a href="https://mastodon.online/tags/LangChain4j" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LangChain4j</span></a></p>
openSUSE Linux<p>Want to run powerful <a href="https://fosstodon.org/tags/LLMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMs</span></a> locally on <a href="https://fosstodon.org/tags/openSUSE" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>openSUSE</span></a> Tumbleweed? With <a href="https://fosstodon.org/tags/Ollama" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Ollama</span></a>, it's just a one-line install. Privacy ✅ Offline Access ✅ Customization ✅ This article can get started and bring <a href="https://fosstodon.org/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> to your own machine! <a href="https://news.opensuse.org/2025/07/12/local-llm-with-openSUSE/" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">news.opensuse.org/2025/07/12/l</span><span class="invisible">ocal-llm-with-openSUSE/</span></a></p>
Hongster<p>Cheatsheet for `ollama` command.</p><p><a href="https://tech.mrleong.net/cheatsheet-examples-ollama" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">tech.mrleong.net/cheatsheet-ex</span><span class="invisible">amples-ollama</span></a></p><p><a href="https://fosstodon.org/tags/cheatsheet" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>cheatsheet</span></a> <a href="https://fosstodon.org/tags/ollama" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ollama</span></a> <a href="https://fosstodon.org/tags/cli" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>cli</span></a> <a href="https://fosstodon.org/tags/terminal" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>terminal</span></a></p>
MacGeneration<p>Ollama, the utility for running LLMs locally, becomes a native application <a href="http://dlvr.it/TLjRQz" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">http://</span><span class="">dlvr.it/TLjRQz</span><span class="invisible"></span></a> <a href="https://social.macg.co/tags/Ollama" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Ollama</span></a> <a href="https://social.macg.co/tags/LLM" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLM</span></a></p>