LLMs Go Beyond Hallucinations All the Way to Acting Like Con Men - HTTAY 124 Sylph
In writing about the Republican Party's nightmare class war budget proposal, what Trump dubbed the "One Big Beautiful Bill," I've tried to focus on the mortal toll this shit is going to inflict on labor-class Americans in order to give outrageous tax cuts to the richest people and companies in our society. I am not employing hyperbole when I say this reconciliation bill is going to kill people; the GOP is actively engaging in corpse farming to make the rich richer, while pretending they're fighting a meaningless US debt "crisis" the bill won't address at fucking all. This is the most important issue here, and Americans need to know that their representatives are literally going to kill thousands and thousands of them in a clear act of class warfare; these people must not be allowed to accomplish their goals, and the public, particularly folks in "red" districts, needs to let them know they will have neither a career nor a moment of peace until they shuffle off this mortal coil, if they pass this theft and murder bill.
Unfortunately, however, GOP control of the House and Senate, albeit by very slim margins, has also allowed them to stuff this odious piece of legislation with all kinds of other gifts to broligarchs and folks trying to transform America into a fascist dictatorship. One particularly disturbing provision, given the Trump regime's close ties to Silicon Valley nazi billionaire ideologues pumping an AI bubble worth trillions of dollars, would block all state-level legislation of any kind on what the GOP broadly defines as "artificial intelligence or automated decision-making systems" for the next ten years.
"Republicans in US Congress are trying to bar states from being able to introduce or enforce laws that would create guardrails for artificial intelligence or automated decision-making systems for 10 years.
A provision in the proposed budgetary bill now before the House of Representatives would prohibit any state or local governing body from pursuing “any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems” unless the purpose of the law is to “remove legal impediments to, or facilitate the deployment or operation of” these systems.
The provision was a last-minute addition by House Republicans to the bill just two nights before it was due to be marked up on Tuesday. The House energy and commerce committee voted to advance the reconciliation package on Wednesday morning.
The bill defines AI systems and models broadly, with anything from facial recognition systems to generative AI qualifying. The proposed law would also apply to systems that use algorithms or AI to make decisions including for hiring, housing and whether someone qualifies for public benefits."
As the article notes, this broad overreach of Congressional power comes at a time when there are already class action lawsuits being filed against companies using algorithmic software to collude on rent prices, or discriminate against marginalized renters, and will almost certainly invalidate safeguards against such practices already legislated in some "blue" states. Perhaps more disturbingly, however, this deregulatory provision in the budget bill will also protect and benefit projects like Elon Musk's DOGE-infused quest to replace most federal government workers with AI systems likely provided by, and to the benefit of, his companies. The ban would also apply to online AI information resources like chatbots and LLM-powered research tools, which doesn't sound that sinister until you remember we're currently in the middle of a massive AI scandal because *someone* (it was Elon Musk) programmed the Twitter chatbot tool Grok to promote a nazi conspiracy theory about white genocide in South Africa and engage in a little Holocaust minimization and denial.
Look folks, I know that AI technology, for all of its dangers, isn't inherently evil. But at some point, as the evidence that I'm right continues to pile up before our very eyes, I'm going to need people to accept that the really-existing AI industry we're facing down today, in the not at all hypothetical world we live in, is being run by billionaire nazi cultists who want to use these programs to destroy critical thinking, turn your kids into fascists, and transform our society into a reactionary hellscape controlled entirely by guys like them. That's just who these people are, and what they're trying to do with LLMs and algorithmic technology; and you can't say it doesn't matter, because we just watched these guys use social media algorithms to mint a new generation of young fascist converts over the past three or four years in America. Now the Republican Party wants to legislate a free hand to accelerate that project, tucked into a class war budget bill that will absolutely kill poor people. There's a clear plan here, if you want to see it. Do you?
As those who pay attention have been saying for many years now:
#ai #agi #snakeoil #llm #technology
https://www.nytimes.com/2025/05/16/technology/what-is-agi.html
All tools create a path of least resistance. When it comes to AI chatbots, that path is to trust the AI's outputs.
Unfortunately, all LLMs hallucinate. And as users get used to relying on the machine, their ability and willingness to spot these errors deteriorate.
Blaming the user for this is irresponsible. The problem is caused by the way these tools are designed - so it's up to us, as designers, to fix it.
#AI #GenAI #LLM #UX #UXDesign #UserExperience #tech
https://www.nngroup.com/articles/ai-chatbots-discourage-error-checking/
Dear registered students for ISWS 2025, please keep in mind that the deadline for the poster submission is not far away...
We are looking forward to your submissions!
#isws2025 #summerschool #semanticweb #semweb #llm #nlp #knowledgegraph #AI #responsibleAI #reliableAI @sourisnumerique @lysander07
Generating Shakespeare-like text with an n-gram language model is straightforward and quite simple (a minimal sketch is included below). But don't expect too much of it. It will not be able to recreate a lost Shakespeare play for you ;-) It's merely a parrot, making up well-sounding sentences out of fragments of original Shakespeare texts...
#ise2025 #lecture #nlp #llm #languagemodel @fiz_karlsruhe @fizise @tabea @enorouzi @sourisnumerique #shakespeare #generativeAI #statistics
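To make the "parrot" point concrete, here is a minimal sketch of such an n-gram generator in Python. The corpus file name (shakespeare.txt), the choice of n=3, and the simple whitespace tokenization are illustrative assumptions, not part of the original post.

```python
# Minimal sketch of an n-gram text generator (illustrative assumptions:
# a plain-text corpus at "shakespeare.txt", trigrams, whitespace tokens).
import random
from collections import defaultdict

def build_ngram_model(tokens, n=3):
    """Map each (n-1)-token context to the list of tokens seen after it."""
    model = defaultdict(list)
    for i in range(len(tokens) - n + 1):
        context = tuple(tokens[i:i + n - 1])
        model[context].append(tokens[i + n - 1])
    return model

def generate(model, n=3, length=60):
    """Start from a random context and repeatedly sample a seen continuation."""
    context = random.choice(list(model.keys()))
    output = list(context)
    for _ in range(length):
        candidates = model.get(tuple(output[-(n - 1):]))
        if not candidates:  # dead end: this context never continued in the corpus
            break
        output.append(random.choice(candidates))
    return " ".join(output)

if __name__ == "__main__":
    with open("shakespeare.txt", encoding="utf-8") as f:
        tokens = f.read().split()
    model = build_ngram_model(tokens, n=3)
    print(generate(model, n=3))
```

Because each next word is sampled only from continuations actually observed after the current context, the output sounds locally Shakespearean while carrying no plot, meaning, or intent, which is exactly the "parrot" behaviour described above.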
If you, for some reason, have a subscription to some #LLM like #chatGPT from #openAI or #claude from #Anthropic, consider switching to the @kagihq ultimate plan. You can still use your favourite LLM, but with better privacy, for around the same amount of money, AND you support a great privacy-focused search engine project.
What could go wrong?…
Tech mogul Palmer Luckey creating arsenal of AI-powered autonomous weapons
I am talking about the fact that we should not adopt the #narrative of these companies, but differentiate!
As here: These people don't "commission art", they write prompts or programs to buy a product.
You can #commission #art only from humans.
#LLM / #genAI is not capable of #creative processes; it can only execute calculation systems.
We need to move away from attributing capabilities to these systems that they do not have. We owe that to real people.
Why GPT-NL can't do without humans: "the mammoth task of fine-tuning"
https://gpt-nl.nl/nieuws/finetuning/
#AI #LLM #GPTNL #finetuning
Upcoming seminar by @danmcquillan:
"Drawing on Illich's 'Tools for Conviviality', this talk will argue that an important role for the contemporary university is to resist AI. The university as a space for the pursuit of knowledge and the development of independent thought has long been undermined by neoliberal restructuring and the ambitions of the Ed Tech industry. So-called generative AI has added computational shock and awe to the assault on criticality, both inside and outside higher education, despite the gulf between the rhetoric and the actual capacities of its computational operations. Such is the synergy between AI's dissimulations and emerging political currents that AI will become embedded in all aspects of students' lives at university and afterwards, preempting and foreclosing diverse futures. It's vital to develop alternatives to AI's optimised nihilism and to sustain the joyful knowledge that nothing is inevitable and other worlds are still possible. The talk will ask what Illich has to teach us about an approach to technology that prioritises creativity and autonomy, how we can bolster academic inquiry through technical inquiry, workers' inquiry and struggle inquiry, and whether the future of higher education should enrol lecturers and students in a process of collective decomputing."
#LLM that generate #ProgramCode with integrated advertising (#Werbung)
The University of Tokyo and others have released "KOKKAI DOC," a site that uses AI to analyze past statements by members of Japan's National Diet and summarize their political positions (Generative AI Close-Up)
https://www.techno-edge.net/article/2025/05/19/4367.html
#technoedge #technology #news #reviews #games #gadgets #GenerativeAIWeekly #LLM
@jwildeboer well, anyone who used #ChatGPT for sensitive data should have known better. I think the #NYT case is very important. I hope it provokes a real discussion about the theft that is at the heart of #LLM development.
Christmas Comes Early With AI Santa Demo - With only two hundred odd days ’til Christmas, you just know we’re already feeling... - https://hackaday.com/2025/05/18/christmas-comes-early-with-ai-santa-demo/ #artificialintelligence #speechrecognition #speechsynthesis #santaclaus #libpeer #openai #llm #ai
AI Chatbots Are Becoming Even Worse At Summarizing Data
I put up an essay on my blog: "Unsupervised AI Children". It's about the issue of unsupervised AI doing self-directed learning.
https://netsettlement.blogspot.com/2025/05/unsupervised-ai-children.html
MCP Blender Addon Lets AI Take the Wheel and Wield the Tools - Want to give an AI the ability to do stuff in Blender? The BlenderMCP addon does e... - https://hackaday.com/2025/05/18/mcp-blender-addon-lets-ai-take-the-wheel-and-wield-the-tools/ #artificialintelligence #softwarehacks #blender #llm #mcp #ai
Sure, there are responsible, appropriate use cases for LLMs.
Those, equally hypothetically, could run on local compute, using open-weight models with healthy, curated training data that isn't stolen or toxic (a rough local-inference sketch follows at the end of this thread).
Sure. But you can't argue for something based on its best version. It's whitewashing the _real_ version.
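For what it's worth, the "runs on local compute" half of that best version is at least technically plausible. Below is a hedged sketch of running a small open-weight model entirely on local hardware with the Hugging Face transformers library; the model name (TinyLlama/TinyLlama-1.1B-Chat-v1.0) and the prompt are illustrative assumptions, and nothing in it addresses the training-data objection raised above.

```python
# A minimal local-inference sketch, assuming a small open-weight checkpoint
# (TinyLlama here, purely as an example). After the first download the
# weights are served from the local cache; no prompt or output leaves the
# machine. This says nothing about how the weights were trained.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # example open-weight model

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

prompt = "In one sentence, why can local inference improve privacy?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The sketch only demonstrates that inference can stay on your own machine; the provenance of the training data, which is the thread's actual objection, is untouched by where the model runs.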