
In writing about the Republican Party's nightmare class war budget proposal, what Trump dubbed the "One Big Beautiful Bill," I've tried to focus on the mortal toll this shit is going to foist on labor class Americans in order to give outrageous tax cuts to the richest people and companies in our society. I am not employing hyperbole when I say this reconciliation bill is going to kill people; the GOP is actively engaging in corpse farming to make the rich richer, while pretending they're fighting a meaningless US debt "crisis" the bill won't address at fucking all. This is the most important issue here, and Americans need to know that their representatives are literally going to kill thousands and thousands of them in a clear act of class warfare. These people must not be allowed to accomplish their goals, and the public, particularly folks in "red" districts, needs to let these folks know they will have neither a career nor a moment of peace until they shuffle off this mortal coil, if they pass this theft and murder bill.

Unfortunately, however, GOP control of the House and Senate, albeit by very slim margins, has also allowed them to stuff this odious piece of legislation with all kinds of other gifts to broligarchs and folks trying to transform America into a fascist dictatorship. One particularly disturbing provision, given the Trump regime's close ties to Silicon Valley nazi billionaire ideologues pumping an AI bubble worth trillions of dollars, would block all state-level legislation of any kind on what the GOP broadly defines as "artificial intelligence or automated decision-making systems" for the next ten years.

"Republicans in US Congress are trying to bar states from being able to introduce or enforce laws that would create guardrails for artificial intelligence or automated decision-making systems for 10 years.

A provision in the proposed budgetary bill now before the House of Representatives would prohibit any state or local governing body from pursuing “any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems” unless the purpose of the law is to “remove legal impediments to, or facilitate the deployment or operation of” these systems.

The provision was a last-minute addition by House Republicans to the bill just two nights before it was due to be marked up on Tuesday. The House energy and commerce committee voted to advance the reconciliation package on Wednesday morning.

The bill defines AI systems and models broadly, with anything from facial recognition systems to generative AI qualifying. The proposed law would also apply to systems that use algorithms or AI to make decisions including for hiring, housing and whether someone qualifies for public benefits."

As the article notes, this broad overreach of Congressional power comes at a time when there are already class action lawsuits being filed against companies using algorithmic software to collude on rent prices, or to discriminate against marginalized renters, and it will almost certainly invalidate safeguards against such practices already legislated in some "blue" states. Perhaps more disturbingly, however, this deregulatory provision in the budget bill will also protect and benefit projects like Elon Musk's DOGE-infused quest to replace most federal government workers with AI systems likely provided by, and to the benefit of, his companies. The ban would also apply to online AI information resources like chatbots and LLM-powered research tools, which doesn't sound that sinister until you remember we're currently in the middle of a massive AI scandal because *someone* (it was Elon Musk) programmed the Twitter chatbot tool Grok to promote a nazi conspiracy theory about white genocide in South Africa and engage in a little Holocaust minimization and denial.

Look folks, I know that AI technology, for all of its dangers, isn't inherently evil. But at some point, as the evidence that I'm right continues to pile up before our very eyes, I'm going to need people to accept that the really-existing AI industry we're facing down today, in the not at all hypothetical world we live in, is being run by billionaire nazi cultists who want to use these programs to destroy critical thinking, turn your kids into fascists, and transform our society into a reactionary hellscape controlled entirely by guys like them. That's just who these people are, and what they're trying to do with LLMs and algorithmic technology; and you can't say it doesn't matter, because we just watched these guys transform social media algorithms into a new generation of young fascist converts over the past three or four years in America. Now the Republican Party wants to legislate a free hand to accelerate that project, tucked into a class war budget bill that will absolutely kill poor people. There's a clear plan here, if you want to see it. Do you?

#Fascism #GOP #Budget

All tools create a path of least resistance. When it comes to AI chatbots, that path is to trust the AI's outputs.

Unfortunately, all LLMs hallucinate. And as users get used to relying on the machine, their ability and willingness to spot these errors deteriorates.

Blaming the user for this is irresponsible. The problem is caused by the way these tools are designed, so it's up to us, as designers, to fix it.

#AI #GenAI #LLM #UX #UXDesign #UserExperience #tech

nngroup.com/articles/ai-chatbo

Nielsen Norman Group: "AI Chatbots Discourage Error Checking" — AI hallucinations threaten the usefulness of LLM-generated text in professional environments, but today's LLMs encourage users to take outputs at face value.

Generating Shakespeare-like text with an n-gram language model is straightforward and quite simple. But don't expect too much of it. It will not be able to recreate a lost Shakespeare play for you ;-) It's merely a parrot, making up plausible-sounding sentences out of fragments of the original Shakespeare texts...

#ise2025 #lecture #nlp #llm #languagemodel @fiz_karlsruhe @fizise @tabea @enorouzi @sourisnumerique #shakespeare #generativeAI #statistics
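The parrot metaphor is easy to demonstrate in code. Below is a minimal sketch of a word-level n-gram generator (here a bigram model, n=2); the tiny hard-coded corpus stands in for a real Shakespeare text, and the function names are illustrative, not from any library. The model simply records which word follows each (n-1)-word prefix in the training text, then stitches those fragments back together at random.

```python
import random
from collections import defaultdict

def build_ngram_model(text, n=2):
    """Map each (n-1)-word prefix to the list of words observed after it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - n + 1):
        prefix = tuple(words[i:i + n - 1])
        model[prefix].append(words[i + n - 1])
    return model

def generate(model, length=10, seed=None):
    """Start from a random prefix and repeatedly sample an observed follower."""
    random.seed(seed)
    prefix = random.choice(list(model.keys()))
    out = list(prefix)
    for _ in range(length):
        followers = model.get(tuple(out[-len(prefix):]))
        if not followers:  # dead end: this prefix never continues in the corpus
            break
        out.append(random.choice(followers))
    return " ".join(out)

# Toy stand-in corpus; a real run would train on the complete works.
corpus = ("to be or not to be that is the question "
          "whether tis nobler in the mind to suffer")
model = build_ngram_model(corpus, n=2)
print(generate(model, length=10, seed=42))
```

Every word the generator emits is copied verbatim from the training text, and every adjacent pair it produces was seen in the corpus — which is exactly why it can remix Shakespeare but never reconstruct a lost play.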

What could go wrong?…

Tech mogul Palmer Luckey creating arsenal of AI-powered autonomous weapons

youtube.com/watch?v=bWEXnph1El


@radiophobicsherkpop

I am talking about the fact that we should not adopt the #narrative of these companies, but differentiate!
As here: These people don't "commission art", they write prompts or programs to buy a product.

You can #commission #art only from humans.
#LLM / #genAI is not capable of #creative processes; it can only execute calculation systems.

We need to move away from attributing capabilities to these systems that they do not have. We owe that to real people.

@amalia12

Upcoming seminar by @danmcquillan:

"Drawing on Illich's 'Tools for Conviviality', this talk will argue that an important role for the contemporary university is to resist AI. The university as a space for the pursuit of knowledge and the development of independent thought has long been undermined by neoliberal restructuring and the ambitions of the Ed Tech industry. So-called generative AI has added computational shock and awe to the assault on criticality, both inside and outside higher education, despite the gulf between the rhetoric and the actual capacities of its computational operations. Such is the synergy between AI's dissimulations and emerging political currents that AI will become embedded in all aspects of students' lives at university and afterwards, preempting and foreclosing diverse futures. It's vital to develop alternatives to AI's optimised nihilism and to sustain the joyful knowledge that nothing is inevitable and other worlds are still possible. The talk will ask what Illich has to teach us about an approach to technology that prioritises creativity and autonomy, how we can bolster academic inquiry through technical inquiry, workers' inquiry and struggle inquiry, and whether the future of higher education should enrol lecturers and students in a process of collective decomputing."

danmcquillan.org/cpct_abstract

danmcquillan.org: Abstract for seminar at the Centre for Philosophy and Critical Thought (CPCT)

Sure, there are responsible, appropriate use cases for LLMs.

Those, equally hypothetically, could run on local compute, using open-weight models with healthy, curated training data that isn't stolen or toxic.

Sure. But you can't argue for something based on its best version. It's whitewashing the _real_ version.