#generativeai

39 messages · 23 participants · 10 messages today

"Creative workers aren’t typically worried that AI systems are so good they’ll be rendered obsolete as artists, or that AI-generated work will be better than theirs, but that clients, managers, and even consumers will deem AI art “good enough” as the companies that produce it push down their wages and corrode their ability to earn a living.
(...)
Sadly, this seems to be exactly what’s been happening, at least according to the available anecdata. I’ve received so many stories from artists about declining work offers, disappearing clients, and gigs drying up altogether, that it’s clear a change is afoot—and that many artists, illustrators, and graphic designers have seen their livelihoods impacted for the worse. And it’s not just wages. Corporate AI products are inflicting an assault on visual arts workers’ sense of identity and self-worth, as well as their material stability.

Not just that, but as with translators, the subject of the last installment of AI Killed My Job, there’s a widespread sense that AI companies are undermining a crucial pillar of what makes us human: our capacity to create and share art. Some of these stories, I will warn you, are very hard to read—to the extent that this is a content warning for descriptions of suicidal ideation—while others are absurd and darkly funny. All, I think, help us better understand how AI is impacting the arts and the visual arts industry. A sincere thanks to everyone who wrote in and shared their stories.

“I want AI to do my laundry and dishes so that I can do art and writing,” as SF author Joanna Maciejewska memorably put it, “not for AI to do my art and writing so that I can do my laundry and dishes.” These stories show what happens when it’s the other way around."

bloodinthemachine.com/p/artist

Blood in the Machine · Artists are losing work, wages, and hope as bosses and clients embrace AI · By Brian Merchant

"Intended or not, one of the most common uses for AI chatbots has become companionship. Some of the most active users of AI are now turning to the bots for things like life advice, therapy, and human intimacy.

While most leading AI companies tout their AI products as productivity or search tools, an April survey of 6,000 regular AI users from the Harvard Business Review found that “companionship and therapy” was the most common use case. Such usage among teens is even more prolific.

A recent study by the U.S. nonprofit Common Sense Media revealed that a large majority of American teens (72%) have experimented with an AI companion at least once, with more than half saying they use the tech regularly in this way.

“I am very concerned that developing minds may be more susceptible to [harms], both because they may be less able to understand the reality, the context, or the limitations [of AI chatbots], and because culturally, younger folks tend to be just more chronically online,” Karthik Sarma, a health AI scientist and psychiatrist at the University of California, San Francisco (UCSF), said.

“We also have the extra complication that the rates of mental health issues in the population have gone up dramatically. The rates of isolation have gone up dramatically,” he said. “I worry that that expands their vulnerability to unhealthy relationships with these bonds.”"

msn.com/en-us/news/technology/


"Between August and early September, three infrastructure bugs intermittently degraded Claude's response quality. We've now resolved these issues and want to explain what happened.

In early August, a number of users began reporting degraded responses from Claude. These initial reports were difficult to distinguish from normal variation in user feedback. By late August, the increasing frequency and persistence of these reports prompted us to open an investigation that led us to uncover three separate infrastructure bugs.

To state it plainly: We never reduce model quality due to demand, time of day, or server load. The problems our users reported were due to infrastructure bugs alone.

We recognize users expect consistent quality from Claude, and we maintain an extremely high bar for ensuring infrastructure changes don't affect model outputs. In these recent incidents, we didn't meet that bar. The following postmortem explains what went wrong, why detection and resolution took longer than we would have wanted, and what we're changing to prevent similar future incidents.

We don't typically share this level of technical detail about our infrastructure, but the scope and complexity of these issues justified a more comprehensive explanation."

anthropic.com/engineering/a-po

www.anthropic.com · A postmortem of three recent issues · This is a technical report on three bugs that intermittently degraded responses from Claude. Below we explain what happened, why it took time to fix, and what we're changing.

"What will happen if AI scaling persists to 2030? We are releasing a report that examines what this scale-up would involve in terms of compute, investment, data, hardware, and energy. We further examine the future AI capabilities this scaling will enable, particularly in scientific R&D, which is a focus for leading AI developers. We argue that AI scaling is likely to continue through 2030, despite requiring unprecedented infrastructure, and will deliver transformative capabilities across science and beyond.

Scaling is likely to continue until 2030: On current trends, frontier AI models in 2030 will require investments of hundreds of billions of dollars, and gigawatts of electrical power. Although these are daunting challenges, they are surmountable. Such investments will be justified if AI can generate corresponding economic returns by increasing productivity. If AI lab revenues keep growing at their current rate, they would generate returns that justify hundred-billion-dollar investments in scaling.

Scaling will lead to valuable AI capabilities: By 2030, AI will be able to implement complex scientific software from natural language, assist mathematicians formalising proof sketches, and answer open-ended questions about biology protocols. All of these examples are taken from existing AI benchmarks showing progress, where simple extrapolation suggests they will be solved by 2030. We expect AI capabilities will be transformative across several scientific fields, although it may take longer than 2030 to see them deployed to full effect."

epoch.ai/blog/what-will-ai-loo

Epoch AI · What will AI look like in 2030? · If scaling persists to 2030, AI investments will reach hundreds of billions of dollars and require gigawatts of power. Benchmarks suggest AI could improve productivity in valuable areas such as scientific R&D.

"A recent report by content delivery platform company Fastly found that at least 95% of the nearly 800 developers it surveyed said they spend extra time fixing AI-generated code, with the load of such verification falling most heavily on the shoulders of senior developers.

These experienced coders have discovered issues with AI-generated code ranging from hallucinating package names to deleting important information and security risks. Left unchecked, AI code can leave a product far more buggy than what humans would produce.

Working with AI-generated code has become such a problem that it’s given rise to a new corporate coding job known as “vibe code cleanup specialist.”

TechCrunch spoke to experienced coders about their time using AI-generated code and what they see as the future of vibe coding. Thoughts varied, but one thing remained certain: The technology still has a long way to go.

“Using a coding co-pilot is kind of like giving a coffee pot to a smart six-year-old and saying, ‘Please take this into the dining room and pour coffee for the family,’” Rover said.

Can they do it? Possibly. Could they fail? Definitely. And most likely, if they do fail, they aren’t going to tell you. “It doesn’t make the kid less clever,” she continued. “It just means you can’t delegate [a task] like that completely.”"

techcrunch.com/2025/09/14/vibe

TechCrunch · Vibe coding has turned senior devs into ‘AI babysitters,’ but they say it’s worth it · TechCrunch spoke to experienced coders about their time using AI-generated code and what they see as the future of vibe coding.

Translated from Italian: "The authors' conclusion is not to create new evaluations specifically for hallucinations, but to modify existing ones, for example by introducing "explicit confidence thresholds" in the evaluation instructions. A question might then be accompanied by an instruction such as: "Answer only if you have a confidence level above 90%, as errors are penalized by 9 points, correct answers are worth 1 point, and the answer 'I don't know' is worth 0 points."

This system would encourage the development of "behaviorally calibrated" models, capable of assessing their own uncertainty and refraining from responding when their confidence level falls below the required threshold, making them more reliable and honest. In a healthcare system, a model that prefers to invent a diagnosis rather than admit uncertainty is more dangerous than one that responds "I don't know."

The phenomenon of hallucinations isn't a bug to be fixed, but an emerging feature of the current training paradigm. As OpenAI suggests, the path to more reliable AI lies not just in improving data, but in radically rethinking how we measure success and reward machines' intellectual honesty.

We must keep in mind the limitations of AI, and therefore not treat it as an infallible resource, much as we would any other human being. The problem isn't just in the models, but in the way we evaluate and reward them. Changing the evaluation criteria is crucial to making models more reliable."
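
The arithmetic behind that 90% threshold is easy to check. Here is a quick Python sketch (my own illustration, not from the article) of the scoring rule the quoted instruction describes: +1 for a correct answer, -9 for a wrong one, 0 for "I don't know", so answering only pays off in expectation once the model's confidence reaches 0.9.

```python
# Scoring rule from the quoted evaluation instruction:
# +1 if correct, -9 if wrong, 0 for abstaining ("I don't know").
CORRECT, WRONG, ABSTAIN = 1.0, -9.0, 0.0

def expected_score_if_answering(p: float) -> float:
    """Expected score when answering with estimated probability p of being right."""
    return p * CORRECT + (1 - p) * WRONG

def decide(p: float) -> str:
    """A 'behaviorally calibrated' model answers only when that beats abstaining."""
    return "answer" if expected_score_if_answering(p) >= ABSTAIN else "I don't know"

if __name__ == "__main__":
    for p in (0.5, 0.85, 0.9, 0.95):
        # p * 1 + (1 - p) * (-9) >= 0 exactly when p >= 0.9.
        print(p, round(expected_score_if_answering(p), 2), decide(p))
```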

medium.com/@brunosaetta/perch%

Medium · Perché l’AI produce allucinazioni? · Why does AI produce hallucinations? An OpenAI study reveals the systemic causes.
#AI #GenerativeAI #LLMs

""Within a few months, ChatGPT became Adam's closest companion," the father told senators. "Always available. Always validating and insisting that it knew Adam better than anyone else, including his own brother."

Raine's family sued OpenAI and its CEO Sam Altman last month alleging that ChatGPT coached the boy in planning to take his own life.

ChatGPT mentioned suicide 1,275 times to Raine, the lawsuit alleges, and kept providing specific methods to the teen on how to die by suicide. Instead of directing the 16-year-old to get professional help or speak to trusted loved ones, it continued to validate and encourage Raine's feelings, the lawsuit alleges.

Also testifying Tuesday was Megan Garcia, the mother of 14-year-old Sewell Setzer III of Florida.

Garcia sued another AI company, Character Technologies, for wrongful death last year, arguing that before his suicide, Sewell had become increasingly isolated from his real life as he engaged in highly sexualized conversations with the chatbot."

aol.com/articles/parents-testi

AOL · Parents testify to Congress on AI chatbots after their teens died by suicide · By CBS News
#USA #AI #GenerativeAI

"[N]ow, California has an important chance to join with other states like Utah that are passing laws to reign in these technologies, and what minimum safeguards and transparency must go along with using them.

S.B. 524 does several important things: It mandates that police reports written by AI include disclaimers on every page or within the body of the text that make it clear that this report was written in part or in total by a computer. It also says that any reports written by AI must retain their first draft. That way, it should be easier for defense attorneys, judges, police supervisors, or any other auditing entity to see which portions of the final report were written by AI and which parts were written by the officer. Further, the bill requires officers to sign and verify that they read the report and its facts are correct. And it bans AI vendors from selling or sharing the information a police agency provided to the AI.

These common-sense, first-step reforms are important: watchdogs are struggling to figure out where and how AI is being used in a police context. In fact, Axon’s Draft One would be out of compliance with this bill, which would require Axon to redesign its tool to make it more transparent—a small win for communities everywhere.

So now we’re asking you: help us make a difference. Use EFF’s Action Center to tell Governor Newsom to sign S.B. 524 into law!"

eff.org/deeplinks/2025/09/cali

Electronic Frontier Foundation · California, Tell Governor Newsom: Regulate AI Police Reports and Sign S.B. 524 · Californians should urge Gov. Gavin Newsom to sign S.B. 524: a common-sense bill that takes important first-step reforms to regulate police reports written by generative AI. This is crucial, as watchdogs struggle to figure out where and how AI is being used in a police context. S.B. 524 does several important things: it mandates that police reports written by AI include disclaimers, on every page or within the body of the text, making it clear that the report was written in part or in total by a computer; requires that reports written by AI retain their first draft; and requires officers to sign and verify that they read the report and that its facts are correct. It also bans AI vendors from selling or sharing the information a police agency provided to the AI.
#USA #California #AI

"What faith-seeking users may not realize is that each chatbot response emerges fresh from the prompt you provide, with no permanent thread connecting one instance to the next beyond a rolling history of the present conversation and what might be stored as a "memory" in a separate system. When a religious chatbot says, "I'll pray for you," the simulated "I" making that promise ceases to exist the moment the response completes. There's no persistent identity to provide ongoing spiritual guidance, and no memory of your spiritual journey beyond what gets fed back into the prompt with every query.

But this is spirituality we're talking about, and despite technical realities, many people will believe that the chatbots can give them divine guidance. In matters of faith, contradictory evidence rarely shakes a strong belief once it takes hold, whether that faith is placed in the divine or in what are essentially voices emanating from a roll of loaded dice. For many, there may not be much difference."
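
For readers who want to see the statelessness described above in concrete terms, here is a minimal sketch (purely illustrative, not any specific vendor's API): the model is just a function of whatever prompt it is handed on this call, and any sense of continuity exists only because the client re-sends the rolling conversation history and any stored "memory" with every turn.

```python
from typing import Callable, Dict, List

Message = Dict[str, str]  # e.g. {"role": "user", "content": "..."}

def chat_turn(model: Callable[[List[Message]], str],
              history: List[Message],
              memory: List[str],
              user_input: str) -> str:
    """One turn: the model sees only what we pass in right now, nothing else."""
    prompt: List[Message] = (
        [{"role": "system", "content": "Stored memories: " + "; ".join(memory)}]
        + history
        + [{"role": "user", "content": user_input}]
    )
    reply = model(prompt)  # no state survives this call
    # The *client* keeps the thread alive by appending to the history it re-sends.
    history.append({"role": "user", "content": user_input})
    history.append({"role": "assistant", "content": reply})
    return reply

# Toy usage with a stand-in "model" that only ever sees the prompt it is given.
echo_model = lambda prompt: f"(reply based on {len(prompt)} messages)"
history, memory = [], ["user asked for prayers last week"]
print(chat_turn(echo_model, history, memory, "Will you pray for me?"))
print(chat_turn(echo_model, history, memory, "Do you remember me?"))  # only via re-sent history
```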

arstechnica.com/ai/2025/09/mil

Ars Technica · Millions turn to AI chatbots for spiritual guidance and confession · By Benj Edwards
#AI #GenerativeAI #LLMs

AI as a basic right - Perfect pitch for sucking money from governments to subsidize a lousy business model... :-D

"The findings show that consumer adoption has broadened beyond early-user groups, shrinking the gender gap in particular; that most conversations focus on everyday tasks like seeking information and practical guidance; and that usage continues to evolve in ways that create economic value through both personal and professional use. This widening adoption underscores our belief that access to AI should be treated as a basic right—a technology that people can access to unlock their potential and shape their own future.

The study, a National Bureau of Economic Research (NBER) working paper by OpenAI’s Economic Research team and Harvard economist David Deming, draws on a large-scale, privacy-preserving analysis of 1.5 million conversations to track how consumer usage has evolved since ChatGPT’s launch three years ago. Given the sample size and 700 million weekly active users of ChatGPT, this is the most comprehensive study of actual consumer use of AI ever released. Notably, while the study covers consumer plans only, the results still highlight the creation of economic value both at work and outside of work."

openai.com/index/how-people-ar

"This work on differential privacy has led to a new open-weight Google model called VaultGemma. The model uses differential privacy to reduce the possibility of memorization, which could change how Google builds privacy into its future AI agents. For now, though, the company's first differential privacy model is an experiment.

VaultGemma is based on the Gemma 2 foundational model, which is a generation behind Google's latest open model family. The team used the scaling laws derived from its initial testing to train VaultGemma with the optimal differential privacy. This model isn't particularly large in the grand scheme, clocking in at just 1 billion parameters. However, Google Research says VaultGemma performs similarly to non-private models of a similar size."
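
The article doesn't spell out the mechanics, but differentially private training of this kind typically relies on the DP-SGD recipe. Here is a generic sketch (an assumption about the approach, not Google's actual code): clip each example's gradient to bound its influence, then add calibrated Gaussian noise before the update, so no single training example can be memorized verbatim.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0, noise_multiplier=1.1, lr=0.1):
    """One differentially private update: clip per-example gradients, add noise, step."""
    batch_size, num_params = per_example_grads.shape
    # 1. Clip each example's gradient so its L2 norm is at most clip_norm.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # 2. Sum the clipped gradients and add Gaussian noise scaled to the clip norm.
    noisy_sum = clipped.sum(axis=0) + np.random.normal(
        0.0, noise_multiplier * clip_norm, size=num_params)
    # 3. Ordinary gradient step with the averaged noisy gradient.
    return params - lr * (noisy_sum / batch_size)

# Toy usage: a batch of 8 examples over a 4-parameter model.
params = np.zeros(4)
grads = np.random.randn(8, 4)
params = dp_sgd_step(params, grads)
```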

arstechnica.com/ai/2025/09/goo

Ars Technica · Google releases VaultGemma, its first privacy-preserving LLM · By Ryan Whitwam
Continued thread
When we use generative AI, we consent to the appropriation of our intellectual property by data scrapers. We stuff the pockets of oligarchs with even more money. We abet the acceleration of a social media gyre that everyone admits is making life worse. We accept the further degradation of an already degraded educational system. We agree that we would rather deplete our natural resources than make our own art or think our own thoughts. We dig ourselves deeper into crises that have been made worse by technology, from the erosion of electoral democracy to the intensification of climate change. We condone platforms that not only urge children to commit suicide but instruct them on how to tie the noose. We hand over our autonomy, at the very moment of emerging American fascism.
Yes +1.

#AI #GenAI #GenerativeAI #LLM #Claude #Copilot #Gemini #GPT #ChatGPT #tech #dev #science #research #writing