mastouille.fr is one of the many independent Mastodon servers you can use to participate in the fediverse.
Mastouille is a sustainable, open Mastodon instance hosted in France.

Server statistics: 639 active accounts

#chatbots

10 posts, 10 participants, 0 posts today

Today in The Medium Newsletter, featured stories include:

• The impact of financial stability on suicide and #MentalHealth, with Tanmoy Goswami and Crystal Jackson

• What #LiquidGlass signals for #Apple #UX and #accessibility principles, by Michal Langmajer

• Political bias in #AI #chatbots, by Krzyś

• Scenes from this weekend’s #NoKings protests in #SanFrancisco and #NYC, with Patsy Fergusson and @lizadonnelly

• Tips on #decluttering, by Gail Post

They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling.

nytimes.com/2025/06/13/technol

Generative A.I. chatbots are going down conspiratorial rabbit holes and endorsing wild, mystical belief systems. For some people, conversations with the technology can deeply distort reality.

Eugene Torres used ChatGPT to make spreadsheets, but the communication took a disturbing turn when he asked it about the simulation theory.
The New York Times · They Asked ChatGPT Questions. The Answers Sent Them Spiraling. By Kashmir Hill

"It’s unclear whether the users of the app are aware that their conversations with Meta’s AI are public or which users are trolling the platform after news outlets began reporting on it. The conversations are not public by default; users have to choose to share them.

There is no shortage of conversations between users and Meta’s AI chatbot that seem intended to be private. One user asked the AI chatbot to provide a format for terminating a renter’s tenancy, while another asked it to provide an academic warning notice that provides personal details including the school’s name. Another person asked about their sister’s liability in potential corporate tax fraud in a specific city using an account that ties to an Instagram profile that displays a first and last name. Someone else asked it to develop a character statement to a court which also provides a myriad of personally identifiable information both about the alleged criminal and the user himself.

There are also many instances of medical questions, including people divulging their struggles with bowel movements, asking for help with their hives, and inquiring about a rash on their inner thighs. One user told Meta AI about their neck surgery and included their age and occupation in the prompt. Many, but not all, accounts appear to be tied to a public Instagram profile of the individual.

Meta spokesperson Daniel Roberts wrote in an emailed statement to WIRED that users’ chats with Meta AI are private unless users go through a multistep process to share them on the Discover feed. The company did not respond to questions regarding what mitigations are in place for sharing personally identifiable information on the Meta AI platform."

wired.com/story/meta-artificia

WIRED · The Meta AI App Lets You ‘Discover’ People’s Bizarrely Personal Chats. By Kylie Robison
#AI #GenerativeAI #Meta

👉 #AI chatbot "characters" claiming to be …
therapists? 😱

🤬 #NO

"AI Therapy Bots Are Conducting 'Illegal Behavior,' Digital Rights Organizations Say:

Exclusive: An FTC complaint led by the Consumer Federation of America outlines how therapy bots on Meta and Character.AI have claimed to be qualified, licensed therapists to users, and why that may be breaking the law." 🧵 1/3 …

#LLM #dystopia #therapy #chatbots #danger mastodon.social/@404mediaco/11

If you can’t use a billion-dollar #ai system to solve a problem that Herb Simon (one of the actual godfathers of AI) solved with classical (but out of fashion) AI techniques in 1957, the chances that models such as #claude or #O3 are going to reach artificial general intelligence (#agi) seem truly remote.
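For context, the puzzle alluded to here is widely reported to be the Tower of Hanoi, which Simon and Newell used as a benchmark for classical problem solving. As a minimal sketch, and assuming that is indeed the puzzle meant, the textbook recursive solution shows why it is trivial for classical techniques (an illustration in Python, not a reconstruction of Simon's 1957 program):

```python
# Minimal illustration: the classic recursive Tower of Hanoi solution.
# A sketch only; this is the standard textbook recursion, not Simon's actual program.

def hanoi(n, source, target, spare, moves):
    """Append the moves that transfer n disks from source to target."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # park the n-1 smaller disks on the spare peg
    moves.append((source, target))              # move the largest remaining disk
    hanoi(n - 1, spare, target, source, moves)  # stack the smaller disks back on top

moves = []
hanoi(8, "A", "C", "B", moves)
print(len(moves))  # 2**8 - 1 = 255 moves, each one valid by construction
```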

#tech #technology #science #software #programming #llm #chatbots #research

theguardian.com/commentisfree/

The Guardian · When billion-dollar AIs break down over puzzles a child can do, it’s time to rethink the hype. By Gary Marcus

"Disinformation about the Los Angeles protests is spreading on social media networks and is being made worse by users turning to AI chatbots like Grok and ChatGPT to perform fact-checking.

As residents of the LA area took to the streets in recent days to protest increasingly frequent Immigration and Customs Enforcement (ICE) raids, conservative posters on social media platforms like X and Facebook flooded their feeds with inaccurate information. In addition to well-worn tactics like repurposing old protest footage or clips from video games and movies, posters have claimed that the protesters are little more than paid agitators being directed by shadowy forces—something for which there is no evidence.

In the midst of fast-moving and divisive news stories like the LA protests, and as companies like X and Meta have stepped back from moderating the content on their platforms, users have been turning to AI chatbots for answers—which in many cases have been completely inaccurate."

wired.com/story/grok-chatgpt-a

WIRED · AI Chatbots Are Making LA Protest Disinformation Worse. By David Gilbert
#USA #Trump #California

BBC Radio 4's "The Food Chain" just featured a slot on AI changing the world of work. One of the featured experts said humans don't yet trust robots and that 'premium services' would offer human support. This is the first time I've heard blatant, unashamed analogue privilege offered as a selling point of AI automation services. This will be the same for education.

Great paper on this
papers.ssrn.com/sol3/papers.cf

thefoodchain@bbc.co.uk

papers.ssrn.com · Analog Privilege: This Article introduces "analog privilege" to describe how elites avoid artificial intelligence (AI) systems and benefit from special personalized treatment…
#ai #chatbots #academia

CJEU receives first referral on chatbots and copyright - Eleonora Rosati ipkitten.blogspot.com/2025/05/ 'Like Company v Google, C-250/25 comes from Hungary. It concerns chatbots vis-à-vis the press publishers’ right and the text and data mining (TDM) provision under, respectively, Articles 15 and 4 of the DSM Directive.

The referral, in essence, is asking the CJEU to clarify the applicability of the rights of reproduction and making available to the public in relation to the reproduction and display, in the responses of Google’s chatbot, of content that is partially identical to the content found on a newspaper’s website.' #copyright #datamining #chatbots #EUlaw

The IPKat · CJEU receives first referral on chatbots and copyright. The IPKat blog reports on copyright, patent, trade mark, info-tech and confidentiality issues from a mainly UK and European perspective.

Silicon Valley is really trying to make chatbots happen — and now that companies like Google are testing ads in their products, they're doing their best to figure out how to keep people talking to them. @Techcrunch breaks down what works, what doesn't work, and the chatbot traits that could be dangerous.

flip.it/WlTZja

TechCrunch · How AI chatbots keep you chatting. As AI chatbots grow into large-scale businesses, companies may use engagement optimization techniques even at the expense of user well-being.

"Let’s not forget that the industry building AI Assistants has already made billions of dollars honing the targeted advertising business model. They built their empires by drawing our attention, collecting our data, inferring our interests, and selling access to us.

AI Assistants supercharge this problem. First because they access and process incredibly intimate information, and second because the computing power they require to handle certain tasks is likely too immense for a personal device. This means that very personal data, including data about other people that exists on your phone, might leave your device to be processed on their servers. This opens the door to reuse and misuse. If you want your Assistant to work seamlessly for you across all your devices, then it’s also likely companies will solve that issue by offering cloud-enabled synchronisation, or more likely, cloud processing.

Once data has left your device, it’s incredibly hard to get companies to be clear about where it ends up and what it will be used for. The companies may use your data to train their systems, and could allow their staff and ‘trusted service providers’ to access your data for reasons like improving model performance. It’s unlikely you had all of this in mind when you asked your Assistant a simple question.

This is why it’s so important that we demand that our data be processed on our devices as much as possible, and used only for limited and specific purposes we are aware of, and have consented to. Companies must provide clear and continuous information about where queries are processed (locally or in the cloud), what data has been shared for that to happen, and what will happen to that data next."

privacyinternational.org/news-

Privacy International · Are AI Assistants built for us or to exploit us? and other questions for the AI industry. Layla looks at her calendar on her phone. She’s in charge of planning her book club’s monthly meeting. After thinking for a second, she summons her AI assistant: “Hey Assistant, can you book me a table at that tapas restaurant I read about last week, and invite everyone from the book club?
#AI #GenerativeAI #LLMs