
#neuroai


#neuroAI preprint alert: studying the convergence of multimodal AI features with brain activity during movie watching. Notably identifies brain areas where multimodal features outperform unimodal stimulus representations. And it uses @cneuromod.ca's movie10 dataset :) arxiv.org/abs/2505.20027

arXiv.org · Multi-modal brain encoding models for multi-modal stimuli
Despite participants engaging in unimodal stimuli, such as watching images or silent videos, recent work has demonstrated that multi-modal Transformer models can predict visual brain activity impressively well, even with incongruent modality representations. This raises the question of how accurately these multi-modal models can predict brain activity when participants are engaged in multi-modal stimuli. As these models grow increasingly popular, their use in studying neural activity provides insights into how our brains respond to such multi-modal naturalistic stimuli, i.e., where the brain separates and integrates information across modalities through a hierarchy of early sensory regions to higher cognition. We investigate this question by using multiple unimodal and two types of multi-modal models (cross-modal and jointly pretrained) to determine which type of model is more relevant to fMRI brain activity when participants are engaged in watching movies. We observe that both types of multi-modal models show improved alignment in several language and visual regions. This study also helps in identifying which brain regions process unimodal versus multi-modal information. We further investigate the contribution of each modality to multi-modal alignment by carefully removing unimodal features one by one from multi-modal representations, and find that there is additional information beyond the unimodal embeddings that is processed in the visual and language regions. Based on this investigation, we find that, for cross-modal models, brain alignment is partially attributed to the video modality, while for jointly pretrained models it is partially attributed to both the video and audio modalities. This serves as a strong motivation for the neuroscience community to investigate the interpretability of these models for deepening our understanding of multi-modal information processing in the brain.
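
For readers who want to see the general shape of such an encoding analysis, here is a minimal sketch (not the paper's pipeline; the feature source, array shapes, and the ridge penalty are all illustrative assumptions): each voxel's response is predicted linearly from frozen model features, and "alignment" is scored as held-out correlation.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Illustrative shapes: T stimulus timepoints, D model features, V voxels.
rng = np.random.default_rng(0)
T, D, V = 1000, 256, 500
features = rng.standard_normal((T, D))  # stand-in for frozen Transformer activations
bold = rng.standard_normal((T, V))      # stand-in for preprocessed fMRI responses

# Chronological split (no shuffling) to respect the time-series structure.
X_tr, X_te, y_tr, y_te = train_test_split(features, bold, test_size=0.2, shuffle=False)

# One linear map per voxel; sklearn's Ridge handles multi-output targets.
enc = Ridge(alpha=1e3).fit(X_tr, y_tr)
pred = enc.predict(X_te)

# Brain alignment is commonly scored as the per-voxel Pearson correlation
# between predicted and held-out responses (near zero here on random data).
r = np.array([np.corrcoef(pred[:, v], y_te[:, v])[0, 1] for v in range(V)])
print(f"mean encoding correlation: {r.mean():.3f}")
```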

I'm giving an online talk starting in 15m (as part of UCL's NeuroAI series).

It's on neural architectures and our current line of research trying to figure out what they might be good for (including some philosophy: what might an answer to this question even look like?).

Sign up (free) at this link to get the zoom link:

eventbrite.co.uk/e/ucl-neuroai

Eventbrite · UCL NeuroAI Talk Series
A series of NeuroAI themed talks organised by the UCL NeuroAI community. Talks will continue on a monthly basis.

Come along to my (free, online) UCL NeuroAI talk next week on neural architectures. What are they good for? All will finally be revealed and you'll never have to think about that question again afterwards. Yep. Definitely that.

🗓️ Wed 12 Feb 2025
⏰ 2-3pm GMT
ℹ️ Details and registration: eventbrite.co.uk/e/ucl-neuroai

Eventbrite · UCL NeuroAI Talk Series
A series of NeuroAI themed talks organised by the UCL NeuroAI community. Talks will continue on a monthly basis.

🧠 Exploring secrets of human vision today at #McGill University! I'll be talking about how our brains achieve efficient visual processing through foveated retinotopy - nature's brilliant solution for high-res central vision.

👉 When: Wednesday 9th of January 2025 at 12 noon.

👉 Where: CRN seminar room, Montreal General Hospital, Livingston Hall, L7-140, with hybrid option.

with Jean-Nicolas JÉRÉMIE and Emmanuel Daucé

📄 Read our findings: arxiv.org/abs/2402.15480

TL;DR: Standard #CNNs naturally mimic human-like visual processing when fed images that match our retina's center-focused mapping. Could this be the key to more efficient AI vision systems?

#ComputationalNeuroscience

#NeuroAI

laurentperrinet.github.io/talk
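
As a toy illustration of the idea in the TL;DR above (a hypothetical log-polar resampling standing in for foveated retinotopic mapping; this is not the authors' exact transform, and `log_polar_map` is a name invented here), one can warp an image so that resolution is high at the centre and coarse in the periphery before feeding it to a standard CNN:

```python
import numpy as np

def log_polar_map(img, out_h=128, out_w=128, r_min=1.0):
    """Resample a square grayscale image onto a log-polar grid centred on
    the image centre: a simple stand-in for foveated retinotopic mapping
    (high resolution at the fovea, coarse in the periphery)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    r_max = min(cy, cx)
    # Log-spaced radii and uniformly spaced angles define the sampling grid.
    radii = np.geomspace(r_min, r_max, out_h)
    angles = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
    rr, aa = np.meshgrid(radii, angles, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(aa)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(aa)).astype(int), 0, w - 1)
    return img[ys, xs]

# Example: foveate a random image before passing it to a CNN.
img = np.random.default_rng(0).random((224, 224))
print(log_polar_map(img).shape)  # (128, 128)
```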

Continued thread

(10/n) If you’ve made it this far, you’ll definitely want to check out the full paper. Grab your copy here:
biorxiv.org/content/10.1101/20
📤 Sharing is highly appreciated!
#compneuro #neuroscience #NeuroAI #dynamicalsystems

bioRxiv · From spiking neuronal networks to interpretable dynamics: a diffusion-approximation framework
Modeling and interpreting the complex recurrent dynamics of neuronal spiking activity is essential to understanding how networks implement behavior and cognition. Nonlinear Hawkes process models can capture a large range of spiking dynamics, but remain difficult to interpret due to their discontinuous and stochastic nature. To address this challenge, we introduce a novel framework based on a piecewise deterministic Markov process representation of the nonlinear Hawkes process (NH-PDMP) followed by a diffusion approximation. We analytically derive stability conditions and dynamical properties of the obtained diffusion processes for single-neuron and network models. We establish the accuracy of the diffusion approximation framework by comparing it with exact continuous-time simulations of the original neuronal NH-PDMP models. Our framework offers an analytical and geometric account of the neuronal dynamics repertoire captured by nonlinear Hawkes process models, both for the canonical responses of single neurons and for neuronal-network dynamics such as winner-take-all and traveling-wave phenomena. Applied to human and nonhuman primate recordings of neuronal spiking activity during speech processing and motor tasks, respectively, our approach revealed that task features can be retrieved from the dynamical landscape of the fitted models. The combination of NH-PDMP representations and diffusion approximations thus provides a novel dynamical analysis framework to reveal single-neuron and neuronal-population dynamics directly from models fitted to spiking data.
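
To give a flavour of what a diffusion approximation buys you, here is a generic Euler-Maruyama sketch of a Hawkes-like rate model (this is not the paper's NH-PDMP construction; the nonlinearity and every parameter below are made-up illustrations): the point-process jumps are replaced by a drift toward the mean input plus Gaussian noise whose variance tracks the instantaneous rate.

```python
import numpy as np

rng = np.random.default_rng(0)
tau, J, dt, steps = 20e-3, 0.5e-3, 1e-4, 50_000
f = lambda x: 100.0 / (1.0 + np.exp(-x))  # sigmoidal rate nonlinearity (Hz)

# x is the self-excitation variable; spikes arrive at rate f(x) and each
# spike would kick x by J. The diffusion approximation replaces those
# discrete kicks with drift J*f(x) and noise of standard deviation
# J*sqrt(f(x)), integrated here with Euler-Maruyama.
x = np.zeros(steps)
for t in range(steps - 1):
    rate = f(x[t])
    drift = -x[t] / tau + J * rate
    diffusion = J * np.sqrt(rate)
    x[t + 1] = x[t] + drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal()

print(f"mean rate over the run: {f(x).mean():.1f} Hz")
```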

Activation functions are one design choice in AI systems. Recent GenAI/transformer models like ChatGPT use functions that have a big linear region but are not ReLUs: they have a smooth transition around the threshold.

We find real V1 neurons in the awake brain have this shape.
2/
#neuroAI #AI
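
For concreteness, here is a small comparison of a hard-threshold ReLU against a smooth-threshold activation (GELU is used as an illustrative example of the shape the post describes; the post itself does not name a specific function):

```python
import numpy as np

def relu(x):
    # Hard threshold: exactly zero below 0, linear above.
    return np.maximum(0.0, x)

def gelu(x):
    # tanh approximation of GELU, as used in many transformer models:
    # linear far from zero, but with a smooth transition around threshold.
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

x = np.linspace(-3, 3, 7)
print(np.round(relu(x), 3))
print(np.round(gelu(x), 3))  # matches ReLU away from 0, smooth near it
```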

Interested in how neuroscience could help enhance the robustness of deep learning? If at #CCN2024, check out our poster 2024.ccneuro.org/poster/?id=29 #CCN and our preprint at arxiv.org/abs/2402.15480 laurentperrinet.github.io/publ
#neuroAI #neuroscience #deeplearning #vision @jnjer

refs:

Happy to share a new preprint - the culmination of the past three years of postdoc with @tyrell_turing and @adrien, and my first real foray into #NeuroAI as a tool to study the sleeping brain:

biorxiv.org/content/10.1101/20

(1/🧵) here we go!

bioRxiv · Sequential predictive learning is a unifying theory for hippocampal representation and replay
The mammalian hippocampus contains a cognitive map that represents an animal's position in the environment [1] and generates offline "replay" [2],[3] for the purposes of recall [4], planning [5],[6], and forming long-term memories [7]. Recently, it has been found that artificial neural networks trained to predict sensory inputs develop spatially tuned cells [8], aligning with predictive theories of hippocampal function [9]–[11]. However, whether predictive learning can also account for the ability to produce offline replay is unknown. Here, we find that spatially tuned cells, which robustly emerge from all forms of predictive learning, do not guarantee the presence of a cognitive map with the ability to generate replay. Offline simulations only emerged in networks that used recurrent connections and head-direction information to predict multi-step observation sequences, which promoted the formation of a continuous attractor reflecting the geometry of the environment. These offline trajectories were able to show wake-like statistics, autonomously replay recently experienced locations, and could be directed by a virtual head-direction signal. Further, we found that networks trained to make cyclical predictions of future observation sequences were able to rapidly learn a cognitive map and produced sweeping representations of future positions reminiscent of hippocampal theta sweeps [12]. These results demonstrate how hippocampal-like representation and replay can emerge in neural networks engaged in predictive learning, and suggest that hippocampal theta sequences reflect a circuit that implements a data-efficient algorithm for sequential predictive learning. Together, this framework provides a unifying theory for hippocampal functions and hippocampal-inspired approaches to artificial intelligence.
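
A minimal sketch of the kind of sequential predictive learning setup the abstract describes (the architecture, dimensions, and inputs here are illustrative assumptions, not the paper's model): a recurrent network receives the current observation plus a head-direction signal and is trained to predict the next observation.

```python
import torch
import torch.nn as nn

class PredictiveRNN(nn.Module):
    """Recurrent network trained to predict the next observation from the
    current observation and a head-direction input (illustrative only)."""
    def __init__(self, obs_dim=32, hd_dim=4, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(obs_dim + hd_dim, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, obs_dim)

    def forward(self, obs, head_dir):
        h, _ = self.rnn(torch.cat([obs, head_dir], dim=-1))
        return self.readout(h)  # prediction of the observation at t+1

# Toy training loop on random "trajectories" (stand-ins for sensory input).
model = PredictiveRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
obs = torch.randn(8, 100, 32)  # batch, time, observation features
hd = torch.randn(8, 100, 4)    # head-direction input
for step in range(10):
    pred = model(obs[:, :-1], hd[:, :-1])
    loss = nn.functional.mse_loss(pred, obs[:, 1:])  # next-step prediction
    opt.zero_grad(); loss.backward(); opt.step()
```
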
Replied in a thread

@LMPrida @perpl_lab @acnavasolive @saman @cogneurophys 🥳 So happy this is out! I wasn't sure how the ML models would perform, given the frequency shift in monkey (and human) SWRs, and the larger post-ripple wave compared to those in mice. But a few models rose to the top straightaway and performed even better with modest "top-up" training! 📈 Now I'm curious how the #opensource toolbox will perform on human hippocampal data. ➡️👩‍🔬 Both senior authors and one lead author were #womenInSTEM working across two continents, and the original multi-model testing was formulated as a hackathon that included #diverserepresentation! #neuroai #ann #cnn #neuroscience #hippocampus #HippocampalReplay #internationalwomensday

Introducing our brand-new course on NeuroAI, where we ask "What are the common principles of natural and artificial intelligence?"

Apply now: neuromatch.io/neuroai-course 💻🤖

Please note that this course is aimed at a more advanced audience than our other courses.

Student applications close on Sunday, March 24, at midnight in the last time zone on Earth.

We charge low, regionally adjusted tuition fees for students, and offer fee waivers where needed without impact on admission.

TA applications close on Sunday, March 17, at midnight in the last time zone on Earth.

Teaching Assistants are paid, full-time, temporary, contracted roles.

@neuromatch@a.gup.pe @academicchatter