mastouille.fr is one of the many independent Mastodon servers you can use to participate in the fediverse.
Mastouille is a sustainable, open Mastodon instance hosted in France.

Administered by:

Server statistics:

695 active accounts

#computationalneuroscience


🧠 New French Computational Neuroscience Network! 🧠

An exciting initiative is underway in France! Researchers are setting up a national network dedicated to #NeurosciencesComputationnelles to facilitate:

  • Collaboration between research teams
  • Sharing of knowledge and resources
  • Training the next generation of researchers
  • Visibility of French research in this rapidly growing field

🔍 Do you work in #ComputationalNeuroscience in France? Whether you are an established researcher, postdoc, PhD student, or student, join our community!

📝 Subscribe to our mailing list to receive news, events, and opportunities:

➡️ listes.services.cnrs.fr/wws/su

Together, let's strengthen French excellence in #NeurosciencesComputationnelles!
#Recherche #Neuroscience #CNRS #Science #IA #IntelligenceArtificielle

listes.services.cnrs.fr · rt_neurocomp - French computational neuroscience network - subscribe

A few weeks ago, I shared a differential equations tutorial for beginners, written from the perspective of a neuroscientist who's had to grapple with the computational part. Following up on that, I've now tackled the first real beast encountered by most computational neuroscience students: the Hodgkin-Huxley model.

While it remains remarkably elegant to this day, the model is also a mathematically dense system of equations that can overwhelm and discourage beginners, especially those from non-mathematical backgrounds. As with the first tutorial, I've tried to build intuition step by step, starting with a simple RC circuit, layering in Na⁺ and K⁺ channels, and ending with the full spike-generation story.
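
For a sense of what the full model boils down to in code, here's a minimal forward-Euler sketch using the standard 1952 parameters (my own toy version, not the tutorial's code):

```python
import numpy as np

# Standard Hodgkin-Huxley parameters (squid axon, rest near -65 mV)
C = 1.0                                  # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3        # max conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.387    # reversal potentials, mV

# Voltage-dependent rate functions for the gating variables m, h, n
def alpha_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def beta_m(V):  return 4.0 * np.exp(-(V + 65) / 18)
def alpha_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def beta_h(V):  return 1.0 / (1 + np.exp(-(V + 35) / 10))
def alpha_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def beta_n(V):  return 0.125 * np.exp(-(V + 65) / 80)

dt, T = 0.01, 50.0                       # time step and duration, ms
V, m, h, n = -65.0, 0.05, 0.6, 0.32      # initial conditions near rest
I_ext = 10.0                             # injected current, uA/cm^2

trace = np.empty(int(T / dt))
for t in range(trace.size):
    # Ionic currents: Na+, K+, and passive leak
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K  = g_K * n**4 * (V - E_K)
    I_L  = g_L * (V - E_L)
    # Forward-Euler updates for the voltage and the three gates
    V += dt * (I_ext - I_Na - I_K - I_L) / C
    m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
    n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
    trace[t] = V

print(f"peak V: {trace.max():.1f} mV")   # spikes overshoot towards E_Na
```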

Feedback is welcome, especially from fellow non-math converts.
neurofrontiers.blog/building-a

#ComputationalNeuroscience #Python #hodgkinHuxleyModel #math #biophysics

From: @neurofrontiers
neuromatch.social/@neurofronti

Neurofrontiers · Building a virtual neuron - part 2 - Neurofrontiers

How do babies and blind people learn to localise sound without labelled data? We propose that innate mechanisms can provide coarse-grained error signals to bootstrap learning.

New preprint from @yang_chu.

arxiv.org/abs/2001.10605

Thread below 👇

arXiv.org · Learning spatial hearing via innate mechanisms
The acoustic cues used by humans and other animals to localise sounds are subtle, and change during and after development. This means that we need to constantly relearn or recalibrate the auditory spatial map throughout our lifetimes. This is often thought of as a "supervised" learning process where a "teacher" (for example, a parent, or your visual system) tells you whether or not you guessed the location correctly, and you use this information to update your map. However, there is not always an obvious teacher (for example in babies or blind people). Using computational models, we showed that approximate feedback from a simple innate circuit, such as one that can distinguish left from right (e.g. the auditory orienting response), is sufficient to learn an accurate full-range spatial auditory map. Moreover, using this mechanism in addition to supervised learning can more robustly maintain the adaptive neural representation. We find several possible neural mechanisms that could underlie this type of learning, and hypothesise that multiple mechanisms may be present and interact with each other. We conclude that when studying spatial hearing, we should not assume that the only source of learning is from the visual system or other supervisory signal. Further study of the proposed mechanisms could allow us to design better rehabilitation programmes to accelerate relearning/recalibration of spatial maps.
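
The models in the preprint are far richer, but a toy sketch of the core idea, that a bare left/right signal is enough to calibrate a continuous spatial map, could look like this (illustrative only, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
w = 0.0  # learnable gain mapping a binaural cue to an azimuth estimate

for trial in range(5000):
    azimuth = rng.uniform(-90, 90)             # true source angle, degrees
    cue = azimuth / 90 + rng.normal(0, 0.05)   # noisy interaural cue in [-1, 1]
    estimate = w * cue                         # current localisation guess
    # Innate, coarse feedback: only whether the guess was too far left/right,
    # never the exact error (cf. the auditory orienting response)
    feedback = np.sign(azimuth - estimate)
    w += 0.5 * feedback * cue                  # sign-based update

print(f"learned gain: {w:.1f} (ideal ~90)")
```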

When I transitioned from cognitive to computational neuroscience, I found myself in a bit of a bind. I had learned calculus, but I had progressed little beyond pattern recognition: I knew which rules to apply to find solutions to which equations, but the equations themselves lacked any sort of real meaning for me.

So I struggled with understanding how formulas could be implemented in code and why the code I was reading could be described by those formulas. Resources explaining math “for neuroscientists” were unfortunately quite useless for me, because they usually presented the necessary equations for describing various neural systems, assuming the presence of that basic understanding/intuition I lacked.

Of course, I figured things out eventually (otherwise I wouldn’t be writing about it), but I’m 85% sure I’m not the only one who’s ever struggled with this, and so I wrote the tutorial I wish I could’ve had. If you’re in a similar position, I hope you’ll find it useful. And if not, maybe it helps you get a glimpse into the struggles of the non-math people in your life. Either way, it has cats.
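
For a flavour of the formula-to-code gap the tutorial tries to bridge: turning dV/dt = (V_rest - V)/tau into a simulation is just a loop that applies the right-hand side one small time step at a time (a minimal sketch, my notation):

```python
V_rest, tau = -70.0, 10.0   # resting potential (mV), time constant (ms)
dt = 0.1                    # time step (ms)
V = -55.0                   # start away from rest

# dV/dt = (V_rest - V) / tau, stepped forward with Euler's method:
# each loop iteration advances the equation by one small step dt
for step in range(int(50 / dt)):
    dVdt = (V_rest - V) / tau
    V += dVdt * dt

print(f"V after 50 ms: {V:.2f} mV (decays towards {V_rest} mV)")
```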

neurofrontiers.blog/building-a

Neurofrontiers · Building a virtual neuron - part 1 - Neurofrontiers

I'm giving an online talk starting in 15m (as part of UCL's NeuroAI series).

It's on neural architectures and our current line of research trying to figure out what they might be good for (including some philosophy: what might an answer to this question even look like?).

Sign up (free) at this link to get the zoom link:

eventbrite.co.uk/e/ucl-neuroai

Eventbrite · UCL NeuroAI Talk Series
A series of NeuroAI themed talks organised by the UCL NeuroAI community. Talks will continue on a monthly basis.

With the current situation in the #US, several of my former colleagues there are looking for a #PostDocJob in #Europe, to do #BehaviouralNeuroscience or #ComputationalNeuroscience in #SpatialCognition (or adjacent).
Lots of hashtags I know..

Do you know a #EU or #UK #Neuroscience lab looking to hire a postdoc in these fields? Let me know and I'll pass it on to them!

Edit: adding #RodentResearch and #humanresearch for the species concerned (in this case)

Come along to my (free, online) UCL NeuroAI talk next week on neural architectures. What are they good for? All will finally be revealed and you'll never have to think about that question again afterwards. Yep. Definitely that.

🗓️ Wed 12 Feb 2025
⏰ 2-3pm GMT
ℹ️ Details and registration: eventbrite.co.uk/e/ucl-neuroai

Eventbrite · UCL NeuroAI Talk Series
A series of NeuroAI themed talks organised by the UCL NeuroAI community. Talks will continue on a monthly basis.

We are very happy to provide a consolidated update on the #NeuroML ecosystem in our @eLife paper, “The NeuroML ecosystem for standardized multi-scale modeling in neuroscience”: doi.org/10.7554/eLife.95135.3

#NeuroML is a standard and software ecosystem for data-driven, biophysically detailed #ComputationalModelling, endorsed by the @INCF and COMBINE, and includes a large community of users and software developers.

#Neuroscience #ComputationalNeuroscience #ComputationalModelling 1/x
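
For readers new to it: NeuroML models are declarative documents, usually built and serialized from Python with libNeuroML. A rough sketch of that workflow follows; the class and attribute names are my assumptions from the NeuroML2 schema, so check the paper and docs for the real API:

```python
# Sketch of the libNeuroML workflow: build a declarative model, write XML.
# Class/attribute names assumed from the NeuroML2 schema; check the docs.
from neuroml import NeuroMLDocument, IzhikevichCell
import neuroml.writers as writers

doc = NeuroMLDocument(id="ExampleDoc")

# A point-neuron model defined purely by its parameters, no simulation code
cell = IzhikevichCell(id="iz0", v0="-70mV", thresh="30mV",
                      a="0.02", b="0.2", c="-65.0", d="6")
doc.izhikevich_cells.append(cell)

# Serialize to a standard .nml file that any NeuroML tool can consume
writers.NeuroMLWriter.write(doc, "example.nml")
```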

What's the right way to think about modularity in the brain? This devilish 😈 question is a big part of my research now, and it started with this paper with @GabrielBena finally published after the first preprint in 2021!

nature.com/articles/s41467-024

We know the brain is physically structured into distinct areas ("modules"?). We also know that some of these have specialised function. But is there a necessary connection between these two statements? What is the relationship - if any - between 'structural' and 'functional' modularity?

TLDR if you don't want to read the rest: there is no necessary relationship between the two, although when resources are tight, functional modularity is more likely to arise when there's structural modularity. We also found that functional modularity can change over time! Longer version follows.

Nature · Dynamics of specialization in neural modules under resource constraints - Nature Communications
The extent to which structural modularity in neural networks ensures functional specialization remains unclear. Here the authors show that specialization can emerge in neural modules placed under resource constraints but varies dynamically and is influenced by network architecture and information flow.
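
Not the paper's code, but to make "structural modularity" concrete: in network models it often just means a block-structured connectivity mask, dense within modules and sparse between them, like this toy example:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20             # neurons per module
p_between = 0.05   # sparse inter-module connection probability

# Block-structured mask: dense within each module, sparse between them
within = np.ones((n, n))
between = (rng.random((n, n)) < p_between).astype(float)
mask = np.block([[within, between],
                 [between.T, within]])

# Structurally modular weights; whether the modules also become
# *functionally* specialised is exactly the paper's question
weights = rng.normal(0, 0.1, mask.shape) * mask
print(f"inter-module connection density: {mask[:n, n:].mean():.2f}")
```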

New preprint! With Swathi Anil and @marcusghosh.

If you want to get the most out of a multisensory signal, you should take its temporal structure into account. But which neural architectures do this best? 🧵👇

biorxiv.org/content/10.1101/20

bioRxiv · Fusing multisensory signals across channels and time
Animals continuously combine information across sensory modalities and time, and use these combined signals to guide their behaviour. Picture a predator watching their prey sprint and screech through a field. To date, a range of multisensory algorithms have been proposed to model this process, including linear and nonlinear fusion, which combine the inputs from multiple sensory channels via either a sum or nonlinear function. However, many multisensory algorithms treat successive observations independently, and so cannot leverage the temporal structure inherent to naturalistic stimuli. To investigate this, we introduce a novel multisensory task in which we provide the same number of task-relevant signals per trial but vary how this information is presented: from many short bursts to a few long sequences. We demonstrate that multisensory algorithms that treat different time steps as independent perform sub-optimally on this task. However, simply augmenting these algorithms to integrate across sensory channels and short temporal windows allows them to perform surprisingly well, and comparably to fully recurrent neural networks. Overall, our work: highlights the benefits of fusing multisensory information across channels and time, shows that small increases in circuit/model complexity can lead to significant gains in performance, and provides a novel multisensory task for testing the relevance of this in biological systems.
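
As a toy illustration of the distinction the abstract draws (not the preprint's code): linear fusion sums the channels at each time step, and adding even a short temporal window exploits structure that step-independent decisions throw away:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 200
signal = np.zeros(T)
signal[60:90] = 1.0                      # a brief "event" both senses report

# Two noisy sensory channels observing the same underlying signal
chan_a = signal + rng.normal(0, 1.0, T)
chan_b = signal + rng.normal(0, 1.0, T)

fused = chan_a + chan_b                  # linear fusion across channels
window = 10                              # short temporal window, in steps
kernel = np.ones(window) / window
fused_time = np.convolve(fused, kernel, mode="same")  # fuse across time too

# The temporally fused trace is far easier to threshold reliably
print("per-step SNR:", round(fused[60:90].mean() / fused[:50].std(), 2))
print("windowed SNR:", round(fused_time[60:90].mean() / fused_time[:50].std(), 2))
```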

🧠 Exploring the secrets of human vision today at #McGill University! I'll be talking about how our brains achieve efficient visual processing through foveated retinotopy - nature's brilliant solution for high-res central vision.

👉 When: Wednesday 9th of January 2025 at 12 noon.

👉 Where: CRN seminar room, Montreal General Hospital, Livingston Hall, L7-140, with hybrid option.

with Jean-Nicolas JÉRÉMIE and Emmanuel Daucé

📄 Read our findings: arxiv.org/abs/2402.15480

TL;DR: Standard #CNNs naturally mimic human-like visual processing when fed images that match our retina's center-focused mapping. Could this be the key to more efficient AI vision systems?

#ComputationalNeuroscience

#NeuroAI

laurentperrinet.github.io/talk
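
For the curious: a retina-like, center-focused mapping can be approximated by resampling an image on a log-polar grid, along these lines (a toy sketch, not the paper's pipeline):

```python
import numpy as np

def logpolar_sample(img, n_rings=32, n_angles=64):
    """Resample a square grayscale image on a log-polar grid centred on
    the image centre: dense near the fovea, coarse in the periphery."""
    h, w = img.shape
    cy, cx = h / 2, w / 2
    max_r = min(cy, cx) - 1
    # Ring radii grow exponentially -> high resolution near the centre
    radii = np.exp(np.linspace(0, np.log(max_r), n_rings))
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    ys = (cy + radii[:, None] * np.sin(angles)).astype(int)
    xs = (cx + radii[:, None] * np.cos(angles)).astype(int)
    return img[ys, xs]                   # (n_rings, n_angles) "cortical" image

img = np.random.rand(256, 256)           # stand-in for a real input image
print(logpolar_sample(img).shape)        # (32, 64)
```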

In the spirit of end-of-year celebrations, we've just released Brian 2.8 🌠
Alongside the usual helping of small improvements and fixes, this release comes with an important performance improvement for random number generation in C++ standalone mode.

Head to the release notes for more details! brian2.readthedocs.io/en/2.8.0

brian2.readthedocs.io · Release notes — Brian 2 2.8.0 documentation
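
For context, C++ standalone mode is the backend you opt into with set_device, and noise terms like xi are among the RNG-heavy features that benefit. A minimal example (mine, not from the release notes):

```python
from brian2 import *

set_device('cpp_standalone')   # compile and run the whole sim as C++
seed(2024)                     # make the random streams reproducible

tau, sigma = 10*ms, 0.5
# xi is Brian's white-noise term, one of the RNG-heavy features
G = NeuronGroup(1000, 'dv/dt = -v/tau + sigma*xi*tau**-0.5 : 1')
G.v = 'rand()'                 # random initial conditions
M = StateMonitor(G, 'v', record=[0])

run(100*ms)
print(M.v[0][:5])
```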

Applications open to the University of Sussex Neuroscience's September 2025 PhD program, fully-funded, with a UKRI rate stipend 🧠

📅 Apply by Jan 13, 2025

Research themes incl:
- Sensory neuroscience
- Neural circuits of behavior
- Learning & memory
- Neurodegeneration
- Computational neuroscience & AI
- Translational & clinical neuroscience

Learn more: bit.ly/3VEnqgW
