How do babies and blind people learn to localise sound without labelled data? We propose that innate mechanisms can provide coarse-grained error signals to bootstrap learning.

New preprint from @yang_chu.

arxiv.org/abs/2001.10605

Thread below 👇

arXiv.org · Learning spatial hearing via innate mechanisms · The acoustic cues used by humans and other animals to localise sounds are subtle, and change during and after development. This means that we need to constantly relearn or recalibrate the auditory spatial map throughout our lifetimes. This is often thought of as a "supervised" learning process where a "teacher" (for example, a parent, or your visual system) tells you whether or not you guessed the location correctly, and you use this information to update your map. However, there is not always an obvious teacher (for example in babies or blind people). Using computational models, we showed that approximate feedback from a simple innate circuit, such as one that can distinguish left from right (e.g. the auditory orienting response), is sufficient to learn an accurate full-range spatial auditory map. Moreover, using this mechanism in addition to supervised learning can more robustly maintain the adaptive neural representation. We find several possible neural mechanisms that could underlie this type of learning, and hypothesise that multiple mechanisms may be present and interact with each other. We conclude that when studying spatial hearing, we should not assume that the only source of learning is from the visual system or other supervisory signal. Further study of the proposed mechanisms could allow us to design better rehabilitation programmes to accelerate relearning/recalibration of spatial maps.
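
To make the idea concrete, here is a minimal sketch (my own toy, not the paper's model) of learning a full-range map from sign-only feedback: the learner guesses an azimuth from a noisy cue, and an innate left/right circuit reports only which side the residual error is on.

```python
# Toy sketch, not the paper's model: learn a linear cue -> azimuth map
# when the only teaching signal is an innate left/right judgement,
# i.e. the sign of the residual error after orienting.
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(), rng.normal()     # initially wrong spatial map
lr = 0.05

for step in range(10_000):
    y = rng.uniform(-90, 90)          # true azimuth (degrees)
    x = y / 90 + rng.normal(0, 0.05)  # noisy acoustic cue (ITD-like)
    y_hat = w * x + b                 # current map's estimate
    s = np.sign(y - y_hat)            # coarse innate feedback: left or right
    w += lr * s * x                   # sign-based step (minimises |error|)
    b += lr * s

print(f"learned map: azimuth ~ {w:.1f}*cue + {b:.2f} (ideal ~ 90*cue + 0)")
```

The map converges without the learner ever seeing the true azimuth, which is the flavour of the claim that coarse innate feedback suffices.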

I'm giving an online talk starting in 15m (as part of UCL's NeuroAI series).

It's on neural architectures and our current line of research trying to figure out what they might be good for (including some philosophy: what might an answer to this question even look like?).

Sign up (free) at this link to get the Zoom link:

eventbrite.co.uk/e/ucl-neuroai

Eventbrite · UCL NeuroAI Talk Series · A series of NeuroAI-themed talks organised by the UCL NeuroAI community. Talks will continue on a monthly basis.

Come along to my (free, online) UCL NeuroAI talk next week on neural architectures. What are they good for? All will finally be revealed and you'll never have to think about that question again afterwards. Yep. Definitely that.

🗓️ Wed 12 Feb 2025
⏰ 2-3pm GMT
ℹ️ Details and registration: eventbrite.co.uk/e/ucl-neuroai


What's the right way to think about modularity in the brain? This devilish 😈 question is a big part of my research now, and it started with this paper with @GabrielBena finally published after the first preprint in 2021!

nature.com/articles/s41467-024

We know the brain is physically structured into distinct areas ("modules"?). We also know that some of these have specialised function. But is there a necessary connection between these two statements? What is the relationship - if any - between 'structural' and 'functional' modularity?

TLDR if you don't want to read the rest: there is no necessary relationship between the two, although when resources are tight, functional modularity is more likely to arise when there's structural modularity. We also found that functional modularity can change over time! Longer version follows.

Nature · Dynamics of specialization in neural modules under resource constraints - Nature Communications · The extent to which structural modularity in neural networks ensures functional specialization remains unclear. Here the authors show that specialization can emerge in neural modules placed under resource constraints but varies dynamically and is influenced by network architecture and information flow.
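
For readers who want the "structural" half of the question pinned down, here is one standard way to quantify it (Newman's modularity Q; an illustrative choice on my part, the paper's analyses of functional specialization go well beyond this static measure):

```python
# Minimal sketch of *structural* modularity only: Newman's Q for a
# two-module connectivity matrix.
import numpy as np

def newman_q(A, labels):
    """Modularity Q of an undirected adjacency matrix under a partition."""
    k = A.sum(axis=1)                  # node degrees
    two_m = A.sum()                    # 2 x total edge weight
    expected = np.outer(k, k) / two_m  # configuration-model null
    same = labels[:, None] == labels[None, :]
    return ((A - expected) * same).sum() / two_m

rng = np.random.default_rng(1)
n = 40
labels = np.repeat([0, 1], n // 2)
# dense within-module, sparse between-module connectivity
p = np.where(labels[:, None] == labels[None, :], 0.5, 0.05)
A = (rng.random((n, n)) < p).astype(float)
A = np.triu(A, 1); A = A + A.T         # symmetrise, zero diagonal
print(f"structural modularity Q = {newman_q(A, labels):.2f}")
```

The paper's point is that a high Q like this does not by itself guarantee that the modules *do* different things.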

New preprint! With Swathi Anil and @marcusghosh.

If you want to get the most out of a multisensory signal, you should take its temporal structure into account. But which neural architectures do this best? 🧵👇

biorxiv.org/content/10.1101/20

bioRxiv · Fusing multisensory signals across channels and time · Animals continuously combine information across sensory modalities and time, and use these combined signals to guide their behaviour. Picture a predator watching their prey sprint and screech through a field. To date, a range of multisensory algorithms have been proposed to model this process including linear and nonlinear fusion, which combine the inputs from multiple sensory channels via either a sum or nonlinear function. However, many multisensory algorithms treat successive observations independently, and so cannot leverage the temporal structure inherent to naturalistic stimuli. To investigate this, we introduce a novel multisensory task in which we provide the same number of task-relevant signals per trial but vary how this information is presented: from many short bursts to a few long sequences. We demonstrate that multisensory algorithms that treat different time steps as independent, perform sub-optimally on this task. However, simply augmenting these algorithms to integrate across sensory channels and short temporal windows allows them to perform surprisingly well, and comparably to fully recurrent neural networks. Overall, our work: highlights the benefits of fusing multisensory information across channels and time, shows that small increases in circuit/model complexity can lead to significant gains in performance, and provides a novel multisensory task for testing the relevance of this in biological systems.
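
A toy version of the point (my simplification, not the preprint's actual task): with the same total evidence per trial, detection is far easier if you pool the fused signal over a short temporal window instead of judging each time step independently.

```python
# Toy demo: fuse two noisy channels, then compare a per-time-step
# detector against one that also pools over a short temporal window.
import numpy as np

rng = np.random.default_rng(2)
T, trials, win = 100, 2000, 10
kernel = np.ones(win) / win

def detect_stats(signal):
    x1 = signal + rng.normal(0, 1, (trials, T))       # sensory channel 1
    x2 = signal + rng.normal(0, 1, (trials, T))       # sensory channel 2
    fused = (x1 + x2) / 2                             # fuse across channels
    per_step = fused.max(axis=1)                      # best single time step
    windowed = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, "valid").max(), 1, fused)
    return per_step, windowed

burst = np.zeros(T); burst[40:50] = 0.8               # brief multisensory event
step_s, win_s = detect_stats(burst)                   # signal trials
step_n, win_n = detect_stats(np.zeros(T))             # noise-only trials

for name, s, n in [("independent steps", step_s, step_n),
                   ("temporal window", win_s, win_n)]:
    thr = np.quantile(n, 0.95)                        # 5% false-alarm rate
    print(f"{name}: hit rate = {(s > thr).mean():.2f}")
```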
Thread continues

(10/n) If you’ve made it this far, you’ll definitely want to check out the full paper. Grab your copy here:
biorxiv.org/content/10.1101/20
📤 Sharing is highly appreciated!
#compneuro #neuroscience #NeuroAI #dynamicalsystems

bioRxiv · From spiking neuronal networks to interpretable dynamics: a diffusion-approximation framework · Modeling and interpreting the complex recurrent dynamics of neuronal spiking activity is essential to understanding how networks implement behavior and cognition. Nonlinear Hawkes process models can capture a large range of spiking dynamics, but remain difficult to interpret, due to their discontinuous and stochastic nature. To address this challenge, we introduce a novel framework based on a piecewise deterministic Markov process representation of the nonlinear Hawkes process (NH-PDMP) followed by a diffusion approximation. We analytically derive stability conditions and dynamical properties of the obtained diffusion processes for single-neuron and network models. We established the accuracy of the diffusion approximation framework by comparing it with exact continuous-time simulations of the original neuronal NH-PDMP models. Our framework offers an analytical and geometric account of the neuronal dynamics repertoire captured by nonlinear Hawkes process models, both for the canonical responses of single neurons and neuronal-network dynamics, such as winner-take-all and traveling wave phenomena. Applied to human and nonhuman primate recordings of neuronal spiking activity during speech processing and motor tasks, respectively, our approach revealed that task features can be retrieved from the dynamical landscape of the fitted models. The combination of NH-PDMP representations and diffusion approximations thus provides a novel dynamical analysis framework to reveal single-neuron and neuronal-population dynamics directly from models fitted to spiking data.
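
For intuition about what the end product of such a reduction looks like (a generic illustration, not the paper's derivation or fitted terms): once a spiking model has been reduced to a diffusion dX = f(X) dt + g(X) dW, simulation and fixed-point analysis become straightforward.

```python
# Generic Euler-Maruyama simulation of a diffusion dX = f(X)dt + g(X)dW.
# f and g below are made-up placeholders, not fitted to any data.
import numpy as np

rng = np.random.default_rng(3)
f = lambda x: x - x**3              # toy bistable drift (fixed points at +/-1)
g = lambda x: 0.3                   # constant diffusion coefficient

dt, steps = 1e-3, 50_000
x = np.empty(steps); x[0] = 0.0
for i in range(1, steps):
    x[i] = x[i-1] + f(x[i-1]) * dt + g(x[i-1]) * np.sqrt(dt) * rng.normal()

# The drift's stable fixed points organise the dynamics: the kind of
# geometry (e.g. winner-take-all attractors) that becomes readable once
# the spiking model is reduced to a diffusion.
print(f"fraction of time near +1 attractor: {np.mean(x > 0):.2f}")
```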

We just completed a new course on #DimensionalityReduction in #Neuroscience, and the full teaching material 🐍💻 is now freely available (CC BY 4.0 license):

🌍 fabriziomusacchio.com/blog/202

The course is designed to provide an introductory overview of the application of dimensionality reduction techniques for neuroscientists and data scientists alike, focusing on how to handle the increasingly high-dimensional datasets generated by modern neuroscience research.
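
A toy in the spirit of the course (my own example, not the course material): PCA recovering the latent dimensionality of simulated population activity that is secretly low-dimensional.

```python
# Simulate 100 neurons driven by 3 hidden latents, then check that PCA
# finds ~3 dominant components.
import numpy as np

rng = np.random.default_rng(4)
n_time, n_neurons, n_latent = 500, 100, 3
latents = rng.normal(size=(n_time, n_latent))        # hidden low-D dynamics
mixing = rng.normal(size=(n_latent, n_neurons))      # embedding into neurons
rates = latents @ mixing + 0.1 * rng.normal(size=(n_time, n_neurons))

X = rates - rates.mean(axis=0)                       # centre the data
eigvals = np.linalg.eigvalsh(X.T @ X / (n_time - 1))[::-1]  # descending
explained = eigvals / eigvals.sum()
print("variance explained by top 5 PCs:", np.round(explained[:5], 3))
# the first 3 components dominate, matching the 3 latent dimensions
```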

PhD position openings in my group! Check out topics and how to apply below. At the moment, I'm particularly interested in the topic of modularity in both biological and artificial networks, and how it can be used to scale up intelligent processes.

neural-reckoning.org/openings.

Note that competition for PhD funding at Imperial is pretty strong these days, so worth checking if your undergrad/masters grades are equivalent to UK 1st/Distinction before applying. I'm afraid I don't take self-funded PhD students.

neural-reckoning.orgJoin us

Brody and #Hopfield (2003) showed how networks of #SpikingNeurons (#SNN) can be used to process temporal information based on computations on the timing of spikes rather than the rate of spikes. This is particularly relevant in the context of #OlfactoryProcessing, where the timing of spikes in the olfactory bulb is crucial for encoding odor information. Here is a quick tutorial that recapitulates the main concepts of that network using #NEST #simulator:

🌍 fabriziomusacchio.com/blog/202
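
A stripped-down numpy illustration of the core principle (not the NEST tutorial itself): a leaky integrate-and-fire neuron acts as a coincidence detector, so the same number of input spikes drives it only when their timing is tight. Rate is identical in both conditions; only timing differs.

```python
import numpy as np

def lif_output_spikes(spike_times_ms, w=0.3, tau=10.0, dt=0.1,
                      t_max=100.0, v_thresh=1.0):
    """Count output spikes of a LIF neuron driven by the given input spikes."""
    drive = np.zeros(int(t_max / dt))
    for t in spike_times_ms:
        drive[int(t / dt)] += w            # each input spike kicks the voltage
    v, n_out = 0.0, 0
    for kick in drive:
        v += -(v / tau) * dt + kick        # leaky integration
        if v >= v_thresh:
            n_out += 1
            v = 0.0                        # reset after an output spike
    return n_out

sync = [50.0, 50.2, 50.4, 50.6, 50.8]      # 5 near-coincident input spikes
spread = [10.0, 30.0, 50.0, 70.0, 90.0]    # same 5 spikes, dispersed in time
print("synchronous input ->", lif_output_spikes(sync), "output spike(s)")
print("dispersed input   ->", lif_output_spikes(spread), "output spike(s)")
```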

New preprint on our "collaborative modelling of the brain" (COMOB) project. Over the last two years, a group of us (led by @marcusghosh) have been working together, openly, online, with anyone free to join, on a computational neuroscience research project.

biorxiv.org/content/10.1101/20

This was an experiment in a more bottom-up, collaborative way of doing science, rather than the hierarchical PI-led model. So how did we do it?

We started from the tutorial I gave at @CosyneMeeting 2022 on spiking neural networks that included a starter Jupyter notebook that let you train a spiking neural network model on a sound localisation task.

neural-reckoning.github.io/cos

youtube.com/watch?v=GTXTQ_sOxa
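
For anyone who wants a feel for the task before opening the notebook, here is a minimal non-spiking baseline (my own sketch, assuming a simple ITD cue, not the notebook's trained SNN): a Jeffress-style bank of delay-line coincidence detectors.

```python
# Estimate an interaural time difference (ITD) by scoring candidate
# delays with dot products, i.e. delay-line coincidence detection.
import numpy as np

rng = np.random.default_rng(5)
fs = 20_000                                   # sample rate (Hz)
n = 2000                                      # 100 ms of signal
sig = rng.normal(size=n)                      # broadband source
true_itd = 300e-6                             # 300 microsecond delay
left = sig
right = np.roll(sig, int(true_itd * fs))      # delayed copy at the right ear

lags = np.arange(-20, 21)                     # candidate delays (samples)
scores = [left @ np.roll(right, -lag) for lag in lags]
best = lags[int(np.argmax(scores))]
print(f"estimated ITD: {best / fs * 1e6:.0f} us (true: 300 us)")
```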

Participants were free to use and adapt this to any question they were interested in (we gave some ideas for starting points, but there was no constraint). Participants worked in groups or individually, sharing their work on our repository and joining us for monthly meetings.

The repository was set up to automatically build a website using @mystmarkdown showing the current work in progress of all projects, and (later in the project) the paper as we wrote it. This kept everyone up to date with what was going on.

comob-project.github.io/snn-so

We started from a simple feedforward network of leaky integrate-and-fire neurons, but others adapted it to include learnable delays, alternative neuron models, biophysically detailed models, incorporated Dale's law, etc.

We found some interesting results, including that shorter time constants improved performance (consistent with what we see in the auditory system). Surprisingly, the network seemed to be using an "equalisation cancellation" strategy rather than the expected coincidence detection.

Ultimately, our scientific results were not incredibly strong, but we think this was a valuable experiment for a number of reasons. Firstly, it shows that there are other ways of doing science. Secondly, many people got to engage in a research experience they otherwise wouldn't have. Several participants have been motivated to continue their work beyond this project. It also proved useful for generating teaching material, and a number of MSc projects were based on it.

With that said, we learned some lessons about how to do this better, and yes, we will be doing this again (call for participation in September/October hopefully). The main challenge will be to keep the project more focussed without making it top down / hierarchical.

We believe this is possible, and we are inspired by the recent success of the Busy Beaver challenge, a bottom-up project by amateur mathematicians that found a proof of a 40-year-old conjecture.

quantamagazine.org/amateur-mat

We will be calling for proposals for the next project, engaging in an open discussion with all participants to refine the ideas before starting, and then inviting the proposer of the most popular project to act as a 'project lead' keeping it focussed without being hierarchical.

If you're interested in being involved in that, please join our (currently fairly quiet) new discord server, or follow me or @marcusghosh for announcements.

discord.gg/kUzh5MHjVE

I'm excited for a future where scientists work more collaboratively, and where everyone can participate. Diversity will lead to exciting new ideas and progress. Computational science has huge potential here, something we're also pursuing at @neuromatch.

Let's make it happen!

We have a new paper with @marcusghosh @GabrielBena on why we have nonlinearity in multimodal circuits.

My lab page has links to journal, preprint, code, talk on youtube, etc.:
neural-reckoning.org/pub_multi

TLDR: Why is it a question that we have nonlinearity in these circuits? Well, the classical multimodal task can be solved with a linear network, so maybe those nonlinear neurons aren't actually needed?

We find that nonlinearity is very important when you consider an extension of the classical multimodal task, embedded into a noisy background, and you don't know when the multimodal signal is active. We think this is a more realistic scenario, for example in a predator-prey interaction.
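
Here is a toy rendering of that contrast (my simplification of the setup, not the paper's model): with a brief signal at an unknown time inside a noisy background, a simple threshold nonlinearity applied before temporal pooling beats purely linear pooling.

```python
import numpy as np

rng = np.random.default_rng(6)
trials, T = 4000, 50

def make(signal_on):
    s = np.zeros((trials, T))
    if signal_on:
        starts = rng.integers(0, T - 5, size=trials)
        for i, st in enumerate(starts):
            s[i, st:st + 5] = 1.5            # brief signal, unknown onset
    x1 = s + rng.normal(0, 1, (trials, T))   # modality 1
    x2 = s + rng.normal(0, 1, (trials, T))   # modality 2
    return (x1 + x2) / 2                     # fuse across channels (linear)

pos, neg = make(True), make(False)
readouts = {
    "linear pooling":    lambda f: f.mean(axis=1),
    "nonlinear pooling": lambda f: np.maximum(f - 1.0, 0.0).sum(axis=1),
}
for name, stat in readouts.items():
    thr = 0.5 * (np.median(stat(pos)) + np.median(stat(neg)))
    acc = 0.5 * ((stat(pos) > thr).mean() + (stat(neg) <= thr).mean())
    print(f"{name}: balanced accuracy = {acc:.2f}")
```

The threshold lets the readout ignore the long stretches of pure noise, which linear averaging cannot do.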

We're following up with two additional projects at the moment, looking at what happens when you have even more extended temporal structure in the task (preview: you can still do very well with fairly simple feedforward or recurrent circuits), and when you model agents navigating a multimodal environment (e.g. foraging, hunting; early results suggest recurrent circuits are more robust).

I don't think we can fully understand multimodal circuits until we start looking at more realistic, temporally extended tasks. Exciting times ahead, and we'd be happy to work with any experimental groups interested in pursuing this. Please get in touch!

A problem that often comes up in modelling is how complex the model should be. The answer often given is somehow unsatisfying, that it should be as complex as it needs to be and no more. I think there's something right about this, but it's also not very helpful. So how about this as a refinement?

When we publish a model with a certain number of design choices, we should be able to justify each one, which in practice means showing what would have happened if a different choice had been made. The test for adding a design choice then becomes: justifying it almost doubles the work, since every alternative now has to be checked against the combinations of the other choices, so is that extra work worth the gain?

Of course, in practice modellers don't justify all their choices, which is why we can get away with making so many. The hypothesis I'm putting forward here is that this is precisely where we should locate the problem. If, culturally, we felt we had to justify every choice, there would be a natural, well-founded limit on the complexity of our models.