How do babies and blind people learn to localise sound without labelled data? We propose that innate mechanisms can provide coarse-grained error signals to bootstrap learning.
New preprint from @yang_chu.
https://arxiv.org/abs/2001.10605
Thread below
Preview of the talk I'm giving on Friday. #neuroscience #CompNeuro #ComputationalNeuroscience
Low-stakes pet peeve of the day: spiking neural network people, stop saying SNNs are the third generation of ANNs. They predate them! #compneuro
I'm giving an online talk starting in 15m (as part of UCL's NeuroAI series).
It's on neural architectures and our current line of research trying to figure out what they might be good for (including some philosophy: what might an answer to this question even look like?).
Sign up (free) at this link to get the zoom link:
Come along to my (free, online) UCL NeuroAI talk next week on neural architectures. What are they good for? All will finally be revealed and you'll never have to think about that question again afterwards. Yep. Definitely that.
Wed 12 Feb 2025
2-3pm GMT
Details and registration: https://www.eventbrite.co.uk/e/ucl-neuroai-talk-series-tickets-1216638381149
What's the right way to think about modularity in the brain? This devilish question is a big part of my research now, and it started with this paper with @GabrielBena finally published after the first preprint in 2021!
https://www.nature.com/articles/s41467-024-55188-9
We know the brain is physically structured into distinct areas ("modules"?). We also know that some of these have specialised function. But is there a necessary connection between these two statements? What is the relationship - if any - between 'structural' and 'functional' modularity?
TLDR if you don't want to read the rest: there is no necessary relationship between the two, although when resources are tight, functional modularity is more likely to arise when there's structural modularity. We also found that functional modularity can change over time! Longer version follows.
Discovered the COSYNE 2025 workshop program is out!
Looking forward to fascinating computational neuroscience talks & discussions (March 31-April 1, 2025).
Check out the program: https://www.cosyne.org/workshops-program-2025
See you there! #COSYNE2025 #CompNeuro @CosyneMeeting
@CosyneMeeting #Cosyne
New preprint! With Swathi Anil and @marcusghosh.
If you want to get the most out of a multisensory signal, you should take its temporal structure into account. But which neural architectures do this best?
(10/n) If you’ve made it this far, you’ll definitely want to check out the full paper. Grab your copy here:
https://www.biorxiv.org/content/10.1101/2024.12.17.628339v1 Sharing is highly appreciated!
#compneuro #neuroscience #NeuroAI #dynamicalsystems
Equivalence between representational similarity analysis, centered kernel alignment, and canonical correlations analysis #stats #compneuro #neuroscience www.biorxiv.org/content/10.1...
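For readers unfamiliar with these measures, linear CKA (one of the three quantities being related) has a compact closed form. A minimal NumPy sketch, not taken from the paper:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two response matrices.
    X, Y: (n_samples, n_features) arrays, e.g. neural recordings and
    model activations for the same stimuli. Returns a value in [0, 1]."""
    X = X - X.mean(axis=0)  # center each feature
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(X.T @ Y, ord='fro') ** 2
    den = np.linalg.norm(X.T @ X, ord='fro') * np.linalg.norm(Y.T @ Y, ord='fro')
    return num / den
```

Its invariance to orthogonal transforms and isotropic scaling is part of what makes equivalences with RSA and CCA plausible in the first place.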
We just completed a new course on #DimensionalityReduction in #Neuroscience, and the full teaching material is now freely available (CC BY 4.0 license):
https://www.fabriziomusacchio.com/blog/2024-10-24-dimensionality_reduction_in_neuroscience/
The course is designed to provide an introductory overview of the application of dimensionality reduction techniques for neuroscientists and data scientists alike, focusing on how to handle the increasingly high-dimensional datasets generated by modern neuroscience research.
PhD position openings in my group! Check out topics and how to apply below. At the moment, I'm particularly interested in the topic of modularity in both biological and artificial networks, and how it can be used to scale up intelligent processes.
https://neural-reckoning.org/openings.html
Note that competition for PhD funding at Imperial is pretty strong these days, so worth checking if your undergrad/masters grades are equivalent to UK 1st/Distinction before applying. I'm afraid I don't take self-funded PhD students.
Submit your abstracts for the #SNUFA #SpikingNeuralNetworks conference by tomorrow! The conference is free and online, and usually has around 700 highly engaged participants. Talks are selected by participant interest.
Please do signal boost this!
Join our ARIA-funded project as a postdoc on brain-inspired computing at Imperial College London! A super exciting opportunity connecting fundamental research with the creation of cutting-edge technologies!
Excited to share that we’re recruiting a colleague in Behavioral Neuroscience at UCLA! The application deadline is Oct 31st #compneuro #neuroscience https://recruit.apo.ucla.edu/JPF09740
Brody and #Hopfield (2003) showed how networks of #SpikingNeurons (#SNN) can process temporal information through computations on the timing of spikes rather than the rate of spikes. This is particularly relevant in the context of #OlfactoryProcessing, where the timing of spikes in the olfactory bulb is crucial for encoding odor information. Here is a quick tutorial that recapitulates the main concepts of that network using the #NEST #simulator:
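The core idea, computing on spike coincidences rather than rates, can be illustrated with a toy NumPy sketch (plain Python rather than the NEST implementation the tutorial uses; `count_coincidences` is a hypothetical helper, not part of the tutorial):

```python
import numpy as np

def count_coincidences(trains, window):
    """Count spikes in the first train that have a matching spike
    within `window` seconds in every other train. A neuron driven by
    such coincidences responds to relative spike timing, not rate."""
    ref = trains[0]
    count = 0
    for t in ref:
        if all(np.any(np.abs(tr - t) <= window) for tr in trains[1:]):
            count += 1
    return count
```

Two trains with the same firing rate can produce very different coincidence counts depending on their relative timing, which is exactly the distinction a rate code misses.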
New preprint on our "collaborative modelling of the brain" (COMOB) project. Over the last two years, a group of us (led by @marcusghosh) have been working together, openly, online, with anyone free to join, on a computational neuroscience research project.
https://www.biorxiv.org/content/10.1101/2024.07.19.604252v1
This was an experiment in a more bottom up, collaborative way of doing science, rather than the hierarchical PI-led model. So how did we do it?
We started from the tutorial I gave at @CosyneMeeting 2022 on spiking neural networks that included a starter Jupyter notebook that let you train a spiking neural network model on a sound localisation task.
https://neural-reckoning.github.io/cosyne-tutorial-2022/
https://www.youtube.com/watch?v=GTXTQ_sOxak&list=PL09WqqDbQWHGJd7Il3yVxiBts5nRSxvJ4&index=1
Participants were free to use and adapt this to any question they were interested in (we gave some ideas for starting points, but there was no constraint). Participants worked in groups or individually, sharing their work on our repository and joining us for monthly meetings.
The repository was set up to automatically build a website using @mystmarkdown showing the current work in progress of all projects, and (later in the project) the paper as we wrote it. This kept everyone up to date with what was going on.
https://comob-project.github.io/snn-sound-localization/
We started from a simple feedforward network of leaky integrate-and-fire neurons, but others adapted it to include learnable delays, alternative neuron models, biophysically detailed models, incorporated Dale's law, etc.
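As a rough illustration of that starting point, a feedforward layer of leaky integrate-and-fire neurons can be simulated in a few lines. A toy Euler-integration sketch, not the project's actual code, with made-up parameter values:

```python
import numpy as np

def simulate_lif(input_spikes, w, tau=5e-3, dt=1e-4, v_th=1.0):
    """Simulate a feedforward layer of leaky integrate-and-fire neurons.
    input_spikes: (timesteps, n_in) binary array; w: (n_in, n_out) weights.
    Membrane potential decays with time constant tau and resets to 0
    on spiking. Returns a (timesteps, n_out) binary spike array."""
    n_steps, _ = input_spikes.shape
    n_out = w.shape[1]
    v = np.zeros(n_out)
    out = np.zeros((n_steps, n_out))
    alpha = np.exp(-dt / tau)  # exact per-step leak factor
    for t in range(n_steps):
        v = alpha * v + input_spikes[t] @ w
        fired = v >= v_th
        out[t] = fired
        v[fired] = 0.0  # reset neurons that spiked
    return out
```

The shorter the time constant `tau`, the faster the membrane forgets old inputs, which is one way to see why short time constants favoured the task in the results below.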
We found some interesting results, including that shorter time constants improved performance (consistent with what we see in the auditory system). Surprisingly, the network seemed to be using an "equalisation cancellation" strategy rather than the expected coincidence detection.
Ultimately, our scientific results were not incredibly strong, but we think this was a valuable experiment for a number of reasons. Firstly, it shows that there are other ways of doing science. Secondly, many people got to engage in a research experience they otherwise wouldn't. Several participants have been motivated to continue their work beyond this project. It also proved useful for generating teaching material, and a number of MSc projects were based on it.
With that said, we learned some lessons about how to do this better, and yes, we will be doing this again (call for participation in September/October hopefully). The main challenge will be to keep the project more focussed without making it top down / hierarchical.
We believe this is possible, and we are inspired by the recent success of the Busy Beaver challenge, a bottom-up project of mathematics amateurs that found a proof of a 40-year-old conjecture.
We will be calling for proposals for the next project, engaging in an open discussion with all participants to refine the ideas before starting, and then inviting the proposer of the most popular project to act as a 'project lead' keeping it focussed without being hierarchical.
If you're interested in being involved in that, please join our (currently fairly quiet) new discord server, or follow me or @marcusghosh for announcements.
I'm excited for a future where scientists work more collaboratively, and where everyone can participate. Diversity will lead to exciting new ideas and progress. Computational science has huge potential here, something we're also pursuing at @neuromatch.
Let's make it happen!
Could we decide if a simulated spiking neural network uses spike timing or not? Given that we have full access to the state of the network and can simulate perturbations. Ideas for how we could decide? Would everyone agree? #neuroscience #SpikingNeuralNetworks #computationalneuroscience #compneuro
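One candidate perturbation, offered as a sketch rather than an answer: jitter spike times while leaving spike counts (and hence rates) unchanged, and ask whether task performance degrades as the jitter grows:

```python
import numpy as np

def jitter_spikes(spike_times, sigma, rng):
    """Add Gaussian jitter (std sigma, in seconds) to a sorted array of
    spike times. The spike count, and so the firing rate, is preserved
    exactly; only timing is perturbed. If network performance falls as
    sigma grows, spike timing is carrying information."""
    return np.sort(spike_times + rng.normal(0.0, sigma, size=spike_times.shape))
```

Whether everyone would accept a jitter test as decisive is, of course, part of the question.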
We have a new paper with @marcusghosh @GabrielBena on why we have nonlinearity in multimodal circuits.
My lab page has links to journal, preprint, code, talk on youtube, etc.:
http://neural-reckoning.org/pub_multimodal.html
TLDR: Why is nonlinearity in these circuits even a question? Well, the classical multimodal task can be solved with a linear network, so maybe those nonlinear neurons aren't actually needed?
We find that nonlinearity is very important in an extension of the classical multimodal task where the signal is embedded in a noisy background and you don't know when it is active. We think this is a more realistic scenario, for example in a predator-prey interaction.
We're following up with two additional projects at the moment, looking at what happens when you have even more extended temporal structure in the task (preview: you can still do very well with fairly simple feedforward or recurrent circuits), and when you model agents navigating a multimodal environment (e.g. foraging, hunting; early results suggest recurrent circuits are more robust).
I don't think we can fully understand multimodal circuits until we start looking at more realistic, temporally extended tasks. Exciting times ahead, and we'd be happy to work with any experimental groups interested in pursuing this. Please get in touch!
A problem that often comes up in modelling is how complex the model should be. The answer often given is somehow unsatisfying, that it should be as complex as it needs to be and no more. I think there's something right about this, but it's also not very helpful. So how about this as a refinement?
When we publish a model with a certain number of design choices, we should be able to justify each choice, which in practice means showing what would have happened if a different choice had been made. So the way to decide whether to add another design choice is something like: would the near-doubling of the work needed to justify it be worth the gain?
Of course in practice modellers don't justify all their choices, which is why we can get away with making so many. The hypothesis I'm putting forward here is that this is precisely where we should locate the problem. If, culturally, we did feel like we had to justify all the choices, we would have a natural limit applied to the complexity of the models that would be well founded.
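In code terms, "justifying each choice" amounts to an ablation over the space of design choices, and that space grows multiplicatively. A hypothetical sketch of what enumerating it looks like:

```python
from itertools import product

def ablation_grid(choices):
    """Enumerate every combination of design choices to run and compare.
    choices: dict mapping a choice name to its list of options.
    Each added binary choice roughly doubles the number of runs."""
    keys = list(choices)
    return [dict(zip(keys, combo)) for combo in product(*(choices[k] for k in keys))]
```

With just two binary choices this is already four model variants to train and compare, which makes the multiplicative cost of each extra choice concrete.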