Word cloud of abstracts we've received for #SNUFA #SpikingNeuralNetworks conference 2024. Register (free) by tomorrow afternoon UTC if you want to take part in selecting which abstracts get offered talk slots at the workshop!
We got 50% more submissions this year for the #SNUFA #SpikingNeuralNetworks conference compared to last year: thanks!
We will shortly send registered participants a survey so you can take part in the approval voting that decides which abstracts are selected as talks.
Register soon if you want to take part!
Submit your abstracts for the #SNUFA #SpikingNeuralNetworks conference by tomorrow! The conference is free and online, and usually has around 700 highly engaged participants. Talks are selected by participant interest.
Please do signal boost this!
This needs a hand clap!
New preprint on our "collaborative modelling of the brain" (COMOB) project. Over the last two years, a group of us (led by @marcusghosh) have been working together, openly, online, with anyone free to join, on a computational neuroscience research project
https://www.biorxiv.org/content/10.1101/2024.07.19.604252v1
This was an experiment in a more bottom-up, collaborative way of doing science, rather than the hierarchical PI-led model. So how did we do it?
We started from the tutorial I gave at @CosyneMeeting 2022 on spiking neural networks, which included a starter Jupyter notebook for training a spiking neural network on a sound localisation task.
https://neural-reckoning.github.io/cosyne-tutorial-2022/
https://www.youtube.com/watch?v=GTXTQ_sOxak&list=PL09WqqDbQWHGJd7Il3yVxiBts5nRSxvJ4&index=1
Participants were free to use and adapt this to any question they were interested in (we gave some ideas for starting points, but there was no constraint). Participants worked in groups or individually, sharing their work on our repository and joining us for monthly meetings.
The repository was set up to automatically build a website using @mystmarkdown showing the current work in progress of all projects, and (later in the project) the paper as we wrote it. This kept everyone up to date with what was going on.
https://comob-project.github.io/snn-sound-localization/
We started from a simple feedforward network of leaky integrate-and-fire neurons, but others adapted it to include learnable delays, alternative neuron models, biophysically detailed models, Dale's law, and more.
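For readers unfamiliar with the model class, here is a minimal sketch of a leaky integrate-and-fire neuron of the kind these networks are built from. This is a pure-Python illustration with placeholder parameter values, not the actual COMOB notebook code:

```python
# Illustrative leaky integrate-and-fire (LIF) neuron, Euler-integrated.
# All parameter values are placeholders, not those used in the COMOB project.
def lif_spikes(input_times, tau=20.0, v_th=1.0, w=0.6, dt=0.1, t_max=100.0):
    """Return output spike times for a neuron receiving input spike times (ms)."""
    v = 0.0
    out = []
    input_steps = set(round(t / dt) for t in input_times)
    for step in range(int(t_max / dt)):
        v += dt * (-v / tau)      # membrane leak toward rest (0)
        if step in input_steps:
            v += w                # instantaneous synaptic kick
        if v >= v_th:
            out.append(step * dt)
            v = 0.0               # reset after a spike
    return out

# Two near-coincident inputs cross threshold; widely spaced ones leak away.
print(lif_spikes([10.0, 10.5]))  # coincident inputs -> spike
print(lif_spikes([10.0, 60.0]))  # spread-out inputs -> no spike
```

This coincidence-detection behaviour is exactly why LIF networks are a natural starting point for a sound localisation task based on interaural time differences.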
We found some interesting results, including that shorter time constants improved performance (consistent with what we see in the auditory system). Surprisingly, the network seemed to be using an "equalisation cancellation" strategy rather than the expected coincidence detection.
Ultimately, our scientific results were not incredibly strong, but we think this was a valuable experiment for a number of reasons. Firstly, it shows that there are other ways of doing science. Secondly, many people got to engage in a research experience they otherwise wouldn't. Several participants have been motivated to continue their work beyond this project. It also proved useful for generating teaching material, and a number of MSc projects were based on it.
With that said, we learned some lessons about how to do this better, and yes, we will be doing this again (call for participation in September/October hopefully). The main challenge will be to keep the project more focussed without making it top down / hierarchical.
We believe this is possible, and we are inspired by the recent success of the Busy Beaver challenge, a bottom-up project of amateur mathematicians that found a proof of a 40-year-old conjecture.
We will be calling for proposals for the next project, engaging in an open discussion with all participants to refine the ideas before starting, and then inviting the proposer of the most popular project to act as a 'project lead', keeping it focussed without being hierarchical.
If you're interested in being involved in that, please join our (currently fairly quiet) new discord server, or follow me or @marcusghosh for announcements.
I'm excited for a future where scientists work more collaboratively, and where everyone can participate. Diversity will lead to exciting new ideas and progress. Computational science has huge potential here, something we're also pursuing at @neuromatch.
Let's make it happen!
Could we decide whether a simulated spiking neural network uses spike timing or not, given that we have full access to the state of the network and can simulate perturbations? Any ideas for how we could decide? Would everyone agree? #neuroscience #SpikingNeuralNetworks #computationalneuroscience #compneuro
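One hypothetical perturbation protocol: jitter spike times while preserving spike counts, and check whether task performance degrades. A toy sketch in pure Python, where the "network output" is a latency code by construction (all numbers illustrative):

```python
import random

random.seed(0)

# Toy output: two stimuli encoded purely in first-spike latency, with
# identical spike counts, so a rate code carries no information here.
def encode(stim):
    return [5.0] if stim == 0 else [15.0]   # early vs late spike (ms)

def decode(spikes):
    return 0 if min(spikes) < 10.0 else 1   # read out first-spike latency

def accuracy(jitter):
    """Decoding accuracy after jittering each spike time by N(0, jitter)."""
    trials = 1000
    correct = 0
    for _ in range(trials):
        stim = random.randint(0, 1)
        spikes = [t + random.gauss(0.0, jitter) for t in encode(stim)]
        correct += decode(spikes) == stim
    return correct / trials

print(accuracy(0.0))   # intact timing: perfect decoding
print(accuracy(20.0))  # heavy jitter: accuracy drops, spike counts unchanged
```

If accuracy falls under count-preserving jitter, the network plausibly relies on spike timing; whether everyone would accept that as a definition is exactly the open question.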
SPIKING NEURAL NETWORKS!
If you love them, join us at SNUFA24. Free, online workshop, Nov 5-6 (2-6pm CET). Usually ~700 participants.
Invited speakers: Chiara Bartolozzi, David Kappel, Anna Levina, Christian Machens
Posters + 8 contributed talks selected by participant vote.
Abstract submission is quick and easy (300 words max), and is open until the deadline of Sept 27.
Registration is free, but mandatory.
Hope to see you there!
Spiking neural network community. We are thinking of holding the annual SNUFA workshop on Nov 5-6 or 12-13. Preferences? Are there any clashes we should know about? #SpikingNeuralNetworks #Neuroscience #ComputationalNeuroscience
Dear colleagues,
It's a pleasure to share with you this fully-funded #PhD position in #computational neuroscience in interaction with #neuromorphic engineering and #neuroscience:
https://laurentperrinet.github.io/post/2024-05-03_phd-position_focus-of-attention/
TL;DR: This PhD subject focuses on the association between #attention and #SpikingNeuralNetworks for defining new, efficient AI models for embedded systems such as drones, robots and, more generally, autonomous systems. The thesis will take place between the LEAT research lab in Sophia-Antipolis and the INT institute in Marseille, which both develop complementary approaches to bio-inspired AI, from neuroscience to embedded systems design.
The application should include:
• Curriculum vitæ,
• Motivation Letter,
• Letter of recommendation from the Master's supervisor,
and should be sent to Benoit Miramond benoit.miramond@unice.fr, Laurent Perrinet Laurent.Perrinet@univ-amu.fr, and Laurent Rodriguez laurent.rodriguez@univ-cotedazur.fr
Cheers,
Laurent
PS: related references:
Emmanuel Daucé, Pierre Albigès, Laurent U Perrinet (2020). A dual foveal-peripheral visual processing model implements efficient saccade selection. Journal of Vision. doi: https://doi.org/10.1167/jov.20.8.22
Jean-Nicolas Jérémie, Emmanuel Daucé, Laurent U Perrinet (2024). Retinotopic Mapping Enhances the Robustness of Convolutional Neural Networks. arXiv: https://arxiv.org/abs/2402.15480
Just in time for your weekend, we released Brian 2.6, the new version of your friendly spiking network simulator.
It comes with many small improvements, bug and compatibility fixes, and offers a major new feature for running standalone simulations repeatedly (or in parallel) without recompiling code. In addition, it comes with general infrastructure improvements all around (official wheels for Python 3.12! Docker images on Docker hub! Apple Silicon builds/tests!).
Enjoy (and let us know if you run into any issues, of course…)
Special Session on #SpikingNeuralNetworks and #Neuromorphic Computing at the 33rd International Conference on Artificial Neural Networks (ICANN) 2024 - Call for Papers
Sep 17 - 20, Lugano, Switzerland
The special session invites contributions on recent advances in spiking neural networks, which have recently gained substantial attention as a candidate substrate for low-latency, low-power AI, with implementations being explored in neuromorphic hardware. It aims to bring together practitioners interested in efficient learning algorithms, data representations, and applications.
Find more details at: https://e-nns.org/wp-content/uploads/2024/ICANN2024-SNNC-CfP.pdf
I'm on the latest episode of Brain Inspired talking about #Neuroscience, #SpikingNeuralNetworks, #MachineLearning and #Metascience! Thanks Paul Middlebrooks (not on Mastodon, I think) for the invite and the extremely fun conversation. For the explanation of this picture, you'll have to listen to the episode.
https://braininspired.co/podcast/183/
Also, if you're not yet listening to Brain Inspired you should be - and support Paul on Patreon. He provides this free for the community with no adverts. What a hero!
We are finally on Mastodon, time for a little #introduction !
Brian is a #FOSS simulator for biological #SpikingNeuralNetworks, for research in #ComputationalNeuroscience and beyond. It makes it easy to go from a high-level model description in Python, based on mathematical equations and physical units, to a simulation running efficiently on the CPU or GPU.
We have a friendly community and extensive documentation, links to everything on our homepage: https://briansimulator.org
This account will mostly announce news (releases and other notable events), but we're also looking forward to discussing with y'all!
All talks (but one) from SNUFA 2023 #SpikingNeuralNetworks workshop now available on our Youtube channel:
https://youtube.com/playlist?list=PL09WqqDbQWHHPPOpmHezdxbuQqow6EarR&si=aXigfLRfaL8BhJx2
Come and join us now at SNUFA 2023 to hear the latest cool research on #SpikingNeuralNetworks. https://www.crowdcast.io/c/snufa-2023
SNUFA spiking neural network workshop starting in 20m!
Check out the full programme and abstracts at https://snufa.net/2023/
You need to register (free) at https://eventbrite.co.uk/e/snufa-2023-tickets-675972952297
You can also join our discord to chat before/during/after the event: https://discord.gg/aYvgGakrVK
Many thanks to all who sent abstracts for the SNUFA SNN workshop. We will be sending every participant a random sample of abstracts to vote on to help decide what should be a talk/poster. If you want to take part, register today (free).
https://www.eventbrite.co.uk/e/snufa-2023-tickets-675972952297
More info on the workshop at:
New preprint! A simple way to extend the classical evidence-weighting model of multimodal integration to solve a much wider range of naturalistic tasks. Spoiler: it's nonlinearity. Works for SNNs/ANNs. With @marcusghosh, Gabriel Béna, Volker Bormuth.