13 comments
  • bob1029 2d

    The biggest problem with recurrent spiking neural networks is searching for them.

    Neuromorphic chips won't help because we don't even know what topology makes sense. Searching for topologies is unbelievably slow. The only thing you can do is run a simulation on an actual problem and measure the performance each time. These simulations turn into tar pits as the power law of spiking activity kicks in. Biology really seems to have the only viable solution to this one. I don't think we can emulate it in any practical way. Chasing STDP and membrane thresholds as some kind of schematic for AI is absolutely the wrong path.

    We should be leaning into what our machines do better than biology. Not what they do worse. My CPU doesn't have to leak charge or simulate any delay if I don't want it to. I can losslessly copy and process information at rates that far exceed biological plausibility.

  • RaftPeople 3d

    From article:

    > Cause and Effect: If Neuron A fires just a few milliseconds before Neuron B, the brain assumes A caused B. The synapse between them gets stronger.

    A recent study from Stanford found that it's more complex than this rule: some synapses followed it, some did the opposite, etc.
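
    The pair-based rule the article quotes is usually written as an exponential window over the spike-time difference. A minimal sketch of that textbook form (parameter names and values here are illustrative, not from the article or the Stanford study):

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0, sign=+1):
    """Weight change for one spike pair.

    dt_ms = t_post - t_pre: positive means the presynaptic spike came first.
    sign=+1 gives the classic Hebbian window; sign=-1 flips it into the
    anti-Hebbian variant seen at some (notably inhibitory) synapses.
    """
    if dt_ms > 0:
        # pre before post: "A caused B" -> potentiate (Hebbian case)
        return sign * a_plus * math.exp(-dt_ms / tau_ms)
    # post before pre -> depress (Hebbian case)
    return sign * -a_minus * math.exp(dt_ms / tau_ms)

print(stdp_dw(5.0) > 0)    # pre leads by 5 ms: weight goes up
print(stdp_dw(-5.0) < 0)   # post leads: weight goes down
```

    Flipping `sign` is only a cartoon of the "some did the opposite" finding; real synapses mix many windows.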

    • xico 1d

      Non-Hebbian and Anti-Hebbian potentiation have been well studied for decades. Anti-Hebbian notably for inhibitory connections.

    • kevlened 2d

      > A recent study from Stanford

      Source?

  • mike_hearn 3d

    I guess the obvious question is whether something that mimics biology more closely is actually useful. Computers are useful exactly because they aren't the same as us. LLMs are useful because they aren't the same as us. The goal is not to be as close to biology as possible, it's to be useful.

    • miki123211 1d

      If you could get biology, but:

      * taking less than 18 years to produce a new chip

      * able to task-switch instantly (being a doctor one minute and being a lawyer the next, scaling up/down instantly based on current workload)

      * having millions of identical clones that people intuitively understand how to work with

      * with no need for toilet breaks, sleep, family emergencies, holidays, weekends and all that

      It would be pretty damn useful.

    • 9wzYQbTYsAIc 2d

      Neural networks have turned out to be pretty useful. The goal of parallel distributed processing wasn't to recreate the brain but to recreate its capabilities.

  • geremiiah 2d

    Interesting topic, but why am I reading an LLM generated summary?

    • voidUpdate 1d

      > "If you’ve been following my recent posts on Metaduck, you know I spend my days building infrastructure for AI agents and wrangling LLMs into production"

      Because LLM users use LLMs for everything

  • 7777777phil 4d

    Neuromorphic chips have been 5 years away for 15 years now... Nevertheless, the Schultz dopamine-TD-error convergence is one of the coolest results in neuroscience.
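
    For context, the Schultz result is that midbrain dopamine neurons fire in a pattern matching the reward-prediction error of temporal-difference learning. A minimal TD(0) sketch of that error signal (state names and parameters are mine, purely illustrative):

```python
def td_update(V, s, s_next, reward, alpha=0.1, gamma=0.9):
    """One TD(0) step over a value table V; returns the prediction error."""
    delta = reward + gamma * V[s_next] - V[s]  # the "dopamine-like" signal
    V[s] += alpha * delta
    return delta

V = {"cue": 0.0, "outcome": 0.0}
d1 = td_update(V, "cue", "outcome", reward=1.0)  # unexpected reward: big error
for _ in range(50):                              # reward becomes predicted...
    td_update(V, "cue", "outcome", reward=1.0)
d2 = td_update(V, "cue", "outcome", reward=1.0)  # ...so the error shrinks
print(d1 > d2)  # True: the "burst" fades as the prediction improves
```

    Schultz's recordings showed exactly this shape: dopamine bursts to unpredicted rewards, fading as rewards become predicted.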

  • IshKebab 1d

    Interesting, but SpiNNaker (ugh) has been around for 6 years now, and presumably they had smaller options than that before. Has it actually produced anything useful?

    It seems very premature to say "let's build spiking NN hardware!" (or a million core cluster) before we even know how to write the software.

    Spiking NNs need their AlexNet before it makes any sense to make dedicated hardware IMO.

    • HarHarVeryFunny 1d

      Agreed, although SpiNNaker really isn't much specialized for spiking NNs - it's really just a large ARM-based cluster specialized for fast message passing. The messages could be inter-neuron communications, or anything else, and it has been used for other purposes.

      I really don't understand the thinking behind these hardware-based neuromorphic projects... as you say it would make more sense to prove ideas out in software first, especially for experimenting with more biologically accurate models of neurons.

      It seems the time to commit to hardware would be if neuromorphic/spiking/asynchronous designs show worthwhile functional benefits, and need custom silicon for efficient implementation.