Tag: SNN
All the articles with the tag "SNN".
SpikeVideoFormer: An Efficient Spike-Driven Video Transformer with Hamming Attention and O(T) Complexity
Published: at 16:56
Replaces the dot product in attention with Hamming distance, avoiding the case where sparse spikes are misaligned. The idea is interesting, but the experiments feel mediocre: even though the paper claims a hardware implementation, the energy numbers are still computed purely analytically, and the actual FPGA implementation is never disclosed (what little is said is unclear). Accuracy does not surpass the ANN2SNN SOTA. The main takeaway is still the need to replace operators ill-suited to SNNs with other operations; see the sketch below.
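A minimal sketch of the score replacement, assuming binary {0, 1} spike tensors in PyTorch; the function name and shapes are hypothetical, and this is the plain quadratic form, not the paper's O(T) linear variant:

```python
import torch

def hamming_scores(q, k):
    """Pairwise Hamming *similarity* between binary spike tensors.

    q: (N, D), k: (M, D), entries in {0, 1} (float).
    For binary vectors, |q XOR k| = sum(q) + sum(k) - 2 * (q . k),
    so the whole score matrix still reduces to one matmul.
    """
    D = q.shape[-1]
    qk = q @ k.T                                    # co-firing counts
    mismatches = q.sum(-1, keepdim=True) + k.sum(-1) - 2 * qk
    return D - mismatches                           # matching bits

# Used in place of the usual q @ k.T score: identical spike trains get
# the maximum score D even when their plain dot product is small.
q = (torch.rand(4, 16) > 0.8).float()
k = (torch.rand(6, 16) > 0.8).float()
print(hamming_scores(q, k).shape)  # torch.Size([4, 6])
```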
Sparse Spiking Neural Network: Exploiting Heterogeneity in Timescales for Pruning Recurrent SNN
Published: at 19:11
ICLR 2024 Spotlight. Uses Lyapunov noise to prune recurrent SNNs.
Prosperity: Accelerating Spiking Neural Networks via Product Sparsity
Published: at 16:52
An SNN accelerator paper under submission to HPCA. Its "Product Sparsity" is essentially about eliminating repeated computation over identical content, which is a different concept from the sparsity usually discussed; a sketch of the reuse idea follows.
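A minimal sketch under a strong simplification: exact duplicate rows of a binary spike matrix would trigger identical inner products, so each distinct row pattern is computed once and the result scattered back. The function name is hypothetical, and Prosperity's actual mechanism works at a finer granularity than whole rows:

```python
import numpy as np

def matmul_with_product_sparsity(S, W):
    """S: (N, K) binary spike matrix, W: (K, M) weights.

    "Product sparsity" here means skipping recomputation of products
    already produced for an identical input pattern, which is
    orthogonal to ordinary zero-valued (bit-level) sparsity.
    """
    uniq, inverse = np.unique(S, axis=0, return_inverse=True)
    partial = uniq @ W               # one matmul row per distinct pattern
    return partial[inverse.ravel()]  # duplicates reuse the cached result

S = (np.random.rand(8, 16) > 0.9).astype(np.int64)
S[4] = S[0]                          # force a duplicate row
W = np.random.randn(16, 32)
assert np.allclose(matmul_with_product_sparsity(S, W), S @ W)
```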
Towards Scalable GPU-Accelerated SNN Training via Temporal Fusion
Published: at 14:34
Significance unclear: the only contribution is writing LIF in a layer-by-layer fashion (sketched below), published at a venue called ICANN. The amount of work is really too small.
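For reference, a minimal sketch of what "layer-by-layer" LIF means, assuming a hard-reset LIF neuron and hypothetical names (forward pass only, no surrogate gradient): the full (T, B, ...) sequence passes through one layer's synaptic projection as a single fused matmul before the next layer runs, instead of stepping the whole network one timestep at a time.

```python
import torch

def lif_layer_by_layer(x_seq, w, tau=2.0, v_th=1.0):
    """x_seq: (T, B, D_in) input currents for all T steps at once.

    The time-independent projection is fused across T * B (the GPU
    win); only the membrane-state recurrence stays sequential in T.
    """
    T, B, _ = x_seq.shape
    i_seq = (x_seq.reshape(T * B, -1) @ w).reshape(T, B, -1)
    v = torch.zeros_like(i_seq[0])
    spikes = []
    for t in range(T):
        v = v + (i_seq[t] - v) / tau   # leaky integration
        s = (v >= v_th).float()        # fire on threshold crossing
        v = v * (1.0 - s)              # hard reset where spiked
        spikes.append(s)
    return torch.stack(spikes)         # (T, B, D_out) binary output

out = lif_layer_by_layer(torch.rand(4, 2, 8), torch.randn(8, 16))
```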
Phi: Leveraging Pattern-based Hierarchical Sparsity for High-Efficiency Spiking Neural Networks
Published: at 17:45
ISCA 2025, an SNN accelerator based on structured sparsity. Storing sparse patterns directly in a LUT could require keeping far too many of them, making the memory footprint too large, so they pre-calibrate a level of "structured sparsity" that turns the online spike activations into an L1-sparse part that can be computed entirely from the LUT plus an L2-sparse part with very high sparsity; a sketch of the decomposition follows. Worth imitating this idea on a GPU?
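A minimal sketch of the two-level decomposition, assuming a calibrated pattern table and a LUT of precomputed pattern-weight products (all names hypothetical; the paper's pattern matching is more structured than the argmax used here). The algebra is exact: `spikes @ W = pattern @ W + (spikes - pattern) @ W`, and the second term is cheap when the residual (L2) part is very sparse.

```python
import numpy as np

def phi_style_matvec(spikes, W, patterns, lut):
    """spikes: (K,) binary activation, W: (K, M) weights,
    patterns: (P, K) calibrated binary patterns,
    lut: (P, M) precomputed products, lut[p] == patterns[p] @ W.
    """
    overlap = patterns @ spikes       # L1: match a calibrated pattern
    p = int(np.argmax(overlap))
    base = lut[p]                     # "free": a single LUT read
    diff = spikes - patterns[p]       # residual, entries in {-1, 0, 1}
    idx = np.nonzero(diff)[0]         # L2: very sparse correction
    return base + diff[idx] @ W[idx]

K, M, P = 16, 32, 4
patterns = (np.random.rand(P, K) > 0.5).astype(np.int64)
W = np.random.randn(K, M)
lut = patterns @ W
spikes = patterns[2].copy(); spikes[0] ^= 1   # pattern + 1-bit residual
assert np.allclose(phi_style_matvec(spikes, W, patterns, lut), spikes @ W)
```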
Temporal Flexibility in Spiking Neural Networks: Towards Generalization Across Time Steps and Deployment Friendliness
Published: at 15:38
ICLR 2025 Poster. It also seems to be doing elastic inference?
QKFormer: Hierarchical Spiking Transformer using Q-K Attention
Published: at 18:09
QKFormer, NeurIPS 2024 Spotlight. It pushes direct-training SNN accuracy on ImageNet and CIFAR remarkably high; any future work in this space will be hard-pressed to avoid comparing against it.
SpikeCV: Open a Continuous Computer Vision Era
Updated: at 14:57, Published: at 15:33
An open-source framework for event cameras.
Neuromorphic computing at scale
Updated: at 14:57, Published: at 22:11
A review published in Nature discussing some of the problems and challenges the SNN/neuromorphic-computing community currently faces, along with some possible directions for development.
SDiT: Spiking Diffusion Model with Transformer
Updated: at 14:57, Published: at 14:10
A spiking diffusion transformer; the transformer structure inside is RWKV's.