AndyBlocker
Recent Posts
Sparse Spiking Neural Network: Exploiting Heterogeneity in Timescales for Pruning Recurrent SNN
Published: at 19:11. ICLR 2024 Spotlight; uses Lyapunov Noise for SNN pruning.
Prosperity: Accelerating Spiking Neural Networks via Product Sparsity
Published: at 16:52. An SNN accelerator paper under review at HPCA. Its "Product Sparsity" is essentially about eliminating repeated computation over identical content, which is a different concept from sparsity as usually discussed; a toy sketch of the reuse idea follows.
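A minimal sketch of how I read that reuse idea, not the paper's actual dataflow: bit-identical binary spike rows need their weight product computed only once, so the result can be cached and shared. The function name and the memoization scheme are my own illustration.

```python
import numpy as np

def product_sparse_matmul(spikes: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """spikes: (N, C) binary matrix; weights: (C, D). Returns spikes @ weights."""
    out = np.empty((spikes.shape[0], weights.shape[1]), dtype=weights.dtype)
    cache: dict[bytes, np.ndarray] = {}               # packed spike row -> product
    for i, row in enumerate(spikes):
        key = np.packbits(row.astype(np.uint8)).tobytes()
        if key not in cache:                          # compute each distinct row once
            cache[key] = row @ weights
        out[i] = cache[key]
    return out
```

The paper presumably exploits finer-grained overlap than whole identical rows; this row-level memoization just illustrates why "product sparsity" is orthogonal to the usual zero-valued sparsity.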
Towards Scalable GPU-Accelerated SNN Training via Temporal Fusion
Published: at 14:34. Significance unclear: it implements LIF in a layer-by-layer fashion and offers no other contribution. Published at a venue called ICANN; the amount of work is far too small.
Recurrent Residual Module for Fast Inference in Videos
Published: at 15:25. CVPR 2018, DiffEncode plus sparsity-based acceleration, but it feels too dated.
Efficient Spatially Sparse Inference for Conditional GANs and Diffusion Models
Published: at 14:18. A fairly influential NeurIPS 2022 paper on accelerating inference for GANs and diffusion models. It proposes Spatially Sparse Inference: convolution filters are applied sparsely only on the edited regions, while cached features are reused for the unedited regions. A toy sketch follows.
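A toy sketch of that recompute-only-edited-tiles idea, under my own assumptions (tile size, change threshold, stride-1 "same" convolution with an odd kernel and no bias); it is not the paper's API. Each tile whose input changed is recomputed with a halo wide enough to make its output exact; everything else comes from the cache.

```python
import torch
import torch.nn.functional as F

def sparse_conv(x_new, x_cached, y_cached, weight, tile=16, tol=1e-3):
    """x_*: (1, C, H, W); y_cached: cached conv output of x_cached."""
    y = y_cached.clone()
    diff = (x_new - x_cached).abs()
    _, _, H, W = x_new.shape
    pad = weight.shape[-1] // 2
    for i in range(0, H, tile):
        for j in range(0, W, tile):
            if diff[:, :, i:i+tile, j:j+tile].max() > tol:   # edited tile
                # recompute with a halo of `pad` so the tile's output is exact
                i0, j0 = max(i - pad, 0), max(j - pad, 0)
                i1, j1 = min(i + tile + pad, H), min(j + tile + pad, W)
                patch = F.conv2d(x_new[:, :, i0:i1, j0:j1], weight, padding=pad)
                y[:, :, i:i+tile, j:j+tile] = patch[:, :, i-i0:i-i0+tile,
                                                    j-j0:j-j0+tile]
    return y
```

The dense per-patch conv2d only demonstrates the caching logic; the actual speedup requires a kernel that skips unedited tiles entirely.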
SlowFast Networks for Video Recognition
Updated: at 06:15. Published: at 16:57. A multi-branch CNN; could some of the branches learn more similar inter-frame changes?
DeltaCNN: End-to-End CNN Inference of Sparse Frame Differences in Videos
Updated: at 15:07. Published: at 12:11. Exploits the "linearity" of CNN layers to compute feature differences between frames, with CUDA acceleration. Almost exactly the same idea as ViStream; could it solve our current problem? A minimal sketch of the linearity trick follows.
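A minimal sketch of that linearity trick, with illustrative names and threshold: because convolution is linear, conv(x_t) = conv(x_{t-1}) + conv(x_t - x_{t-1}), and truncating small entries of the frame delta makes the second term sparse. Bias and nonlinearities, which break strict linearity and need extra bookkeeping in practice, are omitted here.

```python
import torch
import torch.nn.functional as F

def delta_conv(x_t, x_prev, y_prev, weight, thresh=1e-2):
    """x_t, x_prev: (1, C, H, W); y_prev = conv(x_prev); odd kernel, stride 1, no bias."""
    delta = x_t - x_prev
    # truncate near-zero changes so the delta is sparse
    delta = torch.where(delta.abs() < thresh, torch.zeros_like(delta), delta)
    return y_prev + F.conv2d(delta, weight, padding=weight.shape[-1] // 2)
```

As above, the dense conv2d only shows the algebra; the win comes from a kernel that skips the zeroed positions. Note that truncation error accumulates across frames unless the state is periodically reset with a full dense pass.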
Phi: Leveraging Pattern-based Hierarchical Sparsity for High-Efficiency Spiking Neural Networks
Published: at 17:45. ISCA 2025, an SNN accelerator built on structured sparsity. Storing spike patterns directly in a LUT could require keeping far too many patterns, with a severe memory footprint, so the paper pre-calibrates a level of "structured sparsity" that splits the online spike activation into an L1 sparse part computable entirely from LUTs and an extremely sparse L2 part. Worth imitating the idea and porting it to GPUs? A rough sketch of the decomposition is below.
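A rough sketch of that two-level decomposition as I understand the note; the pattern dictionary, the containment-based matching rule, and all names are my assumptions, not Phi's actual scheme. Each spike row is matched to the largest calibrated pattern contained in it (L1, whose products with the weights can be precomputed into a LUT), and the leftover bits form the highly sparse L2 residual.

```python
import numpy as np

def decompose(spikes: np.ndarray, patterns: np.ndarray):
    """spikes: (N, C) in {0,1}; patterns: (P, C) in {0,1} from offline calibration.
    Assumes patterns contains an all-zeros row as a fallback match."""
    covered = spikes @ patterns.T              # (N, P): pattern bits also in the row
    extra = (1 - spikes) @ patterns.T          # pattern bits the row does NOT have
    score = np.where(extra == 0, covered, -1)  # only patterns contained in the row
    idx = score.argmax(axis=1)                 # best L1 pattern per row
    l1 = patterns[idx]                         # LUT-friendly structured part
    l2 = spikes - l1                           # residual, in {0,1} and very sparse
    return idx, l2

# For a linear layer W of shape (C, D), precompute lut = patterns @ W once; then
#   spikes @ W == lut[idx] + l2 @ W,
# so the structured part is a table lookup and only l2 needs a sparse kernel.
```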
Temporal Flexibility in Spiking Neural Networks: Towards Generalization Across Time Steps and Deployment Friendliness
Published: at 15:38. ICLR 2025 Poster; it seems to also be working on elastic inference?
A Simple Framework for Contrastive Learning of Visual Representations
Published: at 13:42. The SimCLR contrastive learning paper. Can contrastive learning align the features at every layer?