AndyBlocker
Recent Posts
Recurrent Residual Module for Fast Inference in Videos
Published: at 15:25
CVPR 2018. DiffEncode plus sparse acceleration, but it feels too dated.
Efficient Spatially Sparse Inference for Conditional GANs and Diffusion Models
Published: at 14:18
A fairly influential NeurIPS 2022 paper on accelerating inference for GANs and diffusion models. It proposes Spatially Sparse Inference: convolution filters are applied sparsely only on the edited regions, while cached features are reused for the unedited regions.
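A minimal sketch of that caching idea, assuming a stride-1 convolution with padding = kernel_size // 2 (the block-tiling scheme and function name are mine, not the paper's gather/scatter implementation): recompute the conv only on blocks whose receptive field touches the edited region, and splice the result into the cached output.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

@torch.no_grad()
def sparse_conv_update(conv: nn.Conv2d, x_edited, cached_out, edit_mask, block=16):
    """Recompute `conv` only on blocks touched by the edit, reuse the cache elsewhere.

    Assumes stride 1 and integer padding p = kernel_size // 2. Shapes:
      x_edited   (1, C_in,  H, W)  edited input activation
      cached_out (1, C_out, H, W)  conv output cached from the unedited input
      edit_mask  (1, 1,     H, W)  bool, True where the input changed
    """
    k, p = conv.kernel_size[0], conv.padding[0]
    H, W = x_edited.shape[-2:]
    out = cached_out.clone()

    # Pad once so patches near the border see the same zero padding as a full conv.
    x_pad = F.pad(x_edited, (p, p, p, p))

    # Dilate the edit mask by the receptive-field radius, then mark dirty blocks.
    dilated = F.max_pool2d(edit_mask.float(), kernel_size=2 * p + 1, stride=1, padding=p)
    dirty = F.max_pool2d(dilated, kernel_size=block, stride=block, ceil_mode=True)[0, 0].bool()

    for by, bx in dirty.nonzero().tolist():
        y0, x0 = by * block, bx * block
        y1, x1 = min(y0 + block, H), min(x0 + block, W)
        # Input patch = output block plus a p-pixel halo, taken from the padded input.
        patch = x_pad[:, :, y0:y1 + 2 * p, x0:x1 + 2 * p]
        out[:, :, y0:y1, x0:x1] = F.conv2d(patch, conv.weight, conv.bias)
    return out
```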
SlowFast Networks for Video Recognition
Updated: at 06:15, Published: at 16:57
A multi-branch CNN; might some of the branches learn more similar inter-frame changes?
DeltaCNN: End-to-End CNN Inference of Sparse Frame Differences in Videos
Updated: at 15:07, Published: at 12:11
Exploits the "linearity" of CNN layers to compute feature differences between frames, with a CUDA-accelerated implementation. Almost the same idea as ViStream; could it solve our current problem?
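The linearity trick itself fits in a few lines. A sketch for a single conv layer (variable names are mine, not DeltaCNN's API); nonlinearities break the identity, which is why the real system carries accumulators through activations and pooling.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# conv(x_t) = conv(x_{t-1}) + conv(x_t - x_{t-1}) because convolution is linear;
# if consecutive frames barely differ, the delta is mostly zero after truncation
# and only a sparse residual has to be pushed through the layer.
conv = nn.Conv2d(3, 16, kernel_size=3, padding=1, bias=True)

x_prev = torch.randn(1, 3, 64, 64)
x_curr = x_prev.clone()
x_curr[:, :, 20:30, 20:30] += 0.5              # only a small region changes

with torch.no_grad():
    y_prev = conv(x_prev)                      # cached output of the previous frame

    delta = x_curr - x_prev
    delta[delta.abs() < 1e-3] = 0.0            # truncate tiny changes -> sparse delta
    # The bias is already inside y_prev, so the delta pass must skip it.
    y_delta = F.conv2d(delta, conv.weight, bias=None, padding=1)

    y_curr = y_prev + y_delta                  # incrementally updated output
    print((y_curr - conv(x_curr)).abs().max()) # ~0, up to the truncation threshold
```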
Phi: Leveraging Pattern-based Hierarchical Sparsity for High-Efficiency Spiking Neural Networks
Published: at 17:45
ISCA 2025, an SNN accelerator built on structured sparsity. Storing activations directly in a LUT could require keeping far too many sparse patterns and blow up the memory footprint, so the paper pre-calibrates a level of "structured sparsity", decomposing the online spike activations into an L1 sparse part that can be computed entirely from the LUT and an extremely sparse L2 part. Worth imitating the idea and porting it to GPUs?
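A rough sketch of how that two-level decomposition might look on a GPU (all names, shapes, and the naive pattern assignment are my own guesses, not Phi's design): each spike row is matched to one of a few calibrated patterns whose products with the weights are precomputed, and only the residual is multiplied online.

```python
import torch

# Split a binary spike matrix S into a pattern part (L1, served from a LUT)
# plus a residual (L2) that is ideally much sparser than S after calibration.
N, D, M, K = 128, 64, 32, 8            # spikes: N x D, weights: D x M, K patterns

S = (torch.rand(N, D) < 0.2).float()   # binary spike activations
W = torch.randn(D, M)

# Offline: calibrate K structured patterns (random masks here, just for shape).
patterns = (torch.rand(K, D) < 0.2).float()
lut = patterns @ W                      # precompute pattern @ W once -> K x M table

# Online: assign each spike row to its best-matching pattern (L1), keep the rest (L2).
overlap = S @ patterns.t()              # how much each pattern overlaps each row
assign = overlap.argmax(dim=1)          # pattern index per row
residual = S - patterns[assign]         # signed correction term

out = lut[assign] + residual @ W        # LUT lookup + residual correction
ref = S @ W
print(torch.allclose(out, ref, atol=1e-5), residual.abs().mean().item())
```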
Temporal Flexibility in Spiking Neural Networks: Towards Generalization Across Time Steps and Deployment Friendliness
Published: at 15:38
ICLR 2025 poster; it also seems to be doing elastic inference?
A Simple Framework for Contrastive Learning of Visual Representations
Published: at 13:42
The SimCLR contrastive learning paper. Could contrastive learning be used to align the features at every layer?
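For reference, the NT-Xent loss that SimCLR optimizes, in a minimal form; whether applying it to intermediate-layer features would actually align them is exactly the open question above, this only restates the loss.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """SimCLR's NT-Xent loss for a batch of paired embeddings.

    z1, z2: (B, D) features of two augmented views of the same B images.
    """
    B = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2B, D), unit norm
    sim = z @ z.t() / temperature                         # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                     # exclude self-pairs
    # The positive of sample i is its other view: i + B (or i - B in the second half).
    targets = torch.cat([torch.arange(B) + B, torch.arange(B)])
    return F.cross_entropy(sim, targets)

# e.g. loss = nt_xent(encoder(aug1(x)), encoder(aug2(x)))
```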
QKFormer: Hierarchical Spiking Transformer using Q-K Attention
Published: at 18:09
QKFormer, a NeurIPS 2024 Spotlight, pushes directly trained SNNs to much higher accuracy on ImageNet and CIFAR; any follow-up work will be hard to do without comparing against it.
Transformers without Normalization
Published: at 16:09
A new paper co-authored by Kaiming He that replaces normalization with DyT, turning a synchronization-heavy operation into a purely element-wise one. It is used in a recent paper, so worth studying.
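From memory of the paper's formulation, DyT is y = weight * tanh(alpha * x) + bias with a learnable scalar alpha, so there is no mean/variance reduction and nothing to synchronize; the initialization value below is an assumption.

```python
import torch
import torch.nn as nn

class DyT(nn.Module):
    """Dynamic Tanh: an element-wise stand-in for LayerNorm,
    y = weight * tanh(alpha * x) + bias, with no reduction over any dimension."""
    def __init__(self, dim, alpha_init=0.5):
        super().__init__()
        self.alpha = nn.Parameter(torch.full((1,), alpha_init))  # learnable scalar
        self.weight = nn.Parameter(torch.ones(dim))               # per-channel scale
        self.bias = nn.Parameter(torch.zeros(dim))                # per-channel shift

    def forward(self, x):
        return self.weight * torch.tanh(self.alpha * x) + self.bias

# drop-in where nn.LayerNorm(dim) would sit inside a Transformer block
x = torch.randn(2, 16, 768)
print(DyT(768)(x).shape)   # torch.Size([2, 16, 768])
```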
Visualizing and Understanding the Effectiveness of BERT
Published: at 10:21
While training SNNs recently I have been looking into how to visualize the loss during training, wondering whether a newly added method changes the model's loss landscape. Papers that explain how to visualize loss landscapes generally cite this paper's analysis and approach.
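The usual recipe those visualizations follow, sketched with my own names (per-tensor rescaling here rather than true per-filter normalization; the BERT paper additionally uses the start-to-end optimization direction as one axis): pick two directions around the trained weights and evaluate the loss on a 2-D grid.

```python
import torch

@torch.no_grad()
def loss_surface(model, loss_fn, inputs, targets, span=1.0, steps=21):
    """Evaluate loss(theta + a*d1 + b*d2) on a (steps x steps) grid around the
    trained weights theta, using two random, norm-matched directions."""
    theta = [p.detach().clone() for p in model.parameters()]

    def rand_dir():
        d = [torch.randn_like(p) for p in theta]
        # Rescale each direction tensor to the norm of its parameter tensor.
        return [di * (p.norm() / (di.norm() + 1e-10)) for di, p in zip(d, theta)]

    d1, d2 = rand_dir(), rand_dir()
    xs = torch.linspace(-span, span, steps)
    surface = torch.zeros(steps, steps)
    for i, a in enumerate(xs):
        for j, b in enumerate(xs):
            for p, t, u, v in zip(model.parameters(), theta, d1, d2):
                p.copy_(t + a * u + b * v)        # move to the grid point
            surface[i, j] = loss_fn(model(inputs), targets)
    for p, t in zip(model.parameters(), theta):    # restore the trained weights
        p.copy_(t)
    return xs, surface

# e.g. xs, Z = loss_surface(net, torch.nn.functional.cross_entropy, x_batch, y_batch)
```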