Continuing to read the papers.

I have read

MulTCIM: Digital Computing-in-Memory-Based Multimodal Transformer Accelerator With Attention-Token-Bit Hybrid Sparsity

and

TensorCIM: Digital Computing-in-Memory Tensor Processor With Multichip-Module-Based Architecture for Beyond-NN Acceleration

My annotations are here:

MulTCIM

TensorCIM

These two papers have some ideas in common, such as the cache-hit idea (see the sketch below).
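Since both papers lean on this idea, here is a minimal sketch of a cache-hit-style reuse check, assuming the shared mechanism is: before loading an operand tile into a CIM macro, check whether it is already resident and skip the refetch on a hit. The class name `CIMMacroCache`, the tile IDs, and the LRU policy are all illustrative assumptions of mine, not details from either paper.

```python
from collections import OrderedDict

class CIMMacroCache:
    """Tracks which operand tiles are resident in a CIM macro (LRU eviction).
    Illustrative only; not the actual MulTCIM/TensorCIM replacement policy."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.resident = OrderedDict()  # tile_id -> True, ordered by recency

    def access(self, tile_id: int) -> bool:
        """Return True on a hit (tile already resident), False on a miss."""
        if tile_id in self.resident:
            self.resident.move_to_end(tile_id)  # refresh LRU position
            return True
        if len(self.resident) >= self.capacity:
            self.resident.popitem(last=False)   # evict least-recently-used tile
        self.resident[tile_id] = True           # load the tile on a miss
        return False

# Usage: a hit means the tile can be reused without a costly reload.
cache = CIMMacroCache(capacity=2)
print([cache.access(t) for t in (0, 1, 0, 2, 1)])
# -> [False, False, True, False, False]
```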

Tomorrow I will continue with a new one: An 88.36TOPS/W Bit-Level-Weight-Compressed Large-Language-Model Accelerator with Cluster-Aligned INT-FP-GEMM and Bi-Dimensional Workflow Reformulation.

After all the papers have been read, I will complete the research assessment.