Abstract: Floating-point (FP) computing-in-memory (CIM) addresses the energy-efficiency bottleneck of von Neumann architectures and the accuracy limitations of fixed-point CIM in high-accuracy neural network training and inference.