Browse/Search Results: 1-10 of 50

A 512-nW 0.003-mm2 Forward-Forward Black Box Trainer for an Analog Voice Activity Detector in 28-nm CMOS Journal article
Li, Junde, Xin, Guoqiang, Yu, Wei Han, Un, Ka Fai, Martins, Rui P., Mak, Pui In. A 512-nW 0.003-mm2 Forward-Forward Black Box Trainer for an Analog Voice Activity Detector in 28-nm CMOS[J]. IEEE Transactions on Circuits and Systems II-Express Briefs, 2024, 71(11), 4703-4707.
Authors:  Li, Junde;  Xin, Guoqiang;  Yu, Wei Han;  Un, Ka Fai;  Martins, Rui P.; et al.
TC[WOS]:0 TC[Scopus]:0 | IF:4.0/3.7 | Submit date:2024/08/29
Keywords: Voice Activity Detection; Convolutional Neural Network; Edge Learning; Forward-Forward Algorithm; Black Box Training; Back Propagation
A 28-nm 18.7 TOPS/mm2 89.4-to-234.6 TOPS/W 8b Single-Finger eDRAM Compute-in-Memory Macro With Bit-Wise Sparsity Aware and Kernel-Wise Weight Update/Refresh Journal article
Zhan, Yi, Yu, Wei Han, Un, Ka Fai, Martins, Rui P., Mak, Pui In. A 28-nm 18.7 TOPS/mm2 89.4-to-234.6 TOPS/W 8b Single-Finger eDRAM Compute-in-Memory Macro With Bit-Wise Sparsity Aware and Kernel-Wise Weight Update/Refresh[J]. IEEE Journal of Solid-State Circuits, 2024, 59(11), 3866-3876.
Authors:  Zhan, Yi;  Yu, Wei Han;  Un, Ka Fai;  Martins, Rui P.;  Mak, Pui In
TC[WOS]:0 TC[Scopus]:0 | IF:4.6/5.6 | Submit date:2024/05/16
Keywords: Compute-in-Memory (CIM); Deep Neural Network (DNN); Embedded Dynamic Random Access Memory (eDRAM); Input Sparsity; Single-Finger (SF); Weight Update/Refresh
An FPGA-Based Transformer Accelerator With Parallel Unstructured Sparsity Handling for Question-Answering Applications Journal article
Cao, Rujian, Zhao, Zhongyu, Un, Ka Fai, Yu, Wei Han, Martins, Rui P., Mak, Pui In. An FPGA-Based Transformer Accelerator With Parallel Unstructured Sparsity Handling for Question-Answering Applications[J]. IEEE Transactions on Circuits and Systems II-Express Briefs, 2024, 71(11), 4688-4692.
Authors:  Cao, Rujian;  Zhao, Zhongyu;  Un, Ka Fai;  Yu, Wei Han;  Martins, Rui P.; et al.
TC[WOS]:0 TC[Scopus]:0 | IF:4.0/3.7 | Submit date:2024/10/10
Keywords: Sparse Matrices; Computational Modeling; Transformers; Hardware; Energy Efficiency; Circuits; Throughput; Dataflow; Digital Accelerator; Energy-Efficient; Field-Programmable Gate Array (FPGA); Sparsity; Transformer
A 5T-SRAM Based Computing-in-Memory Macro Featuring Partial Sum Boosting and Analog Non-Uniform Quantization Conference paper
Xin, Guoqiang, Tan, Fei, Li, Junde, Chen, Junren, Yu, Wei Han, Un, Ka Fai, Martins, Rui P., Mak, Pui In. A 5T-SRAM Based Computing-in-Memory Macro Featuring Partial Sum Boosting and Analog Non-Uniform Quantization[C]:Institute of Electrical and Electronics Engineers Inc., 2024, 882-887.
Authors:  Xin, Guoqiang;  Tan, Fei;  Li, Junde;  Chen, Junren;  Yu, Wei Han; et al.
TC[WOS]:0 TC[Scopus]:0 | Submit date:2024/10/10
Keywords: 5T-SRAM; Analog Non-Uniform Quantization (ANUQ); Computing-in-Memory (CIM); Machine Learning (ML); Matrix-Vector Multiplication (MVM); Partial Sum Boosting (PSB)
A 1.8% FAR, 2 ms Decision Latency, 1.73 nJ/Decision Keywords-Spotting (KWS) Chip Incorporating Transfer-Computing Speaker Verification, Hybrid-IF-Domain Computing and Scalable 5T-SRAM Journal article
Tan, Fei, Yu, Wei Han, Lin, Jinhai, Un, Ka Fai, Martins, Rui P., Mak, Pui In. A 1.8% FAR, 2 ms Decision Latency, 1.73 nJ/Decision Keywords-Spotting (KWS) Chip Incorporating Transfer-Computing Speaker Verification, Hybrid-IF-Domain Computing and Scalable 5T-SRAM[J]. IEEE Journal of Solid-State Circuits, 2024.
Authors:  Tan, Fei;  Yu, Wei Han;  Lin, Jinhai;  Un, Ka Fai;  Martins, Rui P.; et al.
Submit date:2024/08/19
An FPGA-Based Transformer Accelerator with Parallel Unstructured Sparsity Handling for Question-Answering Applications Journal article
Cao, Rujian, Zhao, Zhongyu, Un, Ka Fai, Yu, Wei Han, Martins, Rui P., Mak, Pui In. An FPGA-Based Transformer Accelerator with Parallel Unstructured Sparsity Handling for Question-Answering Applications[J]. IEEE Transactions on Circuits and Systems II: Express Briefs, 2024.
Authors:  Cao, Rujian;  Zhao, Zhongyu;  Un, Ka Fai;  Yu, Wei Han;  Martins, Rui P.; et al.
Submit date:2024/08/29
A Delta-Sigma-Based Computing-In-Memory Macro Targeting Edge Computation Conference paper
Zhang, Ran, Un, Ka Fai, Guo, Mingqiang, Qi, Liang, Xu, Dengke, Zhao, Weibing, Martins, Rui P., Maloberti, Franco, Sin, Sai Weng. A Delta-Sigma-Based Computing-In-Memory Macro Targeting Edge Computation[C]:IEEE, 2024.
Authors:  Zhang, Ran;  Un, Ka Fai;  Guo, Mingqiang;  Qi, Liang;  Xu, Dengke; et al.
TC[WOS]:0 TC[Scopus]:0 | Submit date:2024/08/19
Keywords: Machine Learning; Edge Computation; Computing-in-Memory; Delta-Sigma Converter; Floating Inverter Amplifier
A 90.7-nW Vibration-Based Condition Monitoring Chip Featuring a Digital Compute-in-Memory-Based DNN Accelerator Using an Ultra-Low-Power 13T-SRAM Cell Journal article
Zhang, Haochen, Yu, Wei Han, Yang, Zhizhan, Un, Ka Fai, Yin, Jun, Martins, Rui P., Mak, Pui In. A 90.7-nW Vibration-Based Condition Monitoring Chip Featuring a Digital Compute-in-Memory-Based DNN Accelerator Using an Ultra-Low-Power 13T-SRAM Cell[J]. IEEE Journal of Solid-State Circuits, 2024.
Authors:  Zhang, Haochen;  Yu, Wei Han;  Yang, Zhizhan;  Un, Ka Fai;  Yin, Jun; et al.
TC[WOS]:0 TC[Scopus]:0 | IF:4.6/5.6 | Submit date:2024/08/05
Keywords: 13T-SRAM; Accelerometer Sensor; Compute-in-Memory (CIM); Deep Neural Network (DNN); Feature Extractor; Internet-of-Things; Ultra-Low Power (ULP); Vibration-Based Condition Monitoring (VBCM)
FLEX-CIM: A Flexible Kernel Size 1-GHz 181.6-TOPS/W 25.63-TOPS/mm2 Analog Compute-in-Memory Macro Journal article
Fu, Yuzhao, Yu, Wei Han, Un, Ka Fai, Chan, Chi Hang, Zhu, Yan, Zhang, Minglei, Martins, Rui P., Mak, Pui In. FLEX-CIM: A Flexible Kernel Size 1-GHz 181.6-TOPS/W 25.63-TOPS/mm2 Analog Compute-in-Memory Macro[J]. IEEE Journal of Solid-State Circuits, 2024.
Authors:  Fu, Yuzhao;  Yu, Wei Han;  Un, Ka Fai;  Chan, Chi Hang;  Zhu, Yan; et al.
TC[WOS]:1 TC[Scopus]:1 | IF:4.6/5.6 | Submit date:2024/05/16
Keywords: Analog Partial Sum (APS); Compute-in-Memory (CIM); Convolutional Neural Network (CNN); Flexible Kernel Size; Utilization
A 119.64 GOPs/W FPGA-Based ResNet50 Mixed-Precision Accelerator Using the Dynamic DSP Packing Journal article
Yaozhong Ou, Wei-Han Yu, Ka-Fai Un, Chi-Hang Chan, Yan Zhu. A 119.64 GOPs/W FPGA-Based ResNet50 Mixed-Precision Accelerator Using the Dynamic DSP Packing[J]. IEEE Transactions on Circuits and Systems II: Express Briefs, 2024.
Authors:  Yaozhong Ou;  Wei-Han Yu;  Ka-Fai Un;  Chi-Hang Chan;  Yan Zhu
IF:4.0/3.7 | Submit date:2024/08/07