Xing Hu
Professor
Email: huxing@ict.ac.cn

Education

    2009.09 — 2014.07  Ph.D.  State Key Lab of Computer Architecture, Institute of Computing Technology (ICT), University of Chinese Academy of Sciences (UCAS)

2005.09 — 2009.07  B.S.  Department of Computer Science and Technology, Huazhong University of Science and Technology (HUST)

Professional Experience 

    2020.04 –– Present, Associate Professor, Institute of Computing Technology, Chinese Academy of Sciences.

2017.01 –– 2020.03 Postdoc, University of California, Santa Barbara, Department of Electrical and Computer Engineering.   Advisor: Yuan Xie

2014.07 –– 2016.12 Research Scientist, HUAWEI Technologies, Shannon Cognitive Lab. 

    Selected Publications

    • Xing Hu, Ling Liang, Xiaobing Chen, Lei Deng, Yu Ji, Yufei Ding, Zidong Du, Qi Guo, Tim Sherwood, Yuan Xie, A Systematic View of Model Leakage Risks in Deep Neural Network Systems, IEEE Transactions on Computers (TC), 2022. (CCF-A)

    • Husheng Han, Kaidi Xu, Xing Hu#, Xiaobing Chen, Ling Liang, Zidong Du, Qi Guo, Yanzhi Wang, Yunji Chen, ScaleCert: Scalable Certified Defense against Adversarial Patches with Sparse Superficial Layers, Advances in Neural Information Processing Systems (NeurIPS), 2021. (CCF-A)

    • Ling Liang*, Xing Hu*, Lei Deng, Yujie Wu, Guoqi Li, Yufei Ding, Peng Li, Yuan Xie, Exploring Adversarial Attack in Spiking Neural Networks with Spike-Compatible Gradient, IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2021. (JCR-1)

    • Xinkai Song, Tian Zhi, Zhe Fan, Zhenxing Zhang, Xi Zeng, Wei Li, Xing Hu, Zidong Du, Qi Guo, Yunji Chen, Cambricon-G: A Polyvalent Energy-Efficient Accelerator for Dynamic Graph Neural Networks, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), 2021. (CCF-A)

    • Yongwei Zhao, Chang Liu, Zidong Du, Qi Guo, Xing Hu, Yimin Zhuang, Zhenxing Zhang, Xinkai Song, Wei Li, Xishan Zhang, Ling Li, Zhiwei Xu, Tianshi Chen, Cambricon-Q: a hybrid architecture for efficient training, in International Symposium on Computer Architecture (ISCA), 2021. (CCF-A)

    • Xinfeng Xie, Zheng Liang, Peng Gu, Abanti Basak, Lei Deng, Ling Liang, Xing Hu, Yuan Xie, SpaceA: Sparse Matrix Vector Multiplication on Processing-in-Memory Accelerator, in IEEE International Symposium on High-Performance Computer Architecture (HPCA), 2021. (CCF-A)

    • Yuanbo Wen, Qi Guo, Zidong Du, Jianxing Xu, Zhenxing Zhang, Xing Hu, Wei Li, Rui Zhang, Chao Wang, Zhou Xuehai, Tianshi Chen, Enabling One-size-fits-all Compilation Optimization across Machine Learning Computers for Inference, in IEEE Transactions on Computers (TC), 2021. (CCF-A)

    • Xing Hu, Ling Liang, Lei Deng, Shuangchen Li, Pengfei Zuo, Xinfeng Xie, Yu Ji, Yufei Ding, Timothy Sherwood, Yuan Xie, DeepSniffer: A Neural Network Model Extraction Framework by Learning Architecture Hints, in ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2020. (CCF-A)

    • Xing Hu, Yang Zhao, Lei Deng, Ling Liang, Pengfei Zuo, Yingyan Lin, Yuan Xie, Hardware Trojaning in Neural Network Accelerator, submitted to IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD). (CCF-A)

    • Xing Hu, Matheus Ogleari, Jishen Zhao, Shuangchen Li, Abanti Basak, Yuan Xie, Persistence parallelism optimization: a holistic approach from memory bus to RDMA network, in IEEE/ACM International Symposium on Microarchitecture (MICRO), pp 494-506, 2018. (CCF-A)

    • Xing Hu, Dylan Stow, Yuan Xie, Die stack is happening, in IEEE Micro, pp. 22-28, 2018.

    • Jilan Lin, Cheng-Da Wen, Xing Hu, Tianqi Tang, Chao Lin, Yu Wang, Yuan Xie, Rescuing RRAM-Based Computing From Static and Dynamic Faults, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), 2020. (CCF-A)

    • Pengfei Zuo, Yu Hua, Ling Liang, Xinfeng Xie, Xing Hu, Yuan Xie, Sealing neural network models in secure deep learning accelerators, Design Automation Conference (DAC), 2020. (CCF-A)

    • Zhaodong Chen, Lei Deng, Guoqi Li, Jiawei Sun, Xing Hu, Ling Liang, Yufei Ding, Yuan Xie, Effective and efficient batch normalization using a few uncorrelated data for statistics estimation, IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2020. (JCR-1)

    • Yang Zhao*, Xing Hu*, Shuangchen Li, Jing Ye, Lei Deng, Yu Ji, Jianyu Xu, Dong Wu, and Yuan Xie. Memory Trojan Attacks on Neural Network Accelerators, in Design Automation and Test in Europe (DATE), pp. 1402-1407, 2019.

    • Mingyu Yan, Lei Deng, Xing Hu, Ling Liang, Yujing Feng, Xiaochun Ye, Zhimin Zhang, Dongrui Fan, Yuan Xie, HyGCN: A GCN Accelerator with Hybrid Architecture, in High Performance Computer Architecture (HPCA), 2020. (CCF-A)

    • Mingyu Yan, Xing Hu, Shuangchen Li, Abanti Basak, Han Li, Xin Ma, Itir Akgun, Yujing Feng, Peng Gu, Lei Deng, Xiaochun Ye, Zhimin Zhang, Dongrui Fan, Yuan Xie, Alleviating Irregularity in Graph Analytics Acceleration: a Hardware/Software Co-Design Approach, in IEEE/ACM International Symposium on Microarchitecture (MICRO), 2019. (CCF-A)

    • Lei Deng, Yujie Wu, Xing Hu, Ling Liang, Yufei Ding, Guoqi Li, Guangshe Zhao, Peng Li, Yuan Xie, Rethinking the Performance Comparison between SNNs and ANNs, in Neural Networks, 2019.

    • Wenqin Huangfu, Shuangchen Li, Xing Hu, Yuan Xie, RADAR: a 3D-ReRAM based DNA Alignment Accelerator Architecture, in Design Automation Conference (DAC), pp. 59-64, 2018. (CCF-A)

    • Xinfeng Xie, Xing Hu, Peng Gu, Shuangchen Li, Yu Ji, Yuan Xie, NNBench-X: Benchmarking and Understanding Neural Network Workloads for Accelerator Designs, in IEEE Computer Architecture Letters (CAL), pp. 38-42, 2019.

    • Mingyu Yan, Xing Hu, Shuangchen Li, Itir Akgun, Han Li, Xin Ma, Lei Deng, Xiaochun Ye, Zhimin Zhang, Dongrui Fan, and Yuan Xie, Balancing Memory Accesses for Energy-Efficient Graph Analytics Accelerators, in ACM/IEEE International Symposium on Low Power Electronics and Design (ISLPED), 2019.

    • Shuangchen Li, Alvin Oliver Glova, Xing Hu, Peng Gu, Dimin Niu, Krishna T. Malladi, Hongzhong Zheng, Yuan Xie, SCOPE: a Stochastic Computing Engine for DRAM-based In-situ Accelerator, in IEEE/ACM International Symposium on Microarchitecture (MICRO), pp. 696-709, 2018. (CCF-A)

    • Wenqin Huangfu, Xueqi Li, Shuangchen Li, Xing Hu, Peng Gu, Yuan Xie, MEDAL: Scalable DIMM-based Near Data Processing Accelerator for DNA Seeding Algorithm, in IEEE/ACM International Symposium on Microarchitecture (MICRO), 2019. (CCF-A)

    • Abanti Basak, Xing Hu, Shuangchen Li, Sang Min Oh, Yuan Xie, Exploring Core and Cache Hierarchy Bottlenecks in Graph Processing Workloads, in IEEE Computer Architecture Letters (CAL), pp. 197-200, 2018.

    • Abanti Basak, Shuangchen Li, Xing Hu, Sang Min Oh, Yuan Xie, Analysis and Optimization of the Memory Hierarchy for Graph Processing Workloads, in High Performance Computer Architecture (HPCA), pp 373-386, 2019. (CCF-A)

    • Liu Liu, Lei Deng, Xing Hu, Maohua Zhu, Guoqi Li, Yufei Ding, Yuan Xie, Dynamic Sparse Graph for Efficient Deep Learning, in International Conference on Learning Representations (ICLR), 2019.  

    • Lin Ning, Hang Lu, Xing Hu, Xiaowei Li, When Deep Learning Meets the Edge: AutoMasking Deep Neural Networks for Efficient Machine Learning on Edge Devices, to appear in IEEE International Conference on Computer Design (ICCD), 2019.

    • Yu Ji, Youyang Zhang, Xinfeng Xie, Shuangchen Li, Peiqi Wang, Xing Hu, Youhui Zhang, Yuan Xie, FPSA: A Full System Stack Solution for Reconfigurable ReRAM-based NN Accelerator Architecture, in ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), pp. 733-747, 2019. (CCF-A)

    • Jilan Lin, Shuangchen Li, Xing Hu, Lei Deng, Yuan Xie, CNNWire: Boosting Convolutional Neural Network with Winograd on ReRAM based Accelerators, in Great Lakes Symposium on VLSI (GLSVLSI), pp 283-286, 2019.

    • Kun Wu, Guohao Dai, Xing Hu, Shuangchen Li, Yu Wang, Yuan Xie, Memory-bounded Proof of Work Acceleration for Block-chain Applications. in Design Automation Conference (DAC), 2019. (CCF-A)

    • Ling Liang, Lei Deng, Yueling Zeng, Xing Hu, Yu Ji, Xin Ma, Guoqi Li, Yuan Xie, Crossbar-aware neural network pruning, in IEEE Access 6:58324-58337, 2018.

    • Lei Deng, Ling Liang, Guanrui Wang, Liang Chang, Xing Hu, Xin Ma, Liu Liu, Jing Pei, Guoqi Li, and Yuan Xie, SemiMap: A Semi-folded Convolution Mapping for Speed-Overhead Balance on Crossbars, in IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), 2018. (CCF-A)

    • Lei Deng, Zhe Zou, Xin Ma, Ling Liang, Guanrui Wang, Xing Hu, Liu Liu, Jing Pei, Guoqi Li, Yuan Xie, Fast Object Tracking on a Many-core Neural Network Chip, in Frontiers in Neuroscience, section Neuromorphic Engineering, 2018.

    • Guoqing Chen, Yi Xu, Xing Hu, Xiangyang Guo, Jun Ma, Yu Hu, Yuan Xie, TSocket: Thermal-sustainable Power Budgeting, in ACM Transactions on Design Automation of Electronic Systems (TODAES), 21(2):29, 2016.

    • Xing Hu, Yi Xu, Jun Ma, Guoqing Chen, Yu Hu, and Yuan Xie, TSocket: Thermal-sustainable Power Budgeting for Dynamic Threading, in Design Automation Conference (DAC), pp. 181-187, 2014. (CCF-A)

    • Xing Hu, Guihai Yan, Yu Hu, and Xiaowei Li, Orchestrator: Guarding against Voltage Emergencies in Multi-threaded Applications, in IEEE Transactions on VLSI Systems (TVLSI), 22(12):2476-2487, 2014.

    • Xing Hu, Yi Xu, Yu Hu, and Yuan Xie, Swimming Lane: a composite design to mitigate voltage droop effects in 3D chips, in Asia and South Pacific Design Automation Conference (ASPDAC), pp. 550-555, 2014.

    • Xing Hu, Guihai Yan, Yu Hu, and Xiaowei Li, Orchestrator: a low-cost solution to reduce voltage emergencies for multi-threaded applications, in proceedings of Conference on Design, Automation and Test in Europe (DATE), pp. 208-213, 2013.

    • Songjun Pan, Yu Hu, Xing Hu, and Xiaowei Li, A Cost-effective Substantial-impact-filter Based Method to Tolerate Voltage Emergencies, in proceedings of Conference on Design, Automation and Test in Europe (DATE), pp. 311-316, 2011.




    Research Interests

  • Her research interests lie at the intersection of computer architecture and machine learning: designing systems for more efficient and robust AI, and AI-driven design of computing architectures and systems. She has published more than 30 papers in top-tier venues, including ASPLOS, MICRO, ISCA, HPCA, DAC, NeurIPS, ICLR, TC, TNNLS, and TCAD.

    Patents

    • Data transmission method and apparatus, Xing Hu, Yu Hu, Xiaowei Li, (Granted), US10069604B2.  

    • Voltage droop mitigation in 3D chip system, Yi Xu, Xing Hu, Yuan Xie, (Granted), US9595508B2

    • Memory refresh technology and computer system, Xing Hu, Chuanzeng Liang, Shihai Xiao, Kanwen Wang, WO2018188083A1

    • Chip having extensible memory, Daifen, Xing Hu, Jun Xu, Yuangang Wang, WO2018058430A1

    • Scheduling method and device for memory access instruction, and computer system, Xing Hu, Yuntan Fang, Shihai Xiao, WO2017201693A1


    Additional Professional Service

    • Program committee member

    • ICCAD’2022, DAC’2022, MICRO’2021, ICCAD’2021, DAC’2021, ICCAD’2020

    • Session Chair:

    • MICRO’2021, DAC’2020

    • External review committee member

    • ISCA'2022, ASPLOS'2021, ASPLOS'2020, HPCA'2021, HPCA'2020
