April 14, 2023 (Friday)
Location: ECE 202, New Jersey Institute of Technology, Newark, NJ, USA
Research on Artificial Intelligence (AI) accelerator designs for edge devices has attracted vast interest, and accelerating Deep Neural Networks (DNNs) using Processing-in-Memory (PIM) and Processing-in-Sensor (PIS) platforms is an actively explored direction with great potential. Such accelerators, which simultaneously address the power- and memory-wall bottlenecks, have shown orders-of-magnitude performance enhancement compared to conventional computing platforms built on the von Neumann architecture. As one direction for accelerating DNNs in PIM/PIS, the resistive memory array (a.k.a. crossbar) has drawn great research interest owing to its analog current-mode weighted-summation operation, which intrinsically matches the dominant Multiply-and-Accumulate (MAC) operation in DNNs, making it one of the most promising candidates. An alternative direction is through bulk bit-wise logic operations performed directly on the content of digital memories.
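To make the crossbar-to-MAC correspondence concrete, the following minimal sketch (in Python, with hypothetical conductance and voltage values rather than parameters of any specific device) shows that the bit-line currents obtained from Kirchhoff's current law are exactly the weighted summations a DNN layer computes:

import numpy as np

# Hypothetical crossbar: one column of conductances per bit line (output neuron),
# and one read voltage per word line (input activation). Values are illustrative only.
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # cell conductances in siemens
v = rng.uniform(0.0, 0.2, size=4)          # word-line read voltages in volts

# Kirchhoff's current law sums the per-cell currents on each bit line,
# so each column current is a weighted summation: I_j = sum_i G_ij * V_i.
i_bitline = G.T @ v

# This is the same arithmetic as the DNN MAC operation y_j = sum_i w_ij * x_i.
reference = np.array([sum(G[i, j] * v[i] for i in range(4)) for j in range(3)])
assert np.allclose(i_bitline, reference)
print(i_bitline)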
The main goal of this seasonal school is to dive deep into the rapidly developing field of PIM and PIS, with a focus on intelligent memory and sensor circuits and systems at the edge, and to cover their cross-layer design challenges from devices to algorithms. The IEEE Seasonal School in Circuits and Systems on Intelligent Memory & Sensor at the Edge (IMS 2023) offers talks and tutorials by leading researchers from multiple disciplines and prominent universities, and features student short presentations to showcase new research results, discuss the potential and challenges of edge accelerators as well as future research needs and directions, and shape collaborations.
Registration
Postdocs and Ph.D./M.Sc./undergraduate students are encouraged to register for the event. Registration Link
Program Schedule (Times in EST)
9:00 AM - 9:30 AM
Breakfast, Coffee, Registration, Networking
9:30 AM - 10:00 AM
Student Short Presentations
1- Aseel Zeinati (Switching Devices for Neuromorphic Computing)
2- Mahmoud Nazzal (Centralized and Decentralized Graph Neural Networks)
3-
10:00 AM - 10:10 AM
Opening Remarks by Dr. Angizi
10:10 AM - 10:50 AM
Talk I: Dr. Maryam Parsa (George Mason University)
Title: Bayesian Brain-Inspired Computing
Abstract: Neuromorphic computing is a novel computing paradigm that aims to mimic the biological processes of the human brain in silicon-based hardware. By doing so, it has the potential to revolutionize computing by enabling faster and more efficient cognitive tasks. In this talk, I will explore the key aspects of neuromorphic computing, including what it is, why it is important, how it works, and when it can be used. Through various case studies, I will also examine different learning approaches within neuromorphic computing, such as Bayesian and evolutionary learning, and neural architecture search for algorithm-hardware co-design. Additionally, I will discuss our latest advancements in event-based cameras and their potential for enabling ultra-fast and robust decision-making in cognitive tasks.
Bio: Maryam Parsa has been an Assistant Professor in the Electrical and Computer Engineering (ECE) Department at George Mason University (GMU) since August 2021. Prior to joining GMU, she was a Postdoctoral Researcher at Oak Ridge National Laboratory (ORNL). She received her Ph.D. in ECE from the Center for Brain-Inspired Computing (C-BRIC) at Purdue University in 2020. She is the recipient of several prestigious awards, including the four-year Intel/SRC Ph.D. Fellowship, the ORNL ASTRO Fellowship, the Purdue University Ross Fellowship, the TECHCON'18 Best Presentation Award, and the ICONS'21 Best Paper Award. Her research interests are in the areas of neuromorphic computing, hyperdimensional computing, Bayesian-evolutionary learning, and algorithm-hardware co-design.
10:50 AM - 11:30 AM
Talk II: Dr. Yao Ma (New Jersey Institute of Technology)
Title: Understanding and Enhancing Graph Neural Networks with A Unified Framework
Abstract: Graph Machine Learning (graph ML) is a rapidly growing field in Data Science and Artificial Intelligence that holds great potential for intelligent advancements in various applications. Recent advances in graph ML have been centered around Graph Neural Networks (GNNs). The extensive exploration and broad adoption of GNNs have led to an initial series of successes in numerous applications. However, the limitations of vanilla GNNs, including their vulnerability to adversarial data perturbations and limited applicability to diverse graph types, have been exposed. To address these challenges, in this talk, I will present a unified framework of GNNs by connecting them with the graph signal denoising problem. This framework offers a novel and principled approach to designing GNN architectures that meet various demands. I will illustrate the versatility and potential of the unified framework with some examples. In particular, I will describe how to design intrinsically robust GNNs with this unified framework. Also, I will demonstrate how to utilize the framework for understanding GNNs' behavior on graphs with heterophily.
Bio: Yao Ma is an Assistant Professor in the Department of Computer Science at the New Jersey Institute of Technology (NJIT). He received his Ph.D. in Computer Science from Michigan State University (MSU) in 2021, with a focus on machine learning with graph-structured data. His research contributions to this area have led to numerous innovative works presented at top-tier conferences such as KDD, WWW, WSDM, ICLR, NeurIPS, and ICML. He has also organized and presented several well-received tutorials at AAAI and KDD, attracting over 1,000 attendees. He is the author of the book "Deep Learning on Graphs", which has been downloaded tens of thousands of times from over 100 countries. He was awarded the Outstanding Graduate Student Award (2019-2020) from the College of Engineering at MSU.
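For readers unfamiliar with the graph-signal-denoising view mentioned in the abstract above, a commonly used background formulation (stated here as an assumption for illustration; the talk's exact framework may differ) seeks smoothed features $F$ from input node features $X$ on a graph with normalized Laplacian $\tilde{L}$:

\[ \min_{F} \; \lVert F - X \rVert_F^2 + c\,\operatorname{tr}\!\bigl(F^{\top} \tilde{L} F\bigr), \qquad c > 0, \]

where the first term keeps $F$ close to the observed features and the second term penalizes feature differences across edges. A single gradient-descent step from $F = X$ with step size $1/(2c)$ gives $F \leftarrow \tilde{A}X$ with $\tilde{A} = I - \tilde{L}$, which is the feature-propagation step of GCN-style layers; varying the regularizer yields other GNN architectures.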
11:30 AM - 1:00 PM
Lunch and Networking
1:00 PM - 1:40 PM
Talk III: Dr. Adnan Siraj Rakin (Binghamton University (SUNY))
Title: Exploring Security and Privacy Challenges through Adversarial Weight Perturbation in Deep Learning Models
Abstract: In the emerging artificial intelligence era, deep neural networks (DNNs), a.k.a. deep learning, have gained unprecedented success in various applications. However, DNNs are usually storage-intensive, computation-intensive, and very energy-consuming, thereby posing severe challenges to their wide deployment in many application scenarios, especially resource-constrained low-power IoT applications and embedded systems. In this talk, I will introduce my group's algorithm/hardware co-design work on energy-efficient DNNs, from both the sparse and low-rank perspectives. First, I will show the benefit of using structured and unstructured sparsity of DNNs for designing low-latency and low-power DNN hardware accelerators. In the second part of my talk, I will present an algorithm/hardware co-design framework that leverages low tensor rankness toward energy-efficient, high-accuracy DNN models and accelerators.
Bio: Adnan Siraj Rakin is an Assistant Professor in the Department of Computer Science at Binghamton University (SUNY). Previously, he completed his Ph.D. (2022) and Master's (2021) in Computer Engineering at Arizona State University (ASU). His research interests include deep learning, computer vision, and security. He is the author/co-author of many publications in IEEE/ACM top-tier journals and conferences (e.g., CVPR, ICCV, T-PAMI, USENIX Security) on the broad topic of Machine Learning Security. He received the 2022 Dean's Dissertation Award from Arizona State University (ASU) in recognition of his contribution to Machine Learning Security.
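As a rough illustration of the "low tensor rankness" idea mentioned in the abstract above (a generic sketch of low-rank factorization, not the speaker's specific method), the snippet below replaces a dense layer with a rank-r factorization and reports the resulting compute ratio and approximation error:

import numpy as np

# Hypothetical dense layer weight (out_dim x in_dim); values are illustrative only.
rng = np.random.default_rng(1)
W = rng.standard_normal((256, 512))

# Truncated SVD keeps the top-r singular directions: W ~= A @ B with A = U_r * s_r, B = Vt_r.
r = 32
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * s[:r]           # 256 x r factor
B = Vt[:r, :]                  # r x 512 factor

x = rng.standard_normal(512)
y_full = W @ x                 # dense layer: 256*512 multiply-accumulates
y_lowrank = A @ (B @ x)        # factorized layer: r*(256+512) multiply-accumulates

ratio = (r * (256 + 512)) / (256 * 512)
err = np.linalg.norm(y_full - y_lowrank) / np.linalg.norm(y_full)
print(f"compute ratio: {ratio:.2f}, relative error: {err:.3f}")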
1:40 PM - 2:20 PM
Talk IV: Dr. Shaahin Angizi (New Jersey Institute of Technology)
Title: Smart Edge for Neural Network Acceleration
Abstract: Internet of Things (IoT) devices are projected to reach a $1,100B market by 2025, with an interconnected web projected to comprise more than 75 billion devices. A large number of these IoT devices consist of sensory imaging systems that enable massive data collection from the environment and people. However, a considerable portion of the captured sensory data is redundant and unstructured. Converting such large volumes of raw data, storing them in volatile memories, transmitting them, and computing on them in on-/off-chip processors impose high energy consumption, latency, and a memory bottleneck at the edge. In this talk, I will focus on the AppCiP architecture as a sensing-and-computing integration design that efficiently enables Artificial Intelligence (AI) on resource-limited sensing devices. AppCiP provides a number of unique capabilities, including instant and reconfigurable RGB-to-grayscale conversion, highly parallel analog convolution-in-pixel, and support for low-precision quinary weight neural networks. These features significantly mitigate the overhead of analog-to-digital converters and analog buffers, leading to a considerable reduction in power consumption and area overhead. Our circuit-to-application co-simulation results demonstrate that AppCiP achieves roughly three orders of magnitude higher power efficiency than the fastest existing designs across different CNN workloads, reaching a frame rate of 3000 fps and an efficiency of ~4.12 TOp/s/W.
Bio: Shaahin Angizi is an Assistant Professor in the Department of Electrical and Computer Engineering at the New Jersey Institute of Technology (NJIT) and the director of the Advanced Circuit-to-Architecture Design Laboratory (ACAD Lab). He received his Ph.D. in Electrical Engineering from the School of Electrical, Computer and Energy Engineering at Arizona State University (ASU). His research interests include the cross-layer design of energy-efficient and high-performance processing-in-memory, processing-in-sensor, and ASIC platforms to enhance complex machine-learning tasks, bioinformatics, and graph processing. He has authored and co-authored 90+ research articles in top-ranked international journals and top-tier electronic design automation conferences such as IEEE TNANO, IEEE TCAD, IEEE TC, IEEE TCAS-I, IEEE TETC, DAC, DATE, ICCAD, ASP-DAC, etc. He received the "Best Ph.D. Research Award - 1st Place" at the Design Automation Conference's Ph.D. Forum in 2018, two Best Paper awards at the IEEE Computer Society Annual Symposium on VLSI (ISVLSI) in 2017 and 2018, and a Best Paper award at the ACM Great Lakes Symposium on VLSI in 2019. He has served as a technical reviewer for over 30 international journals/conferences, such as IEEE TC, TVLSI, TCAD, TNANO, TCAS, ESL, ACM JETC, MICRO, DAC, ASP-DAC, DATE, ICCAD, ICCD, GLSVLSI, ISVLSI, etc. For more information, please see http://shaahinangizi.com.
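As a small, hedged illustration of what "low-precision quinary weight neural networks" implies (a generic five-level quantizer with a hypothetical scaling rule, not the AppCiP quantization scheme itself), the snippet below maps real-valued weights onto the levels {-2, -1, 0, +1, +2} times a scale factor:

import numpy as np

def quantize_quinary(w, step=None):
    # Map real-valued weights onto five levels {-2, -1, 0, +1, +2} * step.
    # The scaling rule below is an assumption chosen for illustration.
    if step is None:
        max_abs = np.abs(w).max()
        step = max_abs / 2.0 if max_abs > 0 else 1.0
    levels = np.clip(np.round(w / step), -2, 2)
    return levels * step, levels

rng = np.random.default_rng(2)
w = rng.standard_normal((3, 3)) * 0.1        # a hypothetical 3x3 convolution kernel
w_q, levels = quantize_quinary(w)
print(levels)                                 # integer levels in {-2, ..., +2}
print(np.abs(w - w_q).mean())                 # mean quantization error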
2:20 PM - 3:00 PM
Talk V: Dr. Ajay K. Poddar (Synergy Microwave)
Title: Design Challenges in High-Frequency Low-Phase-Noise Signal Sources
Abstract: High-frequency, low-phase-noise signal sources are critical modules for modern communication systems. Their design is challenging but holds great promise for 5G and IoT applications and for improving the efficiency of modern communication systems. Reducing noise and power consumption is a major challenge in wireless, biomedical sensor, radio-frequency identification (RFID), and deep-space communication applications. As a vital factor affecting system cost and lifetime, energy consumption is an emerging and challenging research topic. This talk presents a practical design approach for low-phase-noise signal sources targeting operating frequencies from 10 MHz to 100 GHz. Examples include crystal oscillators, SAW oscillators, dielectric resonator oscillators, planar resonator oscillators, and opto-electronic oscillators.
Bio: Ajay K. Poddar is an IEEE Fellow and Chief Scientist at Synergy Microwave, NJ, USA, responsible for the design and development of signal-generation and signal-processing electronics for industrial, medical, and space applications. He also serves as a visiting professor at the University of Oradea, Romania, and the Indian Institute of Technology Jammu, India, and as a guest lecturer at the Technical University of Munich, Germany. He graduated from IIT Delhi, India, received his Doctorate (Dr.-Ing.) from the Technical University of Berlin, Germany, and his Post-Doctorate (Dr.-Ing. habil.) from Brandenburg Technical University Cottbus, Germany. He has received over a dozen awards and holds 30+ patents for technological innovation and leadership, has published 250+ scientific papers in journals, magazines, and conference proceedings, and has co-authored four technical books/chapters. For the past 30 years, he has served on several scientific committees, professional societies, and voluntary organizations. He has served as an elected Member-at-Large of the IEEE MTT-S AdCom and as an AdCom Member of IEEE AP-S. Currently, he serves as Region 1 Coordinator for MTT-S MGA, MTT-S SIGHT Committee Member, MTT-S Inter-Society Committee Member, Chair of the IEEE HAC Society Partnership Engagement Working Group, Chair of the IEEE AP-S Chapter Activity Committee, Co-Chair of the IEEE AP-S COPE (Committee on Promoting Equality), Chair of the IEEE AP-S Inter-Society Committee, and Chair of the IEEE North Jersey Section.
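As general background on why resonator quality factor and signal power dominate oscillator phase-noise budgets (the classical Leeson model, given here for context rather than as material from the talk), the single-sideband phase noise at offset $f_m$ from a carrier $f_0$ is often approximated as

\[ \mathcal{L}(f_m) \approx 10\log_{10}\!\left[\frac{1}{2}\left(1 + \left(\frac{f_0}{2 Q_L f_m}\right)^2\right)\left(1 + \frac{f_c}{f_m}\right)\frac{F k T}{P_s}\right], \]

where $Q_L$ is the loaded quality factor of the resonator, $f_c$ the flicker-noise corner, $F$ the noise factor of the sustaining amplifier, $kT$ the thermal noise floor, and $P_s$ the signal power. The formula makes explicit why high-Q resonators (crystal, SAW, dielectric, planar, opto-electronic) are the central lever for low phase noise across the 10 MHz to 100 GHz range discussed above.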
3:00 PM - 4:00 PM
Networking and Conclusion by ECE Department Chair Dr. Misra
Organizers
Dr. Shaahin Angizi (IEEE Senior Member) and Dr. Durga Misra (IEEE Fellow)