Current Projects

Here are some of the projects we are currently working on:

Software Engineering for Robust and Explainable AI - This research integrates software engineering techniques such as combinatorial testing (CT) and delta debugging into the development of robust and explainable AI systems. CT is employed to construct surrogate models by systematically sampling diverse feature interactions and to optimize hyperparameter tuning by reducing the configuration space while preserving key interactions. Delta debugging is adapted to deep neural networks to identify minimal, causally sufficient subsets of feature maps that influence decisions, resulting in compact and interpretable saliency maps. Together, these methods provide a principled framework for enhancing AI robustness and interpretability.
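
As a concrete illustration, below is a minimal sketch of a ddmin-style reduction over feature-map indices. The `keeps_decision` oracle is hypothetical: it stands in for masking every feature map outside the candidate subset and checking whether the network still produces the original decision.

```python
# Sketch of delta debugging (ddmin-style) adapted to feature maps.
# `indices` lists the feature maps under consideration; `keeps_decision(subset)`
# is a hypothetical oracle that masks all feature maps NOT in `subset` and
# returns True if the model's decision is unchanged.
def ddmin(indices, keeps_decision):
    n = 2                                          # number of chunks to split into
    while len(indices) >= 2:
        size = max(1, len(indices) // n)
        chunks = [indices[k:k + size] for k in range(0, len(indices), size)]
        reduced = False
        for chunk in chunks:
            complement = [i for i in indices if i not in chunk]
            if complement and keeps_decision(complement):
                indices = complement               # removed chunk was irrelevant
                n = max(n - 1, 2)
                reduced = True
                break
        if not reduced:
            if n >= len(indices):                  # cannot split further: 1-minimal
                break
            n = min(2 * n, len(indices))           # retry at finer granularity
    return indices
```

The surviving indices form a compact subset of feature maps that still suffices for the decision, which is the kind of subset that can then be rendered as a saliency map.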


Fairness Testing of Black-Box Machine Learning Models - Decision-making by ML systems can exhibit biases, resulting in unfair outcomes for different individuals. This work presents a novel method based on t-way testing in a VAE's latent space to systematically explore the search space. By decoding the sampled latent vectors, our approach generates highly natural instances that uncover fairness violations in a black-box setting. [More Details]
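
As a rough sketch of the idea, assume each latent dimension is discretized into a few levels; a greedy construction then covers all 2-way level combinations. Both the covering construction and the level-to-coordinate mapping below are illustrative assumptions, not the project's exact algorithm.

```python
import itertools
import random

def pairwise_covering(levels, dims, seed=0):
    """Greedily build a small test set covering every 2-way combination of
    `levels` discrete values across `dims` latent dimensions (a sketch, not
    an optimized covering-array generator)."""
    rng = random.Random(seed)
    uncovered = {(i, a, j, b)
                 for i, j in itertools.combinations(range(dims), 2)
                 for a in range(levels) for b in range(levels)}
    tests = []
    while uncovered:
        best, best_gain = None, -1
        for _ in range(50):                        # sample candidate rows
            row = tuple(rng.randrange(levels) for _ in range(dims))
            gain = sum((i, row[i], j, row[j]) in uncovered
                       for i, j in itertools.combinations(range(dims), 2))
            if gain > best_gain:
                best, best_gain = row, gain
        if best_gain == 0:                         # force progress on a leftover pair
            i, a, j, b = next(iter(uncovered))
            forced = [rng.randrange(levels) for _ in range(dims)]
            forced[i], forced[j] = a, b
            best = tuple(forced)
        tests.append(best)
        uncovered -= {(i, best[i], j, best[j])
                      for i, j in itertools.combinations(range(dims), 2)}
    return tests

# Each row would then be mapped to latent coordinates (e.g., level k -> the
# k-th quantile of that latent dimension), decoded by the VAE, and the decoded
# instance checked for a fairness violation.
```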


Constructing Good Surrogate Models for Machine Learning - Understanding and interpreting the decision-making process of black-box machine learning models is often challenging, making their predictions less transparent. As ML models are increasingly adopted in sensitive domains like healthcare and finance, ensuring trustworthy and accountable decision-making is critical. This project aims to use surrogate models (simpler, interpretable models that approximate the behavior of complex black-box systems) to improve transparency. These models are easy to analyze and can also be valuable tools in studying adversarial attacks. We are exploring innovative techniques to construct effective surrogate models for black-box ML systems.
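
As a sketch of the basic recipe, assuming scikit-learn and a hypothetical `black_box_predict` function: query the black box for labels on a pool of inputs, fit an interpretable model on those labels, and measure fidelity (agreement with the black box).

```python
from sklearn.tree import DecisionTreeClassifier

def build_surrogate(black_box_predict, X_pool, max_depth=4):
    """Fit a shallow decision tree to mimic the black box on a query pool."""
    y_bb = black_box_predict(X_pool)               # labels from the black box
    surrogate = DecisionTreeClassifier(max_depth=max_depth).fit(X_pool, y_bb)
    fidelity = (surrogate.predict(X_pool) == y_bb).mean()
    return surrogate, fidelity
```

Fidelity, rather than accuracy on ground-truth labels, is the usual yardstick here, since the surrogate is meant to mimic the black box rather than solve the original task.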


Privacy Testing in Machine Learning-Based Systems - Machine learning systems are inherently vulnerable to privacy attacks. Such attacks can steal different parts of an ML model, including its training data, its parameters, and sensitive attributes of the training data. The goal of this project is to test and detect whether a model is sufficiently guarded against these kinds of attacks. [More Details]
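
One simple instance of such a test is a confidence-threshold membership-inference check: if the model is much more confident on training points than on unseen points, an attacker can tell who was in the training set. The sketch below assumes a scikit-learn-style `predict_proba` and hypothetical member/non-member datasets.

```python
def membership_advantage(model, X_train, X_unseen, threshold=0.9):
    """Gap between how often members vs. non-members clear a confidence bar."""
    conf_in = model.predict_proba(X_train).max(axis=1)    # members' top confidence
    conf_out = model.predict_proba(X_unseen).max(axis=1)  # non-members' top confidence
    tpr = (conf_in >= threshold).mean()    # members correctly flagged as members
    fpr = (conf_out >= threshold).mean()   # non-members wrongly flagged
    return tpr - fpr                       # near 0 = well guarded; large gap = leakage
```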


Security Analysis of Ethereum Smart Contracts - The Ethereum blockchain is a decentralized platform for its native cryptocurrency, Ether (ETH), and for smart contracts. Ether is second only to Bitcoin in market capitalization. Smart contracts allow Ethereum to remove the need for a third party to handle transactions between peers, which can save both time and money. They form all or part of the backends of decentralized applications (Dapps). Since smart contracts mainly handle financial transactions, security is a major concern for their wide adoption, and their immutable nature makes this concern more serious, as deployed contracts are difficult to patch. Therefore, security analysis of smart contracts is critical. [More Details]


Fuzz Testing of Zigbee Protocol Implementations - The Zigbee protocol is one of the world's most popular IoT wireless standards, used by millions of devices and customers. It has also been deployed in NASA's Mars mission as the communication radio between the flying drone and the Perseverance rover. Recently, several vulnerabilities in Zigbee protocol implementations have compromised IoT devices from different manufacturers, making security testing of these implementations imperative. This research project therefore applies state-of-the-art vulnerability detection techniques, such as fuzzing and data flow analysis, to Zigbee protocol implementations. [More Details]
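
A minimal mutation-based fuzzing loop might look like the sketch below. `parse_zigbee_frame` is a hypothetical entry point into the implementation under test, and `seed_frames` would come from captured valid Zigbee traffic; a real campaign would add coverage feedback and crash triage.

```python
import random

def mutate(frame: bytes, rng: random.Random) -> bytes:
    """Flip a few random bytes of a captured frame."""
    data = bytearray(frame)
    for _ in range(rng.randint(1, 4)):
        i = rng.randrange(len(data))               # assumes non-empty seed frames
        data[i] ^= rng.randrange(1, 256)           # XOR with a nonzero byte
    return bytes(data)

def fuzz(seed_frames, parse_zigbee_frame, iterations=100_000, seed=0):
    """Feed mutated frames to the parser; yield inputs that crash it."""
    rng = random.Random(seed)
    for _ in range(iterations):
        candidate = mutate(rng.choice(seed_frames), rng)
        try:
            parse_zigbee_frame(candidate)          # implementation under test
        except Exception as exc:                   # crash/assertion = potential bug
            yield candidate, exc
```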