Stamp's Master's Students' Defenses: Fall 2022






Who               | When                    | Where  | Abbreviated Title
Vrinda Malhotra   | Tuesday 12/13 @ 11:00am | MH 225 | Graph Neural Networks for Malware Classification
Srilekha Nune     | Tuesday 12/13 @ noon    | MH 210 | Adversarial Attacks on Android Malware Detection and Classification
Yashna Peerthum   | Friday 12/9 @ noon      | MH 422 | The Role of the Shift and Scale Parameters in Batch Normalization
Shruti Sharma     | Friday 12/9 @ 1:00pm    | MH 422 | Classification of Darknet Traffic by Application Type
Alexander Wolski  | TBD                     | TBD    | CSG-CRN: 3D Shape Learning with Constructive Solid Geometry






Graph Neural Networks for Malware Classification

by Vrinda Malhotra

Malware is a growing threat to the digital world. The first step in managing this threat is malware detection and classification. Traditional techniques rely on features extracted through analysis of malware, but generating such features requires expert knowledge. Function call graphs (FCGs) consist of a set of program functions and their interprocedural calls, providing a rich source of information that can be leveraged to classify malware without expert-driven feature extraction. This project treats malware classification as a graph classification problem: node features are set using the Local Degree Profile (LDP) model, and different graph neural networks (GNNs) are used to generate an embedding for each graph, which can then be classified. We highlight the Jumping Knowledge Graph Isomorphism Network (JK-GIN), Jumping Knowledge GraphSAGE (JK-GraphSAGE), UnetGraph, and Deep Graph Convolutional Network (DGCNN), since these performed best in our experiments and had similar runtimes. These models outperform existing state-of-the-art approaches in terms of F1 score, and they also address the over-smoothing problem that is common in other GNN models.
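
To make the pipeline concrete, the following is a minimal sketch (not the thesis code) of graph classification with LDP node features and a Jumping Knowledge GIN, written with PyTorch Geometric; the layer sizes and hyperparameters are illustrative assumptions.

# Minimal sketch: function call graph classification with Local Degree Profile
# node features and a Jumping Knowledge GIN, using PyTorch Geometric.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GINConv, JumpingKnowledge, global_mean_pool
from torch_geometric.transforms import LocalDegreeProfile

class JKGIN(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, num_classes, num_layers=4):
        super().__init__()
        self.convs = torch.nn.ModuleList()
        dims = [in_dim] + [hidden_dim] * num_layers
        for i in range(num_layers):
            mlp = torch.nn.Sequential(
                torch.nn.Linear(dims[i], hidden_dim),
                torch.nn.ReLU(),
                torch.nn.Linear(hidden_dim, hidden_dim),
            )
            self.convs.append(GINConv(mlp))
        # Jumping Knowledge aggregates the representations from every layer,
        # which helps counteract over-smoothing in deeper GNNs.
        self.jk = JumpingKnowledge(mode="cat")
        self.classifier = torch.nn.Linear(hidden_dim * num_layers, num_classes)

    def forward(self, x, edge_index, batch):
        layer_outputs = []
        for conv in self.convs:
            x = F.relu(conv(x, edge_index))
            layer_outputs.append(x)
        x = self.jk(layer_outputs)        # per-node features from all layers
        x = global_mean_pool(x, batch)    # one embedding per graph
        return self.classifier(x)

# LDP assigns 5 structural features per node (degree statistics of the node and
# its neighborhood), so no expert-crafted malware features are required.
ldp = LocalDegreeProfile()
# With LDP features, in_dim is 5; num_classes is the number of malware families
# (the value below is illustrative).
model = JKGIN(in_dim=5, hidden_dim=64, num_classes=10)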




Adversarial Attacks on Android Malware Detection and Classification

by Srilekha Nune

Recent years have seen an increase in sales of intelligent gadgets, particularly those using the Android operating system. This popularity has not gone unnoticed by malware writers. Consequently, many research efforts have been made to develop learning models that can detect Android malware. As a countermeasure, malware writers can consider adversarial attacks that disrupt the training or usage of such learning models. In this paper, we train a wide variety of machine learning models using the KronoDroid Android malware dataset, and we consider adversarial attacks on these models. Specifically, we carefully measure the decline in performance when the feature sets used for training or testing are contaminated. Our experimental results demonstrate that elementary adversarial attacks pose a significant threat in the Android malware domain.
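
As a rough illustration of the kind of elementary attack considered, the sketch below (not the thesis code) measures how a Random Forest's test accuracy degrades as an increasing fraction of training labels is flipped; the data here is a synthetic placeholder standing in for extracted Android malware features such as those in KronoDroid.

# Illustrative sketch: label-flipping (training-data poisoning) attack against
# a Random Forest, with accuracy measured on a clean test set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def label_flip(y, fraction, rng):
    """Flip a given fraction of binary labels chosen at random."""
    y_poisoned = y.copy()
    idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    return y_poisoned

rng = np.random.default_rng(0)
# Synthetic placeholder data standing in for extracted Android malware features.
X = rng.normal(size=(2000, 50))
y = (X[:, :5].sum(axis=1) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for fraction in (0.0, 0.1, 0.2, 0.3, 0.4):
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, label_flip(y_train, fraction, rng))
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"flipped {fraction:.0%} of training labels -> test accuracy {acc:.3f}")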




Evaluation of the Role of the Shift and Scale Parameters in Batch Normalization

by Yashna Peerthum

Batch Normalization (BatchNorm) is a technique that enables the training of deep neural networks, especially Convolutional Neural Networks (CNNs) for computer vision tasks. It has been empirically demonstrated that BatchNorm increases performance, stability, and accuracy, although the reasons for these improvements are unclear. BatchNorm consists of a normalization step with trainable shift and scale parameters. In this paper, we examine the role of normalization and of the shift and scale parameters in BatchNorm. We implement two new normalization-layer variants in PyTorch: a version of BatchNorm that we refer to as AffineLayer, which includes the shift and scale transform without normalization, and a version with just the normalization step, which we call BatchNorm-minus. We compare the performance of our AffineLayer and BatchNorm-minus implementations to standard BatchNorm, and we also compare these to the case where no batch normalization is used. We experiment with the ResNet18 and ResNet50 models over various batch sizes. Among other findings, we provide empirical evidence that the success of BatchNorm may be primarily due to improved weight initialization.
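
For concreteness, the following sketch shows one plausible way to realize the two ablations described above in PyTorch; the module names follow the abstract, but the implementation details are assumptions rather than the thesis code.

# Sketch of the two ablations: AffineLayer keeps only BatchNorm's learnable
# per-channel shift (beta) and scale (gamma); BatchNorm-minus keeps only the
# normalization step.
import torch
import torch.nn as nn

class AffineLayer(nn.Module):
    """Per-channel y = gamma * x + beta, with no normalization step."""
    def __init__(self, num_channels):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(num_channels))
        self.beta = nn.Parameter(torch.zeros(num_channels))

    def forward(self, x):  # x: (N, C, H, W)
        return self.gamma.view(1, -1, 1, 1) * x + self.beta.view(1, -1, 1, 1)

def batchnorm_minus(num_channels):
    """Normalization only: BatchNorm2d with the affine transform disabled."""
    return nn.BatchNorm2d(num_channels, affine=False)

# Either module can be swapped into ResNet18/ResNet50 wherever nn.BatchNorm2d
# appears, giving the configurations compared above (the no-normalization
# baseline corresponds to using nn.Identity instead).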




Classification of Darknet Traffic by Application Type

by Shruti Sharma

The darknet is constantly exploited for illegal purposes and activities. Previous research has focused on various classification techniques for darknet traffic using machine learning and deep learning. We extend this previous work by considering the effectiveness of a wide range of machine learning and deep learning techniques for the classification of darknet traffic by application type. We consider the CIC-Darknet2020 dataset, which has been used in many previous studies, thus enabling a direct comparison of our results to previous work. We find that XGBoost performs best among the classifiers that we tested.
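
The following sketch outlines a classification pipeline of the kind described above, using XGBoost on numeric flow features; the file name and label column are assumptions, not a verbatim description of the CIC-Darknet2020 preprocessing used in the thesis.

# Illustrative sketch: classifying traffic flows by application type with XGBoost.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import classification_report
from xgboost import XGBClassifier

# Placeholder path and column names for a per-flow feature CSV.
df = pd.read_csv("Darknet.CSV")
y = LabelEncoder().fit_transform(df["Label.1"])     # assumed application-type column
X = df.select_dtypes(include="number")              # keep numeric flow features only
X = X.replace([np.inf, -np.inf], np.nan).fillna(0)  # flow features can contain inf/NaN

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

model = XGBClassifier(n_estimators=300, max_depth=8, learning_rate=0.1)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))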




CSG-CRN: Cascaded Refinement Network for 3D Shape Learning with Constructive Solid Geometry

by Alexander Wolski

A long-standing goal of computer vision and geometry processing is to efficiently and accurately reconstruct complex 3D geometries. Deep learning techniques have become a popular solution to this problem because of their ability to automatically learn and generalize features. Recent work applies deep learning to generate 3D reconstructions using Constructive Solid Geometry (CSG) modeling, a method for building complex volumes by combining simple, atomic shape primitives. Prior work accurately reconstructs simple objects using several dozen shape primitives. In this work, we propose the Constructive Solid Geometry Cascaded Refinement Network (CSG-CRN), a model capable of reconstructing complex geometry using an unlimited number of shape primitives. We implement an unsupervised autoencoder network to iteratively improve a reconstruction through cascaded refinement. The model relies on a Siamese encoder network to analyze the differences between the target and reconstruction point clouds. A Multi-Layer Perceptron (MLP) decoder then sequentially predicts multiple shape primitives to refine the reconstruction.
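
The sketch below is a structural illustration only: a shared (Siamese) point-cloud encoder embeds both the target and the current reconstruction, and an MLP decoder predicts the parameters of the next primitive. Layer sizes, the primitive parameterization, and the refinement loop are assumptions, not the CSG-CRN implementation.

# Structural sketch of a Siamese encoder plus MLP primitive decoder.
import torch
import torch.nn as nn

class PointCloudEncoder(nn.Module):
    """PointNet-style encoder: shared per-point MLP followed by max pooling."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, feat_dim),
        )

    def forward(self, points):                      # points: (B, N, 3)
        return self.mlp(points).max(dim=1).values   # (B, feat_dim)

class PrimitiveDecoder(nn.Module):
    """MLP mapping combined features to one primitive's parameters
    (e.g., shape-type logits, position, size, union/subtraction flag)."""
    def __init__(self, feat_dim=256, param_dim=10):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * feat_dim, 256), nn.ReLU(),
            nn.Linear(256, param_dim),
        )

    def forward(self, target_feat, recon_feat):
        return self.mlp(torch.cat([target_feat, recon_feat], dim=-1))

class CSGRefinementStep(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = PointCloudEncoder()   # shared weights -> Siamese
        self.decoder = PrimitiveDecoder()

    def forward(self, target_pts, recon_pts):
        # The same encoder processes both clouds so their features are comparable;
        # the decoder's output refines the reconstruction by one primitive.
        return self.decoder(self.encoder(target_pts), self.encoder(recon_pts))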