Arjun Naga Siddappa

Projects

Video Search

Collaborators: Atsushi Shimizu, Karthik Udhayakumar

What is it?
Given a text query, the system returns the most relevant videos from the database.

Contribution:
Headed a team of three in developing the video search system. Developed dual-encoder models with the following combinations: BERT with InceptionV3, and BERT with YOLO. Configured a Milvus vector database to store the embeddings and assessed the performance of the various combinations. Beyond the course project, I continued to build a pipeline for the system and hosted it on this site.
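
Below is a minimal sketch of the query path for a setup like this, assuming pymilvus 2.x and Hugging Face Transformers. The collection name, field names, and the 256-dimensional shared projection are illustrative placeholders, not the exact configuration we used.

```python
# Sketch of the text side of the dual encoder plus a Milvus similarity lookup.
# Collection name, field names, and the 256-d shared space are assumptions.
import torch
from transformers import BertTokenizer, BertModel
from pymilvus import connections, Collection

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased").eval()
# Hypothetical projection head mapping BERT's 768-d output into the shared space
# (in the real system this layer would be trained jointly with the video encoder).
text_proj = torch.nn.Linear(768, 256)

def embed_query(query: str) -> list[float]:
    """Mean-pool BERT token embeddings, then project into the shared space."""
    inputs = tokenizer(query, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = bert(**inputs).last_hidden_state   # (1, seq_len, 768)
        pooled = hidden.mean(dim=1)                 # (1, 768)
        shared = text_proj(pooled)                  # (1, 256)
    return shared.squeeze(0).tolist()

# Video embeddings (InceptionV3 frame features projected the same way) are
# assumed to already be stored in a Milvus collection named "video_embeddings".
connections.connect("default", host="localhost", port="19530")
videos = Collection("video_embeddings")
videos.load()

hits = videos.search(
    data=[embed_query("a dog catching a frisbee")],
    anns_field="embedding",
    param={"metric_type": "IP", "params": {"nprobe": 16}},
    limit=5,
    output_fields=["video_id"],
)
for hit in hits[0]:
    print(hit.entity.get("video_id"), hit.distance)
```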

Use it here: Video Search


Face Mask Detection

Collaborators: Atsushi Shimizu

What is it?
Detects and labels faces in a video feed as “with mask” or “without mask”.

Contribution: Experimented with various YOLO models to learn the object detection task. YOLO is an object detection model, and yoloface is a variant of YOLO trained to detect faces. My teammate and I performed transfer learning on the yoloface model: we froze its upstream layers and trained the downstream layers to detect the two classes, “with mask” and “without mask”.
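
The general pattern looked like the sketch below: freeze the pretrained feature-extraction layers and train only the downstream layers on the two mask classes. The tiny stand-in model here is purely illustrative; the real yoloface checkpoint has its own architecture and loading code.

```python
# General transfer-learning pattern: freeze the upstream (backbone) layers of a
# pretrained detector and retrain the downstream head for the two mask classes.
# The module names ("backbone", "head") and the model itself are placeholders.
import torch
import torch.nn as nn

class TinyDetector(nn.Module):
    """Stand-in for the pretrained face detector (illustration only)."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.backbone = nn.Sequential(             # pretrained feature extractor
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)     # re-trained classification head

    def forward(self, x):
        feats = self.backbone(x).flatten(1)
        return self.head(feats)

model = TinyDetector(num_classes=2)   # "with mask" / "without mask"

# Freeze the upstream layers so only the downstream head learns during fine-tuning.
for param in model.backbone.parameters():
    param.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch.
images = torch.randn(4, 3, 64, 64)
labels = torch.tensor([0, 1, 1, 0])   # 0 = with mask, 1 = without mask
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```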


Olympics - A Small Study

Collaborators: Atsushi Shimizu

About the project:

An article-style data visualization project. Using D3.js, we created this short read that poses curious questions and investigates them with graphs. My teammate and I designed all the graphs ourselves. I would love to hear your feedback on it.

View it here: Olympics - A Small Study


Drug Response

Using DBSCAN, a clustering algorithm, I identified biomarkers from gene expression data. I also developed ridge regression, lasso regression, and random forest regressor models to predict drug response in cancer patients using data such as physiological makeup and economic status.
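
A rough sketch of the two modelling steps, using scikit-learn on synthetic placeholder data (the real gene expression and patient features are not reproduced here, and the parameters are illustrative):

```python
# (1) DBSCAN clustering of gene-expression profiles, (2) regressors for drug response.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Lasso, Ridge
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
expression = rng.normal(size=(200, 50))   # placeholder: 200 samples x 50 genes
response = rng.normal(size=200)           # placeholder drug-response values

# 1) Cluster standardized expression profiles; samples labelled -1 are noise.
clusters = DBSCAN(eps=3.0, min_samples=5).fit_predict(
    StandardScaler().fit_transform(expression)
)
print("clusters found:", set(clusters))

# 2) Predict drug response with ridge, lasso, and a random forest.
X_train, X_test, y_train, y_test = train_test_split(
    expression, response, test_size=0.2, random_state=0
)
for name, model in [
    ("ridge", Ridge(alpha=1.0)),
    ("lasso", Lasso(alpha=0.1)),
    ("random forest", RandomForestRegressor(n_estimators=200, random_state=0)),
]:
    model.fit(X_train, y_train)
    print(name, "R^2:", round(model.score(X_test, y_test), 3))
```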

Collaborators: Arundhati Gorkhe, Meghana S Murthy


Sign Language Alphabet Recognizer

Developed a sign language translation system that detects and recognizes alphabet signs made in a webcam feed and outputs the corresponding letters on the screen. We created our own dataset and trained an SVM classifier on it.
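
The classification step followed the pattern sketched below with scikit-learn; the data here is a random placeholder standing in for the hand-sign images we captured and labelled ourselves.

```python
# An SVM trained on flattened hand-sign image crops (placeholder data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_classes = 26                             # one class per alphabet letter
X = rng.random((520, 64 * 64))             # 520 flattened 64x64 grayscale crops
y = rng.integers(0, n_classes, size=520)   # letter labels 0..25

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))

# At inference time, each webcam frame is cropped to the hand region, resized to
# 64x64, flattened, and passed to clf.predict to get the letter shown on screen.
```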

Use it here: Sign Language Alphabet Recogniser

Collaborators: Bhavya Jain