My Research Journey

Project Research time: 2025

Project Title: Brain Computer Interfaces as Clinical Tools for Neurodegenerative Diseases

Research Institute: Neurotech Intern (ThinkNeuro)

Research poster link: Visit the completed research poster

Abstract

BCIs give patients with neurodegenerative diseases a voice when speech and mobility are lost. Progress in medicine isn’t just about curing the next disease; it’s also about improving patients’ quality of life along the way, especially in under-researched areas, so that no patient is left behind.

My research team:

Evaluated the role of brain-computer interface (BCI) technologies in enhancing rehabilitation, diagnostic processes, and communication for neurodegenerative disease patients.

Assessed existing applications and their clinical effectiveness across various therapeutic areas.

Identified gaps and trends in the existing literature to determine under-researched areas and provide recommendations.

Paper accepted at the ACM HyperText 2025 conference: “Synthetic Politics: Prevalence, Spreaders, and Emotional Reception of AI-Generated Political Images on X”

Artificial intelligence isn’t just shaping the future of technology; its vast reach also shapes people’s perceptions. We often don’t notice how political images affect our emotions, and AI-generated images can easily exploit our trust in democratic institutions, blurring the line between truth and manipulation.

As a visual engine of persuasion, AI-generated images both express and distort emotions, so it is important to understand how to use AI ethically for civic engagement, especially in the wake of the 2024 presidential election.

I researched the emotional sentiment of comments on AI-image tweets versus non-AI-image tweets. This issue is important to address because of the spread and adoption of misinformation, the erosion of public trust, and the incitement of violence.

I started by detecting AI images and identifying superspreaders (highly influential political users) using GPT-4o and OpenCV. For each image superspreader, I sampled 300 tweets, and for each tweet I scraped the comments, collecting 10,000 political comments in total. Lastly, I ran TweetNLP emotion analysis on each comment, scoring the 11 emotions: anger, anticipation, disgust, fear, joy, love, optimism, pessimism, sadness, surprise, and trust.
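To illustrate the last step, here is a minimal sketch of averaging per-comment emotion scores within a group of tweets. The scores below are hypothetical placeholders; in the actual pipeline each comment would be scored by TweetNLP’s multi-label emotion model.

```python
from statistics import mean

# Hypothetical emotion scores per comment (in practice these would come from
# TweetNLP's multi-label emotion model, one probability per emotion label;
# only 3 of the 11 labels are shown here for brevity).
scored_comments = [
    {"joy": 0.80, "anger": 0.10, "trust": 0.60},
    {"joy": 0.40, "anger": 0.50, "trust": 0.20},
    {"joy": 0.60, "anger": 0.30, "trust": 0.40},
]

def average_emotions(comments):
    """Average each emotion's score across all comments in a group."""
    labels = comments[0].keys()
    return {label: mean(c[label] for c in comments) for label in labels}

print(average_emotions(scored_comments))
```

The same averaging is done separately for the AI-image group and the non-AI-image group before the two are compared.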

After comparing the average score of each emotion for AI versus non-AI images using the Mann-Whitney U test, I found that comments under AI-image tweets showed more positive emotions (higher joy, optimism, love, and trust), while comments under non-AI-image tweets showed more negative emotions (higher anger, anticipation, disgust, fear, pessimism, sadness, and surprise). All of these differences were statistically significant (p < 0.05), with the exception of surprise.
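The comparison itself can be sketched in a few lines. This is an illustrative pure-Python computation of the Mann-Whitney U statistic on made-up scores (in practice, `scipy.stats.mannwhitneyu` also supplies the p-value):

```python
from itertools import product

def mann_whitney_u(sample_a, sample_b):
    """U statistic: count of pairs where a > b, with half credit for ties."""
    return sum(
        1.0 if a > b else 0.5 if a == b else 0.0
        for a, b in product(sample_a, sample_b)
    )

# Hypothetical "joy" scores for comments under AI vs. non-AI image tweets
joy_ai = [0.62, 0.71, 0.58, 0.69]
joy_non_ai = [0.41, 0.60, 0.38, 0.47]

u = mann_whitney_u(joy_ai, joy_non_ai)
print(u)  # 15.0 out of a maximum of 16 pairs: AI scores almost always higher
```

A U near the maximum (or minimum) number of pairs indicates that one group’s scores consistently dominate the other’s, which is what the significance test formalizes.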

Project Research time: 2024-2025

Project Title: Synthetic Politics: Prevalence, Spreaders, and Emotional Reception of AI-Generated Political Images on X

Research Institute: USC Information Sciences Institute Research Assistant (HUMANS Lab)

Submitted research paper link: Visit the published research.


Project Research time: 2020-2021

Project Title: Computer Vision with Biometric Information for Engagement Detection in Virtual Learning

Project website link: Visit the completed project slides and lab logs

Project Research time: 2022-2023

Project Title: Revolutionizing Bone Fracture Detection: YOLOv5 vs. YOLOv8 Face-off

Research paper submitted for publication in the National High School Journal of Science

The research study titled “Revolutionizing Bone Fracture Detection: YOLOv5 vs. YOLOv8 Face-off” comprehensively assessed the performance of two state-of-the-art YOLO (You Only Look Once) object detection algorithms, YOLOv5 and YOLOv8, for bone fracture detection across different model configurations, including the small (s), medium (m), and large (l) variants. The six YOLO models underwent rigorous training, validation, and testing on three distinct datasets: one consisting exclusively of wrist fractures, and two containing a mix of bone fracture types. Model performance was evaluated using precision, recall, and F1-score confidence ratings, and the results were presented through various visualization methods, including tables, images, and graphs. Ultimately, the study found that YOLOv5 and YOLOv8 demonstrate comparable detection capabilities, with YOLOv5 occasionally exhibiting superior accuracy in detecting fractures of varying sizes.
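For reference, the three evaluation metrics named above relate as follows. This is a minimal sketch with hypothetical detection counts (the true-positive, false-positive, and false-negative numbers are made up for illustration, not taken from the study):

```python
def precision(tp, fp):
    """Fraction of predicted fractures that are real fractures."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Fraction of real fractures that the model detected."""
    return tp / (tp + fn)

def f1_score(p, r):
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r)

# Hypothetical counts for one model on one test set
tp, fp, fn = 80, 20, 40
p = precision(tp, fp)  # 0.8
r = recall(tp, fn)     # 80/120, about 0.667
print(f1_score(p, r))
```

Because F1 is a harmonic mean, it rewards models that balance precision and recall rather than excelling at only one.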

Project Research time: 2021-2022

Project Title: Early Skin Cancer Detection of Basal Cell Carcinoma, Squamous Cell Carcinoma, and Malignant Melanoma through Artificial Intelligence

Project website link: Visit the completed project slides and lab logs.