Introductions
Author: Jericho Nasser
Date: March 8, 2025
Hello, my name is Jericho Nasser, and this is my first post on my journey through the Project
& Portfolio V: Capstone project. In this series, I'll be documenting my progress,
challenges, and insights as I develop my final project. I'm looking forward to sharing my
experiences and growth throughout this process. Stay tuned as I explore new technologies,
overcome obstacles, and work towards creating something meaningful for my portfolio.
Project Pitches: MelodyMind, HealthDecoder, RRCC
Author: Jericho Nasser
Date: March 8, 2025
In this blog post, I'm outlining the project pitches I'm considering for my Project &
Portfolio V: Capstone project. These projects leverage AI to solve unique problems in
different domains.
MelodyMind: AI-Generated Music
MelodyMind is a web application that uses Generative Adversarial Networks (GANs) to create
original music compositions based on user-defined parameters. It addresses the problem of
content creators needing unique soundtracks but lacking the skills or resources to produce
them.
HealthDecoder: AI for Blood Test Analysis
HealthDecoder is a mobile application that uses AI to analyze and interpret blood test
results in plain language. It tackles the challenge of patients struggling to understand
complex medical terminology, empowering them to take control of their health data.
RRCC: Affordable Autonomous Robotics
RRCC is an affordable autonomous RC car platform that utilizes computer vision and sensor
fusion to navigate environments. It aims to make robotics education accessible by offering a low-cost way to experiment with robotics and AI.
I'm excited to explore these projects further and choose one to develop throughout the
capstone course. Stay tuned for updates on my progress!
Project Selection: MusicGAN & HealthDecoder
Author: Jericho Nasser
Date: March 15, 2025
I wanted to share an update on my project journey. Initially, I was part of a team focused on audio applications, where I pitched my MusicGAN project alongside a teammate's emotion recognition system. Our plan was to integrate these concepts—generating music based on detected emotional states. It was an exciting fusion of technologies that would create a responsive audio experience.
Despite being reassigned to a different team, I've decided to continue developing MusicGAN as a supplemental individual project. I had already mapped out the strategy and begun environment setup, and I'm passionate about seeing this concept through to completion.
HealthDecoder: Team Project
My new team (Austin Paugh, Kevin Lorne, and myself) has decided to pursue another of my pitches: HealthDecoder. We've made significant progress on the foundational elements, including creating a comprehensive design document outlining the technical architecture and developing a style tile with logo concepts and UI design principles.
We've established a division of labor that leverages each team member's strengths. Kevin, with his mobile design background, is leading the frontend development using Flutter. Austin and I, coming from AI specializations, are focusing on the backend systems, machine learning services, and dataset integration.
Our technical stack includes Flutter for the frontend and a Java REST API for the backend. For the machine learning components, we're exploring several specialized models including BioBERT and SciBERT. We plan to train these models using healthcare datasets such as UMLS, MIMIC-III (de-identified clinical notes), and PubMed abstracts.
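Since we're still in the exploration phase, here's a rough sketch of how loading and comparing these biomedical checkpoints through Hugging Face Transformers might look. The checkpoint names and the sample blood-test sentence below are placeholders for illustration, not our final choices.

```python
# Minimal sketch: load candidate biomedical language models with Hugging Face
# Transformers and embed a snippet of blood-test text. Checkpoint names and the
# example sentence are placeholders, not the team's final configuration.
from transformers import AutoTokenizer, AutoModel
import torch

CANDIDATE_MODELS = {
    "biobert": "dmis-lab/biobert-base-cased-v1.1",
    "scibert": "allenai/scibert_scivocab_uncased",
}

text = "Hemoglobin 10.2 g/dL is below the normal reference range."

for name, checkpoint in CANDIDATE_MODELS.items():
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModel.from_pretrained(checkpoint)
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # Mean-pool the token embeddings into a single sentence vector that later
    # experiments can compare against medical terminology or prior results.
    sentence_vector = outputs.last_hidden_state.mean(dim=1)
    print(name, sentence_vector.shape)
```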
Currently, we're in the early stages—setting up development environments and communication channels while Kevin works on finalizing the UI approach. I'm excited about both projects and look forward to sharing more concrete developments soon!
Project Update: HealthDecoder Prototype & Team Progress
Author: Jericho Nasser
Date: March 22, 2025
I'm excited to share that our team has made significant progress on the HealthDecoder project! We've created a working prototype draft in Figma that includes a comprehensive set of screens: splash page, login page, two iterations of the main page, settings page, profile page, and dedicated pages for each feature (Results, Health Plan, Meal Plan, Notes, and Chatbot).
One of the standout UI elements is a sliding top cabinet for main navigation, which provides an intuitive way for users to access different sections of the app. While we've made great strides, we're still refining several aspects including component matching, layout dynamics, individual settings pages, and about pages.
Team Organization
To improve our workflow efficiency, we've established a Jira board for team management and populated the backlog with specific tasks and assignees. We've organized ourselves into partner groups with dedicated focuses:
- Functionality Team: Kevin and myself
- Logic Team: Austin and myself
- Security Team: All three members
- Data Visualization Team: Kevin and Austin
Backlog Overview
Our current backlog includes crucial tasks such as dashboard screen development, medical record upload functionality, login implementation, medical term highlighting, health data visualization (both frontend and backend), personalized recommendations, user profile management, progress tracking, and security features.
AI Strategy Progress
We've made progress in defining our AI approach by separating model concerns, identifying models for testing, and acquiring database authorizations for the project. This structured approach will help us integrate AI capabilities seamlessly into the application.
Next Steps
Our immediate next steps include establishing data policies and AI implementation tactics while we finalize the UI prototype and complete our design documentation. I'm looking forward to seeing how these elements come together as we move into the implementation phase!
UI Enhancements & Development Progress
Author: Jericho Nasser
Date: March 29, 2025
This week, our team made significant strides on both the UI/UX front and backend development for HealthDecoder. We've refined our Figma prototype with several important enhancements that will streamline development and improve user experience.
UI Component Architecture
We've completely restructured our UI component architecture for better scalability. The main page now has two versions (v1 and v2) with an improved overview concept that showcases test result charts, health goals, and a daily meal planner in a more intuitive layout. We've also established dedicated settings pages for health preferences, doctor information, and dietary preferences.
A major improvement is the conversion of UI elements into individual widgets, which will significantly accelerate our development cycles through enhanced cloning and reusability. We've also componentized our color schemes and fonts to enable easier global styling changes. Several animated components have been added to improve engagement and user feedback.
Development Sprints
We've kicked off our first sprint cycles, with clear separation between frontend and backend tasks. The backend team is focused on database setup/integration and building the recommendation engine, while frontend efforts are concentrated on dashboard development and login implementation.
Recommendation Engine Breakthrough
Our most exciting progress has been with the recommendation engine prototype. We've expanded our model testing to include various Hugging Face Transformer models—BioBERT, ClinicalBERT, BERT, and RoBERTa—to identify the optimal approach for health and nutrition insights.
Our evaluation process now includes a structured question set spanning medical terminology, health concerns, diagnostics, and nutritional planning. Performance metrics focus on accuracy, relevance, completeness, clarity, and conciseness of responses.
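To make those comparisons repeatable, our harness roughly follows the shape of the sketch below: run each question through each model and record reviewer scores against the rubric. The question set, the 1-to-5 scale, and the data structures here are illustrative assumptions rather than our actual code.

```python
# Sketch of the evaluation idea: collect model answers per question category and
# average reviewer-assigned rubric scores. Questions, scores, and answers below
# are illustrative placeholders.
from dataclasses import dataclass, field
from statistics import mean

RUBRIC = ["accuracy", "relevance", "completeness", "clarity", "conciseness"]

QUESTION_SET = {
    "medical_terminology": ["What does elevated ALT indicate?"],
    "nutritional_planning": ["What foods help raise low ferritin?"],
}

@dataclass
class Trial:
    model_name: str
    category: str
    question: str
    answer: str
    scores: dict = field(default_factory=dict)  # reviewer-assigned, 1-5 per rubric item

def summarize(trials: list[Trial]) -> dict:
    """Average rubric scores per (model, category) to surface category-level winners."""
    grouped: dict = {}
    for t in trials:
        key = (t.model_name, t.category)
        grouped.setdefault(key, []).append(mean(t.scores[r] for r in RUBRIC))
    return {key: mean(vals) for key, vals in grouped.items()}

# Example usage with a single hand-scored trial.
trials = [
    Trial("roberta-base", "medical_terminology",
          QUESTION_SET["medical_terminology"][0],
          "Elevated ALT can indicate liver inflammation.",
          scores={r: 4 for r in RUBRIC}),
]
print(summarize(trials))
```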
Interestingly, while RoBERTa (Base) demonstrated the best overall performance, we discovered that different models excel in specific categories. This has led us to consider a dynamic model selection approach that chooses the appropriate model based on question type—a significant advancement for providing specialized expertise across different health domains.
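As a rough illustration of the dynamic selection idea, a simple router might look like the sketch below: a question-type classifier picks whichever model scored best for that category, falling back to the strongest overall model. The category labels, keyword classifier, and model mapping are placeholders; a real classifier would need to be far more robust.

```python
# Sketch of dynamic model selection: route each question to the model that
# performed best for its category. All names and rules here are illustrative.
BEST_MODEL_BY_CATEGORY = {
    "medical_terminology": "biobert",
    "diagnostics": "clinicalbert",
    "nutritional_planning": "roberta-base",
}
DEFAULT_MODEL = "roberta-base"  # best overall performer in our initial tests

def classify_question(question: str) -> str:
    """Very rough keyword-based stand-in for a real question-type classifier."""
    q = question.lower()
    if any(word in q for word in ("meal", "diet", "nutrition")):
        return "nutritional_planning"
    if any(word in q for word in ("diagnose", "symptom", "risk")):
        return "diagnostics"
    return "medical_terminology"

def select_model(question: str) -> str:
    category = classify_question(question)
    return BEST_MODEL_BY_CATEGORY.get(category, DEFAULT_MODEL)

print(select_model("What should my meal plan look like with high cholesterol?"))
```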
We've also gained access to the UMLS (Unified Medical Language System), which will be crucial for our data sprint in enhancing the accuracy of medical terminology processing.
The next phase involves fine-tuning selected models on medical Q&A datasets and implementing a constraint system to ensure recommendations remain evidence-based and personalized to each user's health profile.
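For the fine-tuning step, the general shape would be something like the sketch below, which adapts one of the candidate checkpoints to an extractive Q&A task with the Hugging Face Trainer. The toy example, checkpoint name, and hyperparameters are placeholders; a real run would swap in an actual medical Q&A dataset, and the constraint system would sit on top of the model's outputs rather than inside this training loop.

```python
# Hedged sketch: fine-tune a candidate biomedical checkpoint on extractive Q&A
# with the Hugging Face Trainer. The toy row and hyperparameters are placeholders.
from transformers import (AutoTokenizer, AutoModelForQuestionAnswering,
                          TrainingArguments, Trainer)
from datasets import Dataset

checkpoint = "dmis-lab/biobert-base-cased-v1.1"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForQuestionAnswering.from_pretrained(checkpoint)

# One toy SQuAD-style row; a real run would load a medical Q&A dataset instead.
context = "A low hemoglobin value suggests anemia and warrants follow-up."
answer = "anemia"
raw = Dataset.from_dict({
    "question": ["What does a low hemoglobin value suggest?"],
    "context": [context],
    "answer_start": [context.index(answer)],
    "answer_text": [answer],
})

def preprocess(example):
    enc = tokenizer(example["question"], example["context"],
                    truncation=True, max_length=384, return_offsets_mapping=True)
    start_char = example["answer_start"]
    end_char = start_char + len(example["answer_text"])
    seq_ids = enc.sequence_ids()
    start_token = end_token = 0
    # Map the character span of the answer to token positions in the context.
    for i, (s, e) in enumerate(enc["offset_mapping"]):
        if seq_ids[i] != 1:  # skip question and special tokens
            continue
        if s <= start_char < e:
            start_token = i
        if s < end_char <= e:
            end_token = i
    enc["start_positions"] = start_token
    enc["end_positions"] = end_token
    enc.pop("offset_mapping")
    return enc

train_ds = raw.map(preprocess, remove_columns=raw.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="healthdecoder-qa", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=train_ds,
)
trainer.train()
```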