Artificial Intelligence

Artificial Intelligence (AI) is a branch of computer science that aims to create machines capable of performing tasks that typically require human intelligence. These tasks include visual perception, speech recognition, decision-making, and language translation.

AI systems can be categorized into two main types: narrow AI, which is designed for a specific task such as image recognition or language translation, and general AI, a still-hypothetical system that could match human versatility across any intellectual task.

Modern AI relies heavily on machine learning techniques, where algorithms learn patterns from data rather than following explicit programming. Deep learning, a subset of machine learning using neural networks, has enabled remarkable breakthroughs in image recognition, natural language processing, and game playing.
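The contrast with explicit programming can be made concrete with a toy example: instead of hard-coding the rule y = 2x + 1, a program can recover it from example data by gradient descent. This is a minimal illustrative sketch, not production machine learning code.

```python
# "Learn" the hidden rule y = 2x + 1 from examples via gradient descent,
# rather than being told the rule explicitly.
data = [(x, 2 * x + 1) for x in range(5)]  # samples of the hidden rule

w, b = 0.0, 0.0   # start with no knowledge of the rule
lr = 0.05         # learning rate

for _ in range(2000):
    # Gradients of mean squared error with respect to w and b
    dw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    db = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * dw
    b -= lr * db

print(round(w, 2), round(b, 2))  # converges close to 2.0 and 1.0
```

The same loop, scaled up to millions of parameters and layered nonlinear functions, is the core of the deep learning systems described below.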

AI is transforming industries across the board, from healthcare and finance to transportation and entertainment.

As AI technology continues to advance, it raises important ethical and societal questions about privacy, job displacement, decision transparency, and the future relationship between humans and intelligent machines.

AI Chatbot

AI Music Generator

Music generation service coming soon!

This feature will allow you to generate custom music based on parameters like genre, mood, and tempo.

AI Image Recognition (under construction)

AI Game Bot (under construction)

AI Data Analysis (under construction)

AI Robotics (under construction)

AI Virtual Reality (under construction)

AI Augmented Reality (under construction)

AI Internet of Things (under construction)

Agents

LLaMA (Meta AI)

LLaMA (Large Language Model Meta AI) is a family of large language models developed by Meta AI. These models range from 7B to 70B parameters and have shown impressive capabilities in natural language understanding and generation. Meta has released these models with open weights, allowing researchers to build upon them for various applications while requiring less computing power than many competing models.

Ada (Ada Support)

Ada is an AI-powered assistant developed by Ada Support, designed to help businesses automate tasks, improve customer service, and enhance productivity. Ada can understand and respond to user queries, provide personalized recommendations, and assist with complex workflows. Ada is offered as a white-label solution, allowing businesses to customize the assistant to their specific needs.

Replika (Luka)

Replika is an AI chatbot developed by Luka that uses natural language processing to engage in conversations with users. Replika is designed to be a personal AI companion that learns from users' interactions and adapts to their preferences. Users can chat with Replika about various topics, share their thoughts and feelings, and receive emotional support.

Copilot (Microsoft)

Copilot is Microsoft's AI assistant that helps users with a wide range of tasks, from scheduling meetings to answering questions and providing recommendations. Copilot is integrated with Microsoft 365 and other Microsoft services, allowing users to access information and perform actions through natural language interactions. It is designed to be conversational, helpful, and efficient in assisting users with their daily tasks.

Bixby (Samsung)

Bixby is Samsung's AI assistant, integrated into Samsung devices and services. Bixby can help users with tasks like setting reminders, sending messages, making calls, and controlling smart home devices. It is designed to be user-friendly and intuitive, providing a seamless experience across Samsung's ecosystem of products.

GPT (OpenAI)

The GPT (Generative Pre-trained Transformer) series by OpenAI represents some of the most powerful language models available today. GPT-4 can understand and generate human-like text, translate languages, write creative content in many formats, answer questions informatively, and even interpret images. These models power applications like ChatGPT and are used across industries for content creation, customer service, and programming assistance.

Claude (Anthropic)

Claude is an AI assistant created by Anthropic, designed to be helpful, harmless, and honest. Known for its conversational capabilities and longer context windows, Claude can process extensive documents, engage in nuanced discussions, and provide thoughtful responses while maintaining ethical boundaries. Claude 2 and its variants are increasingly being used in enterprise settings for document analysis and complex reasoning tasks.

Gemini (Google)

Gemini is Google's multimodal AI model family that can process and generate text, images, audio, and video. Available in different sizes (Ultra, Pro, Nano), Gemini models power Google's AI assistant and developer tools. These models are designed to understand context across different modalities, making them versatile for applications ranging from content creation to complex reasoning tasks.

News

Coming soon...

Algorithms

Deep Learning Neural Networks

Neural networks with many layers that can learn hierarchical representations of data. These power modern AI systems from image recognition to natural language processing.
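As an illustrative sketch (the weights below are arbitrary placeholders, not trained values), a deep network is just stacked layers, each transforming the previous layer's output into a more abstract representation:

```python
def layer(inputs, weights, biases):
    """One fully connected layer with a ReLU nonlinearity."""
    return [max(0.0, sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

x = [0.5, -1.0, 2.0]  # raw input features

# Each layer builds on the features computed by the one before it.
h1 = layer(x, [[0.1, -0.2, 0.4], [0.3, 0.1, -0.5]], [0.0, 0.1])
h2 = layer(h1, [[0.7, -0.3], [0.2, 0.9]], [0.05, -0.05])

print(h2)  # the final, most abstract representation
```

Training assigns these weights automatically by gradient descent; the layered structure is what lets the network learn hierarchical features.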

Convolutional Neural Networks (CNNs)

Specialized for processing grid-like data such as images. CNNs use convolution operations to detect features regardless of their position in the input.
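The position-invariance idea can be shown with a toy 1-D convolution: the same filter slides along the input and fires wherever its pattern occurs. Real CNNs do the same thing in 2-D across image channels; this is only a sketch of the mechanism.

```python
def convolve1d(signal, kernel):
    """Slide the kernel over the signal; no padding, stride 1."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# The same "edge" pattern appears at two different positions.
signal = [0, 0, 1, 1, 0, 0, 0, 1, 1, 0]
edge_filter = [-1, 1]  # responds to a rise from 0 to 1

response = convolve1d(signal, edge_filter)
print(response)  # peaks (value 1) at the start of each rising edge
```

The filter detects the feature at both locations with the same shared weights, which is why CNNs need far fewer parameters than fully connected networks on images.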

Recurrent Neural Networks (RNNs) & LSTMs

Algorithms designed for sequential data processing with memory of previous inputs. LSTMs (Long Short-Term Memory) solve the vanishing gradient problem in traditional RNNs.
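A single LSTM cell step, sketched with scalar gates to show the mechanism: gates decide what to forget, what to write into the cell state, and what to expose as output. The weights here are illustrative placeholders, not trained values.

```python
import math

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def lstm_step(x, h_prev, c_prev, W):
    """One step of a scalar LSTM cell. W maps each gate name to
    (input weight, recurrent weight, bias)."""
    f = sigmoid(W["f"][0] * x + W["f"][1] * h_prev + W["f"][2])    # forget gate
    i = sigmoid(W["i"][0] * x + W["i"][1] * h_prev + W["i"][2])    # input gate
    o = sigmoid(W["o"][0] * x + W["o"][1] * h_prev + W["o"][2])    # output gate
    g = math.tanh(W["g"][0] * x + W["g"][1] * h_prev + W["g"][2])  # candidate value
    c = f * c_prev + i * g  # cell state: the additive path that mitigates
                            # the vanishing-gradient problem
    h = o * math.tanh(c)    # hidden state passed to the next step
    return h, c

W = {k: (0.5, 0.4, 0.1) for k in ("f", "i", "o", "g")}  # placeholder weights
h, c = 0.0, 0.0
for x in [1.0, -0.5, 0.25]:  # a short input sequence
    h, c = lstm_step(x, h, c, W)
print(h, c)
```

Because the cell state is updated additively (c = f*c_prev + i*g) rather than being repeatedly squashed, gradients survive across many time steps.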

Transformers

The architecture behind modern language models like GPT and BERT. Transformers use self-attention mechanisms to weigh the importance of different parts of the input data.
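Scaled dot-product self-attention in miniature: each position's query is compared with every key, scores are softmax-normalized into weights, and the output is a weighted sum of values. This toy version omits the learned projection matrices and multiple heads of a real Transformer.

```python
import math

def softmax(scores):
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention over lists of vectors."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)  # how much each position attends to each other
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# Three token vectors attending to each other (self-attention: Q = K = V)
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(attention(tokens, tokens, tokens))
```

Each output row is a blend of all token vectors, weighted by similarity; stacking this operation with learned projections is the heart of GPT and BERT.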

Reinforcement Learning

Algorithms where agents learn optimal behaviors through trial and error by receiving rewards or penalties. Examples include Q-learning and Policy Gradient methods.
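Tabular Q-learning on a toy corridor shows the reward-driven loop: the agent starts at the left, receives a reward of 1 only at the rightmost cell, and learns by trial and error that moving right is optimal. A minimal sketch, not a general RL framework.

```python
import random

random.seed(0)

N = 5                                # states 0..4; state 4 is the rewarding terminal
Q = [[0.0, 0.0] for _ in range(N)]   # Q[state][action]: 0 = left, 1 = right
alpha, gamma, eps = 0.5, 0.9, 0.2    # learning rate, discount, exploration rate

for _ in range(2000):                # episodes of trial and error
    s = 0
    while s != N - 1:
        if random.random() < eps:
            a = random.randrange(2)                 # explore a random action
        else:
            a = 0 if Q[s][0] > Q[s][1] else 1       # exploit the best known action
        s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
        r = 1.0 if s2 == N - 1 else 0.0
        # Q-learning update: move toward reward plus discounted best future value
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

policy = ["left" if q[0] > q[1] else "right" for q in Q[:-1]]
print(policy)  # the learned greedy policy
```

The update rule is the standard Q-learning target r + gamma * max Q(s', a'); with enough episodes the greedy policy points right in every state.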

Generative Adversarial Networks (GANs)

Two neural networks compete against each other: a generator creates content while a discriminator evaluates it, resulting in increasingly realistic synthetic outputs.

Diffusion Models

These gradually add noise to data and then learn to reverse the process, enabling high-quality image generation. Used in tools like DALL-E and Stable Diffusion.
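The forward (noising) half of the process has a simple closed form: the sample at step t is a mix of the original data and Gaussian noise, with the data's share shrinking as t grows. The learned part of a diffusion model is the reverse of this process, which this sketch does not include; the schedule below is an illustrative linear one.

```python
import math
import random

random.seed(0)

T = 1000
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]  # linear noise schedule

# Cumulative product of (1 - beta): the surviving fraction of the original signal
alpha_bar = []
prod = 1.0
for b in betas:
    prod *= 1.0 - b
    alpha_bar.append(prod)

def noisy_sample(x0, t):
    """Sample x_t ~ q(x_t | x_0) directly, in closed form."""
    eps = random.gauss(0.0, 1.0)
    return math.sqrt(alpha_bar[t]) * x0 + math.sqrt(1.0 - alpha_bar[t]) * eps

x0 = 1.0
print(noisy_sample(x0, 10), noisy_sample(x0, T - 1))  # nearly x0, then nearly pure noise
```

Training teaches a network to predict the added noise at each step, so generation can run the chain in reverse from pure noise back to data.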

Blog

FSU | Project & Portfolio V: Capstone

Introductions

Hello, my name is Jericho Nasser, and this is my first post on my journey through the Project & Portfolio V: Capstone project. In this series, I'll be documenting my progress, challenges, and insights as I develop my final project. I'm looking forward to sharing my experiences and growth throughout this process. Stay tuned as I explore new technologies, overcome obstacles, and work towards creating something meaningful for my portfolio.

Project Pitches: MelodyMind, HealthDecoder, RRCC

In this blog post, I'm outlining the project pitches I'm considering for my Project & Portfolio V: Capstone project. These projects leverage AI to solve unique problems in different domains.

MelodyMind: AI-Generated Music

MelodyMind is a web application that uses Generative Adversarial Networks (GANs) to create original music compositions based on user-defined parameters. It addresses the problem of content creators needing unique soundtracks but lacking the skills or resources to produce them.

HealthDecoder: AI for Blood Test Analysis

HealthDecoder is a mobile application that uses AI to analyze and interpret blood test results in plain language. It tackles the challenge of patients struggling to understand complex medical terminology, empowering them to take control of their health data.

RRCC: Affordable Autonomous Robotics

RRCC is an affordable autonomous RC car platform that utilizes computer vision and sensor fusion to navigate environments. It aims to make robotics education accessible by providing an affordable way to experiment with robotics and AI.

I'm excited to explore these projects further and choose one to develop throughout the capstone course. Stay tuned for updates on my progress!

Project Selection: MusicGAN & HealthDecoder

I wanted to share an update on my project journey. Initially, I was part of a team focused on audio applications, where I pitched my MusicGAN project alongside a teammate's emotion recognition system. Our plan was to integrate these concepts—generating music based on detected emotional states. It was an exciting fusion of technologies that would create a responsive audio experience.

Despite being reassigned to a different team, I've decided to continue developing MusicGAN as a supplemental individual project. I had already mapped out the strategy and begun environment setup, and I'm passionate about seeing this concept through to completion.

HealthDecoder: Team Project

My new team (Austin Paugh, Kevin Lorne, and myself) has decided to pursue another of my pitches: HealthDecoder. We've made significant progress on the foundational elements, including creating a comprehensive design document outlining the technical architecture and developing a style tile with logo concepts and UI design principles.

We've established a division of labor that leverages each team member's strengths. Kevin, with his mobile design background, is leading the frontend development using Flutter. Austin and I, coming from AI specializations, are focusing on the backend systems, machine learning services, and dataset integration.

Our technical stack includes Flutter for the frontend and a Java REST API for the backend. For the machine learning components, we're exploring several specialized models including BioBERT and SciBERT. We plan to train these models using healthcare datasets such as UMLS, MIMIC-III (de-identified clinical notes), and PubMed abstracts.

Currently, we're in the early stages—setting up development environments and configuration channels while Kevin works on finalizing the UI approach. I'm excited about both projects and look forward to sharing more concrete developments soon!

Project Update: HealthDecoder Prototype & Team Progress

I'm excited to share that our team has made significant progress on the HealthDecoder project! We've created a working prototype draft in Figma that includes a comprehensive set of screens: splash page, login page, two iterations of the main page, settings page, profile page, and dedicated pages for each feature (Results, Health Plan, Meal Plan, Notes, and Chatbot).

One of the standout UI elements is a sliding top cabinet for main navigation, which provides an intuitive way for users to access different sections of the app. While we've made great strides, we're still refining several aspects including component matching, layout dynamics, individual settings pages, and about pages.

Team Organization

To improve our workflow efficiency, we've established a Jira board for team management and populated the backlog with specific tasks and assignees. We've organized ourselves into partner groups with dedicated focuses:

  • Functionality Team: Kevin and myself
  • Logic Team: Austin and myself
  • Security Team: All three members
  • Data Visualization Team: Kevin and Austin

Backlog Overview

Our current backlog includes crucial tasks such as dashboard screen development, medical record upload functionality, login implementation, medical term highlighting, health data visualization (both frontend and backend), personalized recommendations, user profile management, progress tracking, and security features.

AI Strategy Progress

We've made progress in defining our AI approach by separating model concerns, identifying models for testing, and acquiring database authorizations for the project. This structured approach will help us integrate AI capabilities seamlessly into the application.

Next Steps

Our immediate next steps include establishing data policies and AI implementation tactics while we finalize the UI prototype and complete our design documentation. I'm looking forward to seeing how these elements come together as we move into the implementation phase!

UI Enhancements & Development Progress

This week, our team made significant strides on both the UI/UX front and backend development for HealthDecoder. We've refined our Figma prototype with several important enhancements that will streamline development and improve user experience.

UI Component Architecture

We've completely restructured our UI component architecture for better scalability. The main page now has two versions (v1 and v2) with an improved overview concept that showcases test result charts, health goals, and a daily meal planner in a more intuitive layout. We've also established dedicated settings pages for health preferences, doctor information, and dietary preferences.

A major improvement is the conversion of UI elements into individual widgets, which will significantly accelerate our development cycles through enhanced cloning and reusability. We've also componentized our color schemes and fonts to enable easier global styling changes. Several animated components have been added to improve engagement and user feedback.

Development Sprints

We've kicked off our first sprint cycles, with clear separation between frontend and backend tasks. The backend team is focused on database setup/integration and building the recommendation engine, while frontend efforts are concentrated on dashboard development and login implementation.

Recommendation Engine Breakthrough

Our most exciting progress has been with the recommendation engine prototype. We've expanded our model testing to include various Hugging Face Transformer models—BioBERT, ClinicalBERT, BERT, and RoBERTa—to identify the optimal approach for health and nutrition insights.

Our evaluation process now includes a structured question set spanning medical terminology, health concerns, diagnostics, and nutritional planning. Performance metrics focus on accuracy, relevance, completeness, clarity, and conciseness of responses.

Interestingly, while RoBERTa (Base) demonstrated the best overall performance, we discovered that different models excel in specific categories. This has led us to consider a dynamic model selection approach that chooses the appropriate model based on question type—a significant advancement for providing specialized expertise across different health domains.
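The routing idea can be sketched as a keyword lookup: classify the question into a category, then pick the model assigned to that category. The categories, keywords, and model identifiers below are illustrative placeholders, not our actual configuration.

```python
# Hypothetical routing table: category -> model identifier (placeholder names)
MODEL_BY_CATEGORY = {
    "terminology": "biobert-placeholder",
    "diagnostics": "clinicalbert-placeholder",
    "nutrition": "roberta-base-placeholder",
}
DEFAULT_MODEL = "roberta-base-placeholder"  # best overall performer falls through

CATEGORY_KEYWORDS = {
    "terminology": ["mean", "definition", "term"],
    "diagnostics": ["test", "result", "diagnos"],
    "nutrition": ["diet", "meal", "nutri"],
}

def select_model(question: str) -> str:
    """Pick a model based on the question's category keywords."""
    q = question.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(kw in q for kw in keywords):
            return MODEL_BY_CATEGORY[category]
    return DEFAULT_MODEL  # no category matched; use the overall best model

print(select_model("What does hemoglobin A1c mean?"))       # terminology route
print(select_model("Plan a low-sodium meal for the week"))  # nutrition route
```

A production version would replace the keyword match with a trained question classifier, but the dispatch structure stays the same.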

We've also gained access to the UMLS (Unified Medical Language System), which will be crucial for our data sprint in enhancing the accuracy of medical terminology processing.

The next phase involves fine-tuning selected models on medical Q&A datasets and implementing a constraint system to ensure recommendations remain evidence-based and personalized to each user's health profile.

Research

Coming soon...