
Wednesday, October 1, 2025
8:15 AM – 5:00 PM
Partnership II, Room 208

Details

We are happy to announce that the 2025 UCF Annual Research Computing Symposium will take place on Wednesday, October 1, 2025, in Room 208 of the Partnership II Building. The symposium will provide a forum for UCF faculty, postdocs, and students to present their work and exchange ideas in an informal setting, showcasing how they are using advanced computational resources for their research. This year's event will also include a Student Poster Session, celebrating student research with awards for excellence.

The topics for this year are high-performance computing, quantum computing, cloud computing, and enabling AI research through computational, data, and training resources. The agenda will include a keynote, invited talks from faculty and vendors, a student poster session, and networking opportunities. Whether you are a seasoned researcher, a newcomer, or someone seeking expert guidance for computational data analysis, we encourage you to join us at this exciting event.

This event is in-person with no option for virtual attendance. Coffee, breakfast and lunch will be provided for all registered attendees. 

This event is organized by the UCF Research Cyberinfrastructure Team and sponsored by DataDirect Networks (DDN).

Any questions related to the event should be directed to ResearchIT@ucf.edu.

Registration

This event is free to attend, but registration is required for all attendees. Please complete the registration form by 5:00 PM on September 28. If you are unable to attend for any reason, please let us know so we can free up your seat for those on the waiting list. Preference will be given to those presenting posters and talks.

Please secure your seat by registering soon, as space is limited.

Register for event

Student Poster Session

If you are a currently enrolled UCF student interested in submitting a poster, see the Call For Posters page for additional information and the link to submit your poster abstract. The deadline for submitting abstracts is 11:59 PM on September 16.

The Student Poster Session is sponsored by NVIDIA. 

Agenda

Please note that this agenda is subject to change.

The Agentic AI Revolution: Transforming how we work, learn, and live 
Nasir Wasim, Senior AI Consultant, DDN

The evolution from reactive AI to proactive AI agents represents the most significant shift in human-computer interaction since the smartphone. We are witnessing a fundamental transformation from AI systems that respond to commands to intelligent agents that observe our patterns, predict our needs, and take action autonomously on our behalf. 
This revolution is not some distant future concept. Today’s agentic AI systems are already reshaping how we live and work in remarkable ways. For example, Netflix analyzes your viewing patterns and proactively curates content that matches your mood and weekly schedule. GitHub Copilot doesn’t just complete your code but anticipates potential security vulnerabilities and suggests safer approaches before problems emerge.  
We are rapidly advancing toward interconnected AI agent networks where multiple specialized systems coordinate seamlessly to handle complex, multi-step tasks without human intervention. Rather than spending time managing our technology, we are entering an era where intelligent agents manage the logistics of our lives, freeing us to focus on what humans do best: creativity, strategic thinking, and meaningful connections. 
Attendees will gain practical insights for preparing themselves and their organizations for this AI-augmented future. We will explore actionable strategies for working alongside intelligent agents, including when to embrace automation and when to maintain human oversight.

9:30 AM – 9:40 AM
Multimodal Foundation Models for Science: Biomedicine and Geospatial AI
Dr. Chen Chen, Associate Professor, Computer Science

Foundation models are reshaping AI by enabling generalization across tasks and modalities. This lightning talk will highlight two strands of our recent work toward practical, domain-grounded multimodal FMs. First, BiomedGPT demonstrates a lightweight, open-source vision-language model that unifies medical imaging and clinical text to support tasks such as radiology VQA, report generation, and clinical summarization, illustrating how instruction tuning and efficient fine-tuning deliver strong performance with modest compute. Second, Geospatial Foundation Models via continual pretraining show how we adapt and specialize general models to Earth observation and remote sensing at scale, improving data efficiency and transfer for applications like land cover mapping, change detection, and disaster response. I’ll also touch on the enabling role of high-performance and cloud computing for training, evaluation, and deployment pipelines, and discuss opportunities for UCF researchers and students to leverage these resources in ML/AI workflows.

ML/AI
9:42 AM – 9:52 AM
Predicting Semiconductor Manufacturing Equipment Remaining Useful Life with Machine Learning
Ms. Tori Wright, Researcher, Institute for Simulation and Training

In smart manufacturing, the ability to forecast equipment failures is critical for maintaining productivity and minimizing downtime. Remaining useful life (RUL) refers to the time remaining until a component fails and requires repair or replacement. Data-driven approaches leverage historical run-to-failure data to predict RUL, enabling predictive maintenance strategies that prevent failures and production bottlenecks. However, the complexity of components makes it difficult to accurately model RUL, thus creating a need for accurate and robust modeling techniques. This talk summarizes the recent research on the advancement of machine learning for RUL prediction, with a focus on improving accuracy and generalizability. Techniques such as gradient boosting, fleet learning, transfer learning, and deep learning are discussed, highlighting how each approach addresses different challenges in modeling equipment RUL.
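As a side note for readers new to the area (this sketch is ours, not the speaker's), data-driven RUL prediction typically starts by deriving labels from run-to-failure histories: at each cycle of a unit's life, the target is the number of cycles remaining until failure. A minimal Python sketch with hypothetical data:

```python
# Derive Remaining Useful Life (RUL) labels from run-to-failure data.
# Each unit's history is a list of sensor readings, one per cycle;
# the unit fails after its last recorded cycle.

def rul_labels(histories):
    """Return (features, rul) pairs, where RUL = cycles left until failure."""
    samples = []
    for unit_id, cycles in histories.items():
        failure_cycle = len(cycles)  # unit fails after its last cycle
        for t, sensors in enumerate(cycles, start=1):
            samples.append((sensors, failure_cycle - t))
    return samples

# Toy run-to-failure histories for two units (hypothetical sensor values)
histories = {
    "unit_1": [[0.1], [0.3], [0.7]],         # fails after cycle 3
    "unit_2": [[0.2], [0.4], [0.6], [0.9]],  # fails after cycle 4
}
pairs = rul_labels(histories)
print(pairs[0])  # ([0.1], 2): two cycles remain for unit_1 at cycle 1
```

Models such as the gradient boosting and deep learning approaches discussed in the talk are then fit to these (features, RUL) pairs.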

ML/AI
9:54 AM – 10:04 AM
Modeling Storm Surges using Machine Learning
Dr. Meghana Nagaraj, Postdoctoral Scholar, Civil, Environmental, and Construction Engineering

Storm surge is a primary driver of coastal flooding, induced by changes in mean sea-level pressure and winds from tropical/extratropical cyclones. Tide gauges measure relative sea level and provide high-frequency observations, but records vary in length and completeness. Short or incomplete records can introduce uncertainties in representing extremes, yet estimating the associated probabilities is critical for coastal design and adaptation. To overcome these challenges, storm surge records can be reconstructed using hydrodynamic models or statistical/data-driven approaches. We developed a machine learning-based framework to simulate storm surges at multiple tide gauges simultaneously, incorporating localized features. We use a Long Short-Term Memory model and demonstrate its applicability for the coast of Florida using UCF’s high-performance computing resources. This study enables extension of surge records from the 1940s to the present, providing a robust and scalable framework to reduce uncertainty and support flood hazard assessments, infrastructure planning, and long-term adaptation.
Authors: Meghana Nagaraj (1,2), Alejandra Enriquez (3,4), Thomas Wahl (1,2). (1) Department of Civil, Environmental and Construction Engineering, University of Central Florida, Orlando, FL, USA; (2) National Center for Integrated Coastal Research, University of Central Florida, Orlando, FL, USA; (3) Institute for Environmental Studies (IVM), Vrije Universiteit Amsterdam, Amsterdam, the Netherlands; (4) School of Geosciences, College of Arts & Sciences, University of South Florida, St. Petersburg, FL 33701, USA
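As an illustrative aside (our sketch, not the authors'), sequence models such as the LSTM mentioned here are typically trained on sliding windows of past predictors (e.g., pressure and wind) mapped to the surge at the next time step. A minimal windowing sketch with hypothetical series:

```python
# Build sliding-window training samples for a sequence model:
# each sample is the last `window` time steps of predictors,
# and the target is the surge value at the following step.

def make_windows(predictors, surge, window):
    X, y = [], []
    for t in range(window, len(surge)):
        X.append(predictors[t - window:t])  # e.g., [pressure, wind] per step
        y.append(surge[t])
    return X, y

# Toy hourly series: [mean sea-level pressure anomaly, wind speed]
predictors = [[0.1, 2.0], [0.2, 2.5], [0.0, 3.0], [0.3, 3.5], [0.1, 4.0]]
surge = [0.05, 0.07, 0.10, 0.16, 0.22]

X, y = make_windows(predictors, surge, window=3)
print(len(X), y)  # 2 samples; the targets are the last two surge values
```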

ML/AI
10:06 AM – 10:16 AM
Accelerating LLM serving via hardware-software co-design
Dr. Jun Wang, Professor, Electrical Engineering and Computer Science; IEEE Fellow

Large Language Models (LLMs) are central to modern AI, yet their deployment for real-time inference faces significant performance bottlenecks. This talk presents our algorithm-system co-design approach to overcome two critical challenges: the GPU memory bottleneck due to the expansive Key-Value (KV) cache, and the scheduling bottleneck caused by Head-of-Line blocking. We introduce ALISA, which utilizes Sparse Window Attention (SWA) and a dynamic scheduler to reduce KV cache memory by up to 80%, boosting throughput by 1.4x-3.0x. Concurrently, ALISE employs speculative output length prediction and a multi-level priority queue with dynamic swapping to eliminate Head-of-Line blocking, achieving up to 3.1x higher throughput. Together, ALISA and ALISE synergistically enhance LLM inference efficiency, scalability, and cost-effectiveness, paving the way for more responsive and accessible AI applications.
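The abstract does not give implementation details, but the general idea behind sparse window attention can be sketched as a mask that restricts each query token to a recent local window (plus, in many sparse-attention variants, a few always-kept initial tokens); ALISA's actual algorithm may differ. A toy construction:

```python
# Sketch of a sparse-window attention mask (assumption: causal local
# window plus a few always-kept initial tokens, as in many sparse-attention
# variants; the actual ALISA algorithm may differ).

def sparse_window_mask(seq_len, window, keep_initial):
    """mask[i][j] is True if query token i may attend to key token j."""
    mask = [[False] * seq_len for _ in range(seq_len)]
    for i in range(seq_len):
        for j in range(seq_len):
            causal = j <= i               # no attending to future tokens
            local = i - j < window        # within the recent window
            initial = j < keep_initial    # always-kept initial tokens
            mask[i][j] = causal and (local or initial)
    return mask

mask = sparse_window_mask(seq_len=6, window=2, keep_initial=1)
# Token 5 attends only to itself, token 4, and the first token:
print([j for j in range(6) if mask[5][j]])  # [0, 4, 5]
```

Because only the masked-in keys and values must be retained, the KV cache grows with the window size rather than the full sequence length, which is the source of the memory savings the talk describes.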

ML/AI
10:18 AM – 10:28 AM
From Scripts to Screens: Harnessing Cloud + AI to Scale E-Learning
Dr. Manuel Rivera, Assistant Dean of Research, Rosen College; Director, UCF Hospitality+ Center for Innovation and Training

This presentation reports on a collaboration between the UCF Hospitality+ Center for Innovation and Training at the Rosen College and the UCF Research Cyberinfrastructure Team to develop a cloud-based, AI-enabled pipeline for scalable e-learning production. Addressing the challenge of producing hundreds of instructional videos for international professional audiences, we implemented a two-stage Extract–Transform–Load (ETL) workflow in partnership with the UCF Cyber team, leveraging AWS and Azure OpenAI services. Faculty-led teams curated learning objectives, scripts, and visual assets, which were programmatically transformed into structured JSON, validated, and processed through automated APIs for AI-driven video generation. Iterative quality control ensured academic accuracy while optimizing efficiency and cost through cloud infrastructure. This work contributes to the emerging uses of applied AI in higher education by demonstrating a reproducible model for integrating faculty expertise with advanced computational resources. The case illustrates how the UCF Hospitality+ Center for Innovation and Training can extend instructional capacity, reduce time-to-production, and maintain rigor when deploying cloud and AI technologies for global e-learning initiatives.
Collaborators: Margaret Zorrilla (UCF Hospitality+ Center for Innovation and Training), Ezequiel Gioia (UCF Research Cyberinfrastructure), Paola Canales-Bigio (UCF Research Cyberinfrastructure), Fahad Khan, Ph.D. (UCF Research Cyberinfrastructure), Nafisa Islam (UCF Research Cyberinfrastructure)

Cloud computing AI
10:30 AM – 10:40 AM
Data-Driven Insights into the Degradation of Photovoltaic and Electronic Materials
Dr. Kristopher Davis, Associate Professor, Materials Science & Engineering

Photovoltaics and electronics are both industries that value high reliability and long-term durability, often in the face of harsh environmental conditions. Performance-limiting degradation is often monitored and tracked, but preventing it requires a causal understanding of the underlying mechanisms that often originate at the micro- or even nano-scale. This work addresses this challenge by applying modern computer and data science methods to high-volume, multimodal data (e.g., text, curves, images) collected across multiple length scales, from meters down to nanometers. Integrating domain expertise and semantic meaning into our analytical workflows transforms these data streams into actionable insights. Furthermore, leveraging automation and cloud computing where possible ensures this approach is scalable.

HPC Cloud Computing ML/AI
11:00 AM – 11:10 AM
Scaling Genomics at UCF: Building a Core for Data-Driven Discovery and Training
Dr. Taj Azarian, Assistant Professor of Medicine, Burnett School of Biomedical Sciences; Lead for Pathogen Genomics Sequencing Core

The UCF Genomics and Bioinformatics Core was established to meet the growing demand for high-throughput sequencing and computational analysis across biomedical, environmental, and translational research. In this talk, I will highlight our vision for the Core as both a research enabler and a teaching platform, supporting investigators through end-to-end workflows—spanning nucleic acid extraction, Illumina and Oxford Nanopore sequencing, and advanced transcriptomic analysis. Our close integration with UCF’s Advanced Research Computing Center (ARCC) allows us to scale computational pipelines for genome assembly, differential expression analysis, and microbial ecology. I will discuss how we are leveraging high-performance computing and open-source tools to support reproducible bioinformatics workflows, onboard new users, and pilot ML/AI applications in genomics. In addition, I will outline our trajectory toward sustainable shared research infrastructure and workforce development, including student training, summer institutes, and collaborative data science initiatives.

HPC
11:12 AM – 11:22 AM
Panthers, pipelines, and pixels: HPC-enabled conservation genomics
Dr. Robert Fitak, Assistant Professor, Department of Biology, and Member of the Genomics and Bioinformatics Cluster

Dr. Fitak’s lab performs cutting-edge research that integrates a variety of genomic, statistical, and computational techniques to characterize unique biological traits and conserve the biodiversity in which they persist. This presentation highlights multiple case studies where high-performance computing (HPC) has enabled and advanced conservation insights. The first case study evaluates the effects of genetic restoration in Florida panthers, a landmark conservation effort that reversed inbreeding depression through translocation. Next, a few examples of how computing pipelines – powered by HPC – are streamlining the generation of genomic resources for endangered species will be presented. The presentation will end with a short introduction to how techniques that sketch images from DNA datasets can be combined with machine learning for rapid conservation inferences. Together, these studies demonstrate how HPC advances conservation biology and biodiversity studies.

HPC
11:24 AM – 11:34 AM
Leveraging the UCF HPC Cluster for Advanced Computational Studies: Applications in Fluid Dynamics and Combustion
Dr. Ritesh Ghorpade, Postdoctoral Scholar, Center for Advanced Turbomachinery & Energy Research

Computational studies are essential for validating experimental data, facilities, and models, and Vasulab has extensively utilized the University of Central Florida’s (UCF) High-Performance Computing (HPC) cluster to advance this work. Our projects demand intensive resources, including jet characterization in supercritical carbon dioxide (sCO₂), crucial for next-generation power cycles. We also analyzed ignition delay times using CONVERGE CFD, a key parameter in combustion optimization, with results further examined using advanced visualization tools. The cluster enabled high-fidelity sCO₂ velocity computations in confined channels and complex acoustic simulations with ANSYS Fluent, where GPU acceleration significantly reduced processing times. Depending on complexity, simulations ranged from one hour to over a week, highlighting the cluster’s versatility. Access to robust HPC resources has been pivotal in addressing large-scale, complex problems, supporting both numerical model validation and experimental design, and underscoring the critical role of advanced computing in modern scientific discovery.
Authors: Ritesh Ghorpade, Subith Vasu

HPC
11:36 AM – 11:46 AM
High-throughput computation-guided design of catalytic materials
Dr. Shyam Kattel, Assistant Professor, Physics

Clean and sustainable energy development is a global challenge, with catalytic materials forming the backbone of many clean energy technologies. These materials facilitate essential reactions that produce value-added chemicals and usable fuels, addressing the surging energy demand driven by demographic and industrial expansion. To optimize these technologies, a fundamental grasp of catalytic mechanisms is crucial. Computational modeling, with its predictive power and atomic-level precision, has become a cornerstone in materials research and is poised to drive future breakthroughs in energy innovation. In this talk, I will showcase a few examples of high-throughput computation-guided design of catalytic materials for the sustainable generation of clean fuels and chemicals. In particular, I will focus on my group’s efforts to integrate various computational modeling methods, including first-principles density functional theory calculations and machine learning, to understand surface chemical reactions and lay out catalyst design principles.

HPC ML/AI
11:48 AM – 11:58 AM
TBD
Dr. Bulent Soykan

TBD

12:00 PM – 12:10 PM
Software Stack Design for Quantum Computing
Dr. Siyuan Niu, Assistant Professor, Electrical and Computer Engineering

Quantum computing holds great promise for addressing classically intractable problems. As an emerging field, it has made remarkable progress in recent years, with numerous quantum algorithms and hardware platforms based on different qubit modalities. However, no practical quantum applications have yet been realized, primarily due to the high noise levels in current quantum hardware.
In this talk, I will discuss how software can help quantum computers move closer to achieving quantum advantage. In particular, I will present techniques for reducing noise through quantum circuit optimization, as well as methods for error mitigation and correction. Building on these techniques, I will also share recent benchmarking efforts designed to evaluate quantum applications and hardware systems, helping us assess how far we are from achieving practical quantum advantage.

Quantum computing
Digital Twins and AI – A Look at the Next Five Years and How They Will Transform Every Industry (NVIDIA/Mark III)

Come to this session to see how building a Digital Twin Center of Excellence, anchored by NVIDIA Omniverse, can dramatically accelerate research by bringing researchers, both traditional and non-traditional, across institutions into shared virtual spaces so that they can view, analyze, discover, and publish their breakthroughs and findings faster and through real-time collaboration. We’ll walk through actual examples and practical case studies to see how you can get started or accelerate the use of digital twins and visualization, infused with AI, in your research and simulations. One of the examples we’ll look at is around a digital twin that’s currently being built of UF Health Shands hospital in Gainesville, including the backstory, team, and tactical steps that it took and will take to launch a first-of-its-kind digital twin based right here in the state.

2:00 PM – 2:10 PM
Seeing the Senses: Modeling Cross-Sensory Expectations from Product Images
Aarushi Aarushi, Ph.D. Candidate, Business Administration (Marketing)

Images shape how consumers evaluate products, yet marketers lack tools to test whether and how those visuals create sensory expectations. This research adapts and fine-tunes a predictive model that links product images to expected sensory qualities such as sound, smell, taste, and texture. Using large-scale human annotations and deep learning techniques, our model can understand and forecast how consumers develop cross-sensory expectations in response to images. This tool helps firms improve image selection, reduce expectation gaps, and personalize content. Theoretically, the work advances grounded cognition (Barsalou, 2008), predictive coding (Clark, 2013), and affordance theory (Gibson, 1979) by modeling how consumers infer multisensory experiences from images.

HPC ML/AI
2:12 PM – 2:22 PM
TBD
Dr. Carolina Cruz-Neira

TBD

2:24 PM – 2:34 PM
The CORES Lab: Harnessing HPC and AI for National Challenges
Dr. Deborah Penchoff, Assistant Professor in the Department of Chemistry; Senior Fellow at the Institute for Nuclear Security

This presentation will highlight current research in the Computational Research (CORES) Lab. The CORES Lab operates at the intersection of applied high-performance computing (HPC), data science, and artificial intelligence (AI), developing strategies to address national challenges. Our research encompasses a range of critical applications, including radiochemical separations, the purification of radiotherapeutics, and the recovery of rare earth elements and critical minerals. We specialize in leveraging HPC and the design, development, and application of AI for solutions in radiochemistry, nuclear forensics, nuclear nonproliferation, and the optimization of circular bioeconomy systems. By employing computational models, we investigate chemical binding and design novel ligands to enhance separation efficiency. The CORES Lab extends its expertise to smart agriculture, creating data-driven solutions for crop optimization and soil sciences. Central to our mission is continual innovation in computational protocol design, enabling us to build powerful, predictive tools that accelerate discovery across these domains.

HPC ML/AI
2:36 PM – 2:46 PM
TBD
Dr. Sazadur Rahman

TBD

For more information about the Student Poster Session, click here: https://rci.research.ucf.edu/2025-annual-ucf-research-computing-symposium-call-for-posters/
Below are the posters and their presenters:

Revealing the Active Site Local Atomic Environment of Oxide-supported Ag Single Atom Catalyst
Syeda Faiza Rubab Sherazi

Single-atom catalysts (SACs) supported on metal oxide surfaces are promising candidates for various reactions. Determining the relationship between the active site’s local atomic coordination and its catalytic performance is important for designing SACs. Here, we apply the ab initio thermodynamics approach to investigate the local coordination of Ag atoms stabilized on CeO2(110), ZrO2(1̄11), and Al2O3(111) through calculated phase diagrams, and examine NH3 dissociation on the resulting Ag SACs. We find that for the CeO2(110)-supported Ag SAC structure, one nearby surface oxygen vacancy is the most thermodynamically favorable, while for the ZrO2(1̄11)- and Al2O3(111)-supported ones, no nearby oxygen vacancy is the most thermodynamically favorable. Our results also show that oxygen vacancy formation is spontaneous near the Ag atom when supported by CeO2(110), while non-spontaneous when supported by ZrO2(1̄11) and Al2O3(111). We compare the energetics of NH3 adsorption and dissociation on Ag/CeO2(110) with those on Ag/ZrO2(1̄11) and Ag/Al2O3(111), finding that the first hydrogen abstraction is most facile on Ag/CeO2(110). We trace the catalytic behavior of these oxide-supported Ag SACs to differences in coordination, charge states, and the availability of unoccupied density of states of the Ag atoms, as well as the distance from a H atom of the adsorbed NH3 to nearby surface O atoms. We will discuss the good qualitative agreement of these results with experimental data that we have obtained.
Authors: S. Faiza Sherazi, Duy Le, Kailong Ye, Shaohua Xie, Fudong Liu and Talat S. Rahman

A Mid-Res Grid of Contribution Functions Characterizing Brown Dwarf Atmospheres
Myrla Phillippe

Using a state-of-the-art radiative transfer code, a cloud modeling code, and UCF’s Stokes high-performance computing (HPC) cluster, we created a grid of contribution functions for model brown dwarf atmospheres from 500 K to 2000 K. Our grid covers a wide range of cloud properties, temperatures, and gravities, explicitly considering both cloudy and cloud-free models. This enables us to examine how clouds alter the pressures probed across molecular and atomic features. We will present the implications of our grid for the first systematic pressure-mapping effort of brown dwarf atmospheres across a wide range of brown dwarf properties. We will discuss optimal observation strategies to study the 3D structure of brown dwarf atmospheres. We will demonstrate how our grid can contribute to the characterization of three brown dwarfs using archival HST (Hubble Space Telescope) observations, and discuss how we can use it for planning JWST (James Webb Space Telescope) observations. The results of this work have been used to produce a user-friendly tool hosted on the cloud-based Jetstream2 (NSF ACCESS) and will be made available to the community through a UCF domain, ensuring secure access.
Authors: Myrla Phillippe, Theodora Karalidi, Elena Manjavacas, Kieran M Manjrawala, Natalia Oliveros Gomez, Jonathan Fernandez

Dynamic Circuit Compilation for Sparse Qubit Connectivity
Sumeet Shirgure

In this work, we show how dynamic circuit compilation methods can transform a densely connected circuit into an ancilla-mediated, sparsely connected dynamic circuit that includes mid-circuit measurement and feedforward operations.
This sparse connectivity is better suited to current quantum hardware, which often has limited qubit connectivity. Compared to compilation methods focused on unitary circuits, our method can reduce both circuit depth and the additional CNOTs required to execute non-adjacent qubit interactions.
Authors: Sumeet Shirgure, Siyuan Niu

Leveraging Cloud Infrastructure and AI APIs for Scalable E-Learning Production
Margaret Zorrilla

This project explores the integration of cloud computing and artificial intelligence (AI) to develop a scalable, reproducible workflow for automated e-learning video production. Conducted through a collaboration between the UCF Hospitality+ Center for Innovation and Training and the UCF Research Cyberinfrastructure team, the initiative leverages cloud-native architecture to streamline instructional content development while preserving academic rigor.
A two-stage Extract–Transform–Load (ETL) pipeline was implemented using Amazon Web Services (AWS), Azure OpenAI, and Synthesia. In Pipeline 1, faculty-generated training script objectives—aligned with Bloom’s taxonomy—along with contextual examples and instructional framing, were uploaded in MS Word format to AWS. These documents were automatically extracted and transformed into structured JSON files. Azure OpenAI expanded and validated the content into human-readable scripts. A human-in-the-loop review followed to ensure instructional clarity, pedagogical integrity, and alignment with learning outcomes.
In Pipeline 2, the validated Word documents were uploaded to AWS, reformatted into JSON, and transmitted via API calls to Synthesia. This enabled fully automated video generation with standardized outputs featuring customizable AI avatars, multilingual narration, and dynamic graphic elements. Faculty-specific avatars and voices were developed to enhance authenticity, while template customization supported diverse instructional goals and branding.
This architecture demonstrates how cloud-enabled AI workflows can extend beyond traditional computational domains to support education and workforce training. The integration of automation with human oversight offers a reproducible model for scalable digital content creation across disciplines.
Authors: Manuel Rivera, Ph.D. (Assistant Dean of Research, Rosen College; Director, UCF Hospitality+ Center for Innovation and Training), Margaret Zorrilla, MA (Project Manager, UCF Hospitality+ Center for Innovation and Training), Ezequiel Gioia (UCF Research Cyberinfrastructure), Paola Canales-Bigio (UCF Research Cyberinfrastructure)
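As a simplified illustration of the extract-and-validate stage described above (field names are hypothetical, not the project's real schema), a script document can be mapped into structured JSON and checked for required fields before any API call:

```python
import json

# Sketch of the extract/validate stage of a two-stage ETL pipeline:
# faculty content is mapped into structured JSON and checked for the
# fields a downstream video-generation API call would need.
# (Field names here are hypothetical, not the project's real schema.)

REQUIRED_FIELDS = ["title", "learning_objectives", "script"]

def to_structured_json(doc):
    """Transform an extracted document dict into a validated JSON string."""
    record = {
        "title": doc.get("title", "").strip(),
        "learning_objectives": doc.get("objectives", []),
        "script": doc.get("script", "").strip(),
    }
    missing = [f for f in REQUIRED_FIELDS if not record.get(f)]
    if missing:
        raise ValueError(f"missing required fields: {missing}")
    return json.dumps(record, indent=2)

doc = {
    "title": "Module 1: Guest Service Basics",
    "objectives": ["Define service recovery", "Apply the LAST model"],
    "script": "Welcome to Module 1...",
}
payload = to_structured_json(doc)  # ready for an API request body
```

Rejecting incomplete records before the generation step is what makes the human-in-the-loop review tractable: only validated, fully specified scripts reach the video API.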

Platform-Agnostic Modular Architecture for Quantum Benchmarking: Enabling Custom Execution and Flexible Analysis
Neer Patel

We present a platform-agnostic modular architecture that addresses the increasingly fragmented landscape of quantum computing benchmarking by decoupling problem generation, circuit execution, and results analysis into independent, interoperable components. Supporting over 20 benchmark variants ranging from simple algorithmic tests like Bernstein-Vazirani to complex Hamiltonian simulation with observable calculations, the system integrates with multiple circuit generation APIs (Qiskit, CUDA Quantum, Cirq) and enables diverse workflows. We validate the architecture through successful integration with Unitary Foundation’s metriq-gym for execution and results publishing, Sandia’s pyGSTi for advanced circuit analysis, and CUDA Quantum for multi-GPU HPC simulations. Extensibility of the system is demonstrated by implementing dynamic circuit variants of existing benchmarks and a new quantum reinforcement learning benchmark, which become readily available across multiple execution and analysis modes. Our primary contribution is identifying and formalizing modular interfaces that enable interoperability between incompatible benchmarking frameworks, demonstrating that standardized interfaces reduce ecosystem fragmentation while preserving optimization flexibility. This architecture has been developed as a key enhancement to the continually evolving QED-C Application-Oriented Performance Benchmarks for Quantum Computing suite.
Authors: Neer Patel, Anish Giri, Hrushikesh Pramod Patil, Noah Siekierski, Vincent Russo, Changhao Li, Alessandro Cosentino, Nathan Shammah, Avimita Chatterjee, Sonika Johri, Timothy Proctor, Thomas Lubinski, Siyuan Niu
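To illustrate the decoupling the poster describes (interface names here are hypothetical, not the suite's actual API), the three stages can be modeled as independent protocols so that any generator can feed any executor and any analyzer:

```python
from typing import Protocol, Any

# Sketch of decoupled benchmarking stages: problem generation, circuit
# execution, and results analysis as independent, interoperable protocols.
# (Interface names are hypothetical, not the actual QED-C or metriq-gym API.)

class Generator(Protocol):
    def generate(self, size: int) -> Any: ...

class Executor(Protocol):
    def run(self, circuit: Any) -> dict: ...

class Analyzer(Protocol):
    def score(self, counts: dict) -> float: ...

def run_benchmark(gen: Generator, exe: Executor, ana: Analyzer, size: int) -> float:
    return ana.score(exe.run(gen.generate(size)))

# Toy implementations standing in for, e.g., a Qiskit circuit generator,
# a simulator backend, and a success-probability analyzer.
class ToyGen:
    def generate(self, size): return f"bv_circuit_{size}"

class ToySim:
    def run(self, circuit): return {"000": 90, "111": 10}

class SuccessRate:
    def score(self, counts): return counts["000"] / sum(counts.values())

print(run_benchmark(ToyGen(), ToySim(), SuccessRate(), size=3))  # 0.9
```

Because the stages only agree on data passed between them, swapping in a different executor (a real backend instead of ToySim) or analyzer requires no change to the others, which is the interoperability claim the poster formalizes.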

Prompt Optimization Meets Subspace Representation Learning for Few-shot Out-of-Distribution Detection
Faizul Rakib Sayem

The reliability of artificial intelligence (AI) systems in open-world settings depends heavily on their ability to flag out-of-distribution (OOD) inputs unseen during training. Recent advances in large-scale vision-language models (VLMs) have enabled promising few-shot OOD detection frameworks using only a handful of in-distribution (ID) samples. However, existing prompt learning-based OOD methods rely solely on softmax probabilities, overlooking the rich discriminative potential of the feature embeddings learned by VLMs trained on millions of samples. To address this limitation, we propose a novel context optimization (CoOp)-based framework that integrates subspace representation learning with prompt tuning. Our approach improves ID-OOD separability by projecting the ID features into a subspace spanned by prompt vectors, while projecting ID-irrelevant features into an orthogonal null space. To train such OOD detection framework, we design an easy-to-handle end-to-end learning criterion that ensures strong OOD detection performance as well as high ID classification accuracy. Experiments on real-world datasets showcase the effectiveness of our approach.
Authors: Faizul Rakib Sayem, Shahana Ibrahim
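As a toy illustration of the subspace idea (not the paper's implementation), one can project a feature vector onto the span of the prompt vectors and use the orthogonal residual as an OOD signal: in-distribution features should have most of their energy inside the subspace.

```python
import numpy as np

# Toy sketch of subspace-based OOD scoring (not the paper's method):
# project features onto the span of prompt vectors; the norm of the
# orthogonal residual serves as an out-of-distribution score.

def ood_score(feature, prompt_vectors):
    """Residual energy outside the prompt subspace (higher = more OOD)."""
    V = np.asarray(prompt_vectors).T      # columns span the subspace
    P = V @ np.linalg.pinv(V)             # orthogonal projector onto span(V)
    residual = feature - P @ feature      # component in the orthogonal complement
    return float(np.linalg.norm(residual))

prompts = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]  # subspace = x-y plane
in_dist = np.array([0.8, 0.6, 0.0])           # lies inside the subspace
ood = np.array([0.1, 0.1, 0.9])               # mostly orthogonal to it

print(ood_score(in_dist, prompts))  # ~0.0
print(ood_score(ood, prompts))      # ~0.9
```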

Computational Pathways to AI Literacy: Student Engagement with Generative AI in Hospitality Education
Marcelo Viejo Rubio

The rapid integration of generative artificial intelligence (AI) into higher education raises urgent questions about how students perceive, apply, and evaluate these tools in professional training contexts. This project examines computationally enabled learning in HMG 6449 Smart Travel Tourism, a graduate course where 62 students engaged in AI-driven assignments across two semesters (Fall 2023 and Fall 2024). Students generated marketing campaigns, designed sustainable tourism proposals, and critically reflected on AI’s role in shaping creativity, judgment, and career identity.
To analyze these data, we applied a six-step thematic analysis process supported by computational text analysis and qualitative coding, producing a conceptual model of student AI engagement. Results demonstrate how students developed selective prompting strategies, multi-model consensus methods, and bias-detection skills, which function as computational practices that sharpen critical thinking and reveal discipline-specific challenges such as localization, cultural adaptation, and ethical boundary-setting. Students further internalized AI use as part of their professional identity, reporting both increased career confidence and moments of impostor syndrome modulation.
The study contributes to the design of scalable AI-integrated curricula, including a scaffolded AI-literacy sequence, cultural-sensitivity modules, and reflective identity journals. Beyond hospitality education, this work illustrates how computational tools can be systematically embedded into coursework to enhance AI literacy, foster responsible adoption, and align student learning with industry-ready competencies. By bridging pedagogy and computation, the project demonstrates pathways for expanding AI training across disciplines and professions.
Authors: Marcelo Viejo Rubio, Dr. Arthur Huang

Gaming for Justice: A Gamified Storytelling Mobile Prototype App for Hospitality Anti-Human Trafficking Interventions
Aili Wu

Human trafficking remains a critical challenge in hospitality, one of the main venues for this crime. Yet no prior research has tested interactive ways to help hospitality consumers (often key bystanders) recognize and respond to trafficking. This study addresses that gap by designing and testing a gamified storytelling mobile app that builds knowledge and confidence.
The app combines four elements to capture attention and deepen learning. Avatar embodiment places players in the role of a detective, giving them a first-person view of the investigation and creating an immersive, playful experience. A clear, suspenseful storyline moves from noticing suspicious activity to catching the trafficker, keeping the game exciting yet easy to follow. Vivid visuals make scenes feel real and memorable, while interactive choices let players steer the story and control the action. A total of 378 hospitality consumers recruited through Prolific played the game and completed a survey. Analysis using PLS-SEM shows that this gamified storytelling approach sparks both enjoyment and cognitive engagement, creating dual pathways that significantly enhance learning and memory. Participants reported stronger knowledge retention, greater confidence in handling trafficking scenarios, and higher willingness to intervene as bystanders. This research shows that gamified storytelling goes beyond training hospitality consumers. It offers a scalable, engaging approach to raise public awareness and inspire action against human trafficking. Breaking communication barriers around this sensitive issue empowers individuals, supports industry efforts, and informs the wider community and policy initiatives, contributing to meaningful social change well beyond the hospitality sector.
Authors: Aili Wu; Dr. Wei Wei

Machine Learning-Accelerated Discovery of Single-Atom Catalysts for Hydrogen Evolution Reaction
Chidozie Ezeakunne

The hydrogen evolution reaction (HER), a half-reaction of water splitting, plays a crucial role in the efficient generation of green hydrogen, a key step toward building a sustainable energy economy. However, progress has been impeded by reliance on costly and scarce precious metal-based catalysts, highlighting an urgent need to identify high-performance, cost-effective alternatives. To address this challenge, this work introduces a computational framework that combines first-principles density functional theory (DFT) calculations with machine learning (ML) to explore the vast chemical space of single-atom catalysts (SACs) supported on low-cost transition metal nitride and carbide supports. Metal carbides and nitrides were chosen as hosts given their intrinsic stability and electronic properties as supports for atomically dispersed atoms to maximize catalytic efficiency. Our high-throughput DFT screening systematically evaluated ~2,000 unique structures. The Gibbs free energy of hydrogen adsorption (ΔG_H*) was calculated as the primary descriptor of HER activity, where a value close to 0 eV indicates an optimal balance of H binding strength. This comprehensive screening identified several highly active HER catalyst candidates with near-ideal ΔG_H* values. Using the DFT-calculated data as a training set, we developed a predictive ML model to accelerate future discovery of single-atom catalysts with accuracy comparable to DFT calculations. Our ML model successfully learns the underlying physical relationships between elemental properties and catalytic activity, enabling the rapid and accurate screening of new SACs.
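As an illustration of the descriptor-based screening step, the sketch below applies the commonly used zero-point-energy/entropy correction ΔG_H* ≈ ΔE_H + 0.24 eV and keeps candidates near the thermoneutral optimum. The candidate names and adsorption energies are hypothetical, not the study's DFT data.

```python
# Hedged sketch of HER descriptor screening: dG_H* = dE_H + (dE_ZPE - T*dS),
# with the 0.24 eV correction commonly used in the HER literature at ~300 K.
# The candidate energies below are illustrative, not the study's DFT results.

ZPE_TS_CORRECTION = 0.24  # eV

def gibbs_h_adsorption(dE_H: float) -> float:
    """Gibbs free energy of hydrogen adsorption in eV."""
    return dE_H + ZPE_TS_CORRECTION

# hypothetical DFT adsorption energies (eV) for illustration only
candidates = {"Pt1/TiN": -0.30, "Co1/Mo2C": -0.18, "Ni1/VN": 0.12}

# keep candidates whose |dG_H*| is within 0.1 eV of thermoneutral (0 eV)
active = {name: gibbs_h_adsorption(e)
          for name, e in candidates.items()
          if abs(gibbs_h_adsorption(e)) <= 0.10}
print(active)
```

In a real pipeline the dictionary would be replaced by the ~2,000 DFT-evaluated structures, with the same near-zero filter selecting the shortlist.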
Authors: Chidozie Ezeakunne, Shyam Kattel

Inverse Scattering Problem for Variable Thin Coating Reconstruction using Generalized Impedance Boundary Conditions
Isabela Viana

We solve the inverse scattering problem of reconstructing the thickness variation of a thin coating on a perfectly electric conducting domain using measurements of the field scattered off the obstacle by impinging incident waves. The problem is nonlinear, ill-posed, and computationally expensive. An important application of this problem is the monitoring of fuel rods, where deposits form thin layers that are difficult to inspect directly. Noninvasive methods to estimate coating thickness are therefore essential for safety assessment. Wave propagation is modeled by the Helmholtz equation with generalized impedance boundary conditions (GIBC), subject to the Sommerfeld radiation condition, and the data are generated by calculating the scattered field from plane waves. Since the transmission problem with thin layers that models our setting is computationally costly, we use the GIBC model, which provides an efficient approximation. For solving the forward problem numerically, we use boundary integral equations accelerated by fast algorithms. The inverse problem is recast as a nonlinear least-squares problem solved with the Gauss–Newton method and a bandlimited regularization. Numerical experiments on star-shaped geometries with thin coatings of variable width show that the method reconstructs the coating thickness with a small error, demonstrating that the approach provides accurate and efficient recovery of variable thin coatings from scattering data.
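The Gauss–Newton step can be sketched on a toy nonlinear least-squares problem. The exponential model and the small Tikhonov-style damping term below are illustrative stand-ins for the actual Helmholtz/GIBC forward solve and the bandlimited regularization used in the poster.

```python
import numpy as np

# Illustrative Gauss-Newton iteration for min_x ||F(x)||^2. The toy residual
# fits y = a*exp(b*t) to data; in the study the forward map is a boundary
# integral solve of the Helmholtz/GIBC scattering problem, and the damping
# term stands in for the bandlimited regularization.

def residual(x, t, y):
    a, b = x
    return a * np.exp(b * t) - y

def jacobian(x, t):
    a, b = x
    col_a = np.exp(b * t)            # d residual / d a
    col_b = a * t * np.exp(b * t)    # d residual / d b
    return np.stack([col_a, col_b], axis=1)

def gauss_newton(x0, t, y, reg=1e-8, iters=20):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x, t, y)
        J = jacobian(x, t)
        # solve regularized normal equations (J^T J + reg I) dx = -J^T r
        dx = np.linalg.solve(J.T @ J + reg * np.eye(len(x)), -J.T @ r)
        x = x + dx
    return x

t = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(-1.5 * t)           # noise-free synthetic data
x_hat = gauss_newton([1.0, -1.0], t, y)
print(x_hat)  # approximately [2.0, -1.5]
```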
Authors: Isabela Viana, Carlos Borges

Data-Driven Insights into the Degradation and Failure of Electronic Materials using Interdigitated Comb Sensors
Janice Yeung

The longevity of electronic systems depends on the reliability of their components. For critical applications such as nuclear systems, conventional maintenance practices rely on manual electrical inspections. While effective, this approach is labor-intensive and time-consuming, requiring technicians to perform routine site visits. Interdigitated comb (IDC) sensors provide a cost-effective, data-driven approach for monitoring electronic component health and detecting potential failures. The implementation of IDC sensors alongside electronic components enables in situ monitoring and allows technicians to prioritize essential tasks.
In collaboration with Sandia National Laboratories, a standardized workflow was established to evaluate IDC sensors. Samples were first optically imaged in the pristine state, and baseline electrical properties were collected. Each IDC board was then submerged in deionized water or weak organic acid of varying concentrations while subjected to a voltage bias. Current-time data were collected in situ until the onset of failure or short circuit. After drying, post-exposure imaging and electrical measurements were performed.
Quantitative analysis of optical images and electrical data was conducted using custom Python packages developed for IDC evaluation. Comparison of pristine and exposed samples revealed significant changes in electrical properties, including increased capacitance after submersion, while RGB and grayscale analysis indicated a shift in color channels. These trends are consistent with material degradation such as dendritic growth. These data-driven correlations between electrical behavior and image features provide important insights into IDC degradation mechanisms, establishing a foundation for predictive monitoring of electronic materials.
Authors: Janice Yeung, Jarod Kaltenbaugh, Max Liggett, Matthew A. Kottwitz, J Elliott Fowler, Alp Sehirloglu, Roger H. French, Kristopher O. Davis

Curriculum Learning for Inverse Scattering Problems
Nickolas Arustamyan

We address the inverse acoustic obstacle scattering problem in 2D for sound-soft obstacles, aiming to reconstruct the obstacle’s boundary from scattered field measurements. Building on the neural network approach introduced in [1], which outperformed the classical linear sampling method (LSM) for initial domain estimation, we propose a curriculum learning strategy to enhance training. By progressively incorporating multifrequency data—from low to high frequencies—our method improves accuracy, robustness to noise, and efficiency. Compared to [1], this approach yields faster, more reliable training with reduced computational cost.
Authors: Dr. Carlos Borges, Nickolas Arustamyan, Dr. Jeremy Hoskins

DART: Distributed Assignment of Research Tasks for Heterogeneous Compute Environments
Abdur Rouf

In today’s diverse and dynamic research computing environments, scientists face significant challenges in managing computational tasks across a wide range of systems, from personal machines and cloud virtual machines (VMs) to institutional high-performance computing (HPC) clusters. This paper introduces DART, a lightweight, platform-agnostic job distributor tailored for academic research settings. The distributor is designed for minimal setup, broad compatibility, and seamless job submission, monitoring, and rescheduling across distributed resources, reducing research turnaround time and enhancing fault tolerance. Unlike traditional orchestration tools that rely on uniform infrastructure or require complex configurations, our system focuses on simplicity and portability. We detail the architecture, design rationale, and specific use cases while comparing our tool to more complex frameworks like Dask. While more comprehensive systems like DAGMan exist, our tool stands out for its intuitive interface, quick deployment, and minimal scripting (using simple JSON configuration files), making it an ideal solution for research software engineers managing diverse and evolving scientific workflows. In a real-world experiment involving over 37,720 jobs across 68 heterogeneous machines, our dynamic job allocation approach achieved up to a 35x speedup over static allocation strategies, completing workloads in days instead of months.
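The dynamic-allocation idea can be sketched as a shared job queue served by whichever machine frees up first, contrasted with a static up-front split. The machine speeds and job costs below are invented for illustration and do not reflect DART's actual scheduler or experiment.

```python
import heapq

# Minimal sketch of dynamic allocation on heterogeneous machines: jobs are
# pulled from a shared queue by whichever machine frees up first, instead of
# being partitioned statically. Speeds and costs are made up; DART itself
# adds submission, monitoring, and rescheduling on real hosts.

def dynamic_makespan(job_costs, machine_speeds):
    """Greedy earliest-available-machine assignment; returns makespan."""
    heap = [(0.0, s) for s in machine_speeds]  # (time machine frees up, speed)
    heapq.heapify(heap)
    for cost in job_costs:
        free_at, speed = heapq.heappop(heap)
        heapq.heappush(heap, (free_at + cost / speed, speed))
    return max(t for t, _ in heap)

jobs = [4.0] * 8 + [1.0] * 8          # a mix of long and short jobs
machines = [4.0, 1.0]                 # one fast machine, one slow one

# static halves: the slow machine gets all the long jobs -> makespan 32
static = max(sum(jobs[:8]) / 1.0, sum(jobs[8:]) / 4.0)
dynamic = dynamic_makespan(jobs, machines)
print(static, dynamic)                # dynamic finishes far sooner
```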
Authors: Abdur Rouf

Spiking Neural Networks for Classification of Neuromorphically Encoded Tactile Sensing Data
Eugenio Diaz

Restoring a natural sense of touch in prosthetic devices remains a major challenge in neural engineering, particularly in determining how to encode tactile sensor data to evoke lifelike sensations. We collected a comprehensive tactile dataset using a 9-taxel piezoresistive sensor array interacting with 21 3D-printed textures under 5 sliding speeds and 4 force levels, yielding 42,000 trials. The raw sensor signals are converted into biologically plausible spike trains via the Izhikevich neuron model and used to train a three-layer, leaky integrate-and-fire, rate-coded spiking neural network (SNN). The rate-coded SNN achieves high classification performance, correctly identifying individual textures with 93.68% accuracy and broader texture groups with 99.26%. We further investigate data efficiency by truncating the input signals to the onset of first spikes: a rate-coded SNN trained on these shortened spike trains maintains 90.95% (per-texture) and 97.47% (group-level) accuracy. A temporal SNN trained on the same data achieves slightly worse performance than the rate-coded SNN, but much more efficiently and with massively reduced latency for real-time inference pipelines. These results highlight the promise of SNN-based tactile sensing for energy-efficient, real-time texture discrimination, paving the way for improved sensory feedback in prosthetic limbs and realistic haptic feedback in virtual reality.
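The encoding step can be sketched with a leaky integrate-and-fire neuron, a simpler stand-in for the Izhikevich model used in the study: an analog tactile signal drives a membrane potential that leaks toward rest and emits a spike on crossing a threshold. All parameters below are illustrative.

```python
import numpy as np

# Sketch of analog-to-spike encoding with a leaky integrate-and-fire (LIF)
# neuron. The study uses the Izhikevich model and a three-layer rate-coded
# SNN; this simpler neuron and its parameters are illustrative only.

def lif_encode(signal, dt=1e-3, tau=0.02, v_th=1.0, v_reset=0.0):
    """Return a binary spike train the same length as `signal`."""
    v = 0.0
    spikes = np.zeros(len(signal), dtype=int)
    for i, current in enumerate(signal):
        # Euler step of tau * dv/dt = -v + I
        v += dt / tau * (-v + current)
        if v >= v_th:
            spikes[i] = 1
            v = v_reset
    return spikes

t = np.arange(0.0, 0.5, 1e-3)
weak = lif_encode(np.full_like(t, 1.1))    # drive just above threshold
strong = lif_encode(np.full_like(t, 3.0))  # stronger drive -> higher rate
print(weak.sum(), strong.sum())
```

Rate coding falls out directly: a stronger tactile input yields a higher spike count over the same window, which is the quantity the downstream rate-coded classifier consumes.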
Authors: Eugenio Diaz, Mohsen Rakhshan

Bayesian Wind Farm Layout Optimization
Taylore Keesler

Maximizing the annual energy production (AEP) of large-scale wind farms is a challenging black-box optimization problem. Each candidate layout requires an expensive wind-farm simulation, gradient information is unavailable, and strict geometric constraints on turbine spacing and site boundaries must be satisfied. We investigate constrained Bayesian optimization (BO) algorithms for efficiently exploring the design space of the wind farm layout optimization (WFLO) problem.
We first evaluate standard BO with Expected Improvement and Lower Confidence Bound acquisition functions on benchmark problems. For the WFLO task, we compare four constrained BO strategies: (1) a vanilla BO baseline using IPOPT (an interior-point method) for constraint handling, (2) the Hybrid BO–IPOPT algorithm for constrained optimization, (3) SCBO (Scalable Constrained Bayesian Optimization), and (4) FuRBO (Feasibility-Driven Trust Region Bayesian Optimization), a trust-region approach that adapts to the feasible set. We also experimented with custom covariance kernels designed to exploit the geometry of turbine layouts.
Empirical results show that FuRBO and the Hybrid BO–IPOPT method consistently outperform both SCBO and the vanilla BO baseline, identifying feasible layouts with substantially higher AEP.
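For readers unfamiliar with the acquisition functions involved, here is a minimal sketch of Expected Improvement for a maximization problem such as AEP; the posterior values are synthetic rather than outputs of the wind-farm simulator.

```python
from math import erf, exp, pi, sqrt

# Sketch of the Expected Improvement (EI) acquisition used in the BO
# baseline. Given a GP posterior mean mu and std sigma at a candidate
# layout, EI scores the expected gain over the best observed value:
#   z  = (mu - f_best) / sigma
#   EI = (mu - f_best) * Phi(z) + sigma * phi(z)
# The numbers below are synthetic; the real pipeline evaluates a
# wind-farm simulator under spacing and boundary constraints.

def norm_pdf(z):
    return exp(-0.5 * z * z) / sqrt(2.0 * pi)

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def expected_improvement(mu, sigma, f_best):
    if sigma <= 0.0:
        return max(mu - f_best, 0.0)
    z = (mu - f_best) / sigma
    return (mu - f_best) * norm_cdf(z) + sigma * norm_pdf(z)

# EI prefers a promising-but-uncertain candidate over a known-mediocre one
ei_uncertain = expected_improvement(mu=0.9, sigma=0.5, f_best=1.0)
ei_confident = expected_improvement(mu=0.9, sigma=0.01, f_best=1.0)
print(ei_uncertain, ei_confident)
```

This exploration bonus from sigma is what the constrained variants (SCBO, FuRBO) must trade off against staying inside the feasible region.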
Authors: Taylore Keesler, Michael Melnikov, Sophia Xiao, Allen Zhou

From Signals to Clinical Insights: Transformer-based Feature Extraction and Classification for Epilepsy
Maryam Rahimimovassagh

Epilepsy is a widespread neurological disorder, affecting about 50 million people worldwide. Its many subtypes produce diverse symptoms. Clinicians rely on electroencephalography (EEG) to understand abnormal signals and localize the seizure onset zone (SOZ), which helps them categorize the disease and, in some cases, predict attacks. EEG informs them about the type of the disease, its severity, and the prognostic path for managing the severity and impact of seizure attacks. EEG signals are inherently highly chaotic and have a low signal-to-noise ratio; additional sources of error include the way the signals travel, the placement of electrodes, and muscle movements. Manual labelling becomes impractical as data volumes grow, so an automated approach to distinguishing normal from abnormal signals is essential when dealing with large amounts of data. Moreover, given the chaotic nature and high dimensionality of the signals, extracting useful features is critical for drawing useful conclusions, and the complexity of brain networks makes this especially challenging. The current work integrates baseline pretrained large language models with feature extraction and machine learning classifiers to help clinicians categorize patients through an interpretable interface rather than a black-box model.
Authors: Maryam Rahimimovassagh, Elias Hossain, Mehrdad Shoeibi

Predicting the formation energy and lattice constants of ternary metal oxides using machine learning regression
Nicholas Belden

Metal oxides are commonly used as catalysts or catalyst supports in heterogeneous catalytic systems. Multicomponent oxides, such as ternary oxides, can be formed to tune certain catalytic properties. For these reasons, ternary metal oxides offer an expansive material space to design catalytic materials with desired properties. Experimental techniques or computationally expensive density functional theory (DFT) calculations are typically used to examine the properties and structure of ternary metal oxides. However, machine learning techniques may obtain similar results while being computationally cheaper. In this study, we trained machine learning regressors to predict key bulk properties of ternary metal oxides such as formation energy, crystal structures, and band gaps. Predictions were based on features of the metal elements including atomic radius, cohesion energy, and Lennard-Jones potential (LJP) values. Various commonly used regression models were trained and tested on data for ~2,800 ternary metal oxides collected from the Materials Project. Mixed accuracies were obtained for each target variable, but predictions of formation energy per atom showed promising results with an R² score of ~0.90 and a test RMSE of 0.23 eV/atom. Additionally, feature importance analysis on the gradient boosting regression (GBR) and random forest regression (RFR) models listed the most important features as the electronegativities, cohesive energies, and work functions of the metals along with the minimum potential for the LJP of the metals. These results provide a step towards an efficient method of predicting properties of ternary metal oxides with comparable error to DFT calculations.
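The regression setup can be sketched with a toy gradient-boosted ensemble of depth-1 "stumps" on synthetic descriptor data; the study's actual GBR/RFR models were trained on ~2,800 Materials Project oxides, and the features and target below are made-up stand-ins for the elemental descriptors and formation energy.

```python
import numpy as np

# Toy gradient-boosting regressor built from depth-1 stumps, standing in
# for the GBR/RFR models in the study. Features and target are synthetic,
# not Materials Project data.

rng = np.random.default_rng(0)

def fit_stump(X, r):
    """Best single-feature threshold split minimizing squared error on r."""
    best = None
    for j in range(X.shape[1]):
        for t in np.quantile(X[:, j], [0.25, 0.5, 0.75]):
            left = X[:, j] <= t
            if left.all() or not left.any():
                continue
            lv, rv = r[left].mean(), r[~left].mean()
            err = ((r[left] - lv) ** 2).sum() + ((r[~left] - rv) ** 2).sum()
            if best is None or err < best[0]:
                best = (err, j, t, lv, rv)
    return best[1:]

def predict_stump(stump, X):
    j, t, lv, rv = stump
    return np.where(X[:, j] <= t, lv, rv)

def boost(X, y, n_rounds=100, lr=0.1):
    """Fit residuals with shrunken stumps; return a predict function."""
    base, stumps = y.mean(), []
    pred = np.full(len(y), base)
    for _ in range(n_rounds):
        stump = fit_stump(X, y - pred)   # fit the current residual
        pred += lr * predict_stump(stump, X)
        stumps.append(stump)
    return lambda Xq: base + lr * sum(predict_stump(s, Xq) for s in stumps)

X = rng.normal(size=(300, 3))            # 3 synthetic "elemental descriptors"
y = -1.5 * X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.05 * rng.normal(size=300)

predict = boost(X, y)
pred = predict(X)
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(round(r2, 2))
```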
Authors: Nicholas Belden, Shyam Kattel

Data-Driven Compression for Large-Scale Simulations in High-Performance Computing
Arshan Khan

The launch of exascale computing facilities has enabled simulations to capture physical processes at unprecedented fidelity, but the resulting extreme-scale datasets present significant challenges for storage, transfer, and analysis. Existing error-bounded lossy compressors achieve only moderate compression and often rely on data-agnostic predictors with predefined functions, limiting both efficiency and scientific usability. To overcome these limitations, we introduce Data-informed Local Subspaces (DLS), a data-driven compression framework that learns spatially local, generalizable features directly from representative samples and uses them to represent large-scale spatiotemporal solutions. The discontinuous variant, DDLS, partitions the computational domain into patches, encoding each independently under user-specified error tolerances. By relaxing continuity across patch boundaries, DDLS achieves higher compression while still preserving essential physical structures. This patch-wise encoding produces sparse representations that reduce storage requirements and support rapid analysis directly on compressed data, eliminating the need for full decompression. Case studies demonstrate the method’s broad applicability. For three-dimensional flow past a cylinder at Reynolds number 100,000, DDLS achieved 8x–100x compression while preserving key turbulent structures and accelerating downstream analysis by 15x–600x. Comparable results were obtained for rotor-body interactions and climate modeling data. These findings establish DLS as a scalable, physics-informed, data-driven compression framework that outperforms traditional data-agnostic compressors such as SZ and MGARD. By coupling strong data reduction with accurate preservation of scientific features, DLS advances the state of the art in data management for high-performance computing.
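A minimal sketch of the patch-wise, error-bounded idea: split a field into patches and project each onto a truncated basis, keeping just enough modes to meet a user tolerance. The smooth test field and the per-patch SVD are illustrative stand-ins for the local features DLS learns from representative samples.

```python
import numpy as np

# Sketch of patch-wise, error-bounded compression. Per-patch truncated SVD
# stands in for the learned DLS/DDLS subspaces; the smooth synthetic field
# stands in for a large-scale simulation solution.

def compress_patch(patch, tol):
    """Keep the fewest SVD modes with relative Frobenius error <= tol."""
    U, s, Vt = np.linalg.svd(patch, full_matrices=False)
    total = (s ** 2).sum()
    k = len(s)
    for r in range(1, len(s) + 1):
        if (s[r:] ** 2).sum() <= (tol ** 2) * total:  # tail energy bound
            k = r
            break
    return U[:, :k], s[:k], Vt[:k]

def decompress_patch(U, s, Vt):
    return (U * s) @ Vt

x = np.linspace(0.0, 2.0 * np.pi, 64)
field = np.sin(np.outer(x, x / 4.0))        # smooth 64x64 "solution"

patches = [field[i:i + 16, j:j + 16]
           for i in range(0, 64, 16) for j in range(0, 64, 16)]
stored = [compress_patch(p, tol=1e-3) for p in patches]

ranks = [U.shape[1] for U, _, _ in stored]
err = max(np.linalg.norm(p - decompress_patch(*c)) / np.linalg.norm(p)
          for p, c in zip(patches, stored))
print(max(ranks))
```

Because each patch is encoded independently, analysis can operate on a patch's few retained modes without decompressing the whole field, which is the property DDLS exploits.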
Authors: Arshan Khan, Rohit Deshmukh

Data-informed Feature Learning for Large Scientific Data Compression
Fahim Sakai

Recent launches of multiple exascale computing facilities have pushed the limits of fidelity in scientific simulations. This unprecedented fidelity comes with increased cost of I/O, data storage, data movement, and analysis. Many existing lossy compression algorithms aim to address this issue by reducing the data while retaining mission-critical information. Our in-house Data-informed Local Subspaces (DLS) algorithm combines feature learning with classical computational mechanics to provide an improvement over the existing lossy algorithms by training the compression model directly on the target data. In this work, we aim to improve the training process by leveraging recent advances in machine learning. In particular, we explore sparse coding using neural networks to learn better spatially local features to enrich our DLS models. Although this effort is still in its early stages, we discuss current limitations, our hypothesis, and possible remedies, accompanied by preliminary demonstrations on 2-D images and scientific data.
Authors: Fahim Sakai, Ande Bhanu Naga Peddi Ganesh, Rohit Deshmukh

Multimodal and Multiscale Characterization of Photovoltaics: Understanding Degradation and Failure from Systems Down to Atoms
Max A. Liggett

Degradation of photovoltaic (PV) modules in the field is driven by a litany of stressors; as a result, understanding the root cause of failure often requires a comprehensive investigation with multiple data streams at the system, module, device, and material levels to ensure high confidence in conclusions regarding the performance losses experienced in the field. Reliability research that collects data at multiple levels presents intrinsic challenges that require careful consideration to meaningfully connect and extract information. In this work, we present a multiscale approach for investigating degradation and failure in fielded modules. Additionally, we cover the ongoing development of tools and methods for analyzing multimodal data, particularly electroluminescence (EL) images and illuminated current-voltage (I-V) curves, collected from modules as a means of assisting the multiscale investigative framework.
Authors: Max A. Liggett, Sameera Venkat, Dylan J. Colvin, Andrew M. Gabor, Philip J. Knodle, Joseph Raby, Brent A. Thompson, Hubert P. Seigneur, Roger H. French, Kristopher O. Davis*

VHS 1256 b, HIP 99770 b, AF Lep b: Expected Thermal Light Polarization
Maxwell Galloway

Direct imaging of planetary-mass companions and brown dwarfs has revealed spectra similar to those of L/T transition brown dwarfs, including hints of rotational variability. Polarization, being sensitive to macro- and micro-physical properties of clouds, can break degeneracies in potential cloud structures left by flux-only observations. The dramatic increase in flux signal-to-noise ratio achievable for exoplanets with JWST has furthered the need to properly characterize these objects’ signals. Modeling of these exo-atmospheres, however, can be difficult without adequate computational power when utilizing state-of-the-art radiative transfer models across several wavelengths. We used a climate model code (PICASO), a 3D radiative transfer code (ARTES), and UCF’s Stokes high-performance computing (HPC) cluster to model potential light curves and select spectra for VHS 1256 b, HIP 99770 b, and AF Leporis b in the NIR. We explored a range of potential cloud formations, sizes, and cloud sedimentation parameter values. In this presentation, we will discuss the effect of temperature, gravity, cloud parameters, and inclination on the observed signals and potentially observable trends. The models presented here are part of a larger grid, computed over hundreds of thousands of CPU hours on Stokes, that will be made openly available to the community and can aid in the characterization of directly imaged exoplanets and brown dwarfs.
Authors: Maxwell Galloway, Theodora Karalidi, Sagnick Mukherjee, Max Millar-Blanchaer, Connor Vancil, Joseph Harrington

Decoding Michelin-Starred Restaurant Experience: A Theory-Driven, Topic-Sentiment Weighted Analysis of Online Reviews
Ngoc Tran Nguyen

This study examines how high-end restaurant experiences are constructed and evaluated by consumers through computational methods to bridge theoretical and empirical gaps in hospitality research. Drawing from 28,882 online reviews of 96 Michelin-starred restaurants in California and New York, the research follows a two-stage analytical procedure. In Stage 1, bigram-based keyword extraction was used to develop a domain-specific dictionary encompassing both predefined constructs (e.g., atmosphere, food, and service quality) and emergent constructs (e.g., wine quality and holistic experience). In Stage 2, constructs were quantified using a novel integration of Guided Latent Dirichlet Allocation (GuidedLDA) and VADER sentiment analysis, producing topic–sentiment weighted scores that reflect both thematic relevance and emotional valence.
These quantified measures were then used to empirically test a theory-driven quality–experience–satisfaction–intention framework through Generalized Structural Equation Modeling (GSEM). Results show that all four quality dimensions (i.e., atmosphere, food, service, and wine) significantly predict holistic experience, which in turn drives satisfaction and revisit intention. The emergence of wine quality and experience as distinct constructs enriches theoretical understanding of symbolic consumption in luxury services.
Practically, this approach provides a scalable, cost-effective method for extracting actionable insights from unstructured review data, which eliminates reliance on biased or costly surveys. Fine-dining managers can use these insights to optimize service design, enhance wine programs, and personalize customer engagement strategies. More broadly, this study illustrates how advanced computational methods, specifically, text mining, topic–sentiment weighted analysis, and structural modeling, can transform unstructured consumer narratives into strategic intelligence for experience-centric industries.
Authors: Ngoc Tran Nguyen, Jeong-Yeol Park, Hyoung Ju Song, Ji Eun Lee

N2 splitting via a bare bimetallic catalyst in an L-shaped geometry
Shiv Patel

Nitrogen (N₂) splitting is a crucial step in producing nitrogenous derivatives, such as ammonia, which is essential for fertilizers and the global food supply. The Haber–Bosch process, currently used commercially, requires extreme pressures and temperatures and is therefore highly energy-intensive and unsustainable. The discovery of alternative methods for N₂ activation under milder conditions is thus an urgent challenge. Bimetallic transition metals offer a different and viable approach for reducing the activation barrier for N₂ bond cleavage.
In this paper, we used density functional theory (DFT) to screen more than one thousand Group 6–10 transition metal bimetallic complexes, modelled in various spin multiplicities in an L-shape conformation. After initial geometry optimizations, complexes were sorted based on binding free energies as an indicator of stability and reactivity. Complexes with metal–metal–nitrogen angles between 70–110° emerged as the most promising for N₂ activation, narrowing the dataset to approximately 150 possible candidates.
This study illustrates the usefulness of computational screening for rapidly identifying promising systems from a large chemical space. By screening for geometric motifs and binding energies relevant to N₂ splitting, we move toward rational catalyst design for N₂ activation and underscore the significance of new routes for activating N₂. This work also lays the groundwork for future studies incorporating ligands and solvents. Furthermore, it advances the continued pursuit of efficient and sustainable catalytic pathways to ammonia synthesis and other nitrogen conversions.
Authors: Shiv Patel, Shengli Zou, Ph.D

Unified Multimodal RAG for Cross-Brand Marketing Analytics Across Modalities
Sabab Ishraq

Contemporary marketing strategies orchestrate intricate narratives across heterogeneous media channels, yet existing analytical paradigms remain constrained by modality-specific limitations. This creates blind spots where important patterns across different media types go unnoticed. This research addresses the fundamental challenge of unified multimodal understanding in marketing analytics by proposing a cross-modal retrieval-augmented generation framework that bridges semantic gaps between visual, textual, and auditory brand communications. We explored how temporal video understanding, paired with cross-modal attention mechanisms, can identify previously undetectable patterns in brand messaging consistency across different modalities. The hybrid retrieval approach introduces marketing-specific knowledge recognition as a semantic layer, enabling fine-grained analysis of pricing strategies, promotional messaging, and call-to-action effectiveness. The study contributes three key innovations: (1) a unified embedding space that preserves both semantic and temporal relationships across text, image, video, and audio, (2) marketing entity-aware retrieval that captures domain-specific commercial intent, and (3) cross-modal consistency metrics for brand messaging analysis. We evaluated 71 major brands across diverse industries. Our framework demonstrates significant improvements in capturing nuanced marketing strategies that traditional unimodal approaches miss, revealing novel insights into audio-visual alignment patterns and cross-platform brand positioning inconsistencies. This means better competitive intelligence and more coherent brand strategies for marketing teams. It opens new directions for researchers in commercial content analysis and cross-modal AI applications.
Authors: Sabab Ishraq, Juncai Jiang, Chen Chen

Scientific Machine Learning for Digital Twins
Nam T. Nguyen

Digital twins are dynamic, virtual representations of physical objects, systems, or processes that remain connected to their real-world counterparts through live sensor data. In control systems, constructing accurate digital twins is particularly important, as they provide a virtual environment for evaluating the performance of controllers before deployment in practical settings. Scientific machine learning (SciML) algorithms, which integrate scientific principles into machine learning frameworks, have shown strong potential for precisely approximating complex dynamical systems. This poster presents our results on implementing SciML models that incorporate stability, boundedness, and monotonicity properties into models of real systems learned from experimental data. Furthermore, we apply an active learning strategy to collect informative data for training the digital twin, while explicitly accounting for the cost of data collection. The results demonstrate that our SciML models outperform standard machine learning models by approximately 30 percent in identifying system dynamics. In addition, simulation results show that the active learning algorithm achieves comparable accuracy with only 10 experiments, in contrast to the 30 experiments required by the passive learning approach.
Authors: Nam T. Nguyen, Binh Nguyen, Truong X. Nghiem

Quantum Algorithms to Improve Density Functional Theory
Hafiz Arslan Hashim

Quantum computers offer new approaches to enhance Density Functional Theory (DFT), particularly for strongly correlated systems where conventional functionals are ineffective. In this work, we develop a hybrid quantum-classical approach to construct spin-dependent exchange-correlation (XC) potentials from ground-state data obtained using the Variational Quantum Eigensolver (VQE). We demonstrated the method on a Fermi–Hubbard model, which is paradigmatic for strongly correlated fermionic physics. We describe calculations and results for the spin-resolved XC potential and the exchange-correlation energy (E_xc) as the system size reaches six sites. The XC potentials derived from VQE already outperform Hartree–Fock DFT and direct VQE energy estimates.
Authors: Hafiz Arslan Hashim, Volodymyr M Turkowski, Eduardo R Mucciolo

Barren Plateaus, Complex Entanglement, and Structured Circuits in Variational Quantum Algorithms
Suman Mandal

Variational Quantum Algorithms (VQAs) are useful tools for solving many-body quantum problems on near-term quantum hardware. A major challenge, however, is the onset of barren plateaus, where gradients vanish and optimization becomes very difficult. In this work, we argue that barren plateaus are unavoidable in deep, unstructured circuits, as they develop what we call complex entanglement. To study this phenomenon, we look at both the Cluster–Ising model and the Toric Code Hamiltonian, tracking the entanglement spectrum through level-spacing statistics. At the same time, we compare structured circuit families—including Finite Local Depth Circuits (FLDC), Global Local Depth Circuits (GLDC), and Brickwall layouts with Cartan blocks—that remain trainable at modest depths. Our results show that while deep circuits exhibit Wigner-Dyson statistics and loss of trainability, FLDCs and GLDCs with Cartan blocks achieve low energies at shallow depths, making them promising candidates for practical VQAs.
Authors: Suman Mandal, Maximillian Daughtry, Eduardo R Mucciolo