Accepted Papers
Confidence Evaluation Measures for Zero Shot LLM Classification

David Farr1, Iain J. Cruickshank2, Lynnette Hui Xian Ng2, Nico Manzonelli3, Nicholas Clark1, Kate Starbird1, and Jevin West1, 1University of Washington, 2Carnegie Mellon University, 3Cyber Fusion and Innovation Cell

ABSTRACT

Assessing classification confidence is critical for leveraging Large Language Models (LLMs) in automated labeling tasks, especially in the sensitive domains presented by Computational Social Science (CSS) tasks. In this paper, we apply five different Uncertainty Quantification (UQ) strategies to three CSS tasks: stance detection, ideology identification, and frame detection. We use three different LLMs to perform the classification tasks. To improve classification accuracy, we propose an ensemble-based UQ aggregation strategy. Our results demonstrate that our proposed UQ aggregation strategy improves upon existing methods and can be used to significantly improve human-in-the-loop data annotation processes.

KEYWORDS

uncertainty quantification, large language models, stance detection, ideology identification, frames detection, ensemble models.
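
To make the ensemble-based aggregation idea concrete, here is a minimal sketch (not the paper's implementation) that averages per-class confidence scores from several LLMs and routes low-confidence items to human review; the class set, scores, and threshold are illustrative assumptions.

```python
# Illustrative sketch: averaging per-class confidences from multiple LLMs and
# abstaining (deferring to a human annotator) when the ensemble is uncertain.
import numpy as np

def aggregate_confidences(per_model_probs, threshold=0.7):
    """per_model_probs: list of arrays of shape (n_classes,), one per LLM."""
    mean_probs = np.mean(per_model_probs, axis=0)      # simple ensemble average
    label = int(np.argmax(mean_probs))
    confidence = float(mean_probs[label])
    # Low-confidence items go to human-in-the-loop annotation.
    return (label, confidence) if confidence >= threshold else ("human_review", confidence)

# Example: three LLMs scoring one stance-detection item over [favor, against, neutral].
print(aggregate_confidences([np.array([0.8, 0.1, 0.1]),
                             np.array([0.7, 0.2, 0.1]),
                             np.array([0.6, 0.3, 0.1])]))
```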


Merging Language and Domain-Specific Models: The Impact on Technical Vocabulary Acquisition

Thibault Rousset1, Taisei Kakibuchi2, Yusuke Sasaki2, and Yoshihide Nomura2, 1School of Computer Science, McGill University, 2Fujitsu Ltd.

ABSTRACT

This paper investigates the integration of technical vocabulary in merged language models. We explore the knowledge transfer mechanisms involved when combining a general-purpose language-specific model with a domain-specific model, focusing on the resulting model’s comprehension of technical jargon. Our experiments analyze the impact of this merging process on the target model’s proficiency in handling specialized terminology. We present a quantitative evaluation of the performance of the merged model, comparing it with that of the individual constituent models. The findings offer insights into the effectiveness of different model merging methods for enhancing domain-specific knowledge and highlight potential challenges and future directions in leveraging these methods for cross-lingual knowledge transfer in Natural Language Processing.

KEYWORDS

Large Language Models, Knowledge Transfer, Model Merging, Domain Adaptation, Natural Language Processing.


Moving Towards Constructivist AI Above Epistemic Limitations of LLMs: Enhancing the Efficacy of Mixed Human-AI Approaches Through Socio-Technical Research: Autopoietic Structural Coupling & Consensus Domains of Communities of Practice

Gianni Jacucci, University of Trento, Department of Information Engineering and Computer Science, Italy

ABSTRACT

Current AI models, particularly large language models (LLMs), are predominantly grounded in positivist epistemology, treating knowledge as an external, objective entity derived from statistical patterns in data. However, this paradigm fails to capture "facts-in-the-conscience", the subjective, meaning-laden experiences central to the human sciences. In contrast, phenomenology, hermeneutics, and constructivism, as fostered by socio-technical research (16), provide a more fitting foundation for AI development, recognizing knowledge as an intentional, co-constructed process shaped by human interaction and community consensus. Phenomenology highlights the lived experience and intentionality necessary for meaning-making, while constructivism emphasizes the social negotiation of knowledge within communities of practice. This paper argues for an AI paradigm shift integrating second-order cybernetics, enabling recursive interaction between AI and human cognition. Such a shift would make AI not merely a tool for knowledge retrieval but a co-participant in epistemic evolution, supporting more trustworthy, context-sensitive, and meaning-aware AI systems within socio-technical frameworks.

KEYWORDS

AI epistemology, Large Language Models (LLMs), Consensus Domain, Human-AI Interaction, Structural Coupling.


Information Retrieval vs. Cache-Augmented Generation vs. Fine-Tuning: A Comparative Study on Urdu Medical Question Answering

Ahmad Mahmood1, Zainab Ahmad1, Iqra Ameer2, and Grigori Sidorov1, 1Instituto Politécnico Nacional (IPN), Centro de Investigación en Computación (CIC), Mexico City, Mexico, 2Division of Science and Engineering, The Pennsylvania State University, Abington, PA, USA

ABSTRACT

The development of medical question-answering (QA) systems has predominantly focused on high-resource languages, leaving a significant gap for low-resource languages like Urdu. This study proposes a novel corpus designed to advance medical QA research in Urdu, created by translating the benchmark MedQuAD corpus into Urdu using a generative AI-based translation technique. The proposed corpus is evaluated using three approaches: (i) Information Retrieval (IR), (ii) Cache-Augmented Generation (CAG), and (iii) Fine-Tuning (FT). We conducted two experiments, one on a 500-instance subset and another on the complete 3,152-question corpus, to assess retrieval effectiveness, response accuracy, and computational efficiency. Our results show that JinaAI embeddings outperformed other IR models, while fine-tuning OpenAI's GPT-4o mini achieved the highest response accuracy (BERTScore: 70.6%) but was computationally expensive. CAG eliminates retrieval latency but requires substantial resources. The findings suggest that IR is optimal for real-time QA, fine-tuning ensures accuracy, and CAG balances both. This research advances Urdu medical AI, helping bridge healthcare accessibility gaps.

KEYWORDS

Information retrieval, retrieval-augmented generation, cache-augmented generation, fine-tuning, Urdu medical question-answering.
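
As a rough sketch of the IR approach compared above, the snippet below ranks corpus entries by cosine similarity against a query embedding; it assumes embeddings have already been produced (for example by a JinaAI embedding model), and the toy vectors are placeholders rather than project data.

```python
# Illustrative sketch: dense retrieval over precomputed Q&A embeddings.
import numpy as np

def retrieve(query_vec, corpus_vecs, k=3):
    """Return indices of the k corpus entries most similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    c = corpus_vecs / np.linalg.norm(corpus_vecs, axis=1, keepdims=True)
    return np.argsort(-(c @ q))[:k]

# Toy 4-dimensional embeddings standing in for three Urdu medical answers.
corpus = np.array([[0.1, 0.9, 0.0, 0.2],
                   [0.8, 0.1, 0.3, 0.0],
                   [0.2, 0.2, 0.9, 0.1]])
print(retrieve(np.array([0.15, 0.85, 0.05, 0.1]), corpus, k=2))
```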


Enhancing Road Sign Detection for Autonomous Driving using YOLOv8 and Multisensory Vision Integration

Yang Liu1, Soroush Mirzaee2, 1Esperanza High School, 1830 Kellogg Dr, Anaheim, CA 92807, 2California State Polytechnic University, Pomona, CA, 91768, Irvine, CA 92620

ABSTRACT

StopCV aims to improve road sign detection for autonomous and assisted driving systems [1]. One of the main challenges is that current systems struggle with poor lighting, adverse weather conditions, and obstructed signs. To address these issues, we developed StopCV, a vision-based detection system using a Raspberry Pi 5, a high-quality camera, and a custom-trained YOLOv8 model for real-time recognition [2]. Additional sensors, such as ultrasonic and LiDAR-replicating systems, enhance object detection accuracy. Through multiple experiments, including real-world testing and public perception surveys, we identified limitations in low-visibility conditions and with obscured signs. We mitigated these issues through improved image processing, infrared cameras, and AI training on different datasets. Our results show that enhanced sensor and AI integration can significantly improve accuracy [3]. Ultimately, StopCV demonstrates the potential of AI-driven vision systems to help improve driving safety, and further testing paves the way for autonomous driving applications.

KEYWORDS

Road Sign Detection, YOLOv8, Autonomous Driving, Multisensory Vision System.
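
For orientation, the YOLOv8 recognition step described in the abstract can be sketched with the ultralytics package as below; the weights file, image path, and confidence threshold are placeholders, and this is not the StopCV code itself.

```python
# Illustrative sketch: running a YOLOv8 model on a captured frame or image file.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                    # a custom road-sign model would be loaded here
results = model("road_scene.jpg", conf=0.5)   # hypothetical input image and threshold
for box in results[0].boxes:
    print(model.names[int(box.cls)], float(box.conf))
```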


A Smart RPG Game-based English Learning Platform using Generative Artificial Intelligence and Natural Language Processing

Hongjia Meng1, Moddwyn Andaya2, 1Kantonale Mittelschule Uri, Gotthardstrasse 59, 6460 Altdorf, Uri, 2California State Polytechnic University, Pomona, CA, 91768, Irvine, CA 92620

ABSTRACT

Language barriers continue to challenge immigrants as they adapt to new environments, often limiting their confidence and social integration. Many existing language-learning applications fail to provide immersive, adaptive, and emotionally supportive experiences, particularly for beginners. This game teaches language through conversation and interaction with NPCs in real-life scenes [1]. The aim is to lower the language barrier for newcomers who lack the motivation and confidence to speak in real life by simulating everyday situations in a game where the player converses only with NPCs. This paper introduces an AI-powered language learning game designed to simulate real-world conversations in a safe and engaging virtual environment. Players navigate a city, interact with dynamic NPCs, and receive language support through a personalized guide who speaks their native language. This guide gradually shifts to the target language, helping users build confidence and fluency over time. The system leverages OpenAI's GPT-4o model to deliver context-aware, level-appropriate dialogue, ensuring that players are neither overwhelmed nor under-challenged. A user study showed that the game effectively fosters engagement and supports language acquisition, though it also highlighted areas for improvement in navigation and user interface design. Compared to traditional apps, this game offers a richer, more supportive learning experience by combining adaptive AI, immersive storytelling, and real-time conversational practice. Ongoing development will enhance usability and explore features such as multiplayer interaction to further support language learners.

KEYWORDS

Language Learning Gamification, NPC Interaction, Immersive Language Acquisition, Real-Life Simulation.


Integrating Large Language Models in Financial Investments and Market Analysis: A Survey

Sedigheh Mahdavi, Kristin Chen, Pradeep Kumar Joshi, Lina Huertas Guativa, and Upmanyu Singh, AI Research Lab, Blend360, Columbia, USA

ABSTRACT

Large Language Models (LLMs) have been employed in financial decision-making, offering enhanced analytical capabilities for investment strategies. Traditional investment strategies often rely on quantitative models, fundamental analysis, and technical indicators. LLMs, however, have introduced new capabilities to process and analyze large volumes of structured and unstructured data, extract meaningful insights, and enhance decision-making in real time. This survey provides a structured overview of recent research on LLMs within the financial domain, categorizing contributions into four main frameworks: LLM-based Frameworks and Pipelines, Hybrid Integration Methods, Fine-Tuning and Adaptation Approaches, and Agent-Based Architectures. It then reviews recent LLM applications in stock selection, risk assessment, sentiment analysis, algorithmic trading, and financial forecasting. By reviewing the existing literature, the study highlights the capabilities, challenges, and potential directions of LLMs in financial markets.

KEYWORDS

Large Language Models, Financial Decision-Making, Investment Strategies, Fine-Tuning, Multi-Agent Systems, Portfolio Optimization, Stock Market Prediction.


A Context-Aware Mobile App to Support Early Disease Detection and Education using GPT-4 and Visual Symptom Surveys

Qiaoman Cai1, Qiaoqian Cai2, Rodrigo Onate3, 1Crean Lutheran High School, 12500 Sand Canyon Ave, Irvine, CA 92618, 2Crean Lutheran High School, 12500 Sand Canyon Ave, Irvine, CA 92618, 3California State Polytechnic University, Pomona, CA, 91768

ABSTRACT

This research project addresses the growing gap in accessible healthcare, particularly for women in underserved communities [1]. Rising healthcare costs and delayed diagnosis contribute to higher mortality, especially in low-income areas. To address this, we developed Care Bridge Health, a mobile app that uses AI (GPT-4) to simulate medical reasoning based on structured user input [2]. The app features a body diagram for symptom selection, guided surveys for pain description, and access to disease education through multimedia content. The system is divided into three components: the Ask Page for diagnosis, the Learn Page for information retrieval, and the History Page for recordkeeping. Through two experiments, we identified that both symptom clarity and AI consistency can affect diagnosis quality [3]. Despite these limitations, the app provides a scalable, low-cost way to raise health awareness and guide users toward early symptom identification. With further refinement and medical validation, Care Bridge Health could become a valuable tool for community-level health empowerment.

KEYWORDS

Artificial Intelligence in Healthcare, Symptom Checker App, GPT-4 Diagnosis Assistance, Women's Health Accessibility, Mobile Health Technology.


Multi-Domain ABSA Conversation Dataset Generation via LLMs for Real-World Evaluation and Model Comparison

Tejul Pandit1, Meet Raval2, and Dhvani Upadhyay3, 1Palo Alto Networks, Santa Clara, USA, 2University of Southern California, Los Angeles, USA, 3Dhirubhai Ambani University, Gandhinagar, India

ABSTRACT

Aspect-Based Sentiment Analysis (ABSA) offers granular insights into opinions but often suffers from the scarcity of diverse, labeled datasets that reflect real-world conversational nuances. This paper presents an approach for generating synthetic ABSA data using Large Language Models (LLMs) to address this gap. We detail the generation process aimed at producing data with consistent topic and sentiment distributions across multiple domains using GPT-4o. The quality and utility of the generated data were evaluated by assessing the performance of three state-of-the-art LLMs (Gemini 1.5 Pro, Claude 3.5 Sonnet, and DeepSeek-R1) on topic and sentiment classification tasks. Our results demonstrate the effectiveness of the synthetic data, revealing distinct performance trade-offs among the models: DeepSeek-R1 showed higher precision, Gemini 1.5 Pro and Claude 3.5 Sonnet exhibited strong recall, and Gemini 1.5 Pro offered significantly faster inference. We conclude that LLM-based synthetic data generation is a viable and flexible method for creating valuable ABSA resources, facilitating research and model evaluation without reliance on limited or inaccessible real-world labeled data.

KEYWORDS

Aspect-Based Sentiment Analysis (ABSA), Synthetic Data Generation, Large Language Models, GPT-4o, Gemini 1.5 Pro, Claude 3.5 Sonnet, DeepSeek-R1, Comparative Analysis of LLMs.


Self-explaining Emotion Classification Through Preference-aligned Large Language Models

Muhammad Hammad Fahim Siddiqui and Diana Inkpen, University of Ottawa, Canada

ABSTRACT

Recent advancements in large language models (LLMs) have shown promise for NLP applications, yet producing accurate explanations remains a challenge. In this work, we introduce a self-explaining model for classifying emotions in X posts and construct a novel preference dataset using chain-of-thought prompting in GPT-4o. Using this dataset, we guide GPT-4o with preference alignment via Direct Preference Optimization (DPO). Beyond GPT-4o, we adapt smaller models such as LLaMA 3 (8B) and DeepSeek (32B distilled) through preference tuning using Odds Ratio Preference Optimization (ORPO), significantly boosting their classification accuracy and explanation quality. Our approach achieves state-of-the-art performance (68.85%) on the SemEval 2018 E-c multilabel emotion classification benchmark, exhibits comparable results on the DAIR AI multiclass dataset, and attains a high sufficiency score, indicating the standalone effectiveness of the generated explanations. These findings highlight the impact of preference alignment for improving interpretability and enhancing classification.

KEYWORDS

LLMs, preference alignment, emotion classification.
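
For readers unfamiliar with DPO, this minimal sketch (not from the paper) spells out its loss on a single (chosen, rejected) explanation pair; the log-probabilities are toy numbers and beta is a commonly used default rather than the authors' setting.

```python
# Illustrative sketch: the DPO objective for one preference pair, given summed
# token log-probabilities from the policy being tuned and a frozen reference model.
import math

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    margin = beta * ((logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))   # -log(sigmoid(margin))

# The loss shrinks as the policy prefers the chosen explanation more strongly
# than the reference model does.
print(dpo_loss(-12.0, -15.0, -13.0, -14.0))
```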


IoT Security and Privacy

Nikitha Merilena Jonnada, University of the Cumberlands, USA

ABSTRACT

In this paper, the author discusses the importance of IoT, its security measures, and device protection. IoT devices have become popular because they are easy to use and understand, and IoT is now widely adopted across industries such as banking, agriculture, and health care, simplifying the user experience. Even without AI, IoT has been a good investment for many users, as its connectivity lets them operate multiple devices from a single device, sometimes with a single click.

KEYWORDS

Artificial Intelligence (AI), Machine Learning (ML), Internet of Things (IoT), Security, Hacking, Risks.


The Smart Garbage Bin Management Using IoT & Mobile Application with Cloud Databases

Lakmali Karunarathne, York St John University, UK

ABSTRACT

Smart garbage bins that automatically open their doors when a person stands in front of them are an innovation from the IT industry and its developers. An IR sensor detects the waste, and its category is then identified with the support of sensors such as a metal proximity sensor, a capacitive proximity sensor, and an inductive proximity sensor. The system as a whole aims to provide the expected services: identifying the bin category, disposing of the waste according to that category, sending notifications, and producing reports that raise users' awareness of their garbage management. The IoT product is combined with the SMART GARBAGE MASTER (SGM) mobile application, which interacts with the entire IoT system through the cloud to provide an effective and efficient service to users. The sensor data is sent to an Arduino, which decides whether the garbage is metal or non-metal.

KEYWORDS

Smart, garbage, segregation, plastic, paper, sensors, ultrasonic sensor, IR sensor, bin, level, percentage, IoT, Arduino, Cloud Databases.


Reassessment of Bitcoin Mining: The Utilization of Excessive Energy and Promotion of Green Energy Technologies

Tanja Steigner and Mohammad Ikbal Hossain, Emporia State University, Kansas, USA

ABSTRACT

Bitcoin mining, often criticized for its substantial energy consumption, holds significant potential to drive energy innovation and sustainability. This paper reevaluates Bitcoin mining's environmental impact, focusing on its ability to utilize surplus and renewable energy sources. Mining operations absorb excess energy, such as curtailed wind and solar power, that would otherwise go to waste, contributing to grid efficiency and renewable energy integration. The increasing shift toward renewables, now accounting for over 50% of mining's energy mix, underscores the industry's progress toward sustainability. Through the analysis of industry data, this paper highlights Bitcoin mining's dual role as both a flexible energy consumer and a catalyst for green energy investments. Despite challenges like e-waste and the industry's reliance on energy-intensive proof-of-work mechanisms, the findings demonstrate how targeted policies and technological advancements can transform Bitcoin mining into a force for environmental and economic benefits. The study emphasizes the need for collaborative efforts among stakeholders to unlock Bitcoin mining's full potential in supporting the global energy transition.

KEYWORDS

Bitcoin mining, renewable energy, grid stabilization, green energy investments, proof-of-work (PoW), carbon footprint reduction, e-waste management, decentralized energy systems


IoT Experimental Results and Deployment Scenario for Tactical Battle Area

Avnish Kumar Singh and Rachit Ahulwalia, MILIT, Pune, India

ABSTRACT

The Internet of Things (IoT) has revolutionized how businesses interact and operate. Unlike the Internet, which found its genesis in a military environment, credit for IoT goes to civil industry, academia, and researchers. IoT is a disruptive technology and has changed the world in many unfathomable ways. Its genesis has opened opportunities that have yielded a plethora of benefits for sectors managing a multitude of assets and coordinating complex, intricate processes. Armed forces around the world have also shown interest in adopting this revolutionary technology and reaping its tremendous benefits. In this research, we bring out the opportunities that IoT offers defence forces, specifically in the context of military base operations. The main aim of this research is to study the various IoT technologies available in the open domain, compare them, and recommend the most suitable one for military base operations. Further, this research studies the effect of various input parameters on the performance of an IoT network by carrying out multiple simulations and analyzing them. Additionally, it proposes a scenario with suitable input parameters to achieve optimum network efficiency and recommends a deployment model for the IoT network. In this paper, we work towards proposing the optimum conditions required to achieve high network performance in an IoT network. Finally, we propose a simulator to design an optimum IoT network based on certain input parameters and recommend an IoT deployment for the Tactical Battle Area.

KEYWORDS

LoRa, LoRaWAN, Spreading Factor, Data Extraction Rate, Chirp Spreading Spectrum, Network Efficiency, Optimum Network Performance, Simulations, Network Energy Consumption, Tactical Battle Area


The Synergy of AI and IoT: Unlocking New Frontiers in Automation and Innovation

Gawande Krishna Ashok1 and Vandana B. Patil2, 1Dr. D. Y. Patil Institute of Engineering, Management and Research (DYPIEMR), 2School of Engg. Management and Research, D. Y. Patil International University, India

ABSTRACT

The convergence of Artificial Intelligence (AI) and the Internet of Things (IoT) represents a transformative paradigm shift in modern technology, unlocking unprecedented opportunities for automation, optimization, and innovation. AI empowers IoT devices to process and analyze vast streams of real-time data at scale, enabling advanced capabilities such as predictive maintenance, anomaly detection, intelligent decision-making, and seamless operational automation. This integration is driving disruptive applications across diverse sectors, including smart manufacturing, precision healthcare, industrial automation, smart cities, and environmental monitoring, revolutionizing workflows and enhancing efficiency. This paper provides a detailed exploration of the synergistic relationship between AI and IoT, focusing on their combined ability to improve system intelligence, adaptability, and operational reliability. It also highlights critical challenges, including data governance, privacy, cybersecurity, standardization, and the scalability of AI-IoT solutions in increasingly complex and interconnected ecosystems. By analyzing state-of-the-art advancements, innovative use cases, and emerging trends, this study aims to offer a comprehensive perspective on the transformative potential of AI-driven IoT, serving as a strategic guide for industrial stakeholders, policymakers, and researchers seeking to leverage these technologies for sustainable growth and competitive advantage.

KEYWORDS

Artificial Intelligence, Internet of Things, Automation, AI Models, Machine Learning, Neural Networks, Signal Optimization, Operational Reliability, Real-time Data Processing.


A Smart Innovative Emergency Response System for Managing Outdoor Autism-focused using Artificial Intelligence and IoT System (Internet of Things)

Ruibo Song and Andrew Park, California State Polytechnic University, USA

ABSTRACT

GuardianMap is an innovative emergency response system designed to improve safety in schools through real-time tracking and communication through its map. The system integrates wearable wristwatches, a mobile application, and a Firebase-backed infrastructure to provide continuous location tracking and instant emergency alerts. Additionally, it has an innovative AI feature that gives tips to improve school safety using tracking data it collects. Traditional safety measures, such as panic button apps and surveillance cameras, rely on human activation or manual monitoring, which can delay responses. GuardianMap enhances these efforts by offering automated location tracking and real-time map updates. Experimental testing revealed that GPS accuracy indoors can vary, necessitating additional tracking technologies, while response times were significantly faster on Wi-Fi networks compared to cellular data. The findings also show the need for optimized server capacity in high-traffic areas. By addressing key limitations in existing school safety solutions, GuardianMap provides a more comprehensive and effective approach to emergency response, ultimately enhancing coordination among law enforcement, school administrators, and first responders to reduce casualties in crisis situations.

KEYWORDS

Safety, Map, Tracking, Interactable, IoT


Framework for Data-Driven Spirulina Cultivation and Recommendations

Aakaash Kurunth, Adithya S Gurikar, Tejas B, Sean Sougaijam and Kamatchi Priya L, PES University, Bengaluru, India

ABSTRACT

Spirulina platensis, a microalga known for its high nutritional value and sustainability, is widely used in food, pharmaceuticals, and bioenergy. Its growth depends on factors like temperature, irradiance, pH, and nutrients, but optimizing these conditions is challenging due to their complex interactions. To address this, we integrate predictive analytics with an intelligent recommendation system to optimize cultivation. We evaluate multiple regression models, including Stacking, XGBoost, CatBoost, Gradient Boosting Machine (GBM), Support Vector Regressor (SVR), and Neural Networks, to determine the most accurate predictor of Spirulina optical density. The best-performing model powers a hybrid recommendation engine that combines content-based filtering and rule-based logic. This system identifies optimal growth conditions and provides precise recommendations for farmers and researchers, enhancing efficiency in Spirulina cultivation. By leveraging machine learning, this approach ensures data-driven insights for maximizing yield and sustainability.

KEYWORDS

Spirulina Growth Prediction, Environmental Factors, Machine Learning, Sustainable Cultivation, Regression Models
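
As a small illustration of the regression setup described above, the sketch below fits a Gradient Boosting Machine to synthetic growth-condition features and reports mean absolute error; the features and data are stand-ins, not the authors' dataset.

```python
# Illustrative sketch: predicting Spirulina optical density from growth conditions
# (temperature, irradiance, pH, nutrients) with a gradient boosting regressor.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.uniform([20, 50, 7.0, 0.5], [40, 400, 10.5, 3.0], size=(300, 4))
y = (0.02 * X[:, 0] + 0.001 * X[:, 1]                 # synthetic response surface
     - 0.05 * (X[:, 2] - 9.0) ** 2 + 0.1 * X[:, 3]
     + rng.normal(0, 0.05, 300))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(n_estimators=200, max_depth=3).fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```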


Biobit: ML Secured Supply Chain Management and Drug Authentication Through Blockchain

Vaani Bansal, R Navaneeth Krishnan, Punith Anand, Aditya Kumar Sinha, and Prof. Sheela Devi, Department of Computer Science Engineering, PES University, Bangalore, India

ABSTRACT

Counterfeit medicines pose a serious threat to public health, particularly in parts of the pharmaceutical industry that lack proper regulatory mechanisms. Such counterfeit drugs might contain the wrong doses or even hazardous materials; they therefore break the trust between the healthcare system and patients and expose patients to severe health risks. This project presents a complete solution that integrates blockchain technology with machine learning to ensure drug authenticity and protect the pharmaceutical supply chain. A blockchain module built on Hyperledger Fabric provides a tamper-proof, decentralized ledger for medicine logistics tracking. Each medicine carries a unique QR code linked to its full manufacturing and regulatory information, allowing customers and employees to verify a medicine's authenticity simply by scanning the code. This promotes transparency and traceability, preventing counterfeit drugs from entering the supply chain. The blockchain infrastructure is protected by an XGBoost-based anomaly detection model. Trained on the NSL-KDD dataset, the model identifies and neutralizes malicious network activities such as unauthorized access attempts, ensuring the reliability and security of the system. Combining these technologies yields an all-in-one, scalable solution for minimizing counterfeit medicines: blockchain preserves the integrity and accessibility of data within the framework, while machine learning provides security, together forming a complete anti-counterfeiting regime. The system not only protects public health but also strengthens the culture of trust and transparency in the pharmaceutical supply chain, making it a feasible approach for large-scale implementation in the industry.

KEYWORDS

Blockchain, Machine Learning, Pharmaceutical Supply Chain, Counterfeit Drugs, Hyperledger.
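
A minimal sketch of the anomaly-detection side of the system, assuming an XGBoost classifier like the one described; synthetic stand-in features are used here instead of the NSL-KDD dataset, so this is illustrative only.

```python
# Illustrative sketch: flagging anomalous network traffic with XGBoost.
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X_normal = rng.normal(0.0, 1.0, size=(500, 10))
X_attack = rng.normal(3.0, 1.5, size=(50, 10))       # hypothetical attack traffic
X = np.vstack([X_normal, X_attack])
y = np.array([0] * 500 + [1] * 50)                    # 1 = anomalous / malicious

model = XGBClassifier(n_estimators=100, max_depth=4, eval_metric="logloss")
model.fit(X, y)
print(model.predict(X[:5]))                           # expected: mostly 0 (normal)
```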


Optimizing Graph Neural Networks Hyperparameters for Molecular Property Prediction Using Nature-Inspired Metaheuristic Algorithms

Ali Ebadi, Yaser Al Mtawa, and Qian Liu, Department of Applied Computer Science, The University of Winnipeg, Canada

ABSTRACT

Molecular property prediction is a critical step in the discovery of new drugs and materials. Traditional computational models face limitations in computational efficiency, scalability, and reliance on handcrafted descriptors. Graph Neural Networks (GNNs) have recently emerged as a state-of-the-art solution by directly leveraging molecular graph structures, effectively capturing spatial and topological relationships. Despite their promise, GNNs are highly sensitive to hyperparameter configurations, posing challenges for their deployment in diverse applications. To address this, hyperparameter optimization based on Nature-Inspired Metaheuristic Algorithms (NIMAs) offers a robust approach for navigating high-dimensional, complex search spaces. This study systematically evaluates the performance of various NIMAs for optimizing GNN hyperparameters in molecular property prediction tasks. Benchmarking these methods on a large dataset, we assess their effectiveness in terms of prediction accuracy, computational efficiency, and scalability. Our findings demonstrate the potential of metaheuristic algorithms in enhancing GNN performance while addressing the challenges of traditional optimization methods.

KEYWORDS

Graph Neural Networks, Hyperparameter Optimization, Metaheuristic Algorithm, Evolutionary Algorithm, Molecule Property Prediction.


Operations Research-guided Graph Neural Networks for Multi-property Regression in Materials Science

Manpreet Kuar, Yaser Al Mtawa, Qian Liu, Department of Applied Computer Science, The University of Winnipeg, Winnipeg, Manitoba, Canada

ABSTRACT

Understanding the relationship between a material’s structure and its properties is key to improving existing materials and developing new ones for various applications. Graph Neural Networks (GNNs) offer a promising approach for this task by modeling material structures as graphs. However, they face challenges with hyperparameter optimization (HPO), particularly when accounting for atomic, bond, and global features while simultaneously predicting multiple properties. This study proposes the incorporation of Operations Research (OR) techniques, including Genetic Algorithms (GA) and Simulated Annealing (SA), into GNN HPO for multi-property prediction in materials. The results show that SA achieved 15.73% lower Mean Absolute Error than GA, demonstrating superior predictive accuracy. However, GA converged 10% faster, while both outperformed baseline Random Search (RS), which had the highest error despite the shortest optimization time. Ultimately, this study highlights that OR could provide an effective framework for enhancing the efficiency of HPO in predictive models for material science. The source code files have been made available at MEGNet HPO.

KEYWORDS

Material Science, Graph Neural Networks, Multi-property Prediction, Operations Research, Hyperparameter Optimization.
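
To make the Simulated Annealing step concrete, here is a minimal sketch that anneals over a small GNN hyperparameter grid; the search space and the stand-in objective are illustrative assumptions, and `evaluate_gnn` is a hypothetical callback that would train a GNN and return its validation MAE.

```python
# Illustrative sketch: simulated annealing over a hyperparameter search space.
import math, random

SPACE = {"hidden_dim": [32, 64, 128, 256],
         "num_layers": [2, 3, 4, 5],
         "learning_rate": [1e-4, 5e-4, 1e-3, 5e-3]}

def neighbor(cfg):
    key = random.choice(list(SPACE))
    new = dict(cfg)
    new[key] = random.choice(SPACE[key])
    return new

def simulated_annealing(evaluate_gnn, steps=50, t0=1.0, cooling=0.95):
    current = {k: random.choice(v) for k, v in SPACE.items()}
    current_err, temp = evaluate_gnn(current), t0
    best, best_err = current, current_err
    for _ in range(steps):
        cand = neighbor(current)
        err = evaluate_gnn(cand)
        # Always accept improvements; accept worse configs with a temperature-dependent probability.
        if err < current_err or random.random() < math.exp((current_err - err) / temp):
            current, current_err = cand, err
            if err < best_err:
                best, best_err = cand, err
        temp *= cooling
    return best, best_err

# Toy objective standing in for actual GNN training and validation.
print(simulated_annealing(lambda c: abs(c["hidden_dim"] - 128) / 128 + c["learning_rate"]))
```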


The Role of Power BI in Enterprise Reporting: A Tester's Perspective

Swetha Talakola, Quality Engineer III at Walmart, Inc, USA

ABSTRACT

Enterprise reporting has evolved from basic spreadsheets to dynamic, interactive dashboards that give decision-makers real-time information in the modern data-centric business environment. Organizations expect not just data but also clarity, context, and accuracy, delivered fast and precisely. Strong data visualization solutions have grown out of this shift, with Microsoft Power BI leading the way. Power BI connects easily with modern data ecosystems so that businesses may combine disparate data sources, produce user-friendly visualizations, and provide thorough reports tailored to different corporate needs. Though data analysts and developers get a lot of attention, the role of the software tester is critical but frequently underappreciated. By rigorously testing metrics, cross-referencing data sources, verifying filters, and making sure representations faithfully depict the underlying data logic, testers act as quality gatekeepers. This paper investigates methods used to check data accuracy, evaluate report logic, replicate edge scenarios, and run performance stress tests on reports, reflecting the tester's point of view. Using actual testing scenarios, we show how testing approaches improve report reliability, build user trust, and guarantee regulatory compliance. Our findings show that early inclusion of testers in the Power BI development cycle helps establish improved reporting strategies and promotes the quick discovery of significant issues. From automated validation scripts to user acceptance testing processes, the paper offers pragmatic approaches, insights acquired, and actionable tactics. This point of view supports a cooperative approach wherein testers, developers, and stakeholders coordinate efforts to ensure that corporate reports are not only visually appealing but also highly dependable.

KEYWORDS

Power BI, enterprise reporting, report testing, business intelligence, data validation, dashboard testing, ETL validation, data quality, visual analytics, report automation, BI tools, real-time insights, data integrity, user acceptance testing, performance testing, interactive dashboards, data-driven decisions, tester’s role, report accuracy, reporting workflows.


Kubernetes Meets Legacy Systems: A Migration Playbook for Modern Infrastructure

Ali Asghar Mehdi Syed, IT Engineer at World Bank Group, USA

ABSTRACT

Modern businesses are often torn between the need to grow and the safety of their established technologies. Legacy systems, once fundamental to corporate operations, now create a range of issues, including security weaknesses, restricted scalability, rising maintenance costs, and difficulties integrating with modern technologies. Companies that increasingly depend on Kubernetes as a basic building block understand they need to change: Kubernetes's resilience and agility enable businesses to grow naturally, automate work, and future-proof their operations. This playbook provides a rational foundation for companies ready to begin that transformation. It examines the major migration options for reducing risk and disruption, including hybrid environments, legacy application containerization, and incremental migration strategies. It also stresses important elements such as stakeholder alignment, small-scale execution, and comprehensive monitoring strategies, drawing on practical ideas and lessons learned from migrations that have veered off course. By providing useful guidance for evaluating migration readiness and planning with Kubernetes, this playbook helps position the modernization of existing systems as a strategic effort for continued success.

KEYWORDS

Kubernetes, Legacy Systems, Migration, Modern Infrastructure, Cloud-native, Microservices, Orchestration, Digital Transformation, Containerization, Infrastructure Modernization, Deployment Strategies, Hybrid Cloud.


Leveraging Merge Request Data to Analyze Devops Practices: Insights From a Networking Software Solution Company

Samah Kansab1, Matthieu Hanania1, Francis Bordeleau1, and Ali Tizghadam2, 1École de technologie supérieure (ÉTS), Montréal, Canada, 2TELUS, Toronto, Canada

ABSTRACT

Context: DevOps integrates collaboration, automation, and continuous improvement in software development, enhancing agility and ensuring consistent software releases. GitLab’s Merge Request (MR) mechanism plays a critical role in this process by streamlining code submission and review. While extensive research has focused on code review metrics like time to complete reviews, MR data can offer broader insights into collaboration, productivity, and process optimization. Objectives: This study aims to leverage MR data to analyze multiple facets of the DevOps process, focusing on the impact of environmental changes (e.g., COVID-19) and process adaptations (e.g., migration to OpenShift technology). We also seek to identify patterns in branch management and examine how different metrics impact code review efficiency. Methods: We analyze a dataset of 26.7k MRs from 116 projects across four teams within a networking software solution company, focusing on metrics related to MR effort, productivity, and collaboration. The study compares the impact of process and environmental changes, and branch management strategies. Additionally, we apply machine learning techniques to examine code review processes, highlighting the distinct roles of bots and human reviewers. Results: Our analysis reveals that the pandemic led to increased review effort, although productivity levels remained stable. Remote work habits persisted, with up to 70% of weekly activities occurring outside standard hours. The migration to OpenShift showed a successful adaptation, with stabilized performance metrics over time. Branch management on stable branches, especially for new releases, exhibited effective prioritization. Bots helped initiate reviews more quickly, but human reviewers were essential in reducing the overall time to complete reviews. Other factors, such as the number of commits and reviewer experience, also impact code review efficiency. Conclusion: This research offers practical insights for practitioners, demonstrating the potential of MR data to analyze and improve different aspects such as productivity, effort, and overall efficiency in DevOps practices.

KEYWORDS

Software process, DevOps, Merge request, GitLab, Code review.


Regulatory and Policy Discussions on LLM Auditing: Challenges, Frameworks, and Future Directions

Kailash Thiyagarajan, Independent Researcher, USA

ABSTRACT

The rapid rise of Large Language Models (LLMs) has revolutionized AI-driven applications but has also raised critical concerns regarding bias, misinformation, security, and accountability. Recognizing these challenges, governments and regulatory bodies are formulating structured policies to ensure the responsible deployment of LLMs. This paper provides a comprehensive analysis of the global regulatory landscape, examining key legislative efforts such as the EU AI Act, the NIST AI Risk Management Framework, and industry-led auditing initiatives. We highlight the gaps in current frameworks and propose a structured policy approach that promotes both innovation and accountability. To achieve this, we introduce a multi-stakeholder governance model that integrates regulatory, technical, and ethical perspectives. The paper concludes by discussing the future trajectory of AI regulation and the critical role of standardized auditing in enhancing transparency and fairness in LLMs.

KEYWORDS

LLM Auditing, AI regulation, Ethical AI, Algorithmic Transparency, Bias and Fairness in AI, Explainability.


Enhancing LLM-assisted Translation: Optimizing Contextual Prompting and Pivot Strategies for Low-resource Languages with a Focus on Korean-to-English News Translation

WANG Wei and ZHOU Weihong, Beijing International Studies University, Beijing, China

ABSTRACT

The current research investigates the effectiveness of advanced prompting techniques in Large Language Model (LLM)-assisted translation from Korean, a low-resource language, to English, a high-resource language. The research explores two primary factors: the role of English as an intermediary language in the translation process and the influence of carefully refined contextual prompts on translation quality. Through a comprehensive empirical methodology, the study integrates multiple automated evaluation metrics—namely BLEU, METEOR, COMET, chrF++, and TER—to assess key aspects of translation performance, including accuracy, faithfulness, naturalness, and idiomatic expression. The findings contribute valuable insights into the optimization of prompt engineering, offering practical implications for improving LLM-driven translation models, particularly for low-resource languages. Furthermore, the study highlights potential avenues for enhancing translation workflows and addresses challenges associated with leveraging LLMs for less commonly studied languages.

KEYWORDS

LLM-assisted translation, low-resource languages, prompting strategies, translation quality, Korean-to-English translation, pivot translation, DeepSeek, ChatGPT-4o, Grok-2.
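
As a pointer to how the listed automatic metrics can be computed, the sketch below scores a candidate translation with BLEU, chrF++, and TER via the sacrebleu library; the sentences are toy examples, and METEOR and COMET require separate packages not shown here.

```python
# Illustrative sketch: scoring one hypothesis against one reference with sacrebleu.
import sacrebleu

hypotheses = ["The finance ministry announced a new budget plan on Tuesday."]
references = [["The ministry of finance unveiled a new budget plan on Tuesday."]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
chrf = sacrebleu.corpus_chrf(hypotheses, references, word_order=2)  # word_order=2 -> chrF++
ter = sacrebleu.corpus_ter(hypotheses, references)
print(bleu.score, chrf.score, ter.score)
```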


AI-enhanced Interactive Simulation for Scalable CPR and Lifeguard Training in Aquatic Emergency Response

Evan Mikai Lu and Tyler Boulom, California State Polytechnic University, USA

ABSTRACT

This project addresses the critical need for realistic, accessible, and scalable lifeguard and CPR training to better prepare responders for aquatic emergencies. To solve this, an AI-driven interactive simulation was developed using Unity, integrating first-person perspectives, checkpoint-based decision-making, and real-time AI-generated feedback to enhance skill retention and decision-making capabilities. The key technologies utilized include Unity for simulation environments and OpenAI’s natural language processing for context-specific response generation. Challenges included ensuring AI accuracy and optimizing the user interface across diverse platforms. These were mitigated by expanding AI training datasets and refining interface graphics. Experiments demonstrated high AI accuracy in matching authoritative standards, though minor inaccuracies indicated areas for dataset improvement. Interface responsiveness tests revealed the need for better optimization on mobile platforms. Ultimately, this project provides an effective training method combining interactive realism and intelligent feedback, making CPR and lifeguard training more engaging, effective, and widely accessible, ultimately improving emergency preparedness outcomes.

KEYWORDS

AI-driven training, Lifeguard simulation, CPR skill retention, Unity-based emergency preparedness


An Evolved Model for Online Content Filtering With Real-time AI Identification and Imagery Recognition

Chenghao Feng and Andrew Park, California State Polytechnic University, USA

ABSTRACT

Nexio Shield represents a significant advancement in online content moderation, leveraging AI to provide real-time protection against harmful material [1]. Our experiments demonstrate its effectiveness in detecting inappropriate content and highlight its user-friendly design. Comparative analysis with traditional moderation methods underscores its superiority in delivering immediate, unbiased, and consistent content analysis. While challenges such as reducing false negatives and enhancing customization features exist, ongoing improvements and user collaboration can enhance its effectiveness [2]. Overall, Nexio Shield contributes to creating safer online environments, addressing the limitations of traditional moderation approaches, and setting a new standard in digital safety.

KEYWORDS

Internet Security, Content Moderation, AI Detection, Image Recognition.


Evaluating Speech Recognition Algorithms for Linguistic Analysis in Hearing-Impaired Children's Environments

Rafael Pinto, Pedro Morais, Gustavo Tomaz, Shawn N. Fraser, Ricardo Valentim, and Joseli Brazorotto, Federal University of Rio Grande do Norte, Brazil

ABSTRACT

Early language exposure is crucial for cognitive and linguistic development, especially for children with hearing loss who depend on family interactions for language acquisition. The ECO System was developed to capture and transcribe these interactions, enabling the assessment of verbal stimulation. This study compares the performance of the SpeechRecognition and Whisper algorithms to determine their accuracy and reliability in this context. Verbal interactions were recorded in clinical and free-play settings, transcribed automatically, and compared to manual transcriptions by a speech-language pathologist. The Intraclass Correlation Coefficient (ICC) measured consistency and agreement, while a qualitative analysis assessed errors related to word substitution, omissions, background noise, and speaker recognition. Whisper demonstrated superior accuracy, achieving an ICC of 1.000 in some cases, while SpeechRecognition exhibited lower consistency and struggled with sentence coherence and background noise. Despite these advantages, both models faced challenges in detecting conversational turns and child vocalizations. By integrating artificial intelligence and software engineering techniques into a real-world application, this research highlights the transformative potential of automated speech recognition for clinical and educational settings. The interdisciplinary nature of the ECO System allows for the development of cost-effective, scalable solutions that can improve language monitoring and early intervention strategies. Future advancements in speech recognition tailored to Brazilian Portuguese and noisy, natural environments could further enhance the system's capabilities, leading to new insights into language development and more effective rehabilitation approaches for children with hearing loss.

KEYWORDS

Automatic Speech Recognition, Natural Language Processing, Child Language Development.
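
As a sketch of the agreement analysis mentioned above, the snippet below computes an intraclass correlation between manual and automatic word counts per recording with the pingouin package; the counts are fabricated stand-ins, not study data.

```python
# Illustrative sketch: ICC between manual and Whisper word counts per recording.
import pandas as pd
import pingouin as pg

data = pd.DataFrame({
    "recording": [1, 1, 2, 2, 3, 3, 4, 4],
    "rater":     ["manual", "whisper"] * 4,
    "words":     [120, 118, 95, 97, 143, 140, 88, 86],
})
icc = pg.intraclass_corr(data=data, targets="recording", raters="rater", ratings="words")
print(icc[["Type", "ICC"]])
```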


Juicy or Dry? A Comparative Study of User Engagement and Information Retention in Interactive Infographics

Bruno Campos, Department of Design, MacEwan University, Edmonton, Canada

ABSTRACT

This study compares the impact of "juiciness" on user engagement and short-term information retention in interactive infographics. Juicy designs generally showed a slight advantage in overall user engagement scores compared to dry designs. Specifically, the juicy version of the Burcalories infographic had the highest engagement score. However, the differences in engagement were often small. Regarding information retention, the results were mixed. The juicy versions of The Daily Routines of Famous Creative People and The Main Chakras infographics showed marginally better average recall and more participants with higher recall. Conversely, the dry version of Burcalories led to more correct answers in multiple-choice questions. The study suggests that while juicy design elements can enhance user engagement and, in some cases, short-term information retention, their effectiveness depends on careful implementation. Excessive juiciness could be overwhelming or distracting, while well-implemented juicy elements contributed to a more entertaining experience. The findings emphasize the importance of balancing engaging feedback with clarity and usability.

KEYWORDS

Infographics, Juiciness, Interactive, Engagement.


A Convenient Scooper with Sensors and Application to Help with Dog Waste Picking and Environmental Responsibility Management

Shilei Cao1, Jonathan Sahagun2, 1Arnold O. Beckman High School, 3588 Bryan Ave, Irvine, CA 92602, 2California State Polytechnic University, Pomona, CA, 91768, Irvine, CA 92620

ABSTRACT

Dog poop irritates people, yet they rarely take time to address it, even though it is also a significant environmental issue. Since the problem could largely be solved by dog owners acting responsibly, we created a marketable smart robotic arm with a connected app that encourages owners to pick up waste while climbing the rankings on a mobile leaderboard. The device addresses the unsanitary aspect of pickup by keeping a safe distance, and the app creates competition that helps owners pick up waste voluntarily. The main challenges were design miscalculations, mechanical restrictions, and advanced code features [2]. In our experiments, we tested the sensor's accuracy in detecting waste and the reliability of the app's ranking system. In the end, we selected a sensor matched to the characteristics of dog waste and used a spaced-time method to keep pickup counts accurate and fair. The result is a convenient system that methodically shifts responsibility for dogs back onto their owners.

KEYWORDS

Impactful, Methodical, Convenient, Smart Sensors.


The Signal is the System: Scaling Real-time Systems for Planetary Intelligence

Stephen W. Marshall1 and Jurgen Valckenaere2, 1ora.systems, 2University of Western Australia

ABSTRACT

The infrastructure for capturing environmental data has rapidly advanced—networks of sensors, satellites, and telemetry now monitor planetary systems at unprecedented resolution. Yet architectures capable of translating that data into real-time signals remain critically underdeveloped. Climate informatics continues to rely on static, retrospective models—built to document, not respond. This paper introduces a framework for generative environmental intelligence: signal-based systems capable of detecting stress, issuing directives, and simulating future states across biospheric and geopolitical scales. Drawing from financial signal processing and autonomous feedback design, this model collapses the gap between ecosystemic volatility and coordinated action. The Los Angeles wildfires illustrate the potential of real-time signal architectures to enable anticipatory governance. Technologies such as HALO and PROTOSTAR—two generative modules designed for climate intelligence architecture—demonstrate how planetary signals can evolve from alerts into infrastructure. The systems exist. The challenge is not availability, but the willingness to evolve global signaling apparati.

KEYWORDS

Signal Processing Systems, Climate Informatics, Autonomous Feedback Networks, Predictive Modeling, Planetary Intelligence.


A Sophisticated Mobile Application to Determine Hairstyle and Locate Barbers using Artificial Intelligence Models and Web Scraping

Michael Zhang1, Rodrigo Onate2, 1Oakton High School, 2900 Sutton Rd, Vienna, VA 22181, 2California State Polytechnic University, Pomona, CA, 91768, Irvine, CA 92620

ABSTRACT

Many people struggle to find personalized hairstyle ideas and trusted barbers in their area. To solve this, we propose an AI-powered barber app that generates hairstyle images based on user preferences, showcases trending and celebrity styles, and connects users with nearby barbers backed by trusted reviews and more [7]. One of the major challenges was gathering up-to-date barber information, which was solved using web scraping to automatically collect data such as reviews, hours, and locations. During experimentation, the application was put through prompt engineering experiments and zip code testing [8]. Results showed that overly specific prompts led to less accurate hairstyle images, with a surprising drop in quality despite the higher level of detail. Meanwhile, zip code testing revealed that only 1.5% were missing, confirming a highly reliable barber database. These results helped identify blind spots and reinforced the importance of prompt balance and verified data sources. StyleSync is ultimately something people will use because the haircut options are limitless and people are constantly looking for new styles.

KEYWORDS

Web Scraping, Prompt Engineering, Barber, Hair Style.


Quantum-consistent Adelic Integration and Structure of Egyptian Fractions

Julian Del Bel, Independent Researcher, Canada

ABSTRACT

Through diligent application of adelic integration and quantum arithmetic, we demonstrate the Egyptian fraction system's remarkable anticipation of both number-theoretic profundities and quantum measurement theory. Our findings reveal a quantum-arithmetic framework exposing profound structural patterns in Egyptian fractions. We demonstrate that these ancient decompositions exhibit: (i) adelic balance through multiplicative normalization; (ii) prime entanglement with optimized logarithmic spread (σ_logs ≈ 0.17); (iii) dyadic quantization in the Eye of Horus fractions; and (iv) computational validation via modern thresholding (< 10^-12). Statistical analysis shows Egyptian σ_logs values significantly below Erdős-Kac predictions (p < 0.001), evidencing non-random optimization. Our adelic unity framework connects these features through number-theoretic quantum analogs, revealing an ancient system that anticipated modern mathematical principles.


Secure API-Driven Workforce Data Pipelines: Leveraging OAuth 2.0 and S3 for Real-time Compliance and Forecasting

Abdul Jabbar Mohammad, UKG Lead Technical Consultant at Metanoia Solutions Inc, USA

ABSTRACT

Modern, fast-paced, data-centric companies have to make rapid decisions based on personnel dynamics—recruitment patterns, productivity statistics, compliance measures, and future staffing forecasts. This article looks at how secure, API-driven data pipelines can satisfy such objectives while guaranteeing scalability and compliance. Given increased concerns about data privacy and legal obligations, we have to protect data at every level; we cannot simply pass it on. In this scenario, OAuth 2.0 and AWS S3 become relevant. We examine how a strong, token-based authorization system maintained by OAuth 2.0 allows secure access to worker data across applications while protecting sensitive credentials. In line with contemporary corporate security guidelines, OAuth 2.0 integrates with RESTful APIs to provide correct, context-sensitive permissions. S3 is needed to keep data highly accessible, durable, and under control at scale. Fit for predictive workforce analytics and compliance reporting, this forms the foundation of our suggested architecture and facilitates real-time data acquisition, transformation, and retrieval. Together, OAuth 2.0 and S3 provide end-to-end security across the pipeline while supporting continuous forecasting, regulatory compliance, and straightforward scalability in response to organizational needs. This work specifies architectural patterns, pragmatic design concerns, and real-world scenarios in which such a configuration improves operational agility and governance. The objective is to offer engineering executives, data architects, and compliance teams wishing to upgrade workforce data infrastructure a progressive yet pragmatic road map that preserves trust, transparency, and security.
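As a rough illustration of the two building blocks described above, the sketch below obtains an access token via the OAuth 2.0 client-credentials grant and lands a workforce extract in S3 with server-side encryption; the token endpoint, API URL, credentials, and bucket name are placeholders rather than details from the article.

# Sketch: OAuth 2.0 client-credentials flow + S3 upload for a workforce extract.
# Endpoints, credentials, and bucket names are hypothetical placeholders.
import json
import requests
import boto3

def fetch_token() -> str:
    resp = requests.post(
        "https://auth.example.com/oauth2/token",           # placeholder endpoint
        data={"grant_type": "client_credentials", "scope": "workforce.read"},
        auth=("CLIENT_ID", "CLIENT_SECRET"),                # placeholder credentials
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def extract_and_land(token: str) -> None:
    data = requests.get(
        "https://api.example.com/v1/workforce/metrics",     # placeholder API
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    ).json()
    boto3.client("s3").put_object(
        Bucket="workforce-data-lake",                       # placeholder bucket
        Key="raw/workforce_metrics.json",
        Body=json.dumps(data).encode("utf-8"),
        ServerSideEncryption="AES256",                      # encrypt at rest
    )

if __name__ == "__main__":
    extract_and_land(fetch_token())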

KEYWORDS

Workforce Data Pipelines, API Security, OAuth 2.0, AWS S3, Compliance Automation, Forecasting Models, Real-Time Data, Secure Integration



Data to Decisions: Using PowerBI and SPSS for Real-time Quality Metrics in PBM and Insurance Systems

Varun Varma Sangaraju, Senior QA Engineer at Cognizant, USA

ABSTRACT

In today’s fast-paced healthcare landscape, especially in Pharmacy Benefit Management (PBM) and insurance systems, having access to real-time insights isn’t just helpful—it’s essential. Delayed or siloed data can hinder critical decisions that impact patient care, operational efficiency, and compliance. That’s where tools like PowerBI and SPSS come in. This study explores how combining the intuitive visualization capabilities of PowerBI with the statistical depth of SPSS can significantly boost how we interpret data and maintain quality control in PBM and insurance environments. We set out to create a framework that enables stakeholders—from analysts to executives—to make quick, informed decisions based on live data streams and deeper predictive insights. Our methodology involved integrating large datasets from operational systems into PowerBI dashboards for visual analysis, while using SPSS to perform rigorous statistical modeling and identify key trends or anomalies. The results were compelling: we were able to detect potential quality issues in near real-time, spot inefficiencies before they escalated, and back strategic decisions with data-driven evidence. What’s more, this hybrid approach empowered teams to move beyond static reporting and into a space of proactive management. By merging analytics with usability, this study underscores how the right tech stack can transform routine monitoring into a powerful decision-making engine. Ultimately, the insights generated proved invaluable not just for performance tracking but for shaping smarter, faster responses across various operational touchpoints.
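Neither the dashboards nor the SPSS models are reproduced here; as a loose illustration of the kind of pre-dashboard screening the study describes, the pandas sketch below flags days whose claim rejection rate drifts beyond a rolling three-sigma band before the data reaches the reporting layer. Column names and the threshold are assumptions.

# Illustrative quality-metric screen: flag days whose claim rejection rate
# drifts more than three standard deviations from a rolling mean.
# Column names and the 3-sigma rule are assumptions for illustration.
import pandas as pd

def flag_anomalies(df: pd.DataFrame) -> pd.DataFrame:
    rate = df["rejected_claims"] / df["total_claims"]
    rolling_mean = rate.rolling(window=30, min_periods=10).mean()
    rolling_std = rate.rolling(window=30, min_periods=10).std()
    return df.assign(
        rejection_rate=rate,
        anomaly=(rate - rolling_mean).abs() > 3 * rolling_std,
    )

# Example usage with a hypothetical daily extract:
# daily = pd.read_csv("pbm_daily_metrics.csv", parse_dates=["date"])
# print(flag_anomalies(daily).query("anomaly"))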

KEYWORDS

PowerBI, SPSS, Pharmacy Benefit Management (PBM), real-time analytics, quality metrics, insurance systems, data-driven decision making, healthcare analytics, performance tracking, predictive modeling, data visualization, compliance reporting, claims analysis, operational efficiency, business intelligence



Service Cloud Optimization for Claims Processing: a Developer’s Perspective

Vasanta Kumar Tarra, Lead Engineer at Guidewire Software

ABSTRACT

Effective management of insurance claims has always been challenging because of complicated procedures, inadequate communication, and the need for accuracy under tight time constraints. These difficulties typically bring delays, unhappy customers, and extra operating costs. Service Cloud has recently emerged in the insurance sector as a key platform for efficient claims processing, with its comprehensive case management, automation tools, and omnichannel support capabilities. From a developer's perspective, optimizing Service Cloud to satisfy the particular needs of claims processing is both a technical and a strategic endeavor. Developers can tailor the platform by using Flow to set up automation, integrating external data systems, creating dynamic screen flows for agents, and building bespoke components as needed. Our work concentrated on an exhaustive study of the claims process, identification of friction points, and optimizations that combine purpose-built solutions with declarative tools. Primary initiatives included more intelligent routing based on claim complexity, automation of repetitive operations, and improved claim status visibility for customers and agents. Notable outcomes were a 25% decrease in claim processing time, a 30% rise in first-contact resolution rates, and a general rise in agent efficiency. Most importantly, these gains increased customer satisfaction within the first three months following deployment. Moreover, we found that better procedures let agents shift their attention from administrative tasks to meaningful client interactions, thereby enhancing service quality. Through improved data access and communication efficiency, agents can proactively address claimants' concerns instead of reactively resolving problems after an escalation. Our strategy emphasized scalability so that regulatory changes and future business growth can be accommodated without a comprehensive system rebuild. Predictive analytics for fraud detection, AI-driven claims triage, and personalized client engagement hold great potential for further optimization. With tools such as Einstein Bots and Next Best Action, early client conversations can be automated while preserving a human element in key areas. The use of IoT data streams and real-time collaboration technology is also drawing increasing attention as a way to accelerate claim investigations and promote transparency. As Service Cloud grows, developers will lead the way, constantly innovating and adjusting to improve claims processing, making it not just faster but also more precise and customer-focused. This article demonstrates how a deliberate, developer-driven optimization approach can convert conventional claim operations into a more agile, responsive, customer-centric process. Through continuous improvement and the anticipation of emerging trends, the insurance industry can turn claims processing from a challenge into a differentiator for operational excellence and client loyalty.
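The routing described above is implemented with Salesforce Flow and custom components; the Python sketch below is only an illustration of the underlying idea of scoring a claim's complexity and selecting a queue, with hypothetical fields, weights, and queue names.

# Illustrative claim-routing rule: score complexity, then pick a queue.
# Field names, weights, and queue names are hypothetical; the paper's
# implementation uses Salesforce Flow and custom components instead.
from dataclasses import dataclass

@dataclass
class Claim:
    amount: float
    litigation_flag: bool
    num_parties: int
    prior_claims: int

def complexity_score(claim: Claim) -> float:
    score = 0.0
    score += min(claim.amount / 10_000, 5)      # capped monetary weight
    score += 4 if claim.litigation_flag else 0  # litigation adds fixed weight
    score += 0.5 * claim.num_parties
    score += 0.25 * claim.prior_claims
    return score

def route(claim: Claim) -> str:
    s = complexity_score(claim)
    if s >= 7:
        return "Senior-Adjuster-Queue"
    if s >= 3:
        return "Standard-Queue"
    return "Straight-Through-Processing"

print(route(Claim(amount=2_500, litigation_flag=False, num_parties=2, prior_claims=0)))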

KEYWORDS

Service Cloud, Claims Processing, Salesforce, Developer Optimization, Workflow Automation, Customer Service Efficiency, Insurance Technology



Blockchain-backed SLA Enforcement in Multi-tenant Cloud Infrastructures

Parth Jani, Project Manager at Molina Healthcare, USA

ABSTRACT

In the context of multi-tenant cloud infrastructures, the enforcement of Service Level Agreements (SLAs) plays a pivotal role in defining, monitoring, and maintaining the quality of service commitments between cloud service providers and their clients. SLAs typically specify critical performance metrics such as uptime, latency, and resource availability. However, traditional enforcement mechanisms often suffer from opacity, centralized dependency, and limited auditing capabilities. These limitations hinder trust, especially in dynamic, multi-tenant environments where multiple clients share and compete for resources in a virtualized setting. This paper proposes a novel blockchain-backed framework that embeds SLA enforcement into a decentralized, transparent, and tamper-proof system. The core of the framework is the integration of smart contracts—self-executing contracts with encoded conditions that automatically monitor SLA compliance and trigger predefined penalties in case of violations. This shift from manual or semi-automated enforcement to smart contract-driven automation not only reduces human error and administrative overhead but also guarantees consistent and impartial adherence to agreed service terms. In our architecture, each SLA is instantiated as a smart contract on a blockchain network, with verifiable parameters sourced from trusted monitoring agents deployed within the cloud infrastructure. These agents collect real-time performance data and interact with the smart contracts to assess compliance continuously. If any deviation from SLA metrics is detected, the smart contract autonomously executes penalties—such as refunds, credit issuance, or notifications—without requiring third-party arbitration. The blockchain ledger records all transactions and events immutably, ensuring a permanent and auditable history of SLA interactions. The benefits of this approach are manifold. First, it introduces a high level of transparency, enabling both clients and providers to observe SLA status and actions in real time. Second, the traceability offered by the blockchain allows all stakeholders to audit compliance histories, detect patterns of service degradation, and identify systemic issues. Third, immutability ensures that SLA records cannot be altered or tampered with, thereby strengthening legal enforceability. Fourth, this solution promotes accountability by clearly attributing responsibility to either the provider or the tenant in the event of non-compliance.
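A production deployment would encode this logic in an on-chain smart-contract language; the Python sketch below merely simulates the mechanism (encoded SLA terms, continuous compliance checks, automatic penalty events on a hash-chained log) with illustrative parameters.

# Simulation of the SLA-as-smart-contract idea: a monitoring agent reports
# metrics, the "contract" checks them against encoded terms, and an
# append-only, hash-chained event log records any penalty issued.
# Parameters are illustrative; a real deployment would run this on-chain.
import hashlib
import json
import time

class SlaContract:
    def __init__(self, min_uptime_pct: float, max_latency_ms: float, penalty_credit: float):
        self.terms = {"min_uptime_pct": min_uptime_pct,
                      "max_latency_ms": max_latency_ms,
                      "penalty_credit": penalty_credit}
        self.ledger = []   # append-only event log with hash chaining

    def _append(self, event: dict) -> None:
        prev_hash = self.ledger[-1]["hash"] if self.ledger else "0" * 64
        payload = json.dumps({**event, "prev": prev_hash}, sort_keys=True)
        event["prev"] = prev_hash
        event["hash"] = hashlib.sha256(payload.encode()).hexdigest()
        self.ledger.append(event)

    def report(self, uptime_pct: float, latency_ms: float) -> None:
        violated = (uptime_pct < self.terms["min_uptime_pct"]
                    or latency_ms > self.terms["max_latency_ms"])
        self._append({
            "ts": time.time(),
            "uptime_pct": uptime_pct,
            "latency_ms": latency_ms,
            "violation": violated,
            "credit_issued": self.terms["penalty_credit"] if violated else 0.0,
        })

sla = SlaContract(min_uptime_pct=99.9, max_latency_ms=200, penalty_credit=50.0)
sla.report(uptime_pct=99.95, latency_ms=120)   # compliant
sla.report(uptime_pct=99.10, latency_ms=340)   # violation -> credit issued
print(json.dumps(sla.ledger, indent=2))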

KEYWORDS

Blockchain, SLA Enforcement, Cloud Computing, Smart Contracts, Multi-Tenant Systems, Decentralized Governance, Service Reliability, Cloud Service Agreements, SLA Automation, Performance Monitoring, Immutable Ledger, Trust Management, Cloud Accountability, Federated Cloud, Hybrid Cloud, Penalty Execution, SLA Compliance, Distributed Systems, Auditable Workflows, Real-Time Verification.



Cross-domain Vulnerability Management Using Unified Dashboards: a Metrics-based Approach to Compliance and Risk Remediation

Pavan Paidy1 and Krishna Chaganti2, 1AppSec Lead at FINRA, USA, 2Associate Director at S&P Global, USA

ABSTRACT

Companies in today's dynamic digital environment increasingly struggle to remediate vulnerabilities across many domains, including cloud infrastructures, on-premises systems, third-party integrations, and IoT endpoints. Each environment has a different risk profile, security requirements, and reporting complexity, which limits security teams' ability to maintain a complete view or react quickly to attacks. This paper investigates a metrics-driven vulnerability management solution that leverages consolidated dashboards as a single, real-time interface for remediation and compliance actions. These dashboards reduce data silos and provide cross-functional visibility by combining data streams from several security systems and platforms, thereby offering a consistent viewpoint. Most importantly, the application of tailored security metrics and key performance indicators (KPIs) helps companies prioritize vulnerabilities according to real risk impact, compliance urgency, and remediation capacity, transforming raw data into actionable knowledge. This approach speeds remediation and supports ongoing compliance with regulatory standards including NIST, ISO 27001, and HIPAA, thereby improving decision-making and the timeliness of compliance. Our study indicates that companies that use integrated, metrics-based dashboards are better equipped to evaluate progress, identify structural flaws, and demonstrate accountability to stakeholders. Beyond technological efficiency, this approach encourages openness and proactivity, strengthening cooperation between security and compliance teams. Finally, the paper presents a practical framework for managing the complex field of cross-domain vulnerabilities, grounded in visibility, measurement, and strategic action.
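The abstract does not publish its scoring formula; the sketch below shows one plausible composite that weights severity, compliance urgency, and exposure to rank findings across domains. The weights, fields, and sample records are assumptions.

# Illustrative cross-domain prioritization: rank findings by a composite of
# CVSS score, compliance urgency, and asset exposure.  Weights, field names,
# and the synthetic findings are assumptions for illustration only.
def priority(finding: dict) -> float:
    return (0.5 * finding["cvss"] / 10
            + 0.3 * finding["compliance_urgency"]   # 0..1, e.g. proximity to a HIPAA deadline
            + 0.2 * finding["exposure"])            # 0..1, e.g. internet-facing crown-jewel asset

findings = [
    {"id": "F-101", "domain": "cloud",  "cvss": 9.8, "compliance_urgency": 0.9, "exposure": 1.0},
    {"id": "F-102", "domain": "iot",    "cvss": 6.5, "compliance_urgency": 0.2, "exposure": 0.4},
    {"id": "F-103", "domain": "onprem", "cvss": 7.2, "compliance_urgency": 0.8, "exposure": 0.1},
]

ranked = sorted(((priority(f), f["id"], f["domain"]) for f in findings), reverse=True)
for score, fid, domain in ranked:
    print(f"{score:.2f}  {fid}  ({domain})")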

KEYWORDS

Cross-domain security, vulnerability management, unified dashboards, compliance metrics, cybersecurity, risk remediation, security posture, risk scoring, real-time monitoring, metrics-based governance, threat visibility, centralized reporting, KPI-driven security, automated compliance tracking, security analytics, remediation prioritization, multi-environment risk management, security orchestration, audit readiness.



Multilingual Information Retrieval: Building a Scalable Social Media Search Engine with Apache Solr

Sai Prasad Veluru1 and Mohan Krishna Manchala2, 1Software Engineer at Apple Inc, USA, 2ML Engineer at Meta, USA

ABSTRACT

In the modern connected world, especially on social media where vast amounts of content are generated in many languages, the ability to retrieve relevant information across languages is crucial. This work investigates, using Apache Solr, the challenges and solutions for multilingual information retrieval in a social media search engine. To handle diverse and dynamic social media content, we underline the need for fast crawling, accurate indexing, and advanced entity extraction techniques. We build a scalable system that can crawl and analyze content from numerous social media networks in real time using Solr's strong full-text search capabilities. Our approach involves extracting key elements—including people, places, and companies—from posts, comments, and metadata, from which more accurate search results can be obtained. The paper also discusses the integration of multilingual support into Solr, ensuring that the search engine returns relevant results regardless of the query language. The main contributions of this work are new methods for handling linguistic challenges, managing huge volumes of social media data, and improving search accuracy through entity extraction. This project aims to improve the relevance and efficiency of social media search engines, thereby increasing their accessibility and usefulness for a global audience.
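A minimal indexing-and-query sketch with the pysolr client is shown below, assuming a collection named social_posts with per-language text fields and entity fields already defined in the schema; these names and the Solr URL are illustrative, not the system's actual configuration.

# Minimal pysolr sketch: index multilingual posts with extracted entities,
# then query a language-specific field.  Collection name, field names,
# and the Solr URL are illustrative assumptions.
import pysolr

solr = pysolr.Solr("http://localhost:8983/solr/social_posts", timeout=10)

solr.add([
    {"id": "p1", "lang": "en", "text_en": "Apple opens a new store in Berlin",
     "entities_org": ["Apple"], "entities_loc": ["Berlin"]},
    {"id": "p2", "lang": "es", "text_es": "Apple abre una nueva tienda en Berlín",
     "entities_org": ["Apple"], "entities_loc": ["Berlín"]},
], commit=True)

# Query the Spanish analyzer chain but filter on a language-agnostic entity field.
results = solr.search("text_es:tienda", **{"fq": "entities_org:Apple", "rows": 10})
for doc in results:
    print(doc["id"], doc.get("lang"))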

KEYWORDS

Multilingual Information Retrieval, Apache Solr, Social Media, Crawling, Indexing, Entity Extraction, Scalability, Named Entity Recognition (NER), Real-time Processing, Natural Language Processing (NLP), Search Engine Optimization (SEO), Distributed Systems, Information Retrieval Techniques, Big Data, Semantic Search.



Signal-based Anomaly Detection in Cloud Operations using Log Insights and Prometheus Metrics

LalithSriram Datla, Cloud Engineer, USA

ABSTRACT

Maintaining dependability, performance, and security in modern cloud environments—characterised by distributed, dynamic, and extensively monitored services—requires timely anomaly detection. Conventional monitoring systems often generate an excessive number of alerts—many of which are duplicated or delayed—causing alert fatigue and missed events. This work presents a signal-based approach to anomaly detection that uses the interaction of log data and system metrics. We generate a multi-dimensional view of system activity by extracting structured signals from unstructured logs via Log Insights and aggregating them with real-time metrics collected by Prometheus. The approach detects subtle trends and anomalies that can indicate breakdowns or disruptions, surpassing threshold-based monitoring. This method distinguishes itself by combining the quantitative depth of Prometheus metrics with high-fidelity log signals, enabling a more contextually aware and proactive detection system. While metrics provide consistent time-series performance indicators, logs offer comprehensive contextual narratives. Together, they help to cross-validate anomalies and reduce false positives. Our system continuously gathers and analyses data streams using statistical and rule-based methods to find abnormalities as they develop. By connecting reactive alerting with predictive insight, this hybrid monitoring system enhances observability. Moreover, it helps cloud operations teams to see problems early, understand root causes faster, and, where possible, automate remediation. We show through practical case studies and performance benchmarks that the integration of Log Insights with Prometheus metrics improves the accuracy, timeliness, and applicability of anomaly detection. The result is a strong but streamlined operational intelligence layer that improves system resilience and reduces downtime in cloud-based systems. This paper describes our approach's design, implementation, and results, supporting a shift to signal-based, integrated observability in cloud operations.
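As a simplified illustration of the cross-validation idea, the sketch below pulls an error-rate series from the Prometheus HTTP API, pairs it with a log-derived error-count series, and raises an anomaly only when both exceed a z-score threshold; the PromQL expression, endpoint, and threshold are assumptions.

# Sketch: cross-validate a Prometheus metric signal with a log-derived signal
# and flag an anomaly only when both deviate.  The PromQL expression, the
# Prometheus URL, and the 3-sigma threshold are illustrative assumptions.
import statistics
import time
import requests

PROM = "http://localhost:9090"
QUERY = 'sum(rate(http_requests_total{status=~"5.."}[5m]))'   # illustrative PromQL

def prometheus_series(minutes: int = 60) -> list[float]:
    end = time.time()
    resp = requests.get(f"{PROM}/api/v1/query_range", params={
        "query": QUERY, "start": end - minutes * 60, "end": end, "step": "60s",
    }, timeout=10)
    result = resp.json()["data"]["result"]
    return [float(v) for _, v in result[0]["values"]] if result else []

def zscore(series: list[float]) -> float:
    # z-score of the latest point against the history before it
    if len(series) < 10 or statistics.pstdev(series[:-1]) == 0:
        return 0.0
    return (series[-1] - statistics.mean(series[:-1])) / statistics.pstdev(series[:-1])

def is_anomalous(metric_series: list[float], log_error_counts: list[float]) -> bool:
    # Require agreement between the metric signal and the log signal.
    return zscore(metric_series) > 3 and zscore(log_error_counts) > 3

# log_error_counts would come from a Log Insights query aggregated per minute.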

KEYWORDS

Cloud Operations, Anomaly Detection, Log Insights, Prometheus Metrics, Signal Processing, Distributed Tracing, Service Mesh, Telemetry Data, Root Cause Analysis, Incident Management, Time Series Analysis, OpenTelemetry, System Monitoring, Alerting Rules, Resource Utilization.



Driving Automation in Oracle ERP With FBDI, HDL, and Custom BI Reports

Anusha Atluri1 and Teja Puttamsetti2, 1Senior Oracle Techno-Functional Consultant at Oracle, USA, 2Senior Integration Specialist at Caesar's Entertainment, USA

ABSTRACT

In the current era of enterprise resource planning, automation is critical for businesses that want to become more efficient, reduce manual workload, and increase data accuracy. Oracle ERP offers a number of modules and functionalities that automate a great deal of work and help organizations manage complex data operations. File-Based Data Import (FBDI), HCM Data Loader (HDL), and Business Intelligence (BI) reports are among the most transformative automation tools. FBDI allows end-users to import data in bulk into the Oracle ERP system using predefined templates, so they do not have to enter every data item manually, which would be time-consuming and error-prone. Similarly, HDL is an essential capability of Oracle HCM Cloud that provides secure and efficient bulk loading of HR data, and is used mainly by companies with frequent employee or organizational changes. Finally, the intelligent use of BI reports empowers end-users to draw conclusions from the numbers through clear visualizations and data points, leaving the computational complexity to the system while users monitor and act as needed. Used together, these technologies enable accurate, error-free, and fast data operations across Oracle ERP modules including finance, procurement, and human resources. With FBDI, HDL, and BI, dependence on IT teams is reduced, since non-technical users can also employ these tools for day-to-day data management, saving time and effort. This article offers an insight into these tools, some practical examples, and the benefits of using them in real-world scenarios.
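The import itself runs inside Oracle ERP Cloud; the sketch below only shows the preparatory step of shaping source records into an FBDI-style CSV and zipping it for upload, with hypothetical column names standing in for the official template layout.

# Sketch: shape source records into an FBDI-style CSV and zip it for upload.
# The column order below is a hypothetical stand-in for the official FBDI
# template layout; the actual import is then run inside Oracle ERP Cloud.
import csv
import zipfile

COLUMNS = ["InvoiceNumber", "SupplierName", "InvoiceAmount", "InvoiceDate", "Currency"]

def build_fbdi_zip(records: list[dict], csv_name: str, zip_name: str) -> None:
    with open(csv_name, "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=COLUMNS)
        # FBDI data files are typically header-less; keep rows in template order.
        for rec in records:
            writer.writerow({col: rec.get(col, "") for col in COLUMNS})
    with zipfile.ZipFile(zip_name, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write(csv_name)

build_fbdi_zip(
    [{"InvoiceNumber": "INV-1001", "SupplierName": "Acme Corp",
      "InvoiceAmount": "1250.00", "InvoiceDate": "2025/01/15", "Currency": "USD"}],
    "ApInvoicesInterface.csv", "ap_invoices_fbdi.zip",
)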

KEYWORDS

Oracle ERP, FBDI, HDL, Custom BI Reports, Automation, Data Migration, ERP Integration, Bulk Data Loads, Oracle Cloud, ERP Automation Tools, Data Transformation, Process Automation, Operational Efficiency, Data Management, Custom Report Development, FBDI Templates, HDL Best Practices, BI Publisher, Automated Data Validation, Oracle ERP Cloud.



A Comprehensive B2B2B Multi-tenant SaaS Solution for Agency and Client Management with Stripe Integration

Rahul Ambekar, Atharv Agharkar, Lalit Bagul, Niraj Bade, Department of Computer Engineering, A. P. Shah Institute of Technology, Thane, India

ABSTRACT

The increasing demand for well-organized client and project management solutions has led to the rise of SaaS-based platforms that simplify business operations. This research presents a scalable SaaS solution that enables agencies to manage clients, payments, and projects through three core features: a Stripe-integrated dashboard, a Kanban-based project management system, and a no-code funnel builder. The Stripe-integrated dashboard automates subscription management, transactions, and revenue tracking using Stripe Connect, ensuring seamless financial operations for agencies, clients, and the SaaS provider. The Kanban board simplifies task organization, team collaboration, and workflow tracking, improving project efficiency. The drag-and-drop funnel builder allows non-technical users to create sales funnels, integrate custom checkouts, and capture leads effortlessly. Built with Next.js 14 for frontend efficiency, Bun for runtime optimization, and Prisma for seamless database management, the platform ensures a future-proof architecture.
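The platform itself is built with Next.js; purely as an illustration of the Stripe Connect billing flow the dashboard relies on, the Python sketch below creates a client subscription on a connected agency account with a platform application fee. The API key, price ID, account ID, and fee percentage are placeholders.

# Sketch: create a client subscription on a connected agency account via
# Stripe Connect, taking a platform application fee.  The API key, price ID,
# connected-account ID, and fee percentage are placeholders.
import stripe

stripe.api_key = "sk_test_placeholder"

def subscribe_client(customer_id: str, price_id: str, agency_account: str):
    return stripe.Subscription.create(
        customer=customer_id,
        items=[{"price": price_id}],
        application_fee_percent=10,        # platform's cut, illustrative value
        stripe_account=agency_account,     # the agency's connected account
    )

# subscription = subscribe_client("cus_123", "price_basic_monthly", "acct_agency_456")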

KEYWORDS

Software as a Service (SaaS), Business-to-Business-to-Business (B2B2B), Stripe Integration, Agency Management, Client Management, Kanban Board, Funnel Builder, MultiTenant Platform.


Lessons Learned and Achievements in the Development and Testing of on-board Software for the Nahid 2 Satellite

Shirin Ranjbaran and Shahrookh Jalilian, Satellite Research Institute, Iranian Space Research Centre, Tehran, Iran

ABSTRACT

The Nahid2 satellite is a telecommunications satellite designed for a two-year mission, characterized by its intricate design and versatile capabilities. These capabilities encompass three-axis attitude control and determination, simultaneous telephone communication, and utilization of standard packet services. Notably, this satellite will pioneer orbital transfer and simultaneous telephone communication within the nation. The Nahid2 satellite's software architecture serves as a platform for executing pre-defined satellite scenarios, monitoring the operational status of various satellite components, managing data acquisition, and executing subsystem-specific algorithms, including managing both nominal operational states and anomalous events. Considering the aforementioned capabilities and the critical missions assigned to this satellite, coupled with the complexity, multiplicity, and importance of its objectives, the operational reliability of all subsystems, particularly the command and data management subsystem (CDMS) and its associated hardware and software, is paramount. Consequently, meticulous design and performance management of the CDMS on-board software are crucial to mitigating potential operational disruptions and ensuring the successful execution of the satellite's mission. On-board software assumes a heightened level of importance compared to other satellite subsystems due to its inherent complexity and distinctive attributes. The design, development, and rigorous testing of the Nahid2 satellite's on-board software have served as an invaluable learning experience for the broader satellite on-board software domain. Therefore, this article aims to present a concise overview of the key achievements and lessons learned during this process. These insights, derived from the satellite's development and testing phases, are intended to provide a valuable and effective foundation for future satellite development and testing projects at the Satellite Research Institute.

KEYWORDS

On-board software, software architecture, detailed design phase, implementation.


An AI-powered Mobile App to Democratize Tennis Skill Development Through Pose Estimation and Video Comparison

Yixuan Liu1, Jonathan Sahagun2, 1TVT Community Day School, 5200 Bonita Canyon Dr, Irvine, CA 92603, 2California State Polytechnic University, Pomona, CA, 91768, Irvine, CA 92620

ABSTRACT

TennisCoach is an app that assists individuals in improving their tennis skills [1]. In low-income communities, it is difficult for players to find affordable classes and courts to play on. With the cost of coaches and lessons continuing to rise, it is extremely difficult for people who cannot afford them to improve. Our app provides an easily accessible way for people to improve their tennis skills. It allows the user to upload two videos: one of themselves, and one of a professional demonstrating the same move. The app offers a visual representation of the player's mistakes, a side-by-side comparison of the player's and the professional's stroke, and a detailed breakdown of the player's movements. Video processing is performed through pose estimation and the K-means algorithm [2]. Users can review previously uploaded videos on the history page. Users' data are safely stored in Firebase, protected by Firebase Authentication. With our app, tennis will become more accessible to players from many different backgrounds.
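A rough sketch of the processing pipeline, pairing MediaPipe Pose keypoints with scikit-learn's K-means to group frames into stroke phases, is shown below; the sampling rate and cluster count are assumptions rather than the app's actual parameters.

# Sketch: extract pose keypoints per video frame with MediaPipe, then cluster
# frames into stroke phases with K-means.  The cluster count and sampling
# rate are illustrative assumptions, not the app's actual parameters.
import cv2
import numpy as np
import mediapipe as mp
from sklearn.cluster import KMeans

def keypoints_per_frame(video_path: str, every_n: int = 3) -> np.ndarray:
    rows = []
    cap = cv2.VideoCapture(video_path)
    with mp.solutions.pose.Pose(static_image_mode=False) as pose:
        i = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if i % every_n == 0:
                res = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
                if res.pose_landmarks:
                    rows.append([c for lm in res.pose_landmarks.landmark
                                 for c in (lm.x, lm.y)])
            i += 1
    cap.release()
    return np.array(rows)

def stroke_phases(video_path: str, n_phases: int = 4) -> np.ndarray:
    X = keypoints_per_frame(video_path)
    return KMeans(n_clusters=n_phases, n_init=10).fit_predict(X)

# phases_player = stroke_phases("player_forehand.mp4")
# phases_pro    = stroke_phases("pro_forehand.mp4")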

KEYWORDS

Tennis training app, Pose estimation, Accessible sports technology, K-means motion analysis.



Smart Diagnostics: Integrating AI-powered Biosensors for Early Detection in Oncology

Varun Varma Sangaraju, Senior QA Engineer at Cognizant, USA

ABSTRACT

Cancer remains one of the leading causes of death globally. Although treatment has come a long way, early detection is still the most effective way to improve survival rates and treatment outcomes. Conventional diagnostic methods often detect cancer only at a later stage, when the disease has already progressed, limiting the therapeutic options. However, the fusion of artificial intelligence (AI) with biosensor technology is emerging as a promising solution. AI-enabled biosensors are designed to recognize subtle biological changes, such as specific proteins, gene mutations, or abnormal cell activity, that may indicate cancer. With the help of machine learning algorithms, these intelligent systems process large volumes of biological data and convert it into information that would otherwise go unnoticed at such an early stage. Combining biosensing techniques with AI in real-time, non-invasive applications can yield a highly accurate diagnostic tool for early-stage cancer. Our research concludes that cancer diagnostics become more sensitive and specific with the integration of AI, especially for cancer types that are difficult to detect, such as pancreatic and ovarian cancer. In addition, AI improves the learning ability of the biosensors, making them more intelligent with each input. The potential implications of this technology are substantial: it could be widely adopted for routine health checks or worn as a small device that monitors body functions continuously, allowing for much earlier identification and better health outcomes. Once scalable AI-powered biosensors equipped with user-friendly bioinformatic interfaces become available, cancer screening could shift from a traditionally reactive practice to a proactive one. The next steps include research into sensor materials, refinement of AI learning models, and testing of prototypes for validation and clinical safety, bringing this innovation to the broader market. AI-powered biosensors are a positive step toward smarter, faster, and more personalized cancer diagnostics with the potential to save millions of lives.
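The abstract does not specify a model; purely as an illustration of how biosensor readouts might feed a classifier, the sketch below trains a gradient-boosted model on synthetic biomarker features. The features, data, and model choice are invented for the example and imply no clinical result.

# Illustration only: a classifier over synthetic biosensor features.
# Feature names and data are synthetic; no clinical model is implied.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))   # e.g. protein level, mutation signal, cell-activity index
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0.8).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))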

KEYWORDS

AI in Oncology, Biosensors, Smart Diagnostics, Early Detection, Cancer Diagnosis, Machine Learning, Healthcare Technology, Personalized Medicine, Non-Invasive Diagnostics, Biomarker Detection, Real-Time Monitoring, Point-of-Care Testing, Wearable Health Devices, Deep Learning, Biomedical Engineering, Predictive Analytics, Clinical Decision Support, Precision Oncology, Medical IoT, Digital Health, Cancer Biometrics, Neural Networks, Data-Driven Diagnostics, Health Informatics, Cancer Prognostics, Early Intervention, Sensor-Based Diagnostics, Next-Generation Diagnostics.



Reach Us

dkmp@ccsea2025.org


dkmpconf@yahoo.com
