TRACK CHAIR
Rick Kazman
Shidler College of Business
University of Hawaii at Manoa
2404 Maile Way
Honolulu HI 96822
Tel: +1-808-956-6948
kazman@hawaii.edu
Tor-Morten Grønli
School of Economics, Innovation and Technology
Kristiania University College
Kirkegata 24
0153 Oslo, Norway
Tel: +47 48 15 64 76
tor-morten.gronli@kristiania.no
The Software Technology track at HICSS is about methods, tools, and techniques related to software, as distinct from the context in which it is deployed or its applications. Software Technology is among the oldest tracks at HICSS and has provided a central point of interaction among all participants in the conference, as well as a natural forum to foster new technologies. Among the topics that the Software Technology track has covered are: software engineering, security, networking, software-based product lines, open source software, pervasive computing, artificial intelligence, agile methods, mobile/ad hoc networking, cloud computing, virtualization, parallel and distributed computing, and software assurance. The Software Technology track continues to invite novel and emerging areas of research in what remains a dynamic and exciting field.
The increasing adoption of software in safety-critical domains such as healthcare, aerospace, robotics, and autonomous vehicles underscores the urgent need for rigorous assurance, verification, and testing methodologies. As these systems grow in scale, adaptivity, and interconnectedness, the challenges of ensuring reliability, correctness, and fault tolerance become more acute. These challenges are compounded by the rise of AI/ML-enabled systems, which introduce dynamic behaviors and evolving requirements into already complex environments.
This minitrack explores the intersection of software verification, safety assurance, fault tolerance, and advanced testing methodologies, with an emphasis on resilient, safety-critical software systems, especially in healthcare (e.g., wearables, health tracking software), aerospace (e.g., flight software, guidance software), government (e.g., military systems), and related technologies (e.g., robotics and autonomous vehicles). We aim to bridge the gap between theoretical research and real-world industrial applications. Our goal is to foster a collaborative dialogue between academia, government, and industry, advancing the state of the art in ensuring safe, robust, resilient, and reliable software systems. Key themes include:
- Advanced fault detection
- Software and hardware testing techniques
- Scalable testing for modern architectures
- CI/CD pipelines
- Formal methods and verification
- Regulatory compliance and standards
- Advances in testing tools and applications
- AI-augmented assurance and verification
- Safety assurance for AI-enhanced and autonomous systems
- Digital twins for testing software systems
- Real-world case studies and empirical evaluations in verification, assurance, and testing
- Emerging challenges and future directions in verification, assurance, and testing
Topics include but are not limited to:
- Advances in software testing techniques: Innovations in fuzzing, mutation testing, concolic testing, symbolic execution, combinatorial testing, automated test case generation, code coverage analysis, bug triage, test optimization, and prioritization for fault detection.
- Automated testing frameworks and strategies: Testing solutions for microservices, serverless architectures, embedded systems, wearables, and large-scale distributed, cloud-native, and edge computing systems.
- Continuous testing in DevOps and CI/CD: Scalable and efficient continuous integration testing techniques and solutions that seamlessly integrate into DevOps pipelines while balancing test coverage and execution time.
- Performance and resilience testing: Tools and methodologies for stress testing, load testing, performance benchmarking, and resilience metrics in software powering wearables, cyber-physical systems, and high availability systems.
- Automated fault detection and isolation: Techniques for anomaly detection, automated fault localization, root cause analysis in distributed logs and telemetry, and fault recovery.
- Lessons learned from large-scale test automation: Success stories, case studies, and lessons from deploying testing frameworks and methodologies in enterprise and mission-critical environments such as healthcare and aerospace.
- Architectural patterns and reliability engineering: Fault-tolerant design, real-time error detection and recovery, redundancy, failover mechanisms, and robustness strategies for embedded and safety-critical software.
- Ensuring robustness in constrained and safety-critical systems: Robust design principles for interoperable and resource-constrained devices, handling edge cases, outlier scenarios, and integrating resilience into the development lifecycle.
- Software assurance and verification methodologies: Development, evaluation, and deployment of methodologies and tools for safety-critical system design, risk assessment, verification, and assurance, including empirical studies and open-source benchmarks.
- Innovations in formal methods: Scalable approaches to formal verification (e.g., model checking, theorem proving, formal specification) for complex software systems, including automated toolchains integrating formal methods into development workflows.
- Runtime monitoring and adaptive safety mechanisms: Real-time verification techniques for detecting and mitigating safety violations, ensuring system robustness under dynamic operational conditions.
- Testing and assurance of AI/ML systems: Frameworks and methodologies for fairness, robustness, explainability, and bias mitigation in AI-powered systems, particularly under adversarial conditions.
- Verification and regulatory compliance: Approaches for meeting standards (e.g., ISO 26262, DO-178C, FDA guidelines), safety case development, assurance argumentation, and case studies on achieving regulatory compliance.
- New paradigms in safety-critical system design: Strategies for balancing innovation and safety in fast-evolving industries like autonomous transportation and healthcare wearables, addressing emerging verification and assurance challenges.
We welcome a broad spectrum of contributions, including:
- Theoretical advances in analytic methodologies.
- Design and evaluation of novel tools and frameworks for automated software analysis.
- Empirical studies and benchmarks showcasing the impact of novel techniques.
- Real-world insights and challenges from deploying advanced solutions.
- Critical reviews of theoretical advances, practical tools, methodologies, etc.
Minitrack Co-Chairs:
Ryan Karl (Primary Contact)
Software Engineering Institute, Carnegie Mellon University
rmkarl@sei.cmu.edu
Shen Zhang
Software Engineering Institute, Carnegie Mellon University
szhang@sei.cmu.edu
Yash Hindka
Software Engineering Institute, Carnegie Mellon University
yhindka@sei.cmu.edu
Carmen Quatman
Ohio State University
Carmen.Quatman@osumc.edu
As software systems evolve in complexity, interconnectedness, and importance to critical societal infrastructure, ensuring their resilience, reliability, and security has never been more vital. Resilience in software engineering refers to a system’s ability to maintain acceptable service levels under faults, recover gracefully from disruptions, and adapt effectively to evolving requirements and environments. Security, as a cornerstone of resilience, must be intricately woven into all facets of testing and quality assurance (QA) to safeguard against vulnerabilities and threats.
This minitrack is dedicated to pushing the boundaries of research and practice in software testing, QA, and metrics, particularly in the context of resilience engineering. By fostering collaboration and innovation, the minitrack aims to address the pressing challenges of ensuring software systems can withstand, adapt to, and recover from disruptions in a wide range of operational environments. We encourage submissions that span theoretical advances, methodological developments, empirical studies, and industrial case studies, highlighting the multifaceted aspects of resilience-focused engineering.
We invite original contributions that present novel ideas, frameworks, methodologies, tools, and experiences related to software resilience engineering. Topics of interest include, but are not limited to:
- Automated Testing and Fault Resilience:
• Advanced testing techniques for detecting, diagnosing, and recovering from faults
• Tools and frameworks for resilience evaluation, including chaos engineering and fault injection methodologies
• Scalable approaches to regression testing for resilience in large, interconnected systems
- Quality Assurance in Critical Systems:
• QA methodologies for safety-critical, mission-critical, and real-time systems
• Formal verification and validation techniques to ensure resilience under strict regulatory constraints
• QA practices for ensuring resilience in hybrid and distributed systems, including cloud-native architectures
- Resilience Metrics and Analytics:
• Novel metrics and benchmarks for quantifying software resilience, reliability, and adaptability
• Techniques for real-time resilience monitoring and proactive quality enhancement
• Application of AI and machine learning to predict and improve resilience
- Emerging Domains and Resilience Challenges:
• Resilience engineering in AI-driven, autonomous, or cyber-physical systems
• Testing and QA strategies for edge computing, IoT ecosystems, and quantum software
• Addressing the challenges of resilience in multi-domain, cross-platform, and legacy-integrated environments
- Security as a Pillar of Resilience:
• Security testing integrated into resilience-focused QA workflows
• Threat modeling and simulation tools for resilience under cyber-attacks
• Resilience engineering practices for supply chain security and third-party components
• Behavioral aspects of information security and their impact on software resilience
- Process and Practices for Resilient Development:
• Embedding resilience-focused testing and QA into Agile, DevOps, and CI/CD workflows
• Best practices for secure coding and its impact on system resilience
• Innovative approaches to test data generation, management, and resilience simulation
• Resilience considerations in distributed and collaborative development environments
- Human Factors and Organizational Perspectives:
• The role of human error in software resilience and QA strategies to mitigate it
• Developer education, training, and secure coding practices for resilience
• Organizational lessons learned from industrial adoption of resilience-focused practices
• Employee compliance with security policies and its effect on overall system resilience
- Empirical Studies and Case Studies:
• Insights from industrial deployments, showcasing challenges and solutions in real-world scenarios
• Success stories and failures in achieving software resilience
• Comparative studies on the effectiveness of testing, QA, and resilience metrics across domains
• Analysis of social and organizational factors influencing software resilience implementation
- LLM-Generated Software:
• Resilience Testing for LLM-Generated Code
• Security Implications and Vulnerability Mitigation in LLM-Assisted Programming
• Quality Assurance and Metrics for LLM-Generated Software
• Human-AI Collaborative Development for Enhanced Software Resilience
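To ground the fault-injection and chaos-engineering theme above, here is a minimal Python sketch of the core experiment loop: wrap a dependency so it fails with a configurable probability, then verify empirically that a retry-based client still meets its availability target. The function names, failure rate, and threshold are illustrative assumptions, not a prescribed tool or methodology.

```python
import random

def flaky(fn, failure_rate, rng):
    """Fault injection: make fn raise with the given probability."""
    def wrapped(*args, **kwargs):
        if rng.random() < failure_rate:
            raise ConnectionError("injected fault")
        return fn(*args, **kwargs)
    return wrapped

def with_retries(fn, attempts=3):
    """The resilience mechanism under test: naive bounded retry."""
    def wrapped(*args, **kwargs):
        for i in range(attempts):
            try:
                return fn(*args, **kwargs)
            except ConnectionError:
                if i == attempts - 1:
                    raise
    return wrapped

def experiment(failure_rate, trials=10_000, seed=42):
    """Run many calls against the fault-injected dependency and
    measure the end-to-end success rate seen by the client."""
    rng = random.Random(seed)
    service = lambda: "ok"                     # the (mocked) dependency
    client = with_retries(flaky(service, failure_rate, rng))
    ok = 0
    for _ in range(trials):
        try:
            ok += client() == "ok"
        except ConnectionError:
            pass
    return ok / trials

# With a 20% injected failure rate and 3 attempts, the expected
# success rate is 1 - 0.2**3 = 0.992; check that empirically.
rate = experiment(failure_rate=0.2)
assert rate > 0.985
print(f"observed success rate: {rate:.4f}")
```

Production chaos-engineering platforms inject richer faults (latency, partitions, resource exhaustion) into live systems, but the experimental structure, inject a fault and assert a service-level objective, is the same.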
This minitrack seeks to unite researchers, practitioners, and industry leaders who are dedicated to the pursuit of resilient software systems. It aims to create a forum for sharing innovative methodologies, practical implementations, empirical findings, and visionary perspectives. Contributions that combine theoretical rigor with practical relevance are especially encouraged, as are interdisciplinary perspectives that incorporate elements of systems engineering, human factors, and organizational dynamics. By participating in this minitrack, contributors will help define the roadmap for resilience engineering in software development, ensuring systems that are robust, secure, and prepared for the uncertainties of the future.
Minitrack Co-Chairs:
Philipp Zech (Primary Contact)
University of Innsbruck
philipp.zech@uibk.ac.at
Irdin Pekaric
University of Liechtenstein
irdin.pekaric@uni.li
Katinka Wolter
Free University of Berlin
katinka.wolter@fu-berlin.de
Tom Mattson
University of Richmond
tmattson@richmond.edu
The integration of artificial intelligence (AI) into program analysis and software synthesis is reshaping the landscape of software engineering. AI-powered techniques are revolutionizing defect detection, performance optimization, security assurance, and software generation, enabling more intelligent, efficient, and scalable approaches. As modern software systems grow in complexity—spanning distributed architectures, microservices, and AI/ML-driven components—traditional program analysis techniques struggle to maintain scalability, precision, and adaptability. AI methodologies, including machine learning (ML), deep learning, and large language models (LLMs), are emerging as transformative forces that enhance program analysis frameworks and drive the automation of software development, modification, and evolution.
This minitrack provides a platform for discussing the latest advances at the intersection of AI and program analysis, with a focus on innovative methodologies, automated software synthesis techniques, practical implementations, and real-world case studies. The goal is to foster collaboration between academia and industry, bridging theoretical research with applied solutions to advance the state of the art in AI-augmented software engineering.
Key Themes include:
- AI-Powered Static Analysis: Leveraging machine learning and deep learning models to improve the accuracy, efficiency, and scalability of static code analysis, including vulnerability detection, type inference, and performance profiling.
- Machine Learning for Dynamic Analysis: Using AI techniques for runtime monitoring, anomaly detection, predictive debugging, and intelligent test case generation.
- AI-Augmented Software Synthesis and Modification: Exploring AI-driven techniques for automated code generation, program transformation, refactoring, and patch synthesis.
- AI-Driven Software Generation: Investigating generative AI models for automated software creation, including domain-specific code generation, AI-assisted software design, and automated system architecture synthesis.
- Integration of AI and Program Analysis Frameworks: Investigating how AI models can be seamlessly integrated into traditional program analysis tools, enhancing their adaptability and effectiveness.
- Applications of AI-Augmented Analysis and Software Synthesis: Examining use cases in software verification, automated repair, security assessment, and performance optimization.
- Benchmarks and Datasets for AI-Augmented Analysis: Discussing the need for standardized datasets, benchmark suites, and evaluation metrics to advance research in AI-driven program analysis.
- Real-World Case Studies and Challenges: Showcasing industry implementations, lessons learned, and the challenges of deploying AI-powered program analysis tools at scale.
- Emerging Trends and Future Directions: Identifying open research problems, novel AI techniques, and future possibilities for enhancing software engineering practices through AI.
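As a small, self-contained illustration of the runtime-monitoring and anomaly-detection theme, the sketch below flags outlying latency samples with a rolling z-score. Real deployments would substitute a learned model, but the detector interface is the same. All names and thresholds here are illustrative assumptions.

```python
from collections import deque
from statistics import mean, stdev

class ZScoreMonitor:
    """Toy runtime monitor: flag samples far from the rolling mean.

    A stand-in for learned anomaly detectors; the sliding window
    keeps the baseline adaptive as the workload drifts.
    """
    def __init__(self, window=50, threshold=4.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value):
        anomalous = False
        if len(self.history) >= 10:            # wait for a baseline
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        if not anomalous:
            self.history.append(value)         # don't poison the baseline
        return anomalous

monitor = ZScoreMonitor()
# Steady latencies around 100 ms, then one 500 ms spike.
readings = [100 + (i % 5) for i in range(30)] + [500]
flags = [monitor.observe(r) for r in readings]
print("spike flagged:", flags[-1])
```

The design choice of excluding flagged samples from the window is what lets the monitor keep detecting a sustained fault rather than silently absorbing it into the baseline.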
We welcome contributions that explore theoretical advancements, algorithmic innovations, tool and framework development, empirical evaluations, and case studies related to AI-augmented program analysis and software synthesis. Topics include, but are not limited to:
- AI-assisted bug detection, vulnerability identification, and security assurance
- Large language models (LLMs) for software analysis, synthesis, and code generation
- AI-driven program transformation, refactoring, and automated patching
- Intelligent test case generation and automated debugging techniques
- Neural program synthesis and reinforcement learning for software engineering
- AI-enhanced compiler optimizations and performance tuning
- Automated reverse engineering and decompilation using AI techniques
- Hybrid approaches combining formal methods with AI-driven analysis
- Ethical, interpretability, and reliability concerns in AI-augmented program analysis
- Tools, datasets, and benchmark suites for evaluating AI-driven software engineering solutions
- AI-driven software generation and evolution, including automated code completion, software design pattern synthesis, and program synthesis for specialized domains
- AI-powered automated software architecture design, enabling the generation of modular and scalable software systems
- Code generation techniques leveraging transformer-based models, generative adversarial networks (GANs), and other deep learning approaches
This minitrack is designed to facilitate cross-disciplinary dialogue, encouraging collaboration between researchers, software engineers, and industry professionals who are leveraging AI to transform software engineering. By bridging AI research with practical software development challenges, we aim to advance the capabilities of AI-powered program analysis and software synthesis, ultimately improving the reliability, security, and efficiency of modern software systems.
Minitrack Co-Chairs:
Ryan Karl (Primary Contact)
Software Engineering Institute, Carnegie Mellon University
rmkarl@sei.cmu.edu
Shen Zhang
Software Engineering Institute, Carnegie Mellon University
szhang@sei.cmu.edu
Yash Hindka
Software Engineering Institute, Carnegie Mellon University
yhindka@sei.cmu.edu
As artificial intelligence (AI) continues to evolve, cyber threats are becoming more sophisticated, automated, and difficult to detect. Adversaries increasingly leverage AI to enhance attack strategies, including phishing, malware generation, adversarial machine learning, and evasion techniques. In particular, AI-driven malware is rapidly emerging, capable of autonomously adapting to security defenses, modifying attack vectors in real-time, and bypassing traditional detection mechanisms. Meanwhile, AI-powered defense mechanisms are being developed to counteract these threats, leveraging machine learning for threat intelligence, anomaly detection, and automated mitigation.
This minitrack aims to bring together researchers and practitioners to discuss emerging AI-powered threats and novel defense strategies. The focus is on the dual role of AI in cybersecurity, both as a weapon for cyber attackers and as a tool for defenders. We invite research contributions on, but not limited to, the following topics:
- AI-generated phishing, social engineering, and deepfake attacks
- AI-driven malware development, obfuscation, and polymorphism
- Automated malware detection and classification using AI
- AI-enhanced vulnerability discovery and exploitation
- Large Language Models (LLMs) in cyber attack automation
- AI-based threat intelligence and intrusion detection
- Countermeasures against AI-powered cyber threats
- Explainability and robustness of AI-driven cybersecurity solutions
Minitrack Co-Chairs:
Junggab Son (Primary Contact)
University of Nevada, Las Vegas
junggab.son@unlv.edu
Zuobin Xiong
University of Nevada, Las Vegas
zuobin.xiong@unlv.edu
As networks and devices come to rely heavily on AI, machine learning, and deep learning algorithms, they must be able to resist AI-enabled cyber-attacks (hacking of networks and devices, theft of information, disruption, and the like) and to continue performing under severe environmental conditions. Machine learning algorithms applied to transmitted information must monitor for any loss, miscoding, or leakage of data in transmission channels. Timely adjustment when information quality degrades, and automatic switching to the best routes in IoT systems through multi-directional routing, are also warranted. AI security will need to provide the principles and technologies that unify these systems and deliver the end-state goal of secure systems with greatly enhanced interoperability, scalability, performance, and agility.
The objective of this minitrack is to serve as a platform for presenting innovative research, discussing emerging trends, and setting the agenda for future exploration in the field of AI security. It will address the intersection of technology, safety, trustworthiness, responsibility, and policy, ensuring a comprehensive approach to AI security. Recommended topics include, but are not limited to, the following:
- System-Theoretic Process Analysis for Security (STPA-SEC) for AI Systems
- AI Safety and Trustworthiness
- AI Responsibility
- AI Risk Models
- AI Explainability
- AI Ethics and Algorithmic Bias
- AI Cybersecurity, Offensive, and Defensive Operations
- Secure Machine Learning Operations (MLOps)
- Secure AI Algorithm and Machine Learning Algorithms
- Red Teaming AI systems and algorithms
- Robustness and Resilience of AI Systems
- Threat Detection and Response in AI Systems
- Ethical and Regulatory Considerations in AI Security
Minitrack Co-Chairs:
Tyson Brooks (Primary Contact)
National Security Agency and Syracuse University
ttbrooks@syr.edu
Shiu-Kai Chin
Syracuse University
skchin@syr.edu
William Young
Syracuse University
weyoung@syr.edu
Erich Devendorf
Air Force Research Laboratory
erich.devendorf.1@us.af.mil
As technology is incorporated into more aspects of daily life, cyber operations, defenses, and digital forensics solutions continue to evolve and diversify. This encourages the development of innovative managerial, technological, and strategic solutions. Hence, a variety of responses are needed to address the resulting concerns. There is a need to research a) technology investigations, b) technical integration and solution impact, c) the abuse of technology through attacks, and d) the effective analysis and evaluation of proposed solutions. Identifying and validating technical solutions to secure data from new and emerging technologies, investigating these solutions’ impact on the industry, and understanding how technologies can be abused are crucial to the viability of commercial, government, and legal communities.
We welcome new, original ideas from academia, industry, government, and law enforcement participants interested in sharing their results, knowledge, and experience. Topics of interest include but are not limited to:
- Human aspects of cyber security operations and defense
- The impact of AI and Generative AI on cyber security operations and defense
- Research efforts that intersect cyber security, operations, defense, and counterterrorism
- Cybersecurity, operations, and defense research that impacts critical infrastructure sectors
- Applying machine learning tools and techniques in terms of cyber operations, defenses, and forensics
- Case studies surrounding the application of policy in terms of cyber operations, defenses, and forensics
- Approaches related to threat detection and Advanced Persistent Threats (APTs)
- “Big Data” solutions and investigations – collection, analysis, and visualization of “Big Data” related to cyber operations, defenses, and forensics
- Malware analysis and the investigation of targeted attacks
- Digital evidence recovery, storage, preservation, memory analysis, and network forensics, including anti-forensics techniques and solutions
- Forensic investigations of current and emerging domains, including mobile devices, the Internet of Things, industrial control systems, SCADA, etc.
- Research in security incident management, including privacy, situational awareness, and legal implications
The above list is suggestive, and authors are encouraged to contact the minitrack chairs to discuss related topics and their suitability for submission to this minitrack.
Accepted papers will be offered the opportunity to extend their submission by 50% and submit to a special issue of the Association for Computing Machinery (ACM) Digital Threats: Research and Practice (DTRAP) Journal.
Minitrack Co-Chairs:
William Glisson (Primary Contact)
Louisiana Tech University
glisson@latech.edu
Todd McDonald
University of South Alabama
jtmcdonald@southalabama.edu
This minitrack focuses on enabling the development and application of scientific foundations for cybersecurity and software assurance. We ask: how should research and development move us toward a solid basis in understanding and principle? The goal is to develop science foundations, technologies, and practices that can improve the security and dependability of complex systems.
This minitrack will bring together researchers in cybersecurity and software assurance in a multidisciplinary approach to these problems. We invite work embracing multiple perspectives, levels of abstraction, and evaluation of best practices and policies that help us understand and assure the security of complex systems. We welcome papers about tools and techniques that apply scientific and rigorous approaches or reveal underlying commonalities and constructs.
Minitrack Co-Chairs:
Luanne Chamberlain (Primary Contact)
Johns Hopkins University Applied Physics Lab
luanne.chamberlain@jhuapl.edu
Thomas Llanso
Johns Hopkins University Applied Physics Lab
thomas.llanso@jhuapl.edu
Richard George
Johns Hopkins University Applied Physics Lab
richard.george@jhuapl.edu
Affective computing involves the development of emotion-aware information systems that are capable of recognizing, interpreting, processing, and simulating human emotions. This interdisciplinary field combines computer science, engineering, psychology, cognitive science, education and sociology to endow machines with emotional intelligence, enabling more natural and intuitive human-computer interactions. The integration of generative AI (GenAI) into affective computing could represent a significant advancement, offering innovative approaches to the design, implementation, and application of impactful emotionally intelligent solutions. This emerging fusion has the potential to disrupt traditional paradigms, leading to more adaptive, responsive, and personalized user experiences.
We seek to attract a cadre of research that both delineates and critiques the concepts, methods, frameworks, architectures, functionalities, and broader implications of applying and integrating GenAI in the design and development of emotion-aware information systems. The scope of this minitrack includes, but is not limited to, the following key areas:
- Emotionally Intelligent Conversational Agents: Investigating the development of GenAI-driven chatbots and virtual assistants capable of understanding and responding to user emotions, thereby enhancing engagement and satisfaction.
- Emotion-Aware Content Generation: Exploring how GenAI can create personalized content, such as text, music, or art, that aligns with users’ emotional states or preferences, providing more tailored and impactful experiences.
- Adaptive Learning Systems: Examining the application of GenAI in educational technologies that adjust instructional content and strategies based on learners’ emotional responses and engagement levels, thereby improving learning outcomes.
- Healthcare Support Tools: Assessing the use of GenAI in developing emotion-sensitive applications for mental health support, such as virtual therapists or monitoring systems that can detect emotional distress and provide appropriate interventions.
- Emotion-Aware Real-Time Voice Interactions: Exploring the development of GenAI-powered systems that facilitate real-time conversations with natural, emotionally expressive voices, creating immersive and persuasive interactions.
- Emotion-Aware Gaming Experiences: Investigating how GenAI can be utilized to create adaptive gaming environments that respond to players’ emotional states, enhancing immersion and personalization.
- Emotion-Driven Human-Robot Interaction: Exploring the role of GenAI in enabling robots to perceive and appropriately respond to human emotions, improving collaboration and cohabitation.
- Emotion-Sensitive Marketing Strategies: Analyzing how GenAI can tailor marketing content to consumers’ emotional states, potentially increasing engagement and conversion rates.
- Para-Social Relationships Enabled by GenAI: Examining the emergence of one-sided emotional bonds between users and GenAI-powered systems, such as virtual companions or influencers, and their psychological and social implications.
- Emotion Tracking in Social Media: Investigating how GenAI can monitor and analyze emotional expressions across social media platforms, providing insights into public sentiment and informing strategies for content creation, user engagement, and mental health interventions.
- Impact Assessment of Emotion-Aware Systems: Evaluating the impact and effectiveness of GenAI-enhanced emotion-aware systems in various applications, and assessing their performance compared to traditional methods.
- Ethical and Societal Implications: Analyzing the ethical considerations, potential biases, and societal implications of deploying GenAI in emotion-aware systems.
This minitrack aspires to be a platform for rigorous scholarly inquiry into the multifaceted applications of GenAI in affective computing, emphasizing both the innovative potential and the consequential ethical, legal, and operational challenges.
Minitrack Co-Chairs:
Johnny Chan (Primary Contact)
University of Auckland
jh.chan@auckland.ac.nz
Brice Valentin Kok-Shun
University of Auckland
brice.kok.shun@auckland.ac.nz
David Sundaram
University of Auckland
d.sundaram@auckland.ac.nz
Ghazwan Hassna
Hawaiʻi Pacific University
ghassna@hpu.edu
The United Nations Sustainable Development Goals call for action at all levels of society to solve the world's problems together. Toward these goals, games can offer potential solutions through their ability to mimic, contain, or sample real or plausible scenarios and systems in a readily accessible simulation. Games are inherently player-centric; the player's perspective and involvement in the intended experience drive a game's success. From this central role of the player comes the power of games to educate, rehabilitate, recreate, and bring joy through entertainment.
The Games for Impact minitrack intends to draw attention to the use of games and game technology for special purposes and positive outcomes, where the created experience reaches beyond entertainment. Recognizing that games are a powerful vehicle for making emerging technologies accessible to society, this minitrack creates a space to explore the many factors that influence the design, development, application, adoption, use, and impact of games and game technology.
The Games for Impact minitrack sits within recent games-research fields such as games for health/rehabilitation/therapy, games for learning, games for empathy, games for social innovation, and citizen science games. Potential subtopics and areas include, but are not limited to, the following:
- Case study on designing, developing, using, and evaluating games for special purposes
- Best practices and guidelines on game design, study design, interaction design, user experience (UX) and user interface (UI)
- The role and application of games and game technology in creating, disseminating, and evaluating social innovation
- The application and impact of games and game technology in education and its accessibility
- The application and impact of games for training, learning, and personal development (habit building, empathy, social skills, etc.)
- Evaluation approaches, criticality, quality measures, and ethics of using and adapting games and games technology in other fields such as health, rehabilitation, education, social innovation, citizen science
- Use of novel interaction modalities, platforms and/or controllers, IoT, VR-AR-MR
- Analysis of the socio-cultural context of games for impact
- Demographics, persona studies, and ethics of the application, adoption, and impact of games and game technology for purposes beyond entertainment
We welcome contributions on design and development methods, technical studies that focus on implementation and development guidelines, case studies with novel interaction modalities including platforms (mobile, AR-VR-MR) and/or controllers, user experience (UX) approaches, user interface (UI) techniques, analysis of the socio-cultural context of games for impact, demographics and persona studies, and ethical studies in the aforementioned research fields.
Minitrack Co-Chairs:
Aslihan Tece Bayrak (Primary Contact)
Media Design School
tece.bayrak@mediadesignschool.com
Dan Staines
Torrens University Australia
daniel.staines@torrens.edu.au
Generative AI is a type of artificial intelligence (AI) that generates new and original content based on patterns learned from existing data. This can include a range of media such as images, videos, audio, and text. Conversational AI focuses on enabling natural language interactions between humans and AI systems. It uses Natural Language Processing (NLP) and machine learning algorithms to understand and respond to human language input. The growth of Generative and Conversational AI and the development of large language models like ChatGPT, Google Gemini, and DeepSeek have created new possibilities for applications in various fields, including Information Systems (IS) research and education.
This minitrack aims to explore the use of Generative and Conversational AI models in information systems, including natural language processing, recommendation systems, and personalization. The minitrack will be of particular interest to researchers and practitioners from the fields of IS and Generative AI, as well as those teaching information systems courses interested in incorporating Generative AI into the IS curriculum. Topics of interest include:
- Applications of Generative AI in IS, such as data generation, image generation, text generation, video generation, and simulation
- Applications of Conversational AI in IS, such as chatbots, virtual assistants, and voice assistants, among others
- Best practices for integrating Generative and Conversational AI into the IS curriculum
- Developing educational materials and resources for teaching Generative and Conversational AI
- Preparing students for careers in Generative AI and IS
- Integrating Generative AI with other AI technologies for improved results
- Real-world applications of Generative AI in IS research and education
- Generative AI for personalized recommendations and decision-making
- Generative AI for improving the efficiency of IS
- Generative AI for enhancing user experience in IS
- Generative AI in various domains, such as healthcare, finance, and entertainment
- Use of ChatGPT in natural language processing and information retrieval
- Ethics, privacy, and regulatory considerations for the use of Generative AI in IS
- Methodologies and frameworks for evaluating the effectiveness and impact of Generative and Conversational AI in IS research
- The impact of Generative and Conversational AI on society and the economy
- The effects of generative AI, such as ChatGPT, as an enabler versus a disabler
- Challenges and limitations in the implementation of Generative and Conversational AI in IS.
- Detection of AI generated content
- The platform governance relating to Generative and Conversational AI
Minitrack Co-Chairs:
Nargess Tahmasbi (Primary Contact)
Pennsylvania State University
nvt5061@psu.edu
Elham Rastegari
Creighton University
elhamrastegari@creighton.edu
Aaron French
Kennesaw State University
afrenc20@kennesaw.edu
Guohou Shan
Northeastern University
g.shan@northeastern.edu
As artificial intelligence (AI) continues to transform various industries, its profound impact on software engineering cannot be overstated. This minitrack aims to explore the intersection of AI and software engineering, focusing on the innovative ways in which AI technologies are influencing software development, testing, maintenance, and overall software lifecycle management. The minitrack invites researchers and practitioners to delve into the multifaceted implications of AI for software engineering practices, providing a platform for insightful discussions and the exchange of cutting-edge research findings.
The integration of AI into software engineering processes is rapidly reshaping the landscape of how software is conceived, developed, and maintained. Understanding the implications, challenges, and opportunities that arise from this potentially symbiotic relationship is crucial for researchers, practitioners, and educators in the field. This minitrack seeks to foster a collaborative environment where participants can engage in meaningful dialogue, share their experiences, and contribute to the evolving discourse on the impact of AI on software engineering. Topics of Interest include, but are not limited to:
- AI-driven Software Development Processes:
* Automated code generation and optimization
* Impact on Software Design
* Intelligent code completion and suggestion systems
* AI-assisted requirement analysis and specification
- AI in Software Testing and Quality Assurance:
* Automated testing using machine learning algorithms
* AI-driven fault prediction and localization
* Quality assurance in AI-infused software systems
- AI for Software Maintenance and Evolution:
* Predictive maintenance and malfunction detection
* Intelligent bug tracking and resolution
* Adaptive software evolution with AI assistance
- Ethical and Social Implications of AI in Software Engineering:
* Bias and fairness in AI-enhanced software systems
* Responsible AI practices in software development
* Societal impact of AI-driven software solutions
* How roles are reshaped in software development teams
- Educational Initiatives in AI and Software Engineering:
* Integration of AI concepts into software engineering curricula
* Training programs for AI-aware software engineers
* Challenges and opportunities in AI education for software developers
Minitrack Co-Chairs:
Stefan Wittek (Primary Contact)
Clausthal University of Technology
switt@tu-clausthal.de
Peter Salhofer
FH JOANNEUM
peter.salhofer@fh-joanneum.at
Sandra Gesing
San Diego Supercomputing Center
sgesing@ucsd.edu
Intelligent Edge Computing focuses on the synergy of software, algorithms, computing, and devices at, or near, the source of data generation and decision making. Edge devices are ever expanding in ubiquity and reach, and include autonomous systems, smartphones, Internet of Things (IoT) / Internet of Everything (IoE), robotics, sensors, networks, human-machine teaming, wearables, and people in constantly changing environments. However, edge intelligence capabilities have yet to catch up with expectations and demands, especially in autonomous systems and Industry 5.0. Many edge use cases have unique challenges related to Size, Weight, and Power (SWaP), hardware, specialized architectures, communication standards, and other limitations and constraints. Intelligent edge solutions include advances in low-SWaP neuromorphic computing, continual learning, reconfigurable systems, FPGAs, predictive/learning software technologies, generative AI, federated/adversarial learning, and democratized AI solutions. Given the computational demands of training advanced commercial AI algorithms, such as LLMs, this minitrack also looks at AI+X, where adapting methods (AI) to deployment (X) constraints (such as sustainability and energy efficiency) is key. Topics of interest include, but are not limited to:
- Hardware Solutions for Edge Applications
• Assessing computational edge technology landscape and its potentials for creating edge computational paradigms
• Edge computing hardware solutions, including neuromorphics, tensor processing units (TPUs), FPGAs, Raspberry Pi, microcontrollers, programmable logic controllers, and other dynamically programmable edge devices
• Edge processing of sensor generated data, autonomous systems, their development, architectures, and use
• Examples of computing paradigms suitable for localized computing and its extensions towards Fog/Cloudlets/Clouds
- Creating Intelligent Edge Computing
• Roadmap for AI at the computational edge, challenges, and opportunities
• AI+X solutions where AI methods are developed for specific edge related applications
• Software architectures for supporting computing edge intelligence in automation, manufacturing, businesses, medicine, healthcare delivery, education, and governance.
• Adversarial AI, federated learning, collaborative AI for data fusion and decision making at the edge
• Potential convergence of humans, "things," and AI in creating edge intelligence
- Generative AI and Sustainable Edge Computing
• Hardware/software solutions for developing and deploying generative AI at the edge.
• Sustainability of AI enabled edge computing
• Optimizing edge performance
• Software architectures for managing distributions and tuning of and inference in generative AI models at the edge.
• Analyzing investments of hardware manufacturers in generative AI and energy-sustainable edge devices, including neural and tensor processors
- Edge Robotics and Cognitive IoT
• Edge robotics and augmentation of humans with machines and vice versa.
• Intelligent engineering for robots, cognitive devices and wearables, supported by collaborative IoT solutions
• Dynamics of edge robotics: from neuromorphic and lightweight NN computing for robotic control, to motion planning, knowledge/predictive inference sharing in decision making.
• Software architectures for swarm and multirobot intelligence at the computational edge
• Low-code platforms for edge robotics/devices with MLOPs for optimized edge computing
• Industry specific challenges with edge enabled solutions in robotics/the IoT in general
Minitrack Co-Chairs:
Trevor Bihl (Primary Contact)
Air Force Research Laboratory
Trevor.bihl.2@us.af.mil
Radmila Juric
ALMAIS Consultancy
radjur3@gmail.com
Elisabetta Ronchieri
INFN CNAF
elisabetta.ronchieri@cnaf.infn.it
Filippo Sanfilippo
University of Agder
Filippo.Sanfilippo@uia.no
This minitrack invites research on the real-time simulation of cyber-physical systems (CPSs). The ongoing digitalization and cyberization of modern societies have led to an increasing reliance on CPSs in various domains, including infrastructure, industry, and services. As CPSs integrate computational and physical capabilities with real-time functionality, their economic and societal impacts are substantial. Moreover, failures in CPSs can have immediate and severe consequences, underscoring the need for robust testing and development methodologies.
Real-time simulation is a powerful tool for developing and evaluating CPSs. Unlike offline simulations, which depend on the processing speed of the host computer and cannot interact seamlessly with real-time devices, real-time simulation allows for direct integration with physical components. By leveraging digital twins, real-time simulation enables safe and cost-effective testing of systems without placing stress on the actual infrastructure. Applications of real-time simulation span multiple domains, including:
- Industrial automation (e.g., simulation of manufacturing processes)
- Transportation (e.g., evaluation of autopilot systems)
- Healthcare (e.g., simulating patient conditions in real time)
- Critical infrastructure (e.g., cybersecurity simulations for attack-defense scenarios)
Given the increasing reliance on CPSs, real-time simulation is essential for enhancing system resilience, safety, and efficiency. Additionally, understanding the socio-technical and organizational implications of real-time simulation in CPSs is crucial for effective adoption. We welcome papers that explore real-time simulation in CPSs from multiple perspectives, including:
- Real-time simulation for risk management (e.g., cybersecurity, resilience, safety)
- Real-time simulation and digital twins of CPSs
- Real-time simulation for prototyping CPSs
- Real-time simulation in user experience and service development
- Real-time simulation and collaboration in CPSs
- Real-time monitoring and adaptive control in CPSs
- Real-time simulation for learning and training in CPSs
- Applications of real-time simulation in various domains (e.g., critical infrastructure, smart cities)
- Virtualization of CPSs through real-time simulation
- CPS innovations driven by real-time simulation
- Real-time simulation of socio-technical phenomena
- Smart manufacturing and CPSs
- Managing CPS development with real-time simulation
- CPS services enhanced through real-time simulation
- AI-driven and data-driven approaches for real-time simulation and testing
- Holistic approaches to real-time simulation and CPS testing
We encourage submissions employing a variety of research methods, including design science, quantitative, and qualitative approaches.
Minitrack Co-Chairs:
Tero Vartiainen (Primary Contact)
University of Vaasa
tero.vartiainen@uwasa.fi
Mike Mekkanen
University of Vaasa
mike.mekkanen@uwasa.fi
Eric Veith
OFFIS
eric.veith@offis.de
Jirapa Kamsamrong
OFFIS
jirapa.kamsamrong@offis.de
Information security and privacy are non-negotiable factors in the design and operation of information systems. Users in particular – the so-called human factor – play a pivotal role in information security and user-privacy concepts. Often, their knowledge of security aspects and of user-manipulation tactics is the last line of defense against cyber-attacks. However, they are also the primary target of attackers and need to be sensitized to security-compliant behavior.
In addition to traditional forms of user-computer interaction via mouse and keyboard input devices, new kinds of system interaction (e.g., physiological data from fitness trackers, eye-tracking devices, or even pupillary responses indicating cognitive load) are increasingly feasible as everyday HCI components. With interest in data privacy increasing, are users aware of how valuable these personal input data are, and how do they value data privacy measures?
Therefore, we have identified two main aspects relevant to researchers in the Software Technology domain: 1) how to deal securely with input data (also focusing on privacy aspects), and 2) how this data can be utilized to increase secure behavior or to raise awareness among users (helping users make better security-related decisions).
In this minitrack, we seek papers that explore concepts, prototypes, and evaluations of how users interact with information systems and what implications these interactions have for information security and privacy. Further, we welcome new and innovative ways of human-computer-interaction and security-related concepts currently examined in the field. Topics of interest include but are not limited to:
- Security related devices
- Physiological sensors
- Human-Computer-Interaction
- (Conversational) Artificial intelligence
- Blockchain applications
- Sensor analysis
- Data visualization
- Biometrics authentication
- Security and privacy awareness
- Accessibility
- Usable security design
- Privacy and security by design
- Privacy and smart contracts
- User valuation of privacy
- Validation of user data
Minitrack Co-Chairs:
Tobias Fertig (Primary Contact)
Technical University of Applied Sciences Würzburg-Schweinfurt
tobias.fertig@thws.de
Nicholas Müller
Technical University of Applied Sciences Würzburg-Schweinfurt
nicholas.mueller@thws.de
Paul Rosenthal
University of Rostock
paul.rosenthal@uni-rostock.de
The Software Sustainability: Research on Usability, Maintainability, and Reproducibility minitrack at HICSS continues to address the evolving landscape of research software, now increasingly shaped by artificial intelligence (AI) and machine learning (ML). As AI becomes a foundational component of scientific research, additional challenges emerge in ensuring software usability, sustainability, and reproducibility. The integration of AI-driven workflows, automated code generation, and large-scale foundation models introduces complexities in software maintainability, explainability, and ethical considerations. This minitrack explores how research software can adapt to these advancements, ensuring long-lasting, reusable, and trustworthy tools that support diverse scientific communities.
The focus on software usability, sustainability, and reproducibility is more critical than ever, including new trends initiated via AI and ML spanning diverse scientific domains and receiving significant investment in the U.S., Europe, the U.K., and beyond. Research software remains a fundamental driver of discovery, with over 90% of researchers relying on software and more than 65% indicating that their work would be impossible without it. As AI and ML become deeply embedded in research software, new usability challenges arise, such as ensuring transparency in AI-driven decision-making, designing intuitive interfaces for complex models, and supporting reproducibility in evolving AI ecosystems. The computational landscape has shifted from system-centered design to user-centered approaches, and now, increasingly, AI-assisted software development, where automation plays a role in code generation, debugging, and optimization. The prominence of AI-powered research software raises important concerns about long-term maintainability, ethical AI practices, and the ability to reproduce computational results in a rapidly changing technological environment. Addressing these issues is essential for enabling researchers to build on existing work, validate findings, and accelerate scientific progress.
The three concepts of usability, sustainability, and reproducibility are deeply interconnected and span all stages of the research software lifecycle. AI-powered tools introduce new dimensions to these challenges, from enabling reproducible experiments through automated workflows to ensuring model transparency and interpretability. Techniques such as containerization, automated machine learning, and AI-driven software testing are increasingly used to enhance application portability and maintainability. Such concepts are also relevant in the building of Science Gateways (also known as virtual laboratories or virtual research environments), which by definition serve communities with end-to-end solutions tailored specifically to their needs.
As research software continues to evolve, this minitrack will highlight novel methodologies, case studies, and best practices that ensure research software remains usable, sustainable, and reproducible in the long term. Consequently, we anticipate submissions not limited to but in the scope of the following topics:
- Web-based solutions (web sites, science gateways, virtual labs, etc.)
- Application Programming Interfaces (APIs)
- Computational and Data-Intensive Workflows
- Novel approaches in containerization
- Community building practices
- Sustainability practices in software development, with a focus on AI applications
- System architectures for testing and continuous integration in AI systems
- Emerging best practices in AI and Machine Learning software
- Addressing ethical considerations in AI-related software
- Best practices and Key Success Factors for usability, sustainability, and reproducibility in the context of AI
- AI-assisted software development (automated code generation, debugging, and optimization)
- Explainability and transparency in AI-driven software
- Automated reproducibility in AI workflows (versioning, benchmarking, and validation of ML models)
Minitrack Co-Chairs:
Maytal Dahan (Primary Contact)
Texas Advanced Computing Center
maytal@tacc.utexas.edu
Joe Stubbs
Texas Advanced Computing Center
jstubbs@tacc.utexas.edu
Sandra Gesing
San Diego Supercomputing Center
sgesing@ucsd.edu
The development of software has provided, provides, and likely will continue to provide many opportunities for research. Not long ago, the proliferation of mobile computing opened a new stream of research; the same then happened with the Internet of Things (IoT) and Cyber-Physical Systems (CPS). Fog, edge, and dew computing and the convergence of technologies will likely continue this trend. All these topics seemingly offer completely new endeavors; on closer inspection, however, they can draw from what is already known, both regarding typical problems and regarding solutions.
Experiences and methods from classical software development can only be utilized to some degree when solving challenges that arise from new applications, changing environments, and demanding domains. Development is complicated by the frequently faced need to develop for a multitude of platforms. With the emergence of multi-platform and multi-device computing, the new gold standard is applications that run not only across software ecosystems but across hardware platforms such as laptops, mobile phones, tablets, embedded devices, sensors, and wearables. Therefore, new threads of research are needed to tackle these issues and to pave the way for improved software development, better business productivity, and improved user experience (UX).
Further, there are novel developments in machine learning and analysis, and the emergence of multifaceted aspects of artificial intelligence (AI), ranging from algorithms to ethical AI, secure AI, and sustainable AI perspectives. This creates new opportunities for groundbreaking research through distributed machine learning, federated learning, edge analytics, and computational collaboration between several heterogeneous systems and device forms.
This minitrack is devoted to the technological background while keeping an eye on business value, user experience, and domain-specific issues. We invite researchers from software engineering, human-computer interaction, information systems, computer science, electrical engineering, and any other discipline that contributes to how software is designed, implemented, tested, and deployed. Contributions may take a sociotechnical view or report on technological progress. We are particularly interested in applied software technology but also welcome theoretical work. Topics of interest include the full spectrum of research on software development, but are not limited to:
- Case studies of development
- Development methods, software architecture, and specification techniques
- Economic and social impact, behavioral aspects
- Software engineering education
- User interface (UI) design and user experience (UX) research
- Hybrid and cross-platform development
- Web technology
- Security, safety, and privacy
- Energy-efficient computing
- Machine learning on device
- The convergence between mobile devices, IoT, and CPS
- Fog, edge, and dew computing and their computational applications
Minitrack Co-Chairs:
Tim A. Majchrzak (Primary Contact)
University of Agder
tmaj@ieee.org
Tor-Morten Grønli
Kristiania University College
tor-morten.gronli@kristiania.no
Hermann Kaindl
TU Wien
kaindl@ict.tuwien.ac.at
New approaches, methods, and tools (some of which are AI-enabled) for facilitating more responsive organizations are proliferating rapidly. Among the evolving methods are approaches such as Agile, Lean, DevOps, and BizDevOps. In addition, low-code and co-pilot platforms are increasingly used to accelerate software development but can create organizational challenges.
The organizational challenges are numerous. For example, Agile was initially designed for co-located, on-site teams, but organizations today cope with scaling issues and remote and hybrid work. Low-code software development enables non-software-development personnel to create applications, but those personnel may lack sufficient knowledge of good software development practices. The challenges with AI-developed code are similar, but perhaps even more extreme. Lean business models assume co-located access to the customer, but startups now often serve geographically dispersed customers.
In this minitrack, we seek research papers and experience reports that explore practices, tools, and techniques for rapid development. We also seek to explore how these concepts can be leveraged in other contexts (such as data science or physical product development). Practitioners interested in submitting an experience report are welcome to reach out to a minitrack co-chair for support and guidance, if desired. Our minitrack seeks to answer questions such as:
- How can emerging technologies like AI and machine learning be seamlessly integrated into existing software development practices to enhance efficiency and effectiveness?
- How to balance team autonomy and decentralized decision-making with the need for organizational control and alignment in large-scale agile development?
- How can agile and lean be integrated within a single coherent approach?
- Which metrics help enterprises, teams and individuals adapt and improve? What common behaviors do we see in agile or lean teams and how do those behaviors affect outcomes?
- How do organizations implement, monitor and improve hiring, coaching, training and mentoring?
- How to scale agile (how to effectively manage dependencies, teams, stakeholders, processes, technologies, and tools), including comparative results on the use of different agile scaling frameworks?
- How can agile be implemented within other contexts (e.g., data science, BizDevOps)?
- What organizational structures are required to enable shared leadership in self-managed teams?
- How to balance the need for effective coordination and focused work in an agile team?
- How do agile and lean principles extend to DevOps environments? Is there a difference between agile and lean before and after deployment? How are post-deployment issues and opportunities impacting the planning and development of software projects?
- What organizational structures and novel tools are required to leverage AI, low-code development, and rapid prototyping as part of the project management process?
- What are the best practices for maintaining efficiency and effectiveness in remote or hybrid agile teams?
- How can agile teams ensure inclusivity and leverage diversity to enhance team performance and innovation?
Possible additional topics for the mini-track include but are not limited to:
- AI-enabled code development tools
- AI-enabled team collaboration and communication tools
- New frontiers in agile or lean management – going beyond software development.
- Forecasting, planning, testing, measurement, and metrics
- Exploring the fit between agile (or lean) organizations and their environmental context
- Agile and lean requirements engineering, and risk management
- Agile in hybrid digital/physical contexts
- What cultures, team norms and leadership characteristics lead to sustained agility?
- Empirical studies of agile or lean organizations
- Impact of tool use on agile or lean management
- Education and training – new approaches to teaching and coaching agile
- Global software development and offshoring/multi-shoring
- Rapidly reconfigurable multi-sided platform ecosystems
- Project management methods, low code development
Minitrack Co-Chairs:
Jeffrey S. Saltz (Primary Contact)
Syracuse University
jsaltz@syr.edu
Viktoria Stray
University of Oslo
stray@ifi.uio.no
Edward Anderson
University of Texas at Austin
Edward.Anderson@mccombs.utexas.edu
Alex Sutherland
Scrum, Inc.
alex.sutherland@scruminc.com
While one might start with the discovery of the electron by J. J. Thomson at Cambridge in 1897, which ultimately led to the co-invention of the integrated circuit by Jack Kilby and Robert Noyce 61 years later, it seems reasonable to start in the 1950's, when large-scale computers were on the near horizon and John McCarthy coined the term Artificial Intelligence in a proposal for a workshop at Dartmouth in 1956. That workshop is considered the birthplace of AI.
The research in computing that is heralded by the press and published in journals does not include the concurrent, often unpublished histories, inventions, and discoveries that preceded, and are a necessary part of, the recognized results. Transformative research builds like a pyramid, and the unrecognized bricks of that pyramid are what this minitrack seeks: the somewhat hidden or not necessarily well-known computer science technologies, designs, standards, and work that have meaningfully shaped our current state-of-the-art digital society.
Because the contributions to this minitrack cover the broad spectrum of computer science research history from 1950 to 2010, we give twelve foundational, historical examples to describe the theme of this minitrack. Contributors may base their research contributions on the following areas, but are not limited to them:
1. User Interfaces:
a. 1960’s: Data Center systems with keypunch input software and batch processing.
b. Mid 1970's-mid 1980's: TTYs with mini-computers; terminals connected to time-sharing mainframes; early bitmapped displays, mice, and windows.
c. Mid 1980’s: Bitmapped displays and windows as desktop systems with ethernet connections to servers for email access, file storage, etc.
2. Pre-Internet networks and their associated protocols.
a. 1969: ARPANET
i. The first hosts were UCLA, SRI, UC Santa Barbara, and the University of Utah.
ii. Early 1970’s: First globally connected data centers
b. 1972: TYMNET as part of the Tymshare corporation.
c. 1973: TELENET by BBN
3. Ethernets
a. 3 Mbps Ethernet-1
i. Local area networks
ii. Mainframes with ethernet connectivity replacing modem dial-ins.
iii. Multiple protocol router technology in 1980 at Stanford University.
b. 10 Mbps Ethernet-2.
i. Invented at Xerox PARC in the late 1970’s
ii. It became widespread at universities in 1982.
c. ARPANET / Internet split in 1983.
i. Wide area networks
ii. 1986: NSFNET Coast to coast backbone network.
d. All the Internet protocols, etc., etc.
4. Email
a. 1985: IMAP created by William Yeager and Mark Crispin at Stanford University.
b. 1984: POP or Post Office Protocol for mail access.
c. 1982: SMTP or Simple Mail Transfer Protocol by Jon Postel.
5. Continuous since the 1950’s: Hardware advances on all fronts.
a. Exponential increases in CPU speeds.
i. 1981-82: Andy Bechtolsheim's MC68000 8 MHz CPU motherboard, first used in SUNet
ii. 1990’s: MHz CPUs became prevalent
iii. 2000: AMD 1GHz CPU clock speeds
iv. 2010: Intel i7-980X 3.33GHz up to 3.6GHz with Turbo Boost.
b. Disk sizes.
i. 1956: IBM 3.75 megabytes
ii. 1980: IBM 2.52 gigabytes
iii. 2007: Hitachi 1TB hard disk drive
iv. 2010: Seagate 3TB Serial Attached Scalable system.
c. 1962: Hardware page tables and virtual memory.
6. Databases:
a. Early 1970’s: SQL or Structured Query Language created as an IBM research project.
b. 1977: Oracle founded
c. 1984: Sybase founded
7. 1999: Cloud Computing when Salesforce introduced Software as a Service (SaaS) over the Internet.
a. 2002: Amazon introduced Amazon Web Services.
b. 2004: Google publishes the MapReduce paper that inspired the creation of Hadoop.
c. 2006: Amazon introduced Elastic Compute Cloud (EC2).
d. 2008: Google and Microsoft – Google App Engine and Microsoft Azure.
8. 2009: Apache Spark invented at UC Berkeley
9. Early 2010’s: Big Data Analytics.
10. 2010: Data Warehouses use Big Data Analytics.
a. 2010: Google BigQuery
b. 2010: Amazon Elastic MapReduce (EMR)
11. Artificial Intelligence
a. 1960’s: Joseph Weizenbaum’s Eliza natural language processing system that used pattern matching techniques to simulate conversations.
b. 1965: Stanford University's DENDRAL project, designed to replicate human decision making in chemistry. Researchers included Dr. Joshua Lederberg (Nobel Laureate), Ed Feigenbaum, Bruce Buchanan, and Carl Djerassi.
c. 1970s: Edward Shortliffe's MYCIN, designed to diagnose bacterial infections and prescribe antibiotics for them.
d. 1970s–80s: MOLGEN, by Peter Friedland, assisted molecular biologists and geneticists in planning complex genetic experiments.
e. 1970s–80s: Knowledge representation.
f. A note on these expert systems: they are considered precursors to intelligent systems that integrate machine learning, big data analytics, and neural networks for decision-making across diverse fields.
g. Mid-2000s: The ability to pre-train many-layered neural networks one layer at a time.
12. Computer Languages
a. Machine & Assembly (1940s–1970s): Early computers used direct machine code and assembly languages for basic instructions. Von Neumann’s stored-program concept revolutionized computing.
b. High-Level Languages (1950s–1960s): FORTRAN (scientific computing), LISP (AI & recursion), COBOL (business applications), and ALGOL (structured programming) laid the groundwork for modern programming.
c. Structured & Systems Programming (1960s–1970s): Simula introduced OOP, C became the dominant systems language, and Prolog pioneered logic programming.
d. Object-Oriented & Modular Programming (1980s): Smalltalk fully embraced OOP, C++ added object-oriented features to C, and Ada emphasized modular and safe programming.
e. Internet & Scripting (1990s): Python, Java, JavaScript, and PHP enabled web development, scripting, and platform-independent enterprise applications.
f. Modern Multi-Paradigm Languages (2000s–Present): C#, Scala, Go, Rust, and Swift improved performance, concurrency, and memory safety.
g. AI & Data-Driven Programming (2010s–Present): R, Julia, and AI-focused DSLs (TensorFlow, PyTorch) revolutionized machine learning and data science.
Minitrack Co-Chairs:
William Yeager (Primary Contact)
Knowledge Systems Lab, Stanford University, Retired
byeager@fastmail.fm
Jean-Henry Morin
University of Geneva
Jean-Henry.Morin@unige.ch
With the advancement of AI technology, AI algorithms have begun to match human performance on certain tasks (e.g., ChatGPT) and to discover loopholes in systems that were not previously found. AI in general, and ML methods specifically, are increasingly used with scientific data and applied with great promise to a large variety of problems. These range from controlling operations at facilities, computing centers, and data centers, to sifting through the millions of combinations that can produce viable candidates for experiments, to showing potential for autonomous experiments and experimental design.
However, researchers need to understand how Artificial Intelligence (AI) and Machine Learning (ML) results are obtained in order to gain new insights and to establish confidence in their validity. The promise of AI/ML will not be realized if scientists cannot trust the results, understand how they were obtained, or gain transparency into which datasets, models, and model parameters were used and which features of the data led to the results. Like any good scientific result, AI/ML pipelines should be reproducible to the greatest extent possible.
With the increased use of AI comes an increase in the inherent complexity of the models. Deep Learning (DL) models with millions of nodes and degrees of freedom, operating on large data volumes, obscure their inner workings from human understanding. Unlike traditional ML algorithms such as rule-based decision trees or linear-regression models, where the decision boundary is clear, DL models consist of billions or even trillions of learned parameters. Interpreting such a learned model is therefore difficult.
The incredible growth in the scale of AI training models, the use of heterogeneous architectures, the development of generative adversarial models, and the need for transparency are compounded by the need to avoid bias in predictions. Numerous examples of bias have been discovered in image recognition, classification, and text generation. Thus, formal explanations of how models achieve results, explicit representations of data, and the comprehensiveness and diversity of the datasets used for training are crucial to foster trust in AI. Additionally, as AI models are inherently stochastic, experiments show that results obtained with AI may not be reproducible even within given error bounds. While reproducibility may not be needed for some uses of AI (e.g., when AI is used for preliminary triage in drug discovery), in other uses reproducible AI is paramount. Reproducibility is therefore a component of trustworthiness in some cases.
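A minimal sketch of the stochasticity point above (a hypothetical toy training loop, not any particular system): two training runs with uncontrolled randomness generally produce different learned parameters, while pinning the random seed makes runs bit-identical. Seed control is a first, though by itself insufficient, step toward reproducible AI results.

```python
import numpy as np

def train_once(seed=None):
    # Toy "model": randomly initialized linear weights fit by a few
    # stochastic gradient steps; stands in for a stochastic ML training run.
    rng = np.random.default_rng(seed)
    X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0]])
    y = np.array([1.0, 1.0, 0.0, 0.0])
    w = rng.normal(size=2)           # random initialization
    for _ in range(100):
        i = rng.integers(len(X))     # stochastic sample order
        grad = (X[i] @ w - y[i]) * X[i]
        w -= 0.1 * grad
    return w

# Unseeded runs generally disagree; seeded runs are bit-identical.
a, b = train_once(), train_once()          # a and b will typically differ
c, d = train_once(seed=42), train_once(seed=42)
assert np.array_equal(c, d)
```

In real pipelines the same concern extends beyond seeds to data ordering, parallel nondeterminism, and hardware/library versions, which is why provenance tracking appears among the topics below.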
This minitrack will explore a number of themes related to explainable, reproducible and trustworthy AI. This includes but is not limited to the following topics:
- Computational and foundational methods for explaining AI models (XAI)
- Computational and foundational methods to ensure reproducibility of machine learning predictions
- Computational and foundational methods to measure the variability of AI predictions within pre-set error bounds and determine the influential factors on this variability (inc. Uncertainty Quantification (UQ) methods applied to models) – new topic this year
- Approaches discussing how Findable, Accessible, Interpretable, Re-usable (FAIR) principles can be applied to AI/ML including models – new topic this year
- Mental models for interpreting AI results
- Mental or other models for trusting AI
- Computational and foundational methods and algorithms for detecting bias in AI/ML models
- Approaches, tools and best practices to perform sanity checks on data transformation pipelines
- Approaches, tools and best practices to keep track of experiments, inc. provenance
- Use cases of explainability and reproducibility with AI models (XAI, RAI)
- Case studies of when AI models introduce bias in results
- Definitions and examples of trustworthy AI
Minitrack Co-Chairs:
Line Pouchard (Primary Contact)
Sandia National Laboratories
lcpouch@sandia.gov
Peter Salhofer
FH JOANNEUM – University of Applied Sciences
peter.salhofer@fh-joanneum.at
This minitrack provides methods and techniques to address the multifaceted challenges of maintaining safety, trustworthiness, and security in cyber-physical systems (CPS) in general, and in autonomous vehicles in particular. With the increasing incorporation of AI methods into many CPS, this minitrack seeks to bring together research from a diverse group of scholars, industry leaders, and policymakers to delve into the critical aspects of behavior assurance through verifying characteristics of safety, security, and trustworthiness.
The objective of this minitrack is to serve as a nexus for researchers, regulators, and practitioners from academia, industry, and government to establish the guardrails for responsible and assured incorporation of AI into decision-making software employed on autonomous systems. It will address the intersection of technology, safety, trustworthiness, responsibility, and policy, ensuring a comprehensive approach to AI security. The spread of AI-based methods in various CPS has been complemented by a rapid surge of adversarial methods capable of negatively impacting the operation of the CPS itself. In this context, it is vital to identify vulnerabilities and develop means to mitigate them in order to ensure reliable and trustworthy operation of the CPS. In the span of a few years, much effort and many resources have been devoted to establishing desirable characteristics of AI software, including safety, security, transparency, and explainability, among others. The challenge now is to turn these into verifiable metrics, at the system level, that can be used in designing CPS with AI-enabled capabilities.
This minitrack seeks to expose security characteristics of AI that are common to all application domains and could provide frameworks for establishing assurance and trustworthiness usable in other disciplines.
This minitrack is targeted at researchers, policymakers, industry professionals, and students working in the fields of AI software, cybersecurity, and autonomous system design (aviation and automotive). Through the research identified for this track, graduate students, researchers, and academics can contribute their knowledge of security concerns and vulnerabilities within AI tools and techniques, and help industry and regulators become aware of these concerns. Conversely, regulators and policymakers, as well as systems integrators, can share their obstacles and issues in developing certification-relevant artifacts for systems with learning-enabled components. Of particular relevance is bringing in industry practitioners who must deal with the challenges of security certification of systems, from a technology perspective as well as from regulatory and policy perspectives.
This minitrack will focus on identifying pathways to help develop and implement methodologies and techniques that enable cyber-physical systems incorporating AI software to operate in a trustworthy manner, with a higher degree of assurance. Recommended topics for this track include, but are not limited to, the following:
- Regulatory challenges to certifying autonomous system software
- Generating test artifacts for security characterization of AI based CPS
- System Engineering based processes to ensure desired behavior (STPA, STPA-SEC etc.)
- Safety of CPS with AI components
- System Trustworthiness measures and verification
- Formal methods that can provide behavioral specifications
- Adversarial (Red Teaming) methods including model stealing, evasion, data poisoning
- Robustness and Resilience of AI Systems
- Threat Detection and Response in AI Systems
- AI in Cybersecurity, Offensive and Defensive Operations
It is intended that this minitrack will offer a forum for attendees to collaborate across the industry/academia/government divide, fostering a more unified basis for the trustworthiness of advanced autonomous systems.
Minitrack Co-Chairs:
Sriprakash Sarathy (Primary Contact)
Northrop Grumman
sriprakash.sarathy@ngc.com
Tyson Brooks
National Security Agency and Syracuse University
ttbrooks@syr.edu
Shiu-Kai Chin
Syracuse University
skchin@syr.edu
Mohammad Al Faruque
University of California Irvine
alfaruqu@uci.edu