Equitable and Inclusive Digital Systems in Society and Institutions Track

Track Chairs

K.D. Joshi

Cameron School of Business
University of North Carolina Wilmington
601 S College Rd
Wilmington, NC 28403
joshik@uncw.edu

Xuefei Nancy Deng

California State University, Dominguez Hills
College of Business Administration and Public Policy
1000 E. Victoria Street
Carson, California, 90747
ndeng@csudh.edu

Digital technologies are increasingly entangled with and constitutive of everyday work and life. The favorable and unfavorable effects of digital systems are not distributed equally or uniformly across all contexts or populations in our society and institutions. Underrepresented, vulnerable, and underserved communities, institutions, and contexts often bear the greatest burdens of technological change. As a result, societal and institutional inequities embedded in data, algorithms, platforms, structures, practices, and system design can not only reinforce or amplify existing disparities but also generate new realities that are not inclusive. On the brighter side, digital systems also afford powerful ways of safeguarding and improving institutions and humanity to create a more equitable, just, and inclusive future.

This track focuses on equitable and inclusive digital systems and their role in shaping access, participation, and outcomes across individuals, organizations, and communities. The track presents studies of socio-technical issues that not only uncover digital inequities, exclusion, and injustices (e.g., bias in algorithmic systems that gives rise to various forms of digital discrimination, or disparities in digital infrastructure and skills that hinder SMEs), but also investigate and build systems of empowerment through technology (e.g., building technologies via culturally responsive, value-sensitive designs).

This track invites research that advances digital futures where technological spaces, digital applications, and machine intelligence reflect the diversity of society and expand opportunity, while mitigating the risks of constructing a future that mirrors a narrow and privileged vision of society with its biases and stereotypes. We welcome research that examines both the emancipatory and oppressive potentials of digital technologies. The track further invites studies on how digital systems can be designed, built, implemented, used, and governed to promote fairness and inclusion rather than exclusion or harm. In this track, we create an outlet for scholars across disciplines to conduct research that deeply engages with equitable and inclusive digital systems in society and institutions. We welcome papers from a range of perspectives: conceptual, philosophical, behavioral, design science, and beyond.

Opportunities for Fast Track to Journal Publications: Authors of conference papers accepted by this track will be invited to submit a significantly extended version (min. +30%) of their paper for consideration for publication in one of the following journals. Submitted papers will be fast-tracked through the review process.

Our prior invited track has already led to two special issues at The DATA BASE for Advances in Information Systems. Papers fast-tracked from our 2024 invited track have been published in the November 2025 issue, and selected papers from our 2025 invited track will appear in the November 2026 issue.

Accessibility, Justice, and Critical AI in Sociotechnical Systems Minitrack

As computational and information systems become increasingly embedded across organizational, social, and everyday contexts, persistent inequities in access, representation, and power are reproduced or amplified by these technologies. Sociotechnical systems, including data infrastructures, algorithmic models, information systems, and the communicative and organizational environments they shape, are embedded in social structures that result in differential impacts across individuals and communities. Patterns of exclusion in metadata practices, model behavior, interaction design, policy frameworks, or information governance can reinforce inequity along lines of ability, socio-economic status, race, gender, language, and more.

This minitrack provides a forum for research that critically examines how information systems and AI mechanisms intersect with accessibility and justice in sociotechnical environments. We accept a diverse range of research approaches, including conceptual and theoretical frameworks, empirical studies, design science research, case studies, mixed methods, and critical analyses.

We seek work that not only diagnoses barriers and injustices across system life cycles, including the construction and use of metadata, training data bias, design assumptions, deployment impacts, and governance practices, but also proposes pathways toward systems that embody equitable access, representation, and empowerment. Contributions may interrogate systems from theoretical, empirical, philosophical, methodological, ethical, or design-oriented perspectives. We particularly welcome research that advances understanding in, but not limited to, the following areas:

  • Usability barriers facing underrepresented and marginalized populations in sociotechnical and AI systems
  • Accessibility challenges embedded in data schemas, metadata standards, and information infrastructures
  • Critical assessments of model behavior, fairness evaluations, explainability, and accountability in AI
  • Socio-technical dynamics of participatory and inclusive system design
  • Governance, policy, and ethical frameworks that promote justice in information systems and AI deployment
  • Role of critical theory, information science, and systems thinking in shaping accessible and just technologies
  • Methods for auditing, measuring, or mitigating inequities, misinformation, or disinformation in algorithmic and information systems

This minitrack aims to bring together scholars and practitioners from disciplines including information science, information systems, computer science, human-computer interaction, science and technology studies, ethics and philosophy of technology, public policy, and related fields to foster interdisciplinary discourse on the construction of more accessible, just, and critically informed sociotechnical systems. Accepted papers will advance knowledge on how technologies can support equitable participation and benefit while reducing harm and exclusion.

Minitrack Co-Chairs:

Manika Lamba (Primary Contact)
University of Oklahoma
manika@ou.edu

Kyrie Zhixuan Zhou
University of Texas at San Antonio
kyrie.zhou@utsa.edu

Ece Gumusel
University of Illinois Urbana-Champaign
eceg@illinois.edu

AI and Digital Discrimination Minitrack

As artificial intelligence systems increasingly mediate access to employment, credit, healthcare, education, and information, concerns about algorithmic bias and digital discrimination have moved from theoretical debate to urgent societal challenge. This minitrack focuses on research aimed at understanding, identifying, and mitigating discrimination that arises in the design, development, deployment, and governance of systems that incorporate artificial intelligence.

A technology is biased if it unfairly or systematically discriminates against certain individuals by denying them an opportunity or assigning them a different and undesirable outcome. As we delegate more and more decision-making tasks to autonomous computer systems and algorithms, such as using artificial intelligence for employee hiring and loan approval, digital discrimination is becoming a serious problem. In her New York Times best-selling book “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy,” Cathy O’Neil refers to such math-powered applications as “Weapons of Math Destruction” and shows through examples how these mathematical models encode human prejudice, misunderstanding, and bias into the software systems that increasingly manage, and sometimes harm, our lives.

Artificial Intelligence (AI) decision making can cause discriminatory harm to many vulnerable groups. In a decision-making context, digital discrimination can emerge from inherited prejudices of prior decision makers, designers, engineers or reflect widespread societal biases. One approach to addressing digital discrimination is to increase transparency of AI systems. However, we need to be mindful of the user populations that transparency is being implemented for. In this regard, research has called for collaborations with disadvantaged groups whose viewpoints may lead to new insights into fairness and discrimination.

Addressing the problem of digital discrimination in AI requires a cross-disciplinary effort. For example, researchers have outlined social, organizational, legal, and ethical perspectives on digital discrimination in AI. In particular, prior research has called attention to three key aspects: how discrimination arises in AI systems; how design of AI systems can mitigate such discrimination; and whether our existing laws are adequate to address discrimination in AI.

This minitrack welcomes papers in all formats, including empirical studies, design research, theoretical frameworks, case studies, and more, from scholars across disciplines such as information systems, computer science, library science, sociology, and law. Potential topics include, but are not limited to:

  • Agentic AI, Decision Making, and Governance
  • AI-based Assistants: Opportunities and Threats
  • AI Auditing and Algorithmic Accountability
  • AI Explainability and Digital Discrimination
  • AI Governance and Regulatory Compliance
  • AI Literacy of Users
  • AI Systems Design and Digital Discrimination
  • AI Use Experience of Disadvantaged / Marginalized Groups
  • Biases in AI Development and Use
  • Digital Discrimination in Online Marketplaces
  • Digital Discrimination and the Sharing Economy
  • Digital Discrimination with Various AI Systems (LLM-based AI, AI assistants, etc.)
  • Effects of Digital Discrimination in AI Contexts
  • Ethical Use, Challenges, Considerations, and Applications of AI Systems
  • Erosion of Human Agency and Generative AI Dependency
  • Generative AI (e.g., ChatGPT) Use and Ethical Implications
  • Organizational Perspective of Digital Discrimination
  • Power Dynamics in Human-AI Collaboration
  • Responsible AI Practices to Minimize Digital Discrimination
  • Responsible AI Use Guideline and Policy
  • Societal Values and Needs in AI Development and Use
  • Sensitive Data and AI Algorithms
  • Social Perspective of Digital Discrimination
  • Trusted AI Applications and Digital Discrimination
  • User Experience and Digital Discrimination

Minitrack Co-Chairs:

Sara Moussawi (Primary Contact)
Carnegie Mellon University
smoussaw@andrew.cmu.edu

Jason Kuruzovich
Rensselaer Polytechnic Institute
kuruzj@rpi.edu

Minoo Modaresnezhad
University of Maryland
mmodares@umd.edu

AI and Graceful Aging: Reimagining Later-Life Transitions in an AI-Enabled World Minitrack

Artificial intelligence (AI) is increasingly embedded in the social, organizational, and informational infrastructures through which individuals plan, decide, learn, and sustain participation across the life course. Population aging has rendered later life a consequential yet insufficiently theorized stage of digital transformation. This period is characterized by a series of structurally and psychologically significant transitions, including retirement, role reconfiguration, changing learning trajectories, and evolving sources of expertise. These transitions raise fundamental questions regarding how digital technologies shape autonomy, continuity, and meaningful engagement in later life, positioning graceful aging as a critical concern for Information Systems (IS) research.

IS scholarship has long examined how digital technologies support decision-making, knowledge creation, and participation. Much of this literature, however, has been grounded in implicit assumptions of relatively stable user roles, cognitive capacities, and learning orientations. Recent advances in AI—particularly generative, adaptive, and data-driven systems—challenge these assumptions by actively mediating how individuals access information, develop competencies, and make sense of complex environments. In later life, such AI-enabled mediation intersects with age-related changes in experience, cognition, motivation, and identity, thereby reshaping the conditions under which individuals adapt, contribute, and exercise agency over time.

Research on aging and digital technologies has predominantly emphasized technology acceptance, health-oriented interventions, or aggregated well-being outcomes. While these streams have generated important insights, they often conceptualize aging as a static contextual attribute rather than as an unfolding process of transition and adaptation. A graceful aging perspective foregrounds aging as a dynamic and heterogeneous life-course process, emphasizing continuity, psychological resilience, social connectedness, dignity, adaptive growth, and a sense of purpose and autonomy. Within this perspective, AI functions not merely as an assistive artifact, but as a socio-technical arrangement that reconfigures learning processes, forms of contribution, and the distribution of expertise and responsibility across later-life transitions.

This minitrack advances IS research by positioning AI-enabled graceful aging as a central analytical lens. It invites studies that examine how AI shapes later-life transitions, how learning and unlearning processes unfold in AI-mediated environments, and how digital systems can be designed and governed to support autonomy, role continuity, and sustained participation in later life. By treating aging and later life as core theoretical concerns rather than peripheral user characteristics, this minitrack seeks to extend IS theories of digital transformation, learning, and agency into an increasingly salient stage of the human life course. The minitrack welcomes theoretical, empirical, experimental, and design-oriented studies employing qualitative, quantitative, or mixed methods. Examples of possible topics of interest include, but are not limited to:

  • AI-Enabled Graceful Aging as a Life-Course Process
    1. Examining how AI supports autonomy, resilience, dignity, companionship, subjective well-being, and self-agency (e.g., perceived control) as core dimensions of graceful aging
    2. Investigating how AI contributes to continuity of self, identity coherence, and sustained purpose across later life
  • Later-Life Transitions and AI-Enabled Adaptation
    1. Exploring how AI supports adaptation during major later-life transitions, such as retirement, role exit, and social reconfiguration
    2. Examining how AI assists older adults in navigating uncertainty, reorientation, and sensemaking across life stages
  • AI-Supported Lifelong Learning and Cognitive Engagement
    1. Investigating how AI enables learning, reskilling, and intellectual engagement in later adulthood
    2. Examining the role of AI in sustaining curiosity, cognitive vitality, and adaptive learning across aging transitions
  • Meaning-Making and Psychological Growth in Later Life
    1. Exploring AI-enabled reflection, life review, and narrative reconstruction in later life
    2. Examining how AI supports meaning-making, wisdom development, and emotional maturation across the aging process
  • Social Participation and Relational Dimensions of Aging
    1. Investigating how AI enables social participation, belonging, and community engagement in later life
    2. Examining AI-supported pathways to maintaining or reconfiguring social roles and intergenerational connections
  • Socio-Technical Arrangements Supporting Older Adults’ Collaboration with AI
    1. Investigating organizational, community, or platform-level arrangements that enable older adults to collaborate effectively with AI systems
    2. Examining coordination among older adults, AI systems, families, organizations, and institutions across later-life transitions
  • Resistance, Boundary Work, and Reinterpretation of AI in Later Life
    1. Examining how older adults resist, reinterpret, or strategically appropriate AI systems to maintain control and autonomy
    2. Investigating boundary work between human agency and machine authority in later-life human-AI collaboration

High-quality and relevant papers from this minitrack will be selected for fast-tracked development towards Internet Research, an international and refereed journal that is indexed and abstracted in major databases (e.g., SSCI, SCI, ABI/INFORM Global), with an impact factor of 6.8 in 2024. Selected papers must be expanded in content and length in line with the requirements for standard research articles published in the journal. Although the minitrack co-chairs are committed to guiding the selected papers towards final publication, further reviews may be needed before a final publication decision can be made.

Minitrack Co-Chairs:

Christy M.K. Cheung (Primary Contact)
Hong Kong Baptist University
ccheung@hkbu.edu.hk

Matthew K.O. Lee
Hong Kong Metropolitan University
mkolee@hkmu.edu.hk

Lawrence X. Liu
Hong Kong Baptist University
lawrence_xl@hkbu.edu.cn

AI Psychometrics for Equitable and Inclusive AI Systems Minitrack

AI Psychometrics is an emerging field at the intersection of psychology, cognitive science, neuroscience, and artificial intelligence, dedicated to evaluating and interpreting the psychological traits and internal processes of artificial intelligence. While traditional psychometrics and cognitive science have accumulated extensive methodologies to rigorously evaluate human intelligence, personality, and cognitive processes, AI Psychometrics adapts these frameworks to assess large language models and autonomous agents. Researchers in this field not only apply human-behavioral methodologies—such as questionnaires, standardized tests, experiments, and interviews—but also increasingly incorporate neural network-level analyses, including Neuron Activation Pathway Analysis, Probing, and Steering Vectors. By examining the activations and weights within a model’s neural architecture, these methodologies seek to map specific cognitive properties directly to their underlying computational structures. This multi-layered approach provides a holistic understanding of AI behavior, offering the insights necessary to enhance the development, deployment, and ethical management of complex systems by grounding “psychological” findings in “neural” reality.
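As one illustration of the neural-level methods mentioned above, a linear probe fits a simple classifier on a model's internal activations to test whether a psychological trait is linearly represented there. The sketch below is a minimal, hypothetical example: the activations, dimensions, and binary "trait" labels are synthetic stand-ins, whereas a real study would extract activations from an actual model's hidden layers.

```python
import numpy as np

# Hypothetical setup: synthetic hidden-layer activations that carry a
# binary "trait" signal along one direction, plus isotropic noise.
rng = np.random.default_rng(0)
n, d = 400, 32
trait = rng.choice([-1.0, 1.0], size=n)       # binary trait label per input
direction = rng.normal(size=d)
direction /= np.linalg.norm(direction)        # ground-truth trait direction

acts = np.outer(trait, direction) * 2.0 + rng.normal(scale=1.0, size=(n, d))

# Fit a least-squares linear probe on a train split, then check whether
# the probe predicts the trait on held-out activations.
train, test = slice(0, 300), slice(300, None)
w, *_ = np.linalg.lstsq(acts[train], trait[train], rcond=None)
accuracy = float(np.mean(np.sign(acts[test] @ w) == trait[test]))
print(f"probe accuracy: {accuracy:.2f}")
```

High held-out accuracy is evidence that the trait is linearly decodable from the activations; the learned weight vector `w` is also the kind of direction that steering-vector methods then add to or subtract from activations to shift model behavior.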

As AI systems become increasingly integrated into everyday life—from healthcare and education to hiring and criminal justice—the need to ensure these systems are equitable, inclusive, and trustworthy has never been more pressing. Without rigorous psychometric evaluation, biases embedded in AI systems may go undetected, fairness claims may remain unverified, and vulnerable populations may bear disproportionate risks from poorly understood AI behaviors. AI Psychometrics offers a pathway for addressing these concerns by enabling the systematic assessment of AI systems’ cognitive processes, biases, and reasoning patterns through validated psychometric instruments. Furthermore, platforms such as the PsyCogMetrics™ AI Lab aim to democratize access to rigorous AI evaluation, providing these assessment tools to researchers, policymakers, and stakeholders.

This minitrack welcomes researchers from all disciplines, including psychology, cognitive science, computer science, information systems, education, social sciences, and beyond. We invite submissions that employ a variety of methodologies, including but not limited to psychometric testing, cognitive science experiments, design science research, computational modeling, empirical studies, surveys, theoretical analyses, and case studies. Our goal is to cultivate a vibrant dialogue across a wide range of disciplinary and methodological perspectives, thereby advancing understanding and fostering innovation in the evaluation and governance of AI systems. Our scope of interest spans a wide range of topics, including, but not limited to:

  • Personality, Emotional Intelligence, and Theory of Mind Assessment of Large Language Models
  • Psychometric Validity, Reliability, and Measurement Quality of AI Evaluations
  • Adaptive Testing, Item Response Theory, and Novel Benchmarking Approaches for AI
  • Bias Detection, Fairness, and Equity Evaluation Using Psychometric Methods
  • Psychometric Framework Development, Evaluation Platforms, and Design Science for AI Assessment
  • Neuron Activation Pathway Analysis and Mechanistic Interpretability of AI Systems
  • Probing and Representation Analysis of Emotions, Personality, and Cognitive Traits in Neural Networks
  • Activation Engineering and Steering Vectors for AI Personality and Behavioral Control
  • Sparse Autoencoders and Feature Disentanglement for Psychological Trait Extraction in LLMs
  • Circuit-Level Analysis of Reasoning, Decision-making, and Cognition in Large Language Models

Minitrack Co-Chairs:

Yibai Li (Primary Contact)
University of Scranton
yibai.li@scranton.edu

Zhiye (Norman) Jin
PsyCogMetrics™ AI Lab
zjin@m.marywood.edu

AI-enabled Digital Transformation for SMEs Minitrack

While large organizations are rapidly advancing toward AI‑enabled operations and practices, small and medium‑sized enterprises (SMEs) often lag significantly behind. More specifically, SMEs face structural, processual, and relational constraints that limit their AI adoption, including limited financial capacity, access to advanced technologies, insufficient data infrastructures, and a shortage of digital and analytical skills within their workforce. This disparity risks creating a widening digital divide between large firms and SMEs, which may undermine economic dynamism, digital equity, and long‑term sustainability across sectors, countries, and regions.

Given that SMEs form the backbone of most global economies, their difficulty in participating meaningfully in today’s AI-enabled business and technological economy through effective organizational transformation poses challenges not only for firm‑level competitiveness but also for inclusive and equitable digital transformation at the ecosystemic, societal, and institutional levels. Ensuring that SMEs can access, adopt, and benefit from AI-enabled technologies and practices is therefore critical for fostering resilient, innovative, and fair economic systems.

This minitrack provides a forum for discussing and exchanging research ideas, lessons from case studies, and best practices in an emergent and disruptive context. We welcome theoretical, empirical, design‑oriented, technical, and interdisciplinary contributions addressing the opportunities, challenges, strategies, and outcomes associated with AI-enabled transformation and use in the SME context. Topics include, but are not limited to:

  • AI adoption drivers, strategies, and readiness models for SMEs’ digital transformation
  • AI barriers, challenges, and failures for SMEs’ digital transformation
  • Skill gaps, workforce development, organizational learning and managerial capabilities for AI‑enabled practices in SMEs
  • Ethical and societal issues surrounding AI in SMEs and their environments
  • Case studies of AI adoption and use in SMEs and their business ecosystems
  • AI innovation ecosystems inclusive of SMEs
  • Design of SME-suitable AI artefacts
  • SME practitioner perspectives on AI
  • Implementing human-centered AI for SMEs
  • AI-enabled business model innovation pathways in SMEs
  • AI governance frameworks for SMEs
  • AI-driven shifts in SME strategies for competition, co-opetition, co-creation, and open innovation
  • Development, configuration, and management of AI ecosystems for SMEs

This minitrack partners with Electronic Markets, a leading information systems journal ranked in the top quartile (Q1) of Scimago journals and classified as an ‘A’ journal in many rankings. Authors of papers that are accepted in this minitrack and align with the journal scope may be invited to submit a substantially extended version of their work to the journal for a fast‑tracked review process.

Minitrack Co-Chairs:

Yao Shi (Primary Contact)
University of North Carolina Wilmington
shiy@uncw.edu

Lukas Fitz
Brandenburg University of Technology Cottbus – Senftenberg
fitz@b-tu.de

Claudia Pelletier
Université du Québec à Trois-Rivières
claudia.pelletier@uqtr.ca

Fanny-Eve Bordeleau
Dalhousie University
fe.bordeleau@dal.ca

Blockchain, Cryptocurrency, and FinTech: Responsible Deployment and Governance Minitrack

Introduced in 2008 by Satoshi Nakamoto, Bitcoin was the first peer-to-peer currency, igniting a wave of interest in Blockchain, Cryptocurrency, and FinTech. Despite this significant interest, more than a decade after their emergence these technologies have yet to become everyday tools for consumers. They still face numerous technical challenges, including scalability, security, privacy, interoperability, and energy consumption, along with challenges in business adoption, social trust, ethical, environmental, and regulatory controversies, and the potential for illicit activities.

As digital technologies become increasingly entangled with everyday work and life, Blockchain, Cryptocurrency, and FinTech afford powerful ways to promote financial inclusion, empower underserved communities, and build more equitable, just, and inclusive digital systems. However, the favorable and unfavorable effects of these technologies are not distributed equally across all contexts or populations. Underrepresented, vulnerable, and underserved communities often bear disproportionate burdens of these technological changes, including the digital divide, lack of infrastructure, regulatory uncertainties, and limited digital literacy. Societal and institutional inequities embedded in data, algorithms, and platform design can further reinforce or amplify existing disparities.

This minitrack welcomes researchers from all disciplines, ranging from the ‘hard sciences’ such as engineering and computer science, to social sciences, finance, management, and beyond. We invite submissions that employ a variety of methodologies, including but not limited to algorithm/system design, experiments, simulation, theoretical analysis, empirical research, surveys, design science, development of theoretical frameworks, qualitative inquiries, and case studies. In particular, we encourage research grounded in or engaging with the Actor-Network Theory-based Responsible Development Methodology (ANT-RDM) and the STEADI principles, which offer structured approaches to ensuring that emerging technologies are developed and deployed responsibly. Our goal is to cultivate a vibrant dialogue across a wide range of methodological perspectives, thereby advancing understanding and fostering innovation in this field. Our scope of interest spans a wide range of topics, including, but not limited to:

  • Technical Aspects
    1. AI and Machine Learning Applications in Blockchain, Cryptocurrency, and FinTech
    2. Smart Contract Design, Verification, and Automation
    3. Scalability, Interoperability, and Cross-chain Solutions in Blockchain Systems
    4. Decentralized Data Architecture, Distributed Storage, and Self-sovereign Identity
    5. Integration of Blockchain with IoT, Cloud Computing, and Enterprise Systems
  • Social Aspects
    1. Fairness, Bias, and Inequality in Digital Asset Markets and Algorithmic Systems
    2. Bridging the Digital Divide and Promoting Financial Inclusion through Blockchain, Cryptocurrency, and FinTech
    3. Cultural, Cognitive, and Institutional Barriers to Adoption in Underserved Communities
    4. Enhancing Education and Digital Literacy for Blockchain, Cryptocurrency, and FinTech
    5. Community Empowerment and Cooperative Platform Models in Web3 Ecosystems
  • Governance Aspects
    1. Decentralized Autonomous Organizations (DAOs): Governance Design, Voting Mechanisms, and Decision-making
    2. Regulatory Frameworks and Policy Design for Blockchain, Cryptocurrency, and FinTech
    3. Enterprise Blockchain Governance Archetypes and Platform Governance Transformation
    4. Compliance, Anti-Money Laundering (AML), and Identity Management in Decentralized Systems
    5. Data Governance, Digital Identity, and Privacy Regulation in Blockchain, Cryptocurrency, and FinTech
    6. Cross-border Regulatory Coordination and International Governance of Digital Assets
  • Business and Economic Aspects
    1. Web3 Business Models, Token Economics, and Decentralized Platform Ecosystems
    2. Decentralized Finance (DeFi) and Blockchain-based Financial Instruments
    3. Non-Fungible Tokens (NFTs), Digital Asset Ownership, and Tokenization
    4. Central Bank Digital Currencies (CBDCs) and Cryptocurrency Market Dynamics
    5. Blockchain Applications in Supply Chain Management and Enterprise Integration
  • Environmental and Ethical Aspects
    1. Energy Efficiency and Sustainability in Blockchain, Cryptocurrency, and FinTech
    2. Blockchain for Circular Economy and Sustainable Development
    3. Ethics, Transparency, and Accountability in Algorithmic and Decentralized Systems
    4. Combating Illicit Activities and Fraud Detection with Blockchain, Cryptocurrency, and FinTech

Minitrack Co-Chairs:

Yibai Li (Primary Contact)
University of Scranton
yibai.li@scranton.edu

Wanli Liu
Guangzhou Xinhua University
liuwlariel@xhsysu.edu.cn

Kaiguo Zhou
Capital University of Economics and Business
zhoukg@cueb.edu.cn

Changing Nature of Work – Expanding Labor Markets and Work Practices through Digital Transformation Minitrack

Across the workforce, new developments in collaboration tools, digital labor platforms, and artificial intelligence are changing the nature of work. Large-scale remote work has spread widely and is likely to remain an integral part of how many companies manage work. Additionally, the rapid growth of artificial intelligence (AI) tools is transforming work tasks, career readiness, organizational structures, and career pipelines. The changing nature of work presents both challenges and opportunities for building more expansive labor markets.

On the one hand, the changing nature of work allows a variety of tasks to be completed remotely, expanding access to work opportunities for individuals who may face limited opportunities due to distance, lack of reliable transportation, or care responsibilities. Greater access to artificial intelligence tools may also allow individual workers to complete a broader spectrum of tasks and may boost productivity. In this manner, broader adoption of collaborative tools and digital platforms may enable meaningful employment opportunities for individuals who would otherwise be excluded from the digital workforce. On the other hand, underlying obstacles in labor markets, derived from factors such as differing wage rates, lack of access to education, job loss due to automation or augmentation, differences in power among stakeholders, varying digital infrastructure across geographies, or regulatory variability, may be amplified and codified as work processes evolve. Further technological development, including new AI capabilities or robotics, may also automate tasks, disrupting the number and nature of opportunities for future employment.

This minitrack is focused on issues relating to how the changing nature of work can function as a pathway for enabling more expansive work practices. This objective takes many forms, including examinations of the socio-technical factors that enable expansive employment as well as those factors that create barriers to the digital workforce. We welcome submissions examining factors at any level of analysis, spanning from global or national factors influencing labor markets to individual or team factors influencing work practices. Growing scholarly and societal attention to these issues underscores their relevance in contemporary labor market debates. Increasing oversight by regulatory bodies further highlights the need for both academia and policy makers not only to understand emerging work conditions, but also to articulate the impact on labor markets of interventions proposed in response to the changing nature of work.

As discussed above, technology is reshaping labor markets and work practices. While digital technologies may enable greater employment access, they may also foster environments characterized by power asymmetries. For example, new technologies may privilege the platform owners who have the power to control the digital work environments (such as the sourcing models, compensation models, and work policies) but disadvantage workers (Deng, Joshi, and Galliers, 2016). Accordingly, this minitrack calls for research that critically examines contemporary work conditions and policies surrounding the changing nature of work and proposes new work processes, platform designs, and policies to enhance digital work environments and foster expanded workforce access.

Finally, it is important for both academia and industry to better understand the impact of post-pandemic transformations on the changing nature of work. As remote and hybrid arrangements become institutionalized, technological developments at the intersection of remote work platforms and AI can potentially shape work at different levels. Research on the future of work—including the skills, capabilities, and organizational arrangements required of the future workforce—can broaden our visions and understanding of the next generation of the workforce. Potential issues and topics on the changing nature of work and inclusive labor markets and work practices include, but are not limited to:

  • Employment relations in distributed digital organizations
  • Ethical and regulatory issues in labor relations in changing work environments
  • The changing nature of work in developing economies
  • Algorithm-based discrimination in technology-centered work environments
  • The impact of AI on labor markets and career pathing
  • Intended and unintended consequences of AI and ML on work (e.g., work displacement, skill degradation)
  • AI complementarity and substitution
  • Algorithmic management
  • Changing work conditions
  • Impacts of the digital divide on labor markets
  • The changing nature of collective bargaining in a global workforce
  • Worker identity and engagement in the changing nature of work
  • Psychological aspects of emerging work environments on workers (e.g., technostress, well-being)

Minitrack Co-Chairs:

Joseph Taylor (Primary Contact)
California State University, Sacramento
joseph.taylor@csus.edu

Phoebe Pahng
California State University, Sacramento
phoebe.pahng@csus.edu

Nura Jabagi
Université Laval
nura.jabagi@fsa.ulaval.ca

Digital Democracy and Societal Resilience: Information Systems under Pressure Minitrack

Digital technologies have become integral to democratic governance, public discourse, and the formation of public opinion. At the same time, democratic societies around the world are experiencing growing pressure – from polarization and declining trust to information manipulation and foreign influence operations. In this context, digital platforms, algorithmic systems, and other AI-based technologies no longer function merely as neutral infrastructures. They actively shape how information circulates, how publics form opinions, and how collective decisions are made.

This minitrack approaches digital democracy from a techno-realist information systems perspective. Rather than assuming that digitalization inherently strengthens democratic participation, we conceptualize digital democracy as a socio-technical condition that is conditional, fragile, and increasingly contested. Information systems structure attention, visibility, and knowledge production and thereby interact with power, legitimacy, and democratic self-rule—sometimes stabilizing democratic processes, sometimes contributing to their erosion.

For information systems research, this raises a fundamental challenge. Beyond questions of efficiency or adoption, there is a growing need to understand how digital systems contribute to societal stress or resilience, and how they can be analyzed, designed, and governed under conditions of manipulation, polarization, and institutional pressure.

The vision of this minitrack is both analytical and solution-oriented. We invite contributions that advance IS research along three closely connected dimensions. First, we seek work that develops data-driven and computational approaches to measure and analyze phenomena such as polarization, exclusion, erosion of trust, or coordinated information manipulation at the level of societies, platforms, or specific technologies. Second, we encourage design-oriented and evaluative research that proposes and assesses concrete technical interventions addressing information manipulation (including disinformation and deepfakes), algorithmic bias, or exclusionary dynamics. Third, we welcome theoretical and conceptual contributions that refine how digital democracy, governance, and societal resilience are understood in contemporary, platform-based information environments. Relevant topics include, but are not limited to:

  • Data analytics and computational methods for measuring polarization, societal fragmentation, or democratic stress
  • Detection and analysis of disinformation, foreign information manipulation and interference (FIMI), coordinated influence campaigns, and hybrid digital threats
  • Design science research on information systems that foster societal and institutional resilience
  • Platform architectures, recommender systems, and AI applications shaping public discourse and governance
  • Evaluation frameworks for assessing the societal and democratic impact of digital technologies
  • Digital government systems beyond efficiency, including legitimacy, accountability, and democratic quality
  • Generative AI and its implications for public knowledge, discourse, and political decision-making
  • Theoretical perspectives on digital democracy, socio-technical power, and resilience

The minitrack welcomes quantitative, qualitative, computational, experimental, and design-oriented research, as well as theoretical contributions. Interdisciplinary work at the intersection of computer science, information systems, and computational social science is particularly encouraged. By bringing together research on measurement, design, and theory, this minitrack aims to advance an information systems research agenda that takes democratic pressures seriously and contributes to a deeper understanding of how digital systems can be designed and governed to support societal and democratic resilience under real-world conditions.

Minitrack Co-Chairs:

Jonas Fegert (Primary Contact)
Karlsruhe Institute of Technology and FZI Research Center for Information Technology
fegert@fzi.de

Karine Nahon
Reichman University
knahon@runi.ac.il

Marten Risius
Neu-Ulm University of Applied Sciences
marten.risius@hnu.de

Christof Weinhardt
Karlsruhe Institute of Technology
weinhardt@kit.edu

Digital Literacy to AI Literacy: Advancing Digital Fairness and Ethical Futures Minitrack

The ever-changing landscape of information and communication technologies (ICTs) and their growing importance in everyday life are widely recognized as integral to academic, economic, and civic participation. While the first-level digital divide (physical access to ICTs) has been decreasing worldwide, the second-level digital divide (digital literacy skills) and third-level digital divide (outcomes of ICT use) remain prevalent and require more nuanced examination. Digital literacy encompasses both the cognitive and technical abilities needed to use digital devices and to access, navigate, evaluate, and apply information across digital environments. Using these abilities to achieve consequential outcomes and reduce outcome disparities represents a critical step toward advancing digital fairness across social, educational, and civic contexts and toward ethical futures free of the digital divide.

Advancing digital fairness and ethical futures has been further complicated by the rapid integration of artificial intelligence (AI) into work, education, and other aspects of everyday life. The widespread adoption of AI compels researchers and society to confront complex questions of control, agency, accountability, and responsibility. Increasingly, questions have been raised around personal agency and control over individuals’ data and how large language models (LLMs) are using it. Automated feedback systems, conversational agents, and LLMs increasingly shape how individuals learn, work, and participate in civic life, embedding assumptions and values that may not align with ethical or equity-centered priorities. As AI systems increasingly mediate access to information, services, and opportunities, AI literacy has become an essential component of digital literacy, shaping how individuals understand, interact with, and critically assess digital technologies, their limitations, and their social consequences.

Issues related to AI and digital technologies, such as algorithmic bias, surveillance, trust, data ownership, and opacity, demand urgent attention as AI becomes embedded in education, work, and public services. These challenges carry significant social, economic, and political implications and risk further widening barriers to participation in digital society, especially for aging populations, people with disabilities, rural residents, veterans, and other underrepresented and underserved communities and groups in the Global South.

This minitrack explores the critical role of digital literacy, including AI literacy, in empowering such communities facing persistent and emerging challenges such as poverty, discrimination, immigration barriers, illness, climate change, crises, and broader societal, technological, and political shifts. It also examines how digital and AI transformations in work, education, and social interaction shape efforts to advance digital fairness and ethical futures.

We seek contributions that examine how digital literacy can foster resilience, agency, and meaningful change for individuals and communities. This minitrack aims to surface research insights that advance understanding of improved outcomes, such as digital fairness and ethical futures, without assuming one-size-fits-all approaches, recognizing that challenges and solutions vary across regional, cultural, and institutional contexts.

We welcome contributions that highlight digital literacy education and workforce interventions and developments that improve these outcomes and advance digital literacy learning in underserved populations. We welcome submissions from scholars across diverse disciplines—including information science, information systems, computing, human-computer interaction, education, learning sciences, public health, urban and rural studies, agricultural technology (agtech), financial technology (fintech), sociology, anthropology, and related fields.

This call invites original research papers, case studies, and review articles that examine digital literacy, including AI literacy, and its implications for underrepresented and underserved populations, as well as initiatives and interventions aimed at advancing digital fairness and ethical futures. Topics of interest include, but are not limited to:

  • Expanding, redefining, and critically examining digital literacy and digital fairness, in the age of AI
  • Digital and AI literacy competencies for socially and ethically responsible practices
  • AI literacy, and its role and connection to digital divide, fairness, and ethical futures
  • Digital literacy and the future of work
  • Digital infrastructures for advancing digital literacy and fairness
  • Assessment frameworks for measuring digital literacy, AI literacy, and digital fairness
  • The evolving role of digital navigators and intermediaries in communities
  • Digital literacy education, training, and capacity-building initiatives
  • Digital fairness policies and strategies: best practices and lessons learned
  • New trends in digital literacy education and workforce development

Minitrack Co-Chairs:

Mega Subramaniam (Primary Contact)
University of Maryland
mmsubram@umd.edu

Shanton Chang
University of Melbourne
shanton.chang@unimelb.edu.au

Marc Cheong
University of Melbourne
marc.cheong@unimelb.edu.au

Lauren Rhue
University of Maryland
lrhue@umd.edu

Emerging Research and Perspectives: Equitable and Inclusive Digital Systems Minitrack

This minitrack provides an open and inclusive outlet for high-quality research aligned with the theme of the Equitable and Inclusive Digital Systems in Society and Institutions track that does not fit neatly within other minitracks. We particularly welcome emerging, interdisciplinary, and agenda-setting work that advances understanding of equity, inclusion, justice, and harm in digital systems.

Minitrack Co-Chairs:

K.D. Joshi (Primary Contact)
University of North Carolina Wilmington
joshik@uncw.edu

Nancy Deng
California State University, Dominguez Hills
ndeng@csudh.edu

Equitable and Inclusive Digital Systems in Criminal Justice and Administration of Justice Minitrack

Digital ecosystems, including data infrastructures, algorithmic decision‑making tools, artificial intelligence (AI), and platform technologies, are increasingly embedded within justice institutions and shaping how rights, risks, and resources are allocated. Innovations in Information and Communication Technologies (ICT) have transformed these ecosystems in ways that can amplify harm, exclusion, and victimization, often at scale. At the same time, ICTs provide new avenues for illicit actors to reach, manipulate, and exploit marginalized groups who are already vulnerable to diverse forms of exploitation.

Law enforcement agencies and governments have begun updating outdated laws and policies in response to these developments, yet regulatory systems often struggle to keep pace with rapid technological change and evolving criminal practices. Justice institutions now depend heavily on digital ecosystems for investigation, adjudication, and governance, raising important questions about transparency, accountability, and equity. In parallel, legitimate organizations increasingly seek ways to deploy ICT to detect and mitigate criminal misuse of their products and services, protect stakeholders, and reduce societal harm.

Criminal groups often act as unexpected drivers of innovation by developing, adapting, or repurposing technologies and organizational practices. These dynamics require governments and law enforcement agencies not only to continually adapt but also to strengthen their capacity for strategic anticipation. In an era of heightened global conflict and geopolitical rivalry, criminal disruptions may be instrumentalized, directly or indirectly, to destabilize political or economic systems. Such disruptions can also reinforce dominant positions within digital ecosystems, blurring the lines between organized crime, influence operations, and power politics, with disproportionate consequences for vulnerable populations.

Digital ecosystems now play a central role in the administration of justice. Emerging ICT tools, particularly AI, are reshaping legal systems with profound implications. On one hand, AI‑driven applications offer promise for improving judicial decision‑making, streamlining legal procedures, and expanding access to justice, especially for underserved and marginalized communities. On the other hand, these technologies introduce risks related to bias, discrimination, surveillance, and privacy violations, which can disproportionately affect vulnerable groups. As AI tools become more deeply embedded in judicial and administrative functions, the need for critical examination of their societal and institutional impacts becomes increasingly urgent.

This minitrack explores criminal justice and the administration of justice as a high‑stakes domain for examining equitable and inclusive digital ecosystems. We especially welcome research addressing:

  • The Dual‑Edged Impact of ICT and AI Technologies in Justice: Studies examining how ICT and AI can enhance efficiency, accessibility, and fairness in legal processes while also introducing new risks, including bias, discrimination, and privacy violations.
  • Evolving Criminal Behavior: Research on how digital platforms, datafication, and AI systems enable illicit behaviors; how cybercriminals initiate and develop cybercrime “careers”; and how technological platforms are misused to target vulnerable populations.
  • Law Enforcement and Policy Responses: Analyses of how justice institutions, law enforcement agencies, NGOs, and businesses use ICT and AI to detect and disrupt illicit networks, and how these systems influence equity, accountability, and rights protections.
  • Legal and Institutional Reforms: Work that evaluates the adequacy of current laws and policies amid technological change, focusing on how ICT and AI can safeguard rights and ensure access to justice for vulnerable groups.
  • Victim Support and Access to Justice: Investigations into how ICT and AI can support victims of crime and exploitation, particularly marginalized groups requiring legal assistance and protection.
  • Criminal Innovation and Crisis‑Induced Destabilization: Research on how criminal groups drive technological and organizational innovations that generate systemic disruptions, expose institutional vulnerabilities, and challenge the resilience of digital justice infrastructures.
  • Digital Crime and Strategic Destabilization: Studies on how ICT‑enabled crime contributes to institutional instability, crisis dynamics, and geopolitical or geoeconomic manipulation, blurring boundaries between organized crime, influence operations, and digital governance.

Criminal justice encompasses the laws, procedures, institutions, and policies operating before, during, and after a crime. Central to this system is the protection of rights for suspects, convicted individuals, and victims. This minitrack highlights how digital ecosystems shape these protections, often unevenly. We invite conceptual, theoretical, empirical, and methodological contributions that deepen understanding of how ICT and AI are transforming justice institutions and how their design, governance, and accountability can promote a more equitable system.

Minitrack Co-Chairs:

Carlos Torres (Primary Contact)
Baylor University
carlos_torres@baylor.edu

Daniel Pienta
University of Tennessee, Knoxville
dpienta@utk.edu

Michael Dinger
Baylor University
Michael_dinger@baylor.edu

Christine Dugoin-Clement
Sorbonne Business School
christine.dugoin-clement@iae.pantheonsorbonne.fr

From Digital Divide to Digital Equity and Inclusion Minitrack

Digital divide refers to the gap between those who have access to and can effectively use digital technologies and those who cannot. While significant progress has been made globally in the diffusion of information and communication technologies (ICTs), digital inequity remains persistent and multifaceted. Beyond technological infrastructure and connectivity, digital exclusion may also have social, economic, educational, and corporeal bases that shape who benefits from digital services and who is excluded or disadvantaged. These disparities disproportionately affect vulnerable populations, including but not limited to youth, older adults, persons with disabilities, low-income communities, rural and indigenous communities, sexual minorities, marginalized castes, migrants and refugees, stateless individuals, and underserved regions in both developing and developed contexts.

Emerging Artificial Intelligence (AI) technologies further highlight the double-edged nature of digitalization in shaping digital divides and digital equity. While AI offers opportunities to enhance inclusion through improved access to education and digital technologies, it may also deepen inequalities due to uneven access to data, infrastructure, and skills, as well as algorithmic bias and the concentration of AI capabilities. Examining this dual role of AI is essential for advancing inclusive and equitable digital futures.

Moreover, the use of AI in the workplace is also creating a new future of work. There is a heightened need to ensure this new digitally enabled future of work unfolds in a manner that promotes digital equity and inclusion, especially for those at risk of being digitally disenfranchised. This entails not only the changing demands of digital literacy and skills, but also the transformation of individual identities and social relations in the AI-driven digital society. Given the inequitable distribution of AI’s impact across countries and population groups, intersectional or feminist approaches to digital equity and inclusion are welcome. Furthermore, alternative and human-centric imaginaries for the digital future, as well as embodied and affective approaches to digital inclusion, are also worth exploring.

As societies and institutions become increasingly digitalized and cities transform into smart cities, the urgency of moving from diagnosing digital divides to enabling digital equity and inclusion has never been greater. This minitrack invites research papers and review articles that elucidate the theory and practice of digital divides and advance digital equity and inclusion. Design science studies on digital social innovation, especially participatory design and community-based approaches that promote digital equity and inclusion, are much welcomed. Submissions may address, but are not limited to, the following topics:

  • Beyond access: social, economic, educational, and corporeal dimensions of digital exclusion
  • Digital equity and inclusion practices, policies and strategies
  • Role of AI in digital divide or digital equity and inclusion
  • Digital skills, reskilling, and unequal transitions into AI-mediated work
  • Datafication of cities and uneven civic participation
  • Digital citizenship, surveillance, and differential inclusion/exclusion
  • Inclusive digital platforms for health, education, and social care
  • Identity, dignity, and subjectivity in AI-enabled workplaces
  • Intersectional, feminist, or decolonial perspectives on digital inequity
  • Digital inequality as a dynamic, relational, and political process
  • From “digital divide” to “digital justice” and “digital rights”
  • Technology use, well-being, and psychosocial exclusion
  • Lived experiences of “being left behind” in digital societies
  • Digital inclusion/exclusion in crisis, conflict, or humanitarian contexts
  • Participatory, co-design, and community-based digital innovation
  • Ethical, political, and philosophical foundations of inclusive digital societies

Minitrack Co-Chairs:

Calvin M.L. Chan (Primary Contact)
Singapore University of Social Sciences
calvinchanml@suss.edu.sg

Tianjian (TJ) Zhang
California State University Dominguez Hills
tzhang@csudh.edu

Yingqin Zheng
University of Essex
y.zheng@essex.ac.uk

Gender and Technology Minitrack

Gender and technology provide a powerful lens for examining how information systems shape, reproduce, and sometimes challenge social inequalities. Gender equity in technology is not merely a diversity concern; it is a fundamental issue of social justice. As information systems increasingly mediate work, governance, care, participation, and everyday life, those who design, deploy, and govern digital technologies play a decisive role in shaping whose values, identities, and interests are legitimised—and whose are marginalised.

This minitrack seeks to advance rigorous and impactful research on gender and technology within the information systems discipline. It foregrounds scholarship that conceptualises gender as a socially constructed, relational, and intersectional phenomenon, rather than as a fixed or binary biological category. We particularly encourage work that draws on gender-based and critical theories—such as Individual Differences Theory of Gender and IT, Gender Role Theory, feminist theories, intersectionality, and care ethics—to deepen theoretical understanding and provide nuanced explanations of gendered dynamics in socio-technical systems.

The minitrack welcomes conceptual, empirical, and design-oriented research that examines gendered processes and outcomes at individual, organisational, institutional, and societal levels, including studies situated in marginalised or underrepresented contexts. Contributions may explore how technologies create gendered opportunities and risks, how power and legitimacy are negotiated through digital systems, and how interventions in design, governance, policy, or practice can help attenuate persistent gender inequities. Topics include, but are not limited to:

  • Intersectional analyses of gender, race, class, sexuality, disability, and technology
  • Feminist, critical, and decolonial perspectives in information systems research
  • Gender, power, and legitimacy in the design, development, and governance of IS
  • Gendered histories of technology and technological work
  • Gender, identity, and sensemaking in technology use
  • Gendered implications of digital platforms, algorithms, and AI systems
  • Bias, exclusion, and discrimination embedded in data, models, and digital infrastructures
  • Care ethics, relationality, and responsibility in digital design and use
  • Gender and technology in marginalised, low-resource, or Global South contexts
  • Hegemonic masculinity and norms of technical competence in IS and IT professions
  • Gendered leadership, authority, and decision-making in technology organisations
  • Gender, work, and careers in technology-intensive settings
  • Gendered opportunities and risks associated with emerging technologies
  • Imposter syndrome, belonging, and legitimacy in technical domains
  • Gendered pathways into, through, and out of STEM and IS careers
  • Digital entrepreneurship and gendered access to resources and opportunity
  • Novel conceptualisations and operationalisations of gender in IS research
  • Organisational, institutional, and policy interventions for advancing gender equity in technology

Papers accepted to this minitrack will be published in a special issue of the Information Systems Management journal.

Minitrack Co-Chairs:

Regina Connolly (Primary Contact)
Dublin City University
regina.connolly@dcu.ie

Mina Jafarijoo
Stockton University
Mina.Jafarijoo@stockton.edu

Cliona McParland
Dublin City University
Cliona.McParland@dcu.ie

ICT and Social Justice Minitrack

This minitrack provides a forum for open and vibrant discussion for research related to the use of Information and Communication Technologies (ICT) in understanding and promoting social justice. Social justice has become particularly relevant to information systems (IS) researchers in light of the emergence of increasingly autonomous and sophisticated ICT, including biometrics and deep learning-based AI systems.

Social justice is the belief that everyone deserves fair and equal treatment, and it serves as a theoretical grounding for burgeoning research related to the oppressive and dehumanizing nature of modern ICT. Such technologies are developed and deployed based on the mass acquisition and curation of human-centric data, in some cases without consent from individuals, which serves as an affront to human dignity and the very essence of being human in a just society. Thus, ICT and social justice research refers to studies about actions that promote equal rights, equal opportunities, and equal treatment between individuals, organizations, and the technologies themselves, as well as studies that highlight the use of ICT to uncover social injustice.

The guiding principles of social justice are human rights; access to basic elements such as food, water, shelter, safety, education, and opportunity; equal participation in decision-making; and equity that reduces systemic barriers so that every individual is treated fairly and equitably.

So why is social justice part of our remit as IS researchers? Walsham (2005) says that ICTs are involved in the way that we as individuals carry out our work and leisure activities, in the way that we organize ourselves in groups, in the forms that our organizations take, in the types of societies we create, and thus in the future of the world. ICTs are therefore deeply implicated in social justice, as IS inscribe our understanding of the world, and our attendant prejudices. Emergent ICT such as biometrics and modern AI systems are often, by design, developed through the collection and extraction of increasing amounts of human data, and in turn can unilaterally shape our perceptions of the world, and thus pose imminent and existential threats to social justice and humanity.

This minitrack invites submissions of original work concerning the intersection of IS research with social justice. We welcome studies on the use of ICT to uncover inequalities and injustice, to promote justice at all levels (e.g., racial, climate, age), and to advance equality and equity for those with fewer privileges, such as people of color (POC), refugees and asylum seekers, unhoused people, and people with disabilities. We also welcome critical approaches to these topics. Our goal is to spur discussion through research explorations that can enhance understanding and enliven new opportunities to derive novel ways of preserving and improving individual and societal well-being. The relevant topics for the minitrack include, but are not limited to, the following areas:

  • ICT and social inclusion
  • ICT and racial injustice
  • ICT and equality and equity
  • ICT and climate justice
  • ICT and voting rights
  • ICT and income gap
  • ICT and ageism
  • ICT and individuality, humanness, and human dignity
  • Feminist perspectives in data justice

Minitrack Co-Chairs:

Jay Killoran (Primary Contact)
University of Victoria
jkilloran@uvic.ca

Andrew Park
University of Victoria
apark1@uvic.ca

Jan Kietzmann
University of Victoria
jkietzma@uvic.ca

Neurodiversity at the Core: Rethinking Digitalized Environments and AI for Experiential Pluralism Minitrack

This minitrack highlights experiential pluralism, the recognition that human engagement with the world— through cognition, affect, sensory processing, and social interaction—is inherently diverse. Neurodiversity encompasses natural variations in neurological functioning that influence perception, communication, and behavior. This includes, but is not limited to, autism, ADHD, dyslexia, dyspraxia, and other neurodevelopmental variations, as well as cognitive, affective, sensory, and social processing styles that do not necessarily align with specific diagnoses.

Digitalized workplaces, technology-mediated services, and AI-driven decision-making processes are often built on unspoken assumptions about how individuals think, communicate, and process information. Yet, human experiences are fundamentally diverse—no two individuals perceive, process, or interact with the world in the same way, and digitalized environments should reflect this diversity.

This minitrack invites research that places neurodiversity at the core of discussions on the evolution of digital services, AI, digitalized workplaces, and online platforms across various domains. We encourage contributions that investigate:

  • Neurodiversity in digitalized environments: How do digital workspaces, health services, educational platforms, and online communities include or exclude diverse cognitive, affective, sensory, and social styles? What roles do organizational structures and digital tools play in shaping neuro-inclusive experiences?
  • AI, automation, and neurodiversity: How do AI-driven systems (e.g., hiring algorithms, content recommendations, automation tools) reflect or fail to reflect experiential pluralism, and how can they be adapted?
  • Time, attention, and cognitive rhythms in digital interactions: How do different cognitive, affective, sensory, and social styles interact with expectations around synchronicity, responsiveness, and multitasking in digital environments?
  • Beyond accessibility: rethinking digital system design through experiential pluralism: How can digital platforms, from dating apps to healthcare portals, be designed from inception with neurodiverse participation rather than merely adapted for inclusivity?

Minitrack Co-Chairs:

Maylis Saigot (Primary Contact)
University of Queensland
m.saigot@uq.edu.au

Rob Gleasure
Copenhagen Business School
rg.digi@cbs.dk

Claudia Lemke
Berlin School of Economics and Law (HWR Berlin)
claudia.lemke@hwr-berlin.de

The Bright and Dark Side of Social Media in the Marginalized Contexts Minitrack

Social media facilitate social interactions, collaboration, and communication among individuals and/or technical systems. Social media include (among others) Twitter (X), Facebook, Reddit, blogs, social network services, and wikis. In today’s digital age, individuals use social media to combat loneliness or emotional distress, form virtual social relationships, collaborate with others (individuals or technical agents), socialize, or seek information. Spending time on social media is potentially a double-edged sword. Positively, social media connect individuals worldwide, facilitating learning, the spread of creative ideas, inclusivity, and access to resources. Negatively, however, social media marginalize individuals and groups through manipulation, exclusion, and exploitation across all demographics.

Marginalized contexts refer to any situation or context where certain individuals or groups are treated differently based on (among many others) their genders, political ideologies, belief systems, religion, sexual orientation, and physical or mental disabilities. It is any situation with an unequal power dynamic among members of different groups. Academic research addressing social media in marginalized contexts is needed to help information systems research be an agent for social change. In this space, there are many important, yet unanswered, research questions.

We invite papers on all types of social media, investigating their positive and negative aspects in marginalized contexts. We welcome empirical, theoretical, or position papers. Topics of interest include, but are not limited to, the following:

  • Entrepreneurs experiencing biases when discussing ideas on social media
  • Unfairness associated with rating systems on social commerce platforms
  • Spread of hatred and racism on social media
  • Biases associated with de-platforming and re-platforming on social media
  • Generative artificial agents responding differentially on social media
  • How social media may be used to promote or stifle sustainable initiatives through (un)civil discourse
  • Spear phishing attacks and other security threats targeted towards vulnerable groups based on their social media activity
  • The use of analytics on social media to hinder or facilitate digital (in)equity and social (in)justice
  • The negative unintended consequences of using artificial intelligence on social media
  • Social media use that facilitates or inhibits the spread of human trafficking
  • Cyberbullying on social media and defense mechanisms
  • The spread of gender inequities and gender equality on social media
  • How social media provides emotional support for marginalized groups
  • How perceived inequities in the judicial systems are communicated and discussed on social media
  • Ethical, legal, and freedom-of-speech issues on social media
  • How social media might spread social (in)justice
  • The positive and negative impacts of social media on law enforcement and other government agencies
  • The role that social media plays in the dissemination of fake news, disinformation, and conspiracy theories
  • Crowdfunding for marginalized groups and differential patterns of lending
  • The role that social media plays in promoting or inhibiting cancel culture
  • How social media facilitates or inhibits different types of social movements
  • The differential role that social media plays in depression, isolationism, and disconnectedness for underrepresented groups

The above list of suggested topics is not exhaustive. We encourage authors to define marginalized contexts broadly. We welcome all theoretical and methodological approaches.

Minitrack Co-Chairs:

Qin Weng (Primary Contact)
Baylor University
qin_weng@baylor.edu

Tom Mattson
University of Richmond
tmattson@richmond.edu

Jie Ren
Fordham University
jren11@fordham.edu

Values in Design: Equity, Ethics, and Justice Minitrack

Across technology, design, and engineering fields, recent focus on justice, equity, and fairness in political discourse has galvanized critical interrogations of established (and often uncontested) methods and frameworks that reify harmful power structures. This minitrack will provide a platform for researchers, designers, and engineers engaging with critical design theory and methods to influence (1) the design of our modern technology systems, (2) the education of future designers in this space, and (3) the interrogation of the very structures within which these technology systems operate.

This minitrack focuses on issues of equity, ethics, and justice in research in the fields of Engineering Design, Computer Supported Cooperative Work (CSCW), and Information Systems (IS). While work across these areas could influence the design of our modern technology systems, ambiguity remains about how to develop, measure, and enact just systems, limiting progress in this space.

This minitrack will bring together researchers from Engineering Design, CSCW, and IS to explore and bridge conceptual, empirical, and practical barriers in assessing just designs and just design processes in engineering and computing. This minitrack will create a space for the formalization of concepts and intellectual discourse surrounding design justice and values in design for those working in related fields, including but not limited to 1) Engineering design researchers who develop theories, methods, and tools for increasing the effectiveness of engineering design processes, 2) Social science and philosophy researchers who study the application of ethical theories and frameworks to modern human endeavors, 3) Practitioners from engineering and computing industries who use next-generation design tools, and 4) Those who use Research through Design (RtD) and other speculative frameworks (such as Afrofuturism and Posthumanism) to challenge injustices and advance equity and inclusion goals.

Papers that include design justice, ethics, and equity as the focal point of inquiry from across the engineering, computing, and information systems application areas are welcome. Papers in all formats, using a breadth of intellectual traditions, methods, and epistemologies, are encouraged, including empirical studies, design research, theoretical frameworks, case studies, ethnography, and research through design. In addition, all papers accepted to this minitrack will be considered for expedited review at the Journal of Mechanical Design as part of a special issue in the Design Theory and Methodology topic area.

Potential topics include, but are not limited to:

  • Theories of design justice and ethical issues in emerging technology (e.g., Artificial Intelligence)
  • Methods for addressing design justice, equity, and inclusion throughout all phases of research
  • Operationalization of justice, inclusion, equity, and ethics in design outcomes, processes, designers, or pedagogies
  • Insights about Design Justice coming from practice and lived experiences
  • Applications of frameworks such as Values in Design, Value Sensitive Design, and other values-driven approaches to the design of technology (e.g., Artificial Intelligence, Robotics, etc.)
  • Community-collaborative approaches such as Community Engaged Participatory Research and Action Research, among others
  • Challenges and opportunities for participatory and collaborative design approaches to contribute to Design Justice and ethical design practices

Minitrack Co-Chairs:

Christine Toh (Primary Contact)
James Madison University
wdcmt8@jmu.edu

Andrea Grover
University of Nebraska at Omaha
andreagrover@unomaha.edu

Jaime Snyder
University of Washington
jas1208@uw.edu

Julia Kramer
University of Michigan
kramerju@umich.edu