TRACK CHAIRS
K.D. Joshi
The College of Business
University of Nevada, Reno
1664 N Virginia St
Reno, NV 89557
kjoshi@unr.edu
Nancy Deng
College of Business Administration & Public Policy
California State University, Dominguez Hills
1000 E. Victoria Street
Carson, California, 90747
ndeng@csudh.edu
The latest developments in Information and Communication Technologies (ICT) such as automation and artificial intelligence have transformed our work, workplaces, institutions, societies, and communities. However, the favorable and unfavorable effects of ICTs are not distributed equally or uniformly across all contexts or populations in our society. Marginalized populations such as underrepresented, vulnerable, and underserved communities often bear the greatest burdens of technological change. Simultaneously, technology also provides powerful ways of safeguarding and improving humanity. This track focuses on socio-technical issues in marginalized contexts not only to uncover digital inequities and social injustices (e.g., the problem of bias in algorithmic systems, which gives rise to various forms of digital discrimination), but also to find ways to build systems of empowerment through technology (e.g., designing and building technologies via value-sensitive design).
This track calls for research that mitigates the risks of constructing a future where technological spaces, digital applications, and machine intelligence mirror a narrow and privileged vision of society with its biases and stereotypes. In this track, we create an outlet for all scholars across various disciplines to conduct research that deeply engages ICTs in marginalized contexts. We welcome papers from a range of perspectives, including conceptual, philosophical, behavioral, and design science and beyond.
Opportunities for Fast Track to Journal Publications: Authors of selected papers accepted by this track's minitracks will be invited to submit a significantly extended version (min. +30%) of their paper for consideration for publication in one of the following journals. Submitted papers will be fast-tracked through the review process.
This minitrack attracts and presents research on understanding and addressing the discrimination problems arising in the design, development, and use of artificially intelligent systems. A technology is biased if it unfairly or systematically discriminates against certain individuals by denying them an opportunity or assigning them a different and undesirable outcome. As we delegate more and more decision-making tasks to autonomous computer systems and algorithms, such as using artificial intelligence for employee hiring and loan approval, digital discrimination is becoming a serious problem. In her New York Times best-selling book “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy,” Cathy O’Neil refers to such math-powered applications as “Weapons of Math Destruction” and provides examples of how these mathematical models encode human prejudice, misunderstanding, and bias into the software systems that increasingly manage, and harm, our lives.
According to Cambridge Dictionaries Online, discrimination is treating a person or particular group of people differently, especially in a worse way than the way in which you treat other people, because of their race, gender, sexuality, etc. Digital discrimination refers to discrimination between individuals or social groups that stems from unequal access to Internet-based resources, from biased practices in data mining, or from inherited prejudices in a decision-making context. It is a form of discrimination in which users are treated unfairly, unethically, or simply differently based on personal data such as income, education, gender, age, ethnicity, religion, or even political affiliation during the process of automated decision making. Digital discrimination in AI refers to the systematic disadvantages that algorithms impose on certain groups due to biases emerging throughout the algorithm’s development lifecycle.
Artificial Intelligence (AI) decision making can cause discriminatory harm to many vulnerable groups. In a decision-making context, digital discrimination can emerge from the inherited prejudices of prior decision makers, designers, or engineers, or it can reflect widespread societal biases. One approach to addressing digital discrimination is to increase the transparency of AI systems. However, we need to be mindful of the user populations for whom transparency is being implemented. In this regard, research has called for collaborations with disadvantaged groups whose viewpoints may lead to new insights into fairness and discrimination.
Another approach to mitigating digital discrimination in AI is algorithmic justice, which seeks to ensure fairness, equity, and accountability in AI-driven decision-making. Machine learning models often inherit biases from historical data, leading to unfair outcomes that disproportionately impact marginalized groups. Despite AI’s perceived neutrality, research has shown that it can reinforce and even amplify systemic biases, underscoring the need for governance frameworks that promote fairness, transparency, and accountability in AI deployment.
Potential ethical concerns also arise in the use of generative AI built on Large Language Models (LLMs), such as ChatGPT, the virtual AI chatbot released in November 2022 by the startup OpenAI that reached 100 million monthly active users just two months after its launch. Professor Christian Terwiesch at Wharton found that ChatGPT would pass a final exam in a typical Wharton MBA core curriculum class, which sparked a national conversation about the ethical implications of using AI in education. While some educators and academics have sounded the alarm over the potential abuse of ChatGPT for cheating and plagiarism, industry practitioners from the legal industry to the travel industry are experimenting with ChatGPT and debating the impact of AI on business and the future of work. In essence, a Large Language Model is a deep learning model trained on large volumes of text. Bias inherited from that data can lead to emerging instances of digital discrimination, especially as various LLM-based models are trained on data from different modalities (e.g., images, videos, etc.). Furthermore, the lack of oversight and regulation can also prove problematic. Given the rapid development and penetration of AI chatbots, it is important to investigate the boundaries between ethical and unethical uses of AI, as well as potential digital discrimination in the design, development, and use of LLM applications.
Addressing the problem of digital discrimination in AI requires a cross-disciplinary effort. For example, researchers have outlined social, organizational, legal, and ethical perspectives on digital discrimination in AI. In particular, prior research has called attention to three key aspects: how discrimination arises in AI systems; how the design of AI systems can mitigate such discrimination; and whether our existing laws are adequate to address discrimination in AI.
This minitrack welcomes papers in all formats, including empirical studies, design research, theoretical frameworks, and case studies, from scholars across disciplines such as information systems, computer science, library science, sociology, and law. Potential topics include, but are not limited to:
- AI-based Assistants: Opportunities and Threats
- AI Explainability and Digital Discrimination
- Algorithmic justice
- AI Systems Design and Digital Discrimination
- AI Use Experience of Disadvantaged / Marginalized Groups
- Biases in AI Development and Use
- Digital Discrimination in Online Marketplaces
- Digital Discrimination and the Sharing Economy
- Digital Discrimination with Various AI Systems (LLM based AI, AI assistants, etc.)
- Effects of Digital Discrimination in AI Contexts
- Ethical Use/Challenges/Considerations and Applications of AI Systems
- Erosion of Human Agency and Generative AI Dependency
- Generative AI (e.g., ChatGPT) Use and Ethical Implications
- Organizational Perspective of Digital Discrimination
- Power Dynamics in Human-AI Collaboration
- Responsible AI Practices to Minimize Digital Discrimination
- Responsible AI Use Guideline and Policy
- Societal Values and Needs in AI Development and Use
- Sensitive Data and AI Algorithms
- Social Perspective of Digital Discrimination
- Trusted AI Applications and Digital Discrimination
- User Experience and Digital Discrimination
Minitrack Co-Chairs:
Sara Moussawi (Primary Contact)
Carnegie Mellon University
sara7@cmu.edu
Jason Kuruzovich
Rensselaer Polytechnic Institute
kuruzj@rpi.edu
Minoo Modaresnezhad
University of North Carolina Wilmington
modaresm@uncw.edu
The ever-changing landscape of information and communication technologies (ICTs) and their increasing importance in life are widely recognized as integral to academic, economic, and civic advancement in society. However, the digital divide in ICT usage persists. The digital divide refers to the gap between those who have access to ICTs and those who do not. In recent years, while the first-level digital divide (physical access to ICTs) has been decreasing worldwide, the second-level digital divide (digital skills to use ICTs) and third-level digital divide (outcomes of ICT use) remain prevalent and require a more nuanced examination by researchers.
The digital divide has long been a topic of discussion in both scholarship and policy. A tradition of monotopical measurement and access-oriented thinking has shaped conversations around the digital divide since the 1990s. While access-oriented discussions—such as broadband accessibility—remain necessary, it is increasingly evident across diverse fields that meaningful participation in a digital society requires not just access but also digital literacy.
Digital literacy encompasses both the cognitive and technical abilities needed to use digital devices and to access, navigate, and utilize information from various online sources. Being able to apply digital literacy skills to achieve a desired outcome marks the final stage and the ultimate goal of advancing digital fairness in different digital environments. Even as advances have been made in reducing the second- and third-level digital divides, disparities in digital competence persist among aging individuals, people with disabilities, minorities, rural residents, veterans, and other marginalized populations. These divides have significant social, economic, and political implications and further widen barriers to participation in ICTs.
This minitrack will explore the crucial role of digital literacy to empower and transform marginalized communities that face various challenges, including crises, poverty, discrimination, immigration struggles, illness, climate change, and other societal, technological, and political shifts. It will also examine the opportunities emerging from changes in the landscape of work, education, and social interaction, and how these changes impact the attainment of digital fairness and ethical futures.
Several critical areas demand a deeper investigation to enhance our understanding of digital technologies in promoting socially and ethically responsible practices. A key avenue for further exploration is how digital platforms, mobile applications, and online networks can serve as tools for social intermediation, effectively connecting marginalized populations with essential resources such as education, healthcare, and employment. Addressing barriers such as affordability, accessibility, and digital literacy is essential in leveraging technology to level the playing field and ensure full societal participation.
We seek contributions that discuss how digital literacy can drive meaningful change and resilience among marginalized groups and propel progress toward achieving digital fairness and ethical futures. The goal is to share research insights while acknowledging that these challenges vary across different regions of the world and that no universal solutions exist.
We welcome submissions from scholars in diverse disciplines—including information science, computing, agricultural technology (agtech), financial technology (fintech), human-computer interaction, education, public health, urban studies, rural studies, and other related fields—who conduct digital fairness research and engage with marginalized communities.
This call for papers invites original research papers, case studies, and review articles that investigate digital literacy and its impact on marginalized populations, as well as initiatives that address these vulnerabilities, moving towards digital fairness and ethical futures. We welcome submissions that align with this focus and offer the following examples of topics of interest, which are intended to be illustrative rather than exhaustive:
- Expanding, redefining, and critically examining digital literacy and digital fairness
- Digital literacy competencies for socially and ethically responsible practices
- AI Literacy and its role and connection to digital divide, fairness, and ethical futures
- Digital literacy and the future of work
- Digital trust, privacy, and cybersecurity
- Digital citizenship
- Digital infrastructures for advancing digital literacy and fairness
- Assessment frameworks for measuring digital literacy and fairness
- The evolving role of digital navigators in communities
- Navigating digital challenges through the mastery of digital literacy
- Digital fairness policies and strategies: best practices and lessons learned
- ICT adoption and use: barriers, opportunities, and challenges
- Unintended consequences as a result of ICT use or efforts to bridge the digital divide
Minitrack Co-Chairs:
Mega Subramaniam (Primary Contact)
University of Maryland, College Park
mmsubram@umd.edu
Shanton Chang
University of Melbourne
shanton.chang@unimelb.edu.au
Social media facilitate social interactions, collaboration, and communication among individuals and/or technical systems. Social media include (among others) Twitter (X), Facebook, Reddit, blogs, social network services, and wikis. In today’s digital age, individuals use social media to attempt to combat loneliness or emotional distress, to form virtual social relationships, to collaborate with others (individuals or technical agents), to socialize, or to seek information. Spending time on social media is potentially a double-edged sword. On the positive side, social media connect individuals worldwide and facilitate learning, the spread of creative ideas, inclusivity, and access to resources. On the negative side, however, social media can marginalize individuals and groups through manipulation, exclusion, and exploitation across all groups and demographics.
Marginalized contexts refer to any situation or context where certain individuals or groups are treated differently based on (among many others) their genders, political ideologies, belief systems, religion, sexual orientation, and physical or mental disabilities. It is any situation with an unequal power dynamic among members of different groups. Academic research addressing social media in marginalized contexts is needed to help information systems research be an agent for social change. In this space, there are many important, yet unanswered, research questions. We invite papers on all types of social media, investigating their positive and negative aspects in marginalized contexts.
The goal of our minitrack is to combine social media and marginalized contexts. We invite papers on all types of social media platforms and different marginalized contexts. We strive to have an intellectual conversation about the positive and negative aspects of social media in marginalized contexts. We aim to facilitate a scholarly discussion of social media to identify problems and innovative solutions to maintain safe and productive social media environments. We welcome empirical, theoretical, or position papers incorporating broad definitions of both social media and marginalized contexts. Topics of interest include, but are not limited to, the following:
- Entrepreneurs experiencing biases when discussing ideas on social media
- Unfairness associated with rating systems on social commerce platforms
- Spread of hatred and racism on social media
- Biases associated with de-platforming and re-platforming on social media
- Generative artificial agents responding differentially on social media
- How social media may be used to promote or stifle sustainable initiatives through (un)civil discourse
- Spear phishing attacks and other security threats targeted towards vulnerable groups based on their social media activity
- The use of analytics on social media to hinder or facilitate digital (in)equity and social (in)justice
- The negative unintended consequences of using artificial intelligence on social media
- Social media use that facilitates or inhibits the spread of human trafficking
- Cyberbullying on social media and defense mechanisms
- The spread of gender inequities and gender equality on social media
- How social media provides emotional support for marginalized groups
- How perceived inequities in the judicial systems are communicated and discussed on social media
- Ethical, legal issues, and freedom of speech issues on social media
- How social media might spread social (in)justice
- The positive and negative impacts of social media on law enforcement and other government agencies
- The role that social media plays in the dissemination of fake news, disinformation, and conspiracy theories
- Crowdfunding for marginalized groups and differential patterns of lending
- The role that social media plays in promoting or inhibiting the cancel culture
- How social media facilitates or inhibits different types of social movements
- The differential role that social media plays in depression, isolationism, and disconnectedness for under-represented groups
The above list of suggested topics is not an all-inclusive list. We encourage authors to define digital equity, social justice, and marginalized contexts broadly. We welcome all theoretical and methodological approaches.
Minitrack Co-Chairs:
Jie Ren (Primary Contact)
Fordham University
jren11@fordham.edu
Tom Mattson
University of Richmond
tmattson@richmond.edu
Qin Weng
Baylor University
qin_weng@baylor.edu
Across the workforce, new developments in collaboration tools, digital labor platforms, and artificial intelligence are changing the nature of work. Large-scale remote work spread widely during the COVID pandemic and is likely to remain an integral part of how many companies manage work. Additionally, ongoing economic uncertainty and crises have accelerated the adoption of a wide range of tools and practices that are altering how workers engage with stakeholders. The changing nature of work presents both challenges and opportunities for building fairer labor markets.
On the one hand, the changing nature of work allows a variety of tasks to be completed remotely, expanding access to work opportunities for individuals who may face limited opportunities due to distance, lack of reliable transportation, or care responsibilities. In this manner, broader adoption of collaborative tools and digital platforms may enable meaningful employment opportunities for individuals who would otherwise be excluded from the digital workforce. On the other hand, underlying obstacles in labor markets, derived from factors such as differing wage rates, lack of access to education, differences in power among stakeholders, varying digital infrastructure across geographies, or regulatory variability, may be amplified and codified as work processes evolve. Further technology development, such as AI or robotics, may also automate tasks, disrupting the number and nature of opportunities for future employment.
This minitrack is focused on how the changing nature of work may become a mechanism for enabling fairer work practices. This objective takes many forms, both in examining the socio-technical factors that enable fair employment and the factors that create barriers to the digital workforce. We welcome submissions examining factors at any level of analysis, spanning from global or national factors influencing labor markets to individual or team factors influencing work practices. Growing popular concern about the changing nature of work is centering these topics in our global understanding of labor markets. Increasing oversight by regulatory bodies demonstrates the importance, for both academia and policy makers, of not only understanding emerging work conditions but also articulating the impact of proposed interventions on labor markets.
As discussed above, technology is changing labor markets and work practices. While technology may enable greater employment access, it may also foster environments of power asymmetry: new technology may privilege platform owners, who have the power to control digital work environments (such as sourcing models, compensation models, and work policies), but disadvantage workers. Thus, we call for research that critically examines current work conditions and policies in the changing nature of work and proposes new work processes, platform designs, and policies to enhance digital work environments and foster fair workforce access.
Finally, it is important for both academia and industry to better understand the impact of the post-pandemic transformation on the changing nature of work. In the long term, technological developments at the intersection of remote work platforms and AI can potentially shape work at different levels. Research on the future of work and the essential skills and abilities of the future workforce will update our knowledge and broaden our vision of the next generation of the workforce.
Potential issues and topics on the changing nature of work and inclusive labor markets and work practices include, but are not limited to:
- Employment relations in distributed digital organizations
- Ethical and regulatory issues in the labor relations in changing work environments
- The changing nature of work in developing economies
- Algorithmic based discrimination in technology centered work environments
- AI's impact on labor markets and career pathing
- Wanted/unwanted consequences of AI and ML on work (e.g., work displacement, skill degradation)
- AI complementarities/substitution
- Algorithmic Management
- Changing work conditions
- Impacts of the digital divide on labor markets
- The changing nature of collective bargaining in a global workforce
- Worker identity and engagement in the changing nature of work
- Psychological aspects of emerging work environments on workers (e.g., Technostress, Well-being)
Minitrack Co-Chairs:
Joseph Taylor (Primary Contact)
California State University, Sacramento
joseph.taylor@csus.edu
Lauri Wessel
European University Viadrina Frankfurt
wessel@europa-uni.de
Phoebe Pahng
California State University, Sacramento
phoebe.pahng@csus.edu
Across technology, design, and engineering fields, recent focus on justice, equity, and fairness in political discourse has galvanized critical interrogations of established (and often uncontested) methods and frameworks that reify harmful power structures. This minitrack will provide a platform for researchers, designers, and engineers engaging with critical design theory and methods to influence (1) the design of our modern technology systems, (2) the education of future designers in this space, and (3) the interrogation of the very structures within which these technology systems operate. This minitrack will focus on issues of equity, ethics, and justice in research in the fields of Engineering Design, Computer Supported Cooperative Work (CSCW), and Information Systems (IS). While work across these areas has the ability to influence the design of our modern technology systems, ambiguity remains about how to develop, measure, and enact just systems, limiting progress in this space. This minitrack will bring together researchers from Engineering Design, CSCW, and IS to explore and bridge conceptual, empirical, and practical barriers in assessing just designs and just design processes in engineering and computing.
This minitrack will create a space for the formalization of concepts and intellectual discourse surrounding design justice and values in design for those working in related fields, including but not limited to: 1) engineering design researchers who develop theories, methods, and tools for increasing the effectiveness of engineering design processes; 2) social science and philosophy researchers who study the application of ethical theories and frameworks to modern human endeavors; 3) practitioners from the engineering and computing industries who use next-generation design tools; and 4) those who use Research through Design (RtD) and other speculative frameworks (such as AfroFuturism and Posthumanism) to challenge injustices.
Papers that include design justice, ethics, and equity as the focal point of inquiry from across the engineering, computing, and information systems application areas are welcome. Papers in all formats, using a breadth of intellectual traditions, methods, and epistemologies, are encouraged, including empirical studies, design research, theoretical frameworks, case studies, ethnography, and research through design. In addition, all papers accepted to this minitrack will be considered for expedited review at the Journal of Mechanical Design as part of a special issue in the Design Theory and Methodology topic area. Potential topics include, but are not limited to:
- Theories of design justice
- Methods for addressing design justice throughout all phases of research
- Operationalization of justice and equity in design outcomes, processes, designers, or pedagogy
- Insights about Design Justice coming from practice and lived experiences
- Frameworks such as Values in Design, Values Sensitive Design, and other values-driven approaches to the design of technology
- Community-collaborative approaches such as Community Engaged Participatory Research and Action Research, among others
- Challenges and opportunities for participatory and collaborative design approaches to contribute to Design Justice
Minitrack Co-Chairs:
Christine Toh (Primary Contact)
University of Nebraska at Omaha
ctoh@unomaha.edu
Jaime Snyder
University of Washington
jas1208@uw.edu
Andrea Grover
University of Nebraska at Omaha
andreagrover@unomaha.edu
Sita Syal
University of Michigan
syalsm@umich.edu
Julia Kramer
University of Michigan
kramerju@umich.edu
The increasing relevance of digital platforms in political and societal processes has created both new challenges and opportunities for fostering democratic engagement and social cohesion. While digital technologies were initially envisioned as tools for open discourse, they have also contributed to the fragmentation of public debate, the spread of disinformation, and rising polarization. The growing impact of algorithmic recommendation systems and generative content, online social network structures, and platform-driven interactions raises concerns about their influence on political decision-making, societal trust, and institutional integrity.
At the same time, digital platforms have the potential to facilitate civic engagement, provide access to diverse perspectives, and strengthen participatory democracy. Notably, technology and digital platforms play a dual role: on the bright side, they empower citizens to engage in democratic processes, support transparency, and foster accountability. On the dark side, they can also enable corruption, institutional decay, political instability, and the abuse of power through mechanisms such as algorithmic manipulation, echo chambers, and disinformation campaigns. Understanding these mechanisms is crucial for shaping a democratic digital future in which technology becomes a tool for resilience rather than decay.
Thus, we are interested in research that examines how digital platforms influence democracy, both positively and negatively, and how social cohesion and civic engagement can be fostered in digital environments. Questions of interest include:
- How do digital platforms influence political discourse, trust in institutions, and democratic participation?
- What mechanisms drive polarization in digital spaces, and how can they be mitigated?
- How can design interventions improve democratic deliberation and civic engagement on digital platforms?
- How do AI-driven recommendation systems shape public opinion, and what regulatory frameworks are needed?
- What role do social media platforms play in political activism, civic engagement, and election processes?
- How do misinformation and disinformation campaigns impact democratic resilience, and what countermeasures exist?
- How can digital technologies both contribute to and prevent corruption, institutional decay, and political instability?
- How can governments leverage digital tools to enhance citizen participation and inclusive decision-making?
- What are the ethical considerations and societal implications of platform governance in democracy?
- How can interdisciplinary approaches improve our understanding of digital democracy and social cohesion?
Minitrack Co-Chairs:
Jonas Fegert (Primary Contact)
Karlsruhe Institute of Technology
fegert@fzi.de
Olga Slivko
Erasmus University Rotterdam
slivko@rsm.nl
Stefan Stieglitz
University of Potsdam
stefan.stieglitz@uni-potsdam.de
Christof Weinhardt
Karlsruhe Institute of Technology
weinhardt@kit.edu
STEM fields offer numerous exciting and lucrative career opportunities, but unfortunately, these fields are often characterized by a lack of diversity and inclusivity. Educational institutions have encountered challenges in promoting STEM education among underserved populations. More recently, the research funding uncertainty in countries such as the United States has placed some STEM PhD programs at risk. This minitrack will focus on addressing barriers to equity and social justice in STEM education and careers, with a particular emphasis on underserved populations.
At the same time, the ‘half-life’ of knowledge is getting shorter with the current accelerated rate of technological advancement. STEM education needs to extend and expand beyond college education into supporting lifelong learning among working adults to keep pace with technological advancement and keep ahead of digital disruption.
The minitrack will explore new angles and approaches to promoting equity and social justice in STEM education and workforce development, including but not limited to the following topics:
- Cultivating interest in and fostering access to high-quality STEM education (student and/or workforce motivation, K-12 and lifelong STEM initiatives, innovative programs, etc.)
- Implementing inclusive and innovative curricula and practices in STEM education (culturally responsive pedagogy, high-impact practice, project-based learning, learner empowerment, psychological safety etc.)
- Addressing systemic barriers to underserved populations in STEM education across different demographic groups (barriers, strategies, policy change, community outreach, mentorship programs, etc.)
- Examining STEM career choices and interests and development of students and working adults (internship and apprenticeship programs, career development workshops, STEM career trajectory, digital reskilling and upskilling, industry partnership, professional development programme, etc.)
- Assessing and sustaining effective STEM programs (data and assessment, accountability, best practices in STEM education, interventions, frameworks and models, etc.)
- Broadening STEM education in the age of AI (AI literacy, AI skill divide, individual adaptability in an AI-integrated workplace, lifelong learning)
- Adapting STEM PhD education under research funding uncertainty (shrinking federal funds, institutional strategies to address budget constraints, individual resilience, etc.)
Minitrack Co-Chairs:
Nancy Deng (Primary Contact)
California State University, Dominguez Hills
ndeng@csudh.edu
Calvin M.L. Chan
Singapore University of Social Sciences
calvinchanml@suss.edu.sg
The interplay between gender and technology is a critical lens through which we can examine the structural and systemic factors that either empower or marginalize individuals in the technology space. Achieving gender balance in technology is not merely a matter of diversity—it is a pressing social justice issue. As information technology continues to shape every facet of our lives, those who design, build, and control technology ultimately define the future of work, society, and human interaction. Gender balance in the technology space is therefore imperative to ensure that the future of work and life is not decided for individuals who are not well represented in this space.
This minitrack is dedicated to fostering discourse and advancing research on gender and technology. It seeks to amplify scholarship that conceptualizes, theorizes, and operationalizes the gender construct as a social identity rather than merely as biological sex with a dichotomous category. In addition, we encourage research that leverages gender-based theories—such as the Individual Differences Theory of Gender and IT and Gender Role Theory—to articulate the conceptualization of gender and provide nuanced insights into the complexities of gender in the technology ecosystem. This minitrack invites gender-focused analyses of societal, organizational, and individual factors that not only advance our understanding of how gender shapes the technology milieu but also reveal interventions that can help attenuate gender inequities and imbalances. Topics of interest include, but are not limited to:
- Applying the Intersectionality perspective to advance gender analysis in IT research
- Designing “Gender-free” technology
- Feminist perspectives on gender and technology
- Gender analysis of the history of technology
- Gender analysis of the use and consumption of technology
- Gender analysis of design and construction of technology
- Gender attitudes toward technology
- Gender biases and stereotypes in the technology industry
- Gender, identity, and technology use
- Gender imbalance in the technology field
- Gender pay gap in the technology field
- Gender role congruity and technology career pathways
- Gendered nature of technology leadership
- Gendered opportunities and risks of new technologies
- Gendered patterns in the use of new technologies
- Hegemonic masculinity in the technology industry
- Imposter syndrome and women in technology
- New approaches to conceptualizing and operationalizing gender and technology
- Role of power in creating gender equity within the technology fields
- Work-life balance in the technology field
- Understanding and removing barriers to STEM careers for women
- Gendered roles and digital entrepreneurship
- Care ethics, gender, and technology
Papers accepted to this minitrack will be published in a special issue of the Information Systems Management journal.
Minitrack Co-Chairs:
Regina Connolly (Primary Contact)
Dublin City University
regina.connolly@dcu.ie
Mina Jafarijoo
Stockton University
Mina.Jafarijoo@stockton.edu
Cliona McParland
Dublin City University
Cliona.McParland@dcu.ie
Social justice is the belief that everyone deserves fair and equal treatment, and it serves as a theoretical grounding for burgeoning research on the oppressive and dehumanizing nature of modern ICT. Such technologies are developed and deployed through the mass acquisition and curation of human-centric data, in some cases without individuals' consent, which is an affront to human dignity and the very essence of being human in a just society. Thus, ICT and social justice research refers to studies of actions that promote equal rights, equal opportunities, and equal treatment among individuals, organizations, and the technologies themselves, as well as studies that highlight the use of ICT to uncover social injustice.
The guiding principles of social justice are human rights; access to basic elements such as food, water, shelter, safety, education, and opportunity; equal participation in decision-making; and equity that reduces systemic barriers so that every individual is treated fairly and equitably.
So why is social justice part of our remit as IS researchers? ICTs are involved in the way that we as individuals carry out our work and leisure activities, in the way that we organize ourselves in groups, in the forms that our organizations take, in the types of societies we create, and thus in the future of the world. ICTs are therefore deeply implicated in social justice, as IS inscribe our understanding of the world, and our attendant prejudices. Emergent ICT such as biometrics and modern AI systems are often, by design, developed through the collection and extraction of increasing amounts of human data, and in turn can unilaterally shape our perceptions of the world, and thus pose imminent and existential threats to social justice and humanity.
This minitrack invites submissions of original work concerning the intersection of IS research with social justice. We welcome studies about the use of ICT to uncover inequalities and injustice, to promote justice at all levels (e.g., racial, climate, age), and to advance equality and equity for those with fewer privileges, such as people of color (POC), refugees and asylum seekers, unhoused people, and people with disabilities. We also welcome critical approaches to these topics. Our goal is to spur discussion through research explorations that enhance understanding and open new opportunities to derive novel ways of preserving and improving individual and societal well-being. Relevant topics for the minitrack include, but are not limited to, the following areas:
- ICT and social inclusion
- ICT and racial injustice
- ICT and equality and equity
- ICT and climate justice
- ICT and voting rights
- ICT and income gap
- ICT and ageism
- ICT and individuality, humanness, and human dignity
- Feminist perspectives in data justice
Minitrack Co-Chairs:
Andrew Park (Primary Contact)
University of Victoria
apark1@uvic.ca
Jan Kietzmann
University of Victoria
jkietzma@uvic.ca
Jayson Killoran
Queen’s University and Oregon State University
j.killoran@queensu.ca
In recent years, the rapid evolution of information and communication technologies (ICT), and particularly the advent of cutting-edge AI technologies, has transformed not only the modus operandi of illicit actors but also the strategies employed by those seeking to interdict illegal or exploitative activities. ICT innovations have spurred new business models and practices that expand the markets for illicit behavior, escalating both the risk and the scope of victimization. At the same time, these technologies provide avenues for illicit actors to reach and exploit marginalized groups, who are already vulnerable to various forms of exploitation.
Law enforcement agencies and governments are reacting to these developments by attempting to update or reform outdated laws and policies, which often struggle to keep pace with both technological advances and the evolving nature of criminal conduct. Moreover, many legitimate organizations are exploring opportunities to deploy ICT to identify and mitigate the misuse of their products and services by criminal networks, thereby protecting their stakeholders and the broader community from harm.
Furthermore, the role of ICT in the administration of justice has traditionally been secondary and limited. However, the transformative impact of new AI tools is reshaping the legal landscape, with profound implications for the administration of justice. On one hand, AI-driven systems hold promise for enhancing judicial decision-making, streamlining legal processes, and expanding access to justice—particularly for underserved and marginalized populations. On the other hand, there is an urgent need to examine the potential risks associated with these technologies, including biases, discriminatory practices, and privacy infringements that may disproportionately affect vulnerable communities.
This minitrack is dedicated to exploring the intersection of ICT, criminal activities, and the administration of justice, with a particular focus on research that addresses:
- The Dual-Edged Impact of ICT and AI Technologies: Investigations into how ICT and AI technologies can both improve efficiency, accessibility, and fairness in legal processes, and simultaneously introduce challenges such as bias, discrimination, or privacy violations, particularly among marginalized populations.
- Evolving Criminal Behavior: Studies on how the integration of ICT and AI has enabled and altered criminal behavior, including how cybercriminals enter and explore cybercrime “careers” and how illicit actors exploit technological platforms to expand their networks and target vulnerable groups.
- Law Enforcement and Policy Responses: Analyses of how criminal justice institutions, law enforcement agencies, NGOs, and businesses leverage ICT and AI to detect, disrupt, or dismantle illicit networks, and the implications of these interventions for the rights and protection of suspects, convicted individuals, and victims.
- Legal and Institutional Reforms: Research that critically examines the adequacy of current legal frameworks and policies in the face of rapid technological change, with a focus on the role of ICT and new AI technologies in ensuring the protection of rights and access to justice for vulnerable populations.
- Victim Support and Access to Justice: Investigations into how ICT and AI can serve to support victims of crime and exploitation, ensuring that marginalized groups receive the legal assistance and protection they require.
Criminal activity and the administration of justice, as an umbrella term, encompasses the laws, procedures, institutions, and policies active before, during, and after the commission of a crime. Central to the concept of criminal justice is the protection of the rights of all individuals involved—suspects, convicted individuals, and victims alike.
This minitrack invites contributions that offer conceptual, theoretical, empirical, and methodological insights into how ICT and AI are reshaping these domains, with the goal of advancing both our understanding and practical implementation of a just and equitable system. We welcome submissions from a diverse range of perspectives and methodological approaches, aiming to foster a rich dialogue on the potential and challenges of integrating ICT into the realm of criminal justice and the broader administration of justice. Join us in exploring how emerging technologies can help secure a safer and more equitable society while addressing the risks associated with their deployment.
This minitrack invites submissions of original work concerning the intersection of information systems research with criminal activities and the administration of justice. Relevant topics for the minitrack include, but are not limited to, the following areas:
- ICT and gun violence
- ICT and cyber-bullying, -stalking, and -harassment
- The application of datafication and AI in criminal activities
- Cybercriminals and cybercriminal “careers”
- Recognizing and rehabilitating cybercriminals and illicit actors
- AI and predictive policing
- Big data and risk assessment
- Facial recognition in criminal justice
- Dataveillance, security, and privacy
- Datafication and AI applications in border control
- Generative AI-related scams and phishing attacks
- Generated online hate for large-scale “hate-raids”
- Social engineering attacks, such as using Generative AI voice models
- AI-generated attacks on reputation and honor (e.g., deepfakes)
- Jailbreaking Generative AI to elicit harmful responses or escalate privileges
- Generative AI biases against marginalized and/or specific groups (e.g., ethnic and political groups)
- Generative AI errors that disproportionately harm marginalized groups (e.g., reliance on hallucinated content)
- Generative AI use to exacerbate polarization (e.g., synthetic media)
- Illegal content generation (e.g., CSAM and NCII)
- Attacks against Generative AI
- AI in Judicial Decision-Making
- ICT-Enabled Access to Justice for marginalized communities
- Digital Evidence and Data Integrity
- Legal and Policy Reforms for ICT and AI
- Cybersecurity in Justice Systems
- ICT in Disrupting Illicit Networks
- Balancing Surveillance and Privacy
- Ethical Implications of AI in Criminal Investigations
- Legal Frameworks for ICT and AI
- ICT-Supported Victim Assistance
Minitrack Co-Chairs:
Carlos Torres (Primary Contact)
Baylor University
carlos_torres@baylor.edu
Michael Dinger
Baylor University
michael_dinger@baylor.edu
Christine Dugoin-Clement
Sorbonne Business School
christine.dugoin-clement@iae.pantheonsorbonne.fr
The world is facing a multitude of challenges. One of these is the rapid development of AI and its consequences. An increasing gap can be observed between those who have access to AI tools and solutions and can benefit from their potential advantages and those who are increasingly left behind, mainly people who belong to marginalized groups. For example, AI tools and solutions often fail to let different voices and perspectives be heard or included. A focus on inclusive AI offers a promising framework to address this: an approach based on inclusion, respect, appreciation, and collaboration.
This minitrack focuses on the role and importance of inclusive AI and the directions and opportunities based on it. We welcome all types of contributions – theoretical, conceptual, and empirical – that use various methods and methodologies as well as different perspectives and worldviews to present forward-looking thoughts and ideas that make inclusive AI feasible in different contexts and for different groups of people.
Topics of interest include, but are not limited to, the following:
- Strategies for inclusive AI at different levels
- Tools and solutions for inclusive AI
- Empowerment of marginalized groups through AI
- Access for marginalized groups to societal and economic initiatives through AI
- Development of inclusive AI skills and competences
- Factors influencing inclusive AI
- Training and further education in the field of inclusive AI
- Inclusive AI-based work environments
- Inclusive entrepreneurship and AI
- Cultural dynamics in the context of AI developments
- AI in the context of circular economy
- The role of AI in sustainable development
- Participatory approaches to inclusive AI development
- Research showcasing decolonial perspectives using local epistemologies
- Highlighting decolonial approaches to technology and society
Authors of selected high-quality papers will be encouraged to submit their work, after thorough revision and improvement according to the requirements and guidelines of The Bottom Line, for consideration in a regular issue of the journal. The papers will undergo the traditional double-anonymous peer review process; The Bottom Line does not offer a fast track.
Minitrack Co-Chairs:
Susanne Durst (Primary Contact)
Reykjavik University
susanned@ru.is
Jacques Yana Mbena
Society for Inclusive and Collaborative Entrepreneurship
yanajacques@yahoo.fr
Tarlan Ahmadov
Tallinn University of Technology
tarlan.ahmadov@taltech.ee
Machines are learning—but are we? As artificial intelligence (AI) rapidly advances, it is reshaping human learning, cognitive and skill development, and ways of thinking and communicating in profound ways. Generative AI (GenAI), a subset of AI based on large language models (LLMs), has unleashed new, human-like capabilities. These models can generate seemingly novel, meaningful content—text, images, and audio—based on training data. Platforms such as ChatGPT, Gemini, Copilot, and Scribe for text, as well as DALL-E and Midjourney for images, are becoming integral to work and education. However, with their widespread adoption comes growing concern over the implications for human intelligence, creativity, and knowledge production.
This minitrack explores the growing divide between and within human and machine learning, examining how AI advancements impact the future of human learning. As AI systems evolve, disparities in access to learning and language representation widen, raising concerns about who benefits from these technologies and who is left behind. Scholars across multiple disciplines have highlighted the risks of AI-driven deskilling, the erosion of critical thinking, and the normalization of AI-generated mediocrity; some in philosophy have even warned that AI presents an immediate threat to human creativity and decision-making.
This mini-track also examines the unintended consequences of AI—particularly GenAI and agentic AI—on human learning, cognition, and skill development. Exploring the ‘dark side’ of AI is crucial to addressing the complex, often adverse, societal impacts of AI. The emergence and proliferation of GenAI and agentic AI necessitate both the adaptation of existing theories and the development of new frameworks that reflect their distinctive characteristics. These technologies are not merely tools but transformative forces that alter how knowledge is produced, evaluated, and internalized.
Beyond education, AI is also reshaping global communication. AI and Natural Language Processing (NLP) influence how language is represented, understood, and used, yet their development often amplifies linguistic inequities. Current NLP models frequently underperform for marginalized languages, erase dialectal variations, and reinforce harmful linguistic profiling and biases. At the same time, AI holds emancipatory potential: when designed with linguistic justice in mind, it can empower marginalized communities, support language preservation, and create more inclusive digital spaces.
This minitrack welcomes interdisciplinary perspectives that critically examine how AI can be designed, developed, and governed to empower diverse linguistic communities rather than privileged dominant ones. By fostering interdisciplinary dialogue, we aim to push the boundaries of AI research and advocate for technologies that equitably serve all language communities. We welcome submissions on topics including but not limited to:
- AI-driven deskilling and disempowerment of individuals
- Challenges to human creativity and critical thinking in the era of AI-driven pattern generation
- AI and mediocrity as a new norm and the struggle for excellence
- Cognitive enhancement, deterioration, and dependence in GenAI-mediated environments
- Machine learning and human learning
- AI bias in multilingual and dialectal NLP models
- Ethical frameworks for linguistic justice in AI development
- Underrepresentation of marginalized languages in training data
- The role of AI in language preservation and revitalization
- Linguistic profiling and discrimination in automated decision-making
- Policy and regulatory approaches to linguistic equity in AI
- Community-driven NLP solutions and participatory AI design
By addressing these challenges and opportunities, this minitrack seeks to foster a critical and constructive discussion on AI’s role in shaping human learning and linguistic equity in the digital age.
Minitrack Co-Chairs:
K.D. Joshi (Primary Contact)
University of Nevada Reno
kjoshi@unr.edu
Nancy Deng
California State University, Dominguez Hills
ndeng@csudh.edu
This minitrack highlights experiential pluralism, the recognition that human engagement with the world – through cognition, affect, sensory processing, and social interaction – is inherently diverse. Neurodiversity encompasses natural variations in neurological functioning that influence perception, communication, and behavior. This includes, but is not limited to, autism, ADHD, dyslexia, dyspraxia, and other neurodevelopmental variations, as well as cognitive, affective, sensory, and social processing styles that do not necessarily align with specific diagnoses.
Digitalized workplaces, technology-mediated services, and AI-driven decision-making processes are often built on unspoken assumptions about how individuals think, communicate, and process information. Yet, human experiences are fundamentally diverse – no two individuals perceive, process, or interact with the world in the same way, and digitalized environments should reflect this diversity. This minitrack invites research that places neurodiversity at the core of discussions on the evolution of digital services, AI, digitalized workplaces, and online platforms across various domains. We encourage contributions that investigate:
- Neurodiversity in digitalized environments: How do digital workspaces, health services, educational platforms, and online communities include or exclude diverse cognitive, affective, sensory, and social styles? What roles do organizational structures and digital tools play in shaping neuro-inclusive experiences?
- AI, automation, and neurodiversity: How do AI-driven systems (e.g., hiring algorithms, content recommendations, automation tools) reflect or fail to reflect experiential pluralism, and how can they be adapted?
- Time, attention, and cognitive rhythms in digital interactions: How do different cognitive, affective, sensory, and social styles interact with expectations around synchronicity, responsiveness, and multitasking in digital environments?
- Beyond accessibility: rethinking digital system design through experiential pluralism: How can digital platforms, from dating apps to healthcare portals, be designed from inception with neurodiverse participation rather than merely adapted for inclusivity?
Submission Guidelines
This minitrack welcomes both empirical and conceptual studies, including research on socio-technical systems, human-computer interaction, AI ethics, digital platforms, and information systems. We invite contributions using diverse methodologies such as literature reviews, theoretical explorations, design science, case studies, quantitative analyses, and interdisciplinary research approaches.
Minitrack Co-Chairs:
Maylis Saigot (Primary Contact)
University of Queensland
m.saigot@uq.edu.au
Rob Gleasure
Copenhagen Business School
rg.digi@cbs.dk
Elizabeth White Baker
Virginia Commonwealth University
bakerew@vcu.edu
Oyebisi Oladeji
Kennesaw State University
ooladej3@kennesaw.edu
Introduced in 2008 by Satoshi Nakamoto, Bitcoin marked the beginning of peer-to-peer digital currency, igniting a wave of interest in Blockchain, Cryptocurrency, and FinTech. Despite significant interest, more than a decade after their emergence these technologies have yet to become everyday tools for consumers. They still face numerous technical challenges, including scalability, security, privacy, interoperability, and energy consumption (Vasiljeva et al., 2016), along with challenges in business adoption, social trust, ethical, environmental, and regulatory controversies, and the potential for illicit activities.
For marginalized contexts, such as in developing economies, the challenges of integrating Blockchain, Cryptocurrency, and FinTech are also substantial. Issues such as the digital divide, lack of infrastructure, regulatory uncertainties, and the need for education and digital literacy can impede the adoption and effective utilization of these technologies.
This minitrack welcomes researchers from all disciplines, ranging from the ‘hard sciences’ such as engineering and computer science, to social sciences, finance, management, and beyond. We invite submissions that employ a variety of methodologies, including but not limited to algorithm/system design, experiments, simulation, theoretical analysis, empirical research, surveys, design science, development of theoretical frameworks, qualitative inquiries, and case studies. Our goal is to cultivate a vibrant dialogue across a wide range of methodological perspectives, thereby advancing understanding and fostering innovation in this field. Our scope of interest spans a wide range of topics, including, but not limited to:
- Technical Aspects
• The responsible applications of AI and Machine Learning in Blockchain, Cryptocurrency, and FinTech
• Development of responsible Algorithms, Protocols, and Consensus Mechanisms
• Scalability, Security, Decentralization, Interoperability, Transparency, Accountability, and Standardization in Blockchain, Cryptocurrency, and FinTech
• Quantum Computing and Cryptography in Blockchain, Cryptocurrency, and FinTech
• Open-source Development in Blockchain, Cryptocurrency, and FinTech
- Social Aspects
• Responsible implementation of Blockchain, Cryptocurrency, and FinTech in marginalized contexts and developing economies
• Bridging the Digital Divide and promoting Financial Inclusion and Social Justice through Blockchain, Cryptocurrency, and FinTech
• Enhancing Education and Financial Literacy within these domains
• Supporting Small and Medium-sized Enterprises (SMEs) with Blockchain, Cryptocurrency, and FinTech
• The Future of Work in the era of Blockchain, Cryptocurrency, and FinTech
- Business and Economic Aspects
• The adoption of Blockchain, Cryptocurrency, and FinTech
• Central Bank Digital Currencies (CBDCs)
• Microfinance and Crowdfunding through Blockchain, Cryptocurrency, and FinTech
• Regulation and Governance in the Blockchain, Cryptocurrency, and FinTech sectors
• The Geopolitical Landscape and Cross-border Applications of Blockchain, Cryptocurrency, and FinTech
- Environmental and Ethical Aspects
• Energy Efficiency and Sustainability in Blockchain, Cryptocurrency, and FinTech
• Combating Illicit Activities with Blockchain, Cryptocurrency, and FinTech
• Ethics and Data Privacy in Blockchain, Cryptocurrency, and FinTech
Minitrack Co-Chairs:
Yibai Li (Primary Contact)
University of Scranton
yibai.li@scranton.edu
Kaiguo Zhou
Capital University of Economics and Business
zhoukg@cueb.edu.cn
Wanli Liu
Guangzhou Xinhua University
liuwlariel@xhsysu.edu.cn