Award Abstract # 2017289
CyberTraining: Pilot: Modular experiential learning for secure, safe, and reliable AI (MELSSRAI)

NSF Org: OAC (Office of Advanced Cyberinfrastructure)
Recipient: WESTERN MICHIGAN UNIVERSITY
Initial Amendment Date: June 24, 2020
Latest Amendment Date: April 30, 2021
Award Number: 2017289
Award Instrument: Standard Grant
Program Manager: Joseph Whitmeyer
  jwhitmey@nsf.gov
  (703) 292-7808
  OAC - Office of Advanced Cyberinfrastructure
  CSE - Directorate for Computer and Information Science and Engineering
Start Date: August 15, 2020
End Date: December 31, 2022 (Estimated)
Total Intended Award Amount: $298,257.00
Total Awarded Amount to Date: $314,257.00
Funds Obligated to Date: FY 2020 = $298,257.00
  FY 2021 = $16,000.00
History of Investigator:
  • Alvis Fong (Principal Investigator)
    alvis.fong@wmich.edu
  • Ajay Gupta (Co-Principal Investigator)
  • Steven Carr (Co-Principal Investigator)
  • Shameek Bhattacharjee (Co-Principal Investigator)
Recipient Sponsored Research Office: Western Michigan University
  1903 W MICHIGAN AVE, KALAMAZOO, MI 49008-5200, US
  (269) 387-8298
Sponsor Congressional District: 04
Primary Place of Performance: Western Michigan University
  MI 49008-5200, US
Primary Place of Performance Congressional District: 04
Unique Entity Identifier (UEI): J7WULLYGFRH1
Parent UEI:
NSF Program(s): CyberTraining - Training-based, Secure & Trustworthy Cyberspace
Primary Program Source: 01002122DB NSF RESEARCH & RELATED ACTIVITIES
  01002021DB NSF RESEARCH & RELATED ACTIVITIES
  04002021DB NSF Education & Human Resources
Program Reference Code(s): 9251
Program Element Code(s): 044Y00, 806000
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

In this pilot project, core literacy and advanced skills at the intersection of Secure, Safe, and Reliable (SSR) computing, High-Performance Computing (HPC), and artificial intelligence (AI) are integrated into educational curricula and training materials to prepare faculty, undergraduate students, and graduate students at institutions with relatively low rates of advanced cyberinfrastructure (CI) adoption for large-scale secured data analytics. From self-driving vehicles to smart digital personal assistants and real-time multilingual translators, applications of AI have become omnipresent in our daily lives. There is an urgent need to ensure that current and future scientists who advance AI, as well as practitioners who use AI, understand the limitations of AI and how to develop robust and dependable AI. The long-term goals of this project are to contribute to a pipeline for an SSR-AI-minded CI workforce and a self-sustaining advanced CI ecosystem.

In this project, inspired by authoritative sources such as OpenAI and the Partnership on AI, curricular modifications and materials are developed to educate computer science (CS) students in SSR techniques from the outset. Intensive, multi-faceted, modular, experiential learning units are designed to rapidly upgrade the skills of current and future CI users so they can apply their new skills to their tasks. The loosely coupled modules can be integrated into existing classes, including elementary CS classes taken by non-CS STEM students. Students participate in research activities that train the next generation of interdisciplinary scientists, including many from underrepresented groups. Universities at varied levels and in varied locations, as well as community colleges, are included in the project. Using a collective impact plan, a group of multidisciplinary public- and private-sector experts provide guidance and participate in train-the-trainer activities to multiply the effect. Lessons learned and best practices are codified into blueprints for reusability and widespread future adoption across STEM disciplines.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

(Showing: 1 - 10 of 23)
Alsheakh, Hussein and Bhattacharjee, Shameek. "Towards a Unified Trust Framework for Detecting IoT Device Attacks in Smart Homes." IEEE International Conference on Mobile Ad Hoc and Sensor Systems (IEEE MASS), 2020. https://doi.org/10.1109/MASS50613.2020.00080
Bhattacharjee, Shameek and Das, Sajal K. "Unifying Threats against Information Integrity in Participatory Crowd Sensing." IEEE Pervasive Computing, 2023. https://doi.org/10.1109/MPRV.2023.3296271
Bhattacharjee, Shameek and Islam, Mohammad Jaminur and Abedzadeh, Sahar. "Robust Anomaly based Attack Detection in Smart Grids under Data Poisoning Attacks." Proceedings of the 8th ACM Cyber-Physical System Security Workshop (held with ACM ASIA CCS 2022), 2022. https://doi.org/10.1145/3494107.3522778
Bhattacharjee, Shameek and Madhavarapu, Praveen and Das, Sajal K. "A Diversity Index based Scoring Framework for Identifying Smart Meters Launching Stealthy Data Falsification Attacks." ACM Asia Conference on Computer and Communications Security (ASIACCS), 2021. https://doi.org/10.1145/3433210.3437527
Bhattacharjee, Shameek and Madhavarapu, Venkata Praveen and Silvestri, Simone and Das, Sajal K. "Attack Context Embedded Data Driven Trust Diagnostics in Smart Metering Infrastructure." ACM Transactions on Privacy and Security, v.24, 2021. https://doi.org/10.1145/3426739
Fong, A. and Carr, S. and Gupta, A. and Bhattacharjee, S. "Promoting AI Trustworthiness through Experiential Learning (WIP)." Proceedings of the ASEE Annual Conference, 2022.
Fong, Alvis and Gupta, Ajay and Carr, Steve and Bhattacharjee, Shameek. "Workshop: Hands-on Sampling of Experiential Learning Modules that Promote AI Competency Across STEM Disciplines." IEEE World Engineering Education Conference (EDUNINE), 2022. https://doi.org/10.1109/EDUNINE53672.2022.9782374
Fong, Alvis and Gupta, Ajay and Carr, Steve and Bhattacharjee, Shameek and Harnar, Michael. "Modular experiential learning for secure, safe, and reliable AI: Curricular Initiative to Promote Education in Trustworthy AI." ACM SIGITE 2022, 2022. https://doi.org/10.1145/3537674.3554756
Fong, Alvis C. and Gupta, Ajay K. and Carr, Steve M. and Bhattacharjee, Shameek. "Work in progress: Experiential Learning Modules for Promoting AI Trustworthiness in STEM Disciplines." IEEE World Engineering Education Conference (EDUNINE), 2022. https://doi.org/10.1109/EDUNINE53672.2022.9782319
Fong, B. and Fong, A. C. and Hong, G. Y. and Tsang, K. F. "Optimization of Power Usage in a Smart Nursing Home Environment." IEEE Transactions on Industry Applications, 2022. https://doi.org/10.1109/TIA.2022.3183957
Fong, Bernard and Fong, A. C. and Tsang, Kim-Fung. "Capacity and Link Budget Management for Low-Altitude Telemedicine Drone Network Design and Implementation." IEEE Communications Standards Magazine, v.5, 2021. https://doi.org/10.1109/MCOMSTD.0001.2100010

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

From autonomous vehicles to smart personal assistants, artificial intelligence (AI) is increasingly being applied to a wide range of previously intractable problems across multiple disciplines. Taking a convergent approach, the team embarked on a 2.5-year pilot study: an ambitious proof-of-concept investigation aimed at rapidly implementing accessible learning materials that instill the concepts and practice of safe, secure, and reliable (SSR) AI in current and future users of advanced cyberinfrastructure (CI). SSR are the three pillars that form a sturdy tripod supporting trust in AI technologies, and trust in turn is fundamental for effective human-AI collaboration. Harmonious and productive human-AI collaboration has the potential to alter the course of human history, for instance by enabling scientists and engineers to tackle extremely complex and impactful problems. AI technologies can also profoundly change the ways people live, work, and play. Examples include climate change mitigation; waste reduction and management; delivery of fair and equitable healthcare, transportation, and educational services; clean water and food production; disease prevention; development of smart X (X = cities, factories, infrastructures, ...); robotic pets and companions; and detection of harmful content and substances in virtual and physical worlds.

Following a thorough survey of existing technologies and learning materials, the team supervised several graduate and undergraduate (UG) students and developed 12 initial experiential learning modules (ELMs) in the first project year. The ELMs were designed to complement and leverage existing resources. A panel of academic-industry-government experts provided advice and guidance throughout the design and development process. Some of these experts subsequently field-tested the ELMs in their own settings during the second launch described below.

In year 2, the team took a two-step approach to launching the modules. A small-scale launch took place in two CS courses (a 3000-level UG Big Data course and a 6000-level graduate Information Retrieval course) in fall 2021. After minor fine-tuning, a larger-scale launch followed across multiple disciplines (e.g., computer science, mechanical engineering, civil engineering, business analytics, and statistics) and institutions in spring 2022. More than 10 faculty at WMU and other institutions took part in this launch, which directly reached more than 300 multidisciplinary UG and graduate students. Anonymous data collected in the field test were analyzed by an independent evaluator. The team engaged in dissemination and outreach activities during the latter half of 2022.

Dissemination and Outreach: Technical papers (on AI biases and mitigation strategies, natural language processing (NLP), commonsense reasoning (CSR), smart health, and phishing detection) and teaching & learning papers were published in prominent venues, including IEEE periodicals, the ASEE and ACM SIGITE annual conferences, and SemEval competitions. Outreach went beyond formal and informal discussions at conferences; for instance, the team also worked with area high schools to spark interest among younger students. One of the ELMs was customized specifically for outreach activities involving high school students.

Continuity and Forward Momentum: Early in the pilot study, the team uncovered significant issues surrounding the limitations of contemporary statistical machine learning (ML) and AI-induced biases. These limitations and biases hinder effective promotion of AI readiness and have since become a motivating factor driving further research.
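
As a concrete illustration of the kind of limitation referred to above, the following minimal Python sketch (purely illustrative, not one of the project's ELMs; the synthetic data and all names in it are hypothetical) shows how a standard statistical classifier trained on group-imbalanced data can end up noticeably less accurate on the under-represented group, a simple form of AI-induced bias:

    # Illustrative sketch only -- not part of the project's ELMs. Assumes NumPy and
    # scikit-learn are installed. Shows how a classifier trained on group-imbalanced
    # synthetic data can end up with uneven per-group accuracy (a simple bias effect).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_group(n, shift):
        # Two synthetic features per sample; the true decision rule depends on the group.
        X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
        y = (X[:, 0] + 0.5 * X[:, 1] > shift).astype(int)
        return X, y

    # Group A dominates the training data; group B is under-represented.
    X_a, y_a = make_group(1000, shift=0.0)
    X_b, y_b = make_group(50, shift=1.5)
    clf = LogisticRegression().fit(np.vstack([X_a, X_b]), np.concatenate([y_a, y_b]))

    # Fresh test samples from each group reveal the accuracy gap.
    for name, shift in [("A (majority)", 0.0), ("B (minority)", 1.5)]:
        X_t, y_t = make_group(2000, shift)
        print(f"group {name}: accuracy = {clf.score(X_t, y_t):.3f}")

Running the sketch typically prints a visibly lower accuracy for the minority group, which illustrates the bias concerns noted above.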

Intellectual Merit: The project integrated core and advanced skills at the intersection of SSR computing and AI into educational curricula to prepare faculty and students at institutions with low rates of advanced CI adoption for large-scale data analytics.

Broader Impacts: Several hundred current and future CI users and researchers from diverse backgrounds and locations benefited from the work. The project supported 1 graduate student and 6 undergraduates (4 from minority groups, 2 female). The second launch directly reached more than 300 graduate and undergraduate students in multiple disciplines, and over ten faculty members from WMU and other institutions were directly involved. In addition to the published papers reported, two presentations were given at prominent teaching & learning conferences.

Last Modified: 02/04/2023
Modified by: Alvis Fong
