Award Abstract # 1815514
CHS: Small: Establishing Action Laws for Touch Interaction

NSF Org: IIS
Division of Information & Intelligent Systems
Recipient: THE RESEARCH FOUNDATION FOR THE STATE UNIVERSITY OF NEW YORK
Initial Amendment Date: August 8, 2018
Latest Amendment Date: August 8, 2018
Award Number: 1815514
Award Instrument: Standard Grant
Program Manager: Ephraim Glinert
IIS - Division of Information & Intelligent Systems
CSE - Directorate for Computer and Information Science and Engineering
Start Date: August 15, 2018
End Date: July 31, 2022 (Estimated)
Total Intended Award Amount: $315,478.00
Total Awarded Amount to Date: $315,478.00
Funds Obligated to Date: FY 2018 = $315,478.00
History of Investigator:
  • Xiaojun Bi (Principal Investigator)
    xiaojun@cs.stonybrook.edu
Recipient Sponsored Research Office: SUNY at Stony Brook
W5510 FRANKS MELVILLE MEMORIAL LIBRARY
STONY BROOK
NY  US  11794-0001
(631)632-9949
Sponsor Congressional District: 01
Primary Place of Performance: SUNY at Stony Brook
Stony Brook
NY  US  11794-4400
Primary Place of Performance Congressional District: 01
Unique Entity Identifier (UEI): M746VC6XMNH9
Parent UEI: M746VC6XMNH9
NSF Program(s): HCC-Human-Centered Computing
Primary Program Source: 01001819DB NSF RESEARCH & RELATED ACTIVITIES
Program Reference Code(s): 7367, 7923
Program Element Code(s): 736700
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070

ABSTRACT

Theoretical action laws and quantitative models are foundational to any field. In human-computer interaction and ergonomics, Fitts' law is one of the theoretical foundations for interface and input device development and research. It has served as a theoretical framework for evaluating input devices, a tool for computational interface design, and a logical basis for modeling more complex HCI tasks. However, ample empirical evidence has shown that Fitts' law and other existing action laws encounter problems, or even fail, when modeling touch interaction, primarily because they do not account for the imprecision of finger touch in interaction. This research will establish robust new action laws for touch interaction, including both pointing and trajectory-based gesturing (steering), which will guide the design and evaluation of touch interfaces and serve as cornerstones for modeling complex tasks, computational interface design and optimization. Given the ubiquitous adoption of mobile devices where touch input is dominant, project outcomes are expected to have broad impact that will reach millions of users. To demonstrate the practical value and effectiveness of the new touch action laws, they will be used to quantify the touch capacity of older adults, thereby laying the foundation for designing touch interfaces well-suited for that user community.

The starting point for this research will be the principal investigator's Finger-Fitts (FFitts) law, derived from the Dual Gaussian Distribution Model and Fitts' law. This preliminary work will be expanded to model 2D target selection and gesturing tasks. The research will be carried out following standard practices in Human-Computer Interaction (HCI): first deriving candidate models from existing models, hypotheses, and rational assumptions, and then conducting rigorous user studies to evaluate the new models. The experimental results will in turn be used to refine the new models, which again will be evaluated via studies. Project outcomes will include the following theoretical and empirical intellectual contributions: action laws for touch pointing, including both the task form of the FFitts law, which will predict touch pointing time from nominal task parameters (i.e., finger travel distance A and target width W), and the bivariate FFitts law, which will model 2D pointing tasks on rectangular targets such as buttons, check boxes and hyperlinks; action laws for trajectory gestures (Finger-Steering laws), including a basic form for steering along straight paths and a generic form for other paths; and a touch-action-law-based understanding of older adults' touch interaction capacity.
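As context for the models above: Fitts' law predicts movement time from an index of difficulty (ID), and the FFitts law replaces the nominal target width with an effective width that discounts the absolute imprecision of the finger. The following is a minimal sketch in Python, assuming the Shannon formulation of Fitts' law and the variance-subtraction form of the FFitts index from the PI's prior work; it is an illustration, not the project's implementation.

```python
import math

def fitts_id(A, W):
    """Shannon-form Fitts' law index of difficulty (bits)
    for movement distance A and nominal target width W."""
    return math.log2(A / W + 1)

def ffitts_id(A, sigma, sigma_a):
    """FFitts index of difficulty: the nominal width is replaced by an
    effective width derived from the Dual Gaussian model, subtracting
    the absolute (finger-precision) variance sigma_a^2 from the observed
    endpoint variance sigma^2 before converting spread to width."""
    effective_width = math.sqrt(2 * math.pi * math.e * (sigma ** 2 - sigma_a ** 2))
    return math.log2(A / effective_width + 1)

def movement_time(a, b, index_of_difficulty):
    """Fitts-type prediction MT = a + b * ID, with empirically fitted a, b."""
    return a + b * index_of_difficulty
```

Because the finger-precision variance sigma_a^2 is subtracted, the effective width shrinks and the FFitts ID exceeds the ID computed from the raw endpoint spread alone, capturing why small touch targets are harder than conventional Fitts' law predicts.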

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


Cui, Wenzhe; Zheng, Jingjie; Lewis, Blaine; Vogel, Daniel; Bi, Xiaojun. "HotStrokes: Word-Gesture Shortcuts on a Trackpad." Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 2019. https://doi.org/10.1145/3290605.3300395
Cui, Wenzhe; Zhu, Suwen; Li, Zhi; Xu, Zheer; Yang, Xing-Dong; Ramakrishnan, IV; Bi, Xiaojun. "BackSwipe: Back-of-device Word-Gesture Interaction on Smartphones." CHI '21: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 2021. https://doi.org/10.1145/3411764.3445081
Ko, Yu-Jung; Zhao, Hang; Kim, Yoonsang; Ramakrishnan, IV; Zhai, Shumin; Bi, Xiaojun. "Modeling Two Dimensional Touch Pointing." UIST '20: The 33rd Annual ACM Symposium on User Interface Software and Technology, 2020. https://doi.org/10.1145/3379337.3415871
Li, Zhi; Zhao, Maozheng; Das, Dibyendu; Zhao, Hang; Ma, Yan; Liu, Wanyu; Beaudouin-Lafon, Michel; Wang, Fusheng; Ramakrishnan, IV; Bi, Xiaojun. "Select or Suggest? Reinforcement Learning-based Method for High-Accuracy Target Selection on Touchscreens." CHI Conference on Human Factors in Computing Systems (CHI '22), 2022. https://doi.org/10.1145/3491102.3517472
Qin, Ryan; Zhu, Suwen; Lin, Yu-Hao; Ko, Yu-Jung; Bi, Xiaojun. "Optimal-T9: An Optimized T9-like Keyboard for Small Touchscreen Devices." Proceedings of the 2018 ACM International Conference on Interactive Surfaces and Spaces, 2018. https://doi.org/10.1145/3279778.3279786
Zhu, Suwen; Kim, Yoonsang; Zheng, Jingjie; Luo, Jennifer Yi; Qin, Ryan; Wang, Liuping; Fan, Xiangmin; Tian, Feng; Bi, Xiaojun. "Using Bayes' Theorem for Command Input: Principle, Models, and Applications." CHI '20: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 2020. https://doi.org/10.1145/3313831.3376771
Zhu, Suwen; Zheng, Jingjie; Zhai, Shumin; Bi, Xiaojun. "i'sFree: Eyes-Free Gesture Typing via a Touch-Enabled Remote Control." Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 2019. https://doi.org/10.1145/3290605.3300678

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

Finger touch input is now the predominant input modality on mobile computing devices such as smartphones and tablets. This project aimed to provide a theoretical understanding (a.k.a. action laws) of the regularities behind finger touch input, and to apply that understanding to improve touchscreen interface design and the interaction experience. Our research has led to the following outcomes, including new models and action laws for touch input and their applications to improving touchscreen interaction.

  1. 2D Finger-Fitts Law [4]. We have created the 2D Finger-Fitts law, which relates finger movement time to the size of the target and the travel distance of the input finger. Our experiments showed that this new model outperforms existing movement models such as Fitts' law for finger touch input. The work was published in a premier HCI conference, ACM UIST 2020, and won a Best Paper Honorable Mention award [4]. The model can serve as a cornerstone for touchscreen interface design, optimization, and evaluation, as it predicts the efficiency of interaction without running user studies.
  2. Rotational Dual Gaussian Model [2]: A Model for Touch Point Distribution. We have created a model that predicts the distribution of touch points, taking the finger movement direction into account. Because the original Dual Gaussian model uses no information about movement direction, the new model predicts the touch point distribution more accurately. Our experiments also showed that the Rotational Dual Gaussian model can improve the accuracy of a touchscreen keyboard decoder, the core component behind auto-correction and auto-completion on a touchscreen keyboard.

  3. A Mixture Model for Blind Users [3]. We have also created a new model that predicts the touch pointing performance of blind users who rely on screen readers to select targets on touchscreens. We discovered that the gliding trajectories of blind users mix two strategies: 1) ballistic movements with iterative corrections relying on non-visual feedback, and 2) multiple sub-movements separated by stops and concatenated until the target is reached. Based on this finding, we proposed the mixture pointing model, which relates movement time to target distance and width and substantially improves the prediction accuracy of touchscreen target selection for blind users over the traditional Fitts' law.

  4. Improving touch-based selection accuracy with touch models [1]. We proposed a Suggestion-based Accurate Target Selection method, in which target selection is formulated as a sequential decision problem, and used Reinforcement Learning to train a policy that maximizes interaction efficiency. The proposed touch point distribution model serves as the generative model for simulating touch interaction, which is critical to the method's success.

  5. This project has supported 5 Ph.D. students and 6 high school students. The Ph.D. students carried out research to create and evaluate models for touch pointing and applied those models to improve the interaction experience. The high school students worked with the Ph.D. students, mostly during summers over the past four years, to gain hands-on research experience.
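The Dual Gaussian idea running through outcomes 1, 2, and 4 treats the touch-point variance as a size-dependent relative component plus a size-independent absolute component, with the rotational variant orienting the spread along the finger movement direction. The following Python sketch simulates touch endpoints under that assumption; the scaling constant 0.2 and sigma_a = 1.2 are illustrative values, not parameters from the papers.

```python
import math
import random

def simulate_touch_points(target_w, target_h, movement_angle, n=1000,
                          sigma_a=1.2, seed=0):
    """Simulate touch endpoints (relative to the target center) under a
    dual-Gaussian assumption: variance along each axis is a relative
    component that grows with target size plus an absolute component
    sigma_a^2 (finger precision, independent of target size). The
    rotational variant aligns the spread with the movement direction
    (angle in radians); all constants here are illustrative."""
    rng = random.Random(seed)
    # Relative spread proportional to target dimensions (illustrative 0.2).
    sigma_along = math.sqrt((0.2 * target_w) ** 2 + sigma_a ** 2)
    sigma_across = math.sqrt((0.2 * target_h) ** 2 + sigma_a ** 2)
    points = []
    for _ in range(n):
        u = rng.gauss(0.0, sigma_along)    # spread along movement direction
        v = rng.gauss(0.0, sigma_across)   # spread across movement direction
        # Rotate from movement-aligned axes back to screen coordinates.
        x = u * math.cos(movement_angle) - v * math.sin(movement_angle)
        y = u * math.sin(movement_angle) + v * math.cos(movement_angle)
        points.append((x, y))
    return points
```

For a wide, short target approached horizontally (angle 0), the simulated endpoints spread much more along x than y, which is the directional structure the rotational model captures and the original Dual Gaussian model ignores.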

[1] Zhi Li, Maozheng Zhao, Dibyendu Das, Hang Zhao, Yan Ma, Wanyu Liu, Michel Beaudouin-Lafon, Fusheng Wang, IV Ramakrishnan, Xiaojun Bi (2022) "Select or Suggest? Reinforcement Learning-based Method for High-Accuracy Target Selection on Touchscreens". In Proceedings of CHI 2022 - the SIGCHI Conference on Human Factors in Computing Systems. Article No. 494, pp. 1-15.

[2] Yan Ma, Shumin Zhai, IV Ramakrishnan, Xiaojun Bi (2021) "Modeling Touch Point Distribution with Rotational Dual Gaussian Model". In Proceedings of UIST 2021 - The ACM Symposium on User Interface Software and Technology. pp. 1197-1209.

[3] Yu-Jung Ko, Aini Putkonen, Ali Selman Aydin, Shirin Feiz, Yuheng Wang, Vikas Ashok, IV Ramakrishnan, Antti Oulasvirta, Xiaojun Bi (2021) "Modeling Gliding-based Target Selection for Blind Touchscreen Users". In Proceedings of MobileHCI 2021 - The ACM Conference on Human-Computer Interaction with Mobile Devices and Services. Article No. 29, pp. 1-14.

[4] Yu-Jung Ko, Hang Zhao, Yoonsang Kim, IV Ramakrishnan, Shumin Zhai, Xiaojun Bi (2020) "Modeling Two Dimensional Touch Pointing". In Proceedings of UIST 2020 - The ACM Symposium on User Interface Software and Technology. pp. 858-868.

Last Modified: 08/23/2022
Modified by: Xiaojun Bi
