Welcome to A11y Lab

A11y Lab is a research group within the Human-Centered Design and Development (HCDD) program at the College of Information Science and Technology (IST), Pennsylvania State University.

Our research focuses on Human-Computer Interaction (HCI) with an emphasis on Accessible Computing. We investigate how technology interacts with human abilities—where it succeeds, where it falters, and how to facilitate improvements. We then build intelligent interactive technologies to accommodate diverse human needs.

Our research on interactive technologies and intelligent systems encompasses:

  • Custom input/output devices for diverse user needs
  • User interaction modeling and adaptive UI design
  • Multimodal interaction techniques:
    • Sensor fusion for unaided communication
    • Midair gestures in virtual and mixed reality (VR/MR)
    • Natural sound for information representation
    • Speech-based natural interaction with adaptive feedback and error correction
  • Operating System instrumentation to enhance accessibility support
  • Human-AI collaboration for improved accessibility solutions
  • Inclusive design for STEM education including programming, surveying, and experiential learning

A11y stands for Accessibility – there are 11 letters between the “A” and the “y”. Making technology universally accessible for all is a hard problem, because the problem is layered, multi-dimensional, and often difficult to fully understand, articulate, and theorize.

Some of our recent projects include:

  • Designing robust, efficient, and extensible accessibility APIs in emerging computing frameworks, such as the Unity Engine (funded by the Institute for Computational and Data Sciences and the Center for Immersive Experience at Penn State).
  • Creating effective educational tools and resources to teach computer programming to interdisciplinary students and students with vision disabilities. Try our award-winning tool Grid Coding.
  • Making non-visual interaction as efficient as visual interaction by designing novel input/output devices, interaction paradigms, and UI adaptation frameworks. Check out our SpaceX magnifier and Abacus Gestures papers (published in IMWUT 2023), Perceived Accessibility metrics paper (published in CHI 2023), and wheel-based interaction papers (funded by NIH).
  • Making high-dimensional data more accessible with natural sounds, such as birdsongs. Check out our CHI 2023 paper Susurrus.
  • Advancing human-centered AI through incorporating human factors into AI models like computer vision and large language models (LLMs). Check out papers on AI-infused Remote sighted assistance (RSA) research (funded by NIH).
  • Making augmentative and alternative communication (AAC) interaction natural, fluid, and fast by introducing unaided gestures (e.g., fingers, hands, arms, head, and body movements) via sensor fusion (funded by the Center for Socially Responsible Artificial Intelligence and the Center for BioDevice at Penn State).
  • Designing VR/MR interfaces to remotely operate service robots in order to support facility managers (funded by NSF: Future of Work).
  • Designing an AI-based study companion for students with attention issues (funded by the Center for Socially Responsible Artificial Intelligence at Penn State).


News

Oct 11, 2024
Wheeler received a UIST 2024 Best Paper Honorable Mention award. Congratulations, Touhid and Imran!

Sept 12, 2024
Several papers were submitted to CHI 2025.

Sept 3, 2024
One paper was submitted to ICRA 2025.

August 28, 2024
Two papers were submitted to IEEE VR 2025.

August 19, 2024
One paper was submitted to AAAI 2025.

August 8, 2024
Zheyu's master's work on VR locomotion has been accepted to VRST 2024! Congratulations, Zheyu!

July 4, 2024
Our paper, Wheeler, has been accepted to UIST 2024! Congratulations, Touhid and Imran!

April 22, 2024
One paper was accepted to CogSci 2024. Congratulations, Lingyun, Duk Hee, and Ehtesham!

January 12, 2024
Two papers were accepted to CHI 2024! Congratulations, Chaeeun and Jingyi!

November 7, 2023
Two papers are now in the R&R (revise and resubmit) phase for CHI 2024.

November 1, 2023
Our proposal to NSF: Future of Work has been funded!

October 1, 2023
Two papers were accepted to IMWUT 2023. Congratulations, Ehtesham and Touhid!

August 15, 2023
Welcome our new lab member, Imran Kabir!

June 16, 2023
Three seed proposals were funded by various centers and institutes at Penn State.

June 12, 2023
Co-wrote a book chapter on Robot-Mediated Assistance: Opportunities and Challenges in Computer Vision and Human-Robot Interaction.

Feb 13, 2023
One paper was accepted to DIS 2023. Congratulations, Jingyi!

... see all News