Welcome to A11y Lab

A11y Lab is part of the Human-Centered Design and Development (HCDD) program in the College of Information Sciences and Technology (IST) at the Pennsylvania State University (Penn State).

We conduct technical research in Human-Computer Interaction (HCI) with a focus on Accessible Computing. Our aim is to deeply understand interactions between technology and human abilities: where they succeed, where they break down, and how to facilitate recovery. We then build intelligent interactive technologies to accommodate human needs.

In this process, we use a wide array of techniques, including human-AI teaming; inclusive design; sensor fusion; embodied gestures in 2D and 3D spaces (physical/AR/VR); custom input/output devices; natural sound for information representation; instrumentation of the Operating System (OS); and intuitive, explainable modeling of user interaction with intelligent systems, models, and algorithms.

This unique blend of scientific, engineering, and human-centered approaches lets us tackle interdisciplinary problems in innovative ways. It allows us to advance multiple scientific fields while directly creating new education and employment opportunities for people with diverse needs and situations, promoting inclusivity and accessibility.

A11y stands for Accessibility: there are 11 letters between the “A” and the “y”. Making technology universally accessible for all is a hard problem. This is because the problem is layered, multi-dimensional, and often difficult to fully understand, articulate, and theorize.

Some of our recent projects include:

  • Designing robust, efficient, and extensible accessibility APIs in emerging computing frameworks, such as the Unity Engine (funded by the Institute for Computational and Data Sciences and the Center for Immersive Experiences at Penn State).
  • Creating effective educational tools and resources to teach computer programming to interdisciplinary students and students with vision disabilities. Try our award-winning tool Grid Coding.
  • Making non-visual interaction as efficient as visual interaction by designing novel input/output devices, interaction paradigms, and UI adaptation frameworks. Check out our SpaceX magnifier and Abacus Gestures papers (published in IMWUT 2023), Perceived Accessibility metrics paper (published in CHI 2023), and wheel-based interaction papers (funded by NIH).
  • Making high-dimensional data more accessible with natural sounds, such as birdsongs. Check out our CHI 2023 paper Susurrus.
  • Advancing human-centered AI by incorporating human factors into AI models, such as computer vision models and large language models (LLMs). Check out our papers on AI-infused remote sighted assistance (RSA) research (funded by NIH).
  • Making augmentative and alternative communication (AAC) interaction natural, fluid, and fast by introducing unaided gestures (e.g., fingers, hands, arms, head, and body movements) via sensor fusion (funded by the Center for Socially Responsible Artificial Intelligence and the Center for BioDevice at Penn State).
  • Designing VR/MR interfaces to remotely operate service robots in order to support facility managers (funded by NSF: Future of Work).
  • Designing an AI-based study companion for students with attention issues (funded by the Center for Socially Responsible Artificial Intelligence at Penn State).


News

January 12, 2024
Two papers have been accepted to CHI 2024! Congratulations, Chaeeun and Jingyi!

November 7, 2023
Two papers are now in the revise-and-resubmit (R&R) phase for CHI 2024.

November 1, 2023
Our proposal to NSF: Future of Work has been funded!

October 1, 2023
Two papers have been accepted to IMWUT 2023. Congratulations, Ehtesham and Touhid!

August 15, 2023
Welcome our new lab member, Imran Kabir!

June 16, 2023
Three seed proposals have been funded by various centers and institutes at Penn State.

June 12, 2023
We co-authored a book chapter, Robot-Mediated Assistance: Opportunities and Challenges in Computer Vision and Human-Robot Interaction.

February 13, 2023
One paper has been accepted to DIS 2023. Congratulations, Jingyi!

January 16, 2023
Our journal article, Understanding the Usages, Life-Cycle, and Opportunities of Screen Readers' Plugins, has been accepted to TACCESS (ACM Transactions on Accessible Computing). Congratulations, Farhani!

January 13, 2023
Our paper, A Probabilistic Model and Metrics for Estimating Perceived Accessibility of Desktop Applications in Keystroke-Based Non-Visual Interactions, has been accepted to CHI 2023. Congratulations, Touhid!

January 13, 2023
Our paper, Data Sonification with Natural Sounds, has been accepted to CHI 2023. Congratulations, Naimul!

November 1, 2022
We are in the news: New coding tool could aid computer programmers who are blind or have low vision

October 9, 2022
Our paper, Grid-Coding, received the UIST 2022 Best Paper Award! Congratulations, Ehtesham! Try Grid Coding.

... see all News