HCI Project — Sign Language

Devyn Myers
7 min read · Oct 20, 2020

We developed a web app that teaches people how to do sign language using machine learning! Now we’ll walk through the development process, from its inception to the final product.

Demo Video

1 — Ideation Stage

Technology today offers many ways to interact with computers beyond the traditional mouse and keyboard, and many of them can feel far more expressive. Over the past few weeks, we designed and implemented a creative way to interact in 3D user space.

When brainstorming, we landed on two main project concepts. The first was a simple game where you could move a ball around the screen with your hand via webcam. The second was a program that uses people’s webcams to teach them sign language. We went with the second option, since the first idea wasn’t very fleshed out or interesting.

Our early plan was to have the user perform sign language in front of their webcam, and the program would tell them which letter they were signing. We realized that classifying an arbitrary sign would be difficult to implement, so after much discussion we decided to prompt the user to sign a specific letter. That way, the teachable machine only needs to compare the user’s hand gesture against one letter at a time.
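Comparing the gesture against a single target letter boils down to a simple check on the classifier’s output. A Teachable Machine image model returns a list of class predictions with probabilities; the sketch below shows one way that check might look. The function name, prediction shapes, and threshold are illustrative assumptions, not the project’s actual code.

```javascript
// Hypothetical helper: given the prediction list from a Teachable Machine
// image model (each entry has a className and a probability), decide
// whether the user is signing the target letter confidently enough.
function isSigningLetter(predictions, targetLetter, threshold = 0.9) {
  const match = predictions.find(p => p.className === targetLetter);
  return Boolean(match && match.probability >= threshold);
}

// Example: the model is fairly confident the user is signing "A".
const predictions = [
  { className: 'A', probability: 0.94 },
  { className: 'B', probability: 0.04 },
  { className: 'neutral', probability: 0.02 },
];
console.log(isSigningLetter(predictions, 'A')); // true
console.log(isSigningLetter(predictions, 'B')); // false
```

Checking only the one target class is what makes the single-letter design so much simpler than open-ended recognition.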

2 — Formative Testing

To begin testing our intended product, we opted to use PowerPoint slides to simulate the user experience. We organized the slides in the order that the user would interact with the product. The slides included all of the elements we planned to incorporate into the final product.

The opening page of our test PowerPoint

We used a “Wizard of Oz” testing format, in which a user would attempt to perform sign language into their webcam as if the program were already built. If the user signed correctly, we told them so and asked whether they would like to continue. For the in-class “Wizard of Oz” test, one group member acted as the controller: they shared their screen with the class while a classmate performed the hand signs corresponding to the current screen, and the controller advanced to the next slide when the sign was correct. Building the simulation in PowerPoint let us jump easily between slides depending on the user’s actions.

An example of what the user sees when performing sign language for our prototype

We did encounter some difficulties with user interaction during testing. Having the user say aloud which button they wanted to press was confusing, but we figured this wouldn’t be an issue in the final product, where users click the buttons themselves.

3 — Building Process

To develop our product, we split the team so that some people worked on the GUI while others worked on the teachable machine. We built our application on Glitch using HTML and CSS. Glitch let us all collaborate on the code, which made implementation fast.

Our design differed slightly from the PowerPoint slides from prototype demo day. In the learning stage, we initially didn’t think it was necessary to include the “Your Hand Gesture” box showing the user whether they were correct after learning the gesture. After peer feedback, our final product incorporated this box to ensure that the user had correctly learned the letter, so the finished UI ended up matching our PowerPoint testing design after all.

To bridge the gulf of evaluation and make sure users understand the current state of the system, we clearly defined its two states, the learning mode and the testing mode, on the main page.

In addition, we designed hover effects for all of the buttons on our website, including the selectable letters. The hover effects act as signifiers of affordance, reinforcing which button is currently highlighted and what the system will do when the user clicks it.
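A hover signifier like this is typically just a few lines of CSS. The fragment below is a minimal sketch of the idea; the class name and colors are illustrative assumptions, not the project’s actual stylesheet.

```css
/* Hypothetical styling for a selectable letter button. */
.letter-button {
  background: #ffffff;
  cursor: pointer;
  transition: background 0.15s ease, transform 0.15s ease;
}

/* The hover state signals which letter is about to be selected. */
.letter-button:hover {
  background: #ffd966;
  transform: scale(1.1);
}
```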

An example showing letters with hover effects

After the user selects a letter to learn or clicks the “Start” button for the random letter test, an instruction page tells them which mode they are entering. The instruction pages for the learning and testing modes also bridge the gulf of execution by telling the user what to expect in each mode. For example, the testing mode instructions explain that it is a greater challenge than the learning mode, since the user is tested on random letters rather than choosing which letter to practice. The user also learns that they can return to the learning mode if they find they are not yet ready for the test.
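The random letter test itself only needs a way to draw the next letter to prompt. The sketch below shows one plausible draw function; the names and the no-immediate-repeat rule are our illustrative assumptions, not necessarily what the final app does.

```javascript
// Hypothetical sketch of the testing mode's random letter draw.
const LETTERS = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'.split('');

// Passing the previous letter keeps the same prompt from
// appearing twice in a row.
function nextTestLetter(previous = null) {
  const pool = LETTERS.filter(letter => letter !== previous);
  return pool[Math.floor(Math.random() * pool.length)];
}

const first = nextTestLetter();
const second = nextTestLetter(first);
console.log(first, second); // two letters, never the same one twice in a row
```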

The instruction page before entering the random letter testing mode.

Another important element of our design is the big “CORRECT!” sign across the camera canvas, which is triggered when the user shows the correct hand gesture for the letter. This feedback tells the user that the system recognized their input when they followed the instructions.

The “CORRECT!” sign lets the user know if they got it right

4 — User Feedback

The random letter test is certainly challenging for users who are not yet fluent in sign language but are still learning it. We debated whether this testing mode was too difficult, but a challenging application has its pros and cons: according to Overbeeke (2003), users seek out challenging products, so engaging products are not necessarily easy to use. We believe the key is to balance challenge and support within the user experience.

To strike a better balance with the challenging random letter test, we added a feature to the learning mode: within the learning page for a specific letter, users can click a “Test” button to be tested on the letter they have just learned. There is also a “Demonstration” button that overlays an image on the camera canvas, at 0.5 opacity, showing the correct gesture for the current letter. When users get stuck, they can bring up the demonstration and align their hands with it; this way, they earn the “CORRECT!” sign and are better prepared for the random letter test.
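The semi-transparent demonstration overlay can be expressed as a small CSS fragment: an image absolutely positioned over the webcam canvas at the 0.5 opacity mentioned above. The class names are hypothetical; only the opacity value comes from the actual product.

```css
/* Hypothetical layout for the demonstration overlay. */
.camera-wrapper {
  position: relative;
}

.demo-overlay {
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
  opacity: 0.5;         /* the value used in the final product */
  pointer-events: none; /* clicks pass through to the canvas */
  display: none;        /* shown when "Demonstration" is pressed */
}
```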

The testing mode shows the user what sign they should make when pressing the ‘Demonstration’ button

5 — Conclusion

This project showed that considering different input modalities can lead to creative, innovative, and engaging design that fits user needs more effectively than standard approaches. It led us to consider the full scope of the options available during the design process. It also highlighted the value of recognizing the strengths and weaknesses of these different types of inputs, forcing us to design around the constraints inherent to our alternative input systems.

Because we lacked experience with gesture-based controls, user feedback became even more valuable in the design process. If we were to create a similar project in the future, we would focus on repeated, iterative testing with users to “fail faster”, gaining knowledge and improving the product more rapidly. We would also spend more time on our formative testing, ensuring that our “Wizard of Oz” prototype more closely mimicked our expected final design. This would allow us to find the flaws in our design as well as the pieces that were particularly engaging. Overall, this project stressed the importance of a thorough design process and use of design principles to create a product that best fulfills user needs and expectations.

Citations

Overbeeke, C. J., Djajadiningrat, J. P., Hummels, C. C. M., Wensveen, S. A. G., & Frens, J. W. (2003). Let’s make things engaging. In M. A. Blythe, C. J. Overbeeke, & P. C. Wright (Eds.), Funology: from usability to enjoyment (pp. 7–17). (Human computer interaction series; Vol. 3). Kluwer.

“Download Hand Gesture Language Alphabet for Free.” Freepik, 9 Aug. 2018, www.freepik.com/free-vector/hand-gesture-language-alphabet_2776309.htm.
