[Add usage scenario]
1. PocketMenu (adapted from Pielot et al., MobileHCI 2012)
[Do I need to describe the original work first and explain how my design differs, or should I describe my adapted design directly?]
Adapted from Pielot et al.'s PocketMenu, this design aims to provide a new type of menu interaction that requires no visual feedback. The menu items are stacked vertically along the bottom-left edge of the screen, and the stack's precise location depends on the user's finger position: when the user touches the small area around the bottom-left corner of the screen, the bottom of the stack snaps to the finger. [Question: do we want to provide sound and vibration feedback to indicate that the menu is activated?] This dynamic alignment lets the user trust that the finger's initial position will always sit on the bottom item of the stack, without visual confirmation.
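To make the snapping behavior concrete, here is a minimal UIKit sketch, assuming a hypothetical PocketMenuView class; the item names, item height, and activation-region size are illustrative values, not taken from Pielot et al.'s paper. It also includes a light haptic tap as one possible answer to the activation-feedback question above.

```swift
import UIKit

// Minimal sketch of the edge-snapping behavior. PocketMenuView is a
// hypothetical name; item list, item height, and activation region
// are illustrative values.
final class PocketMenuView: UIView {
    let items = ["Play", "Pause", "Next", "Previous"]
    private let itemHeight: CGFloat = 60
    private let activationSize = CGSize(width: 60, height: 120)
    /// Y-coordinate that the bottom of the item stack is anchored to.
    private var stackBottomY: CGFloat?

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self) else { return }
        // The activation region sits at the bottom-left corner of the view.
        let bottomLeft = CGRect(x: 0,
                                y: bounds.height - activationSize.height,
                                width: activationSize.width,
                                height: activationSize.height)
        if bottomLeft.contains(point) {
            // Snap the bottom of the stack to the finger's initial position,
            // so the first item is always directly under the finger.
            stackBottomY = point.y
            // One possible answer to the open question above:
            // a light haptic tap to signal that the menu is active.
            UIImpactFeedbackGenerator(style: .light).impactOccurred()
        }
    }

    /// Maps a touch's y-position to a menu item index, counting upward
    /// from the snapped bottom of the stack.
    func itemIndex(at y: CGFloat) -> Int? {
        guard let bottom = stackBottomY else { return nil }
        let index = Int((bottom - y) / itemHeight)
        return items.indices.contains(index) ? index : nil
    }
}
```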
To browse the menu items, the user slides a finger up and down along the screen's border. [Question: do we need to provide a VoiceOver option here?] The system provides two types of feedback: first, the text-to-speech system announces the action name while the …
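A sketch of the speech half of that feedback, assuming the itemIndex(at:) helper from the previous snippet and using Apple's AVSpeechSynthesizer as a stand-in text-to-speech backend; the MenuSpeaker type is hypothetical.

```swift
import UIKit
import AVFoundation

// Tracks the last announced item so each item is spoken only once
// as the finger crosses it, interrupting any in-progress utterance.
final class MenuSpeaker {
    private let synthesizer = AVSpeechSynthesizer()
    private var lastIndex: Int?

    func announceIfNeeded(index: Int, items: [String]) {
        guard index != lastIndex else { return }
        lastIndex = index
        // Cut off the previous item so fast slides stay responsive.
        _ = synthesizer.stopSpeaking(at: .immediate)
        synthesizer.speak(AVSpeechUtterance(string: items[index]))
    }
}

// Inside PocketMenuView (continuing the earlier sketch), the speaker
// would be driven from the move handler:
//
// override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
//     guard let y = touches.first?.location(in: self).y,
//           let index = itemIndex(at: y) else { return }
//     speaker.announceIfNeeded(index: index, items: items)
// }
```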
[Double-check whether this also applies to swiping. If not, we can use a light press / hard press to trigger a shallow or deep slide.] By detecting the force applied to the touch screen, we can classify slide gestures into two categories: shallow slide and deep slide. The interaction is similar to the iPhone's slide-to-unlock, except that how hard the user presses at the beginning of the sliding gesture triggers different actions. The haptic feedback for the two slide gestures also differs: the shallow slide provides lower friction, and the deep slide provides higher friction.
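One way the force split could be implemented on a pressure-sensing (3D Touch) device is sketched below; the 0.5 threshold is an illustrative assumption, not a tuned value. Since standard UIKit haptics cannot render continuous friction, the sketch substitutes discrete impacts of different intensities for the lower/higher friction described above.

```swift
import UIKit

// Sketch of the force-based gesture split. UITouch.force reports 0 on
// hardware without pressure sensing, so such devices fall back to shallow.
enum SlideDepth { case shallow, deep }

func classifySlide(from touch: UITouch) -> SlideDepth {
    // Normalize force to 0...1 so the threshold is device-independent.
    guard touch.maximumPossibleForce > 0 else { return .shallow }
    let normalized = touch.force / touch.maximumPossibleForce
    return normalized > 0.5 ? .deep : .shallow   // 0.5 is an assumed cutoff
}

// Stand-in for friction rendering: light taps for a shallow slide,
// heavy ones for a deep slide.
func hapticFeedback(for depth: SlideDepth) {
    let style: UIImpactFeedbackGenerator.FeedbackStyle =
        (depth == .shallow) ? .light : .heavy
    UIImpactFeedbackGenerator(style: style).impactOccurred()
}
```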