PANDALens: Towards AI-Assisted In-Context Writing on OHMD During Travels

Video Preview

This work will be presented at CHI2024.

Code: Github, Paper: PDF.


While effective for recording and sharing experiences, traditional in-context writing tools are relatively passive and unintelligent, serving as instruments rather than companions. This reduces enjoyment of the primary task (e.g., travel) and hinders high-quality writing. Through a formative study and iterative development, we introduce PANDALens, a Proactive AI Narrative Documentation Assistant built on an Optical See-Through Head-Mounted Display that transforms the in-context writing tool into an intelligent companion. PANDALens observes multimodal contextual information from user behaviors and the environment to confirm interests and elicit contemplation, and employs Large Language Models to transform such multimodal information into coherent narratives with significantly reduced user effort. A real-world travel scenario comparing PANDALens with a smartphone alternative confirmed its effectiveness in improving writing quality and travel enjoyment while minimizing user effort. Accordingly, we propose design guidelines for AI-assisted in-context writing, highlighting the potential of transforming such tools into intelligent companions.

ParaGlassMenu: Towards Social-Friendly Subtle Interactions in Conversations

Video Preview

This work was presented at CHI2023.


Interactions with digital devices in social settings can reduce social engagement and interrupt conversations. To overcome these drawbacks, we designed ParaGlassMenu, a semi-transparent circular menu that can be displayed around a conversation partner’s face on an Optical See-Through Head-Mounted Display (OHMD) and interacted with subtly using a ring mouse. We evaluated ParaGlassMenu against several alternative approaches (Smartphone, Voice assistant, and Linear OHMD menus) by manipulating Internet-of-Things (IoT) devices in a simulated conversation setting with a digital partner. Results indicated that ParaGlassMenu offered the best overall performance in balancing social engagement and digital interaction needs in conversations. To validate these findings, we conducted a second study in a realistic conversation scenario involving commodity IoT devices. Results confirmed the utility and social acceptance of ParaGlassMenu. Based on the results, we discuss implications for designing attention-maintaining subtle interaction techniques on OHMDs.

If you are interested in our project, feel free to access the code on GitHub.

AR²escuer – Towards AR Evacuation Helper in Fire Disaster

Demo Video

We designed AR²escuer, an AR application that helps users evacuate during fire disasters. AR²escuer can be installed on AR glasses (e.g., NReal) to provide more stable and reliable guidance than current physical exit signs. It also gives users intuitive, multimodal guidance to ensure the information is delivered clearly and accurately.

This project won the Golden Glasses Award for Best Engineering in the first Summer Bootcamp of Future Interaction for Smart Glasses.

If you are interested in our project, feel free to access the code on GitHub.

Human Pose Estimation And Its Application in HCI

Human Pose Estimation is a method of extracting human key points from a given image or video. We analyzed a variety of existing Human Pose Estimation models, selected the OpenPose model to realize behavior recognition based on pose estimation, and designed specific human-computer interaction applications for smart-home scenarios.

  1. Using the Human Pose Estimation model, we address fall detection for the elderly living alone. Background subtraction is applied to the input image in this single-person scenario, which helps improve the accuracy of the OpenPose model in single-person detection. In this project, rule-based and learning-based methods are developed to process the body key points obtained from OpenPose and achieve fall detection. The project also sends warning emails automatically to notify family members that the elderly person may have fallen.
  2. Using the Human Pose Estimation model, we address detection of bad posture in children watching TV. After background subtraction, this project uses a rule-based method to process the body key points obtained from OpenPose, detecting bad posture and issuing a notification. The project also provides an API for TV or smart-home device manufacturers.
    The code for the above two applications is now open-sourced on GitHub and can be accessed through this link.
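To illustrate the rule-based idea, here is a minimal sketch of a fall check on pose key points. The joint names, coordinate convention, and threshold below are illustrative assumptions, not the project's exact rules: a person lying down has a small vertical shoulder-to-hip distance relative to the horizontal spread of the body.

```python
def is_fallen(keypoints, ratio_threshold=1.0):
    """Rule-based fall check on OpenPose-style key points.

    keypoints: dict mapping joint name -> (x, y) in image coordinates,
    with y growing downward. Joint names and the threshold are
    illustrative, not tuned project values.
    """
    xs = [x for x, _ in keypoints.values()]
    shoulder_y = (keypoints["left_shoulder"][1] + keypoints["right_shoulder"][1]) / 2
    hip_y = (keypoints["left_hip"][1] + keypoints["right_hip"][1]) / 2
    vertical = abs(hip_y - shoulder_y)   # torso height in pixels
    horizontal = max(xs) - min(xs)       # body width in pixels
    # Standing: torso height dominates; lying: horizontal spread dominates.
    return horizontal > ratio_threshold * vertical

# Upright person: shoulders well above hips, narrow spread
standing = {"left_shoulder": (100, 50), "right_shoulder": (140, 50),
            "left_hip": (105, 150), "right_hip": (135, 150)}
# Lying person: shoulders and hips at similar heights, wide spread
lying = {"left_shoulder": (50, 200), "right_shoulder": (60, 210),
         "left_hip": (200, 205), "right_hip": (210, 215)}
print(is_fallen(standing))  # False
print(is_fallen(lying))     # True
```

In practice one would also use the per-keypoint confidence scores OpenPose returns and smooth the decision over several frames before triggering a warning email.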

How to Connect and Control Mi IoT Devices in Python

Step 1

Ensure your devices can connect to your smartphone or computer over Wi-Fi. Devices using other protocols may require a bridge/gateway.

Step 2

Find your device’s model, and look up the corresponding API in the MIIO package.

Step 3

If you fail to find an API for your device, try searching for your device’s model in MIIO’s GitHub Issues.

If you cannot find any solution there and your device supports Wi-Fi connection, try the solution in Step 4.

Step 4

If your device is not yet supported by the MIIO API, e.g., the MI Smart Power Plug 2, you can try the solution I posted in a GitHub Issue.

You can check your device’s JSON spec file using the link. First, search for the device on the home page, e.g., Mi Smart Plug (WiFi), then click the model link on the page, e.g., chuangmi.plug.hmi206. There you can see the SIID, which indicates the service ID, and the PIID, which indicates the property ID.

With the SIID and PIID, you can set a device’s status in Python using MIIO’s Device class and its send function. For example, to switch the socket on, use the following code:

from miio.device import Device

# Replace with your device's IP address and token
plug = Device("DEVICE_IP", "DEVICE_TOKEN")
# siid=2, piid=1 map to the plug's on/off property in its spec
print(plug.send("set_properties", [{"did": "MYDID", "siid": 2, "piid": 1, "value": True}]))