Date:  5 May 2019 (Sunday)

Time:  9:00 – 17:00

Venue:  SEC Centre Glasgow, UK (Room: Argyll 3 Crowne Plaza Hotel)


9:00 – 9:20  Opening Remarks from Organizers
9:20 – 10:00  Keynote Speech 1 (Daria Loi) (See below)
10:00 – 11:30  Poster and Demo Session 1 & Coffee Break 1 (See below)
11:30 – 13:00  Lunch and Preparation for Session 2
13:00 – 14:30  Poster and Demo Session 2 (See below)
14:30 – 15:10  Keynote Speech 2 (Hiroshi Ishii) (See below)
15:10 – 15:30  Coffee Break 2
15:30 – 16:10  Keynote Speech 3 (Wataru Yamada) (See below)
16:10 – 16:50  Keynote Speech 4 (Takeo Igarashi) (See below)
16:50 – 17:00  Reflection, Award and Closing

Invited Talks

Daria Loi

Daria Loi is a senior technical leader with 20+ years’ experience in and passion for mixing design strategy with agile UX research & innovation to enrich people’s everyday life. In her current Principal Engineer role at Intel Corporation, she focuses on distributed sensing and AI, with emphasis on smart home, aging in place, and smart workspaces. Dr. Loi has a long track record of exploring novel territories and leading through design innovation. Her seminal work on people’s use of touch screens, for instance, played a crucial role in enabling today’s touch-enabled laptops. Before Intel, she was an architect in Italy and a Senior Research Fellow at RMIT University in Australia. She has conducted research and presented her work on most continents, published 60+ papers and 10+ patents, and serves as chair or committee member for numerous journals, institutes, and conferences. In 2018 she was recognized as one of Italy’s 50 most inspiring women in tech (InspiringFifty).

Intelligent Systems for Aging in Place

This keynote reflects on the role that intelligent systems and ambient computing may play in future homes and cities, with a specific emphasis on populations aged 65 and beyond. Leveraging insights from a global study that she recently led, Dr. Daria Loi will contextualize the research domain, overview her ambient computing vision, and discuss implications and opportunities of designing intelligent systems for older adults. Her talk will ultimately advocate for adopting Participatory Design approaches, to ensure that intelligent, ambient technologies are developed with (instead of for) end users.

Hiroshi Ishii

Hiroshi Ishii is the Jerome B. Wiesner Professor of Media Arts and Sciences at the MIT Media Lab. After joining the Media Lab in October 1995, he founded the Tangible Media Group to make digital tangible by giving physical form to digital information and computation. Here, he pursues his visions of Tangible Bits (1997) and Radical Atoms (2012) that will transcend the Painted Bits of GUIs (Graphical User Interfaces), the current dominant paradigm of HCI (Human-Computer Interaction).

He is recognized as a founder of Tangible User Interfaces (TUIs), a new research genre based on the CHI ’97 “Tangible Bits” paper presented with Dr. Brygg Ullmer in Atlanta, Georgia, which led to the spinoff ACM International Conference on Tangible, Embedded and Embodied Interaction (TEI) in 2007. In addition to academic conferences, “Tangible Bits” was exhibited at the NTT ICC (2000) in Tokyo, Japan, at the Ars Electronica Center (2001-2003) in Linz, Austria, and at many other international arts & design venues. For his Tangible Bits work, he was awarded tenure by MIT in 2001 and elected to the CHI Academy in 2006.

In 2012, he presented his new vision of “Radical Atoms” to leap beyond “Tangible Bits” by assuming a hypothetical generation of materials that can change form and properties dynamically and computationally, becoming as reconfigurable as pixels on a GUI screen. His team’s Radical Atoms works, including Shape Displays and Programmable Materials, contributed to form the new stream of “Shape-Changing UI” research in the HCI community.

His “Radical Atoms” vision was selected as the overarching theme of the Ars Electronica Festival 2016, under the subtitle “The Alchemists of our Time.” His team ran a three-year Radical Atoms exhibition at the Ars Electronica Center, which has been extended to run through 2019.

Ishii and his team have presented their visions of “Tangible Bits” and “Radical Atoms” at a variety of academic, design, and artistic venues (including ACM SIGCHI, ACM SIGGRAPH, Industrial Design Society of America, AIGA, Ars Electronica, ICC, Centre Pompidou, Victoria and Albert Museum, Cooper Hewitt Design Museum, Milan Design Week) emphasizing that the design of engaging and inspiring tangible interactions requires the rigor of both scientific and artistic review.

Prior to joining the MIT Media Lab, Ishii led the CSCW (Computer-Supported Cooperative Work) research group at NTT Human Interface Laboratories, Japan, from 1988 to 1994, where he and his team invented TeamWorkStation and ClearBoard. He received a B.E. degree in electronic engineering, and M.E. and Ph.D. degrees in computer engineering, from Hokkaido University, Japan, in 1978, 1980, and 1992, respectively.

His greatest treasure is the email message he received from Dr. Mark Weiser in 1997 regarding his CHI ‘97 Tangible Bits paper which was on the verge of rejection.


Making Digital Tangible

Today’s mainstream Human-Computer Interaction (HCI) research primarily addresses functional concerns – the needs of users, practical applications, and usability evaluation. Tangible Bits and Radical Atoms are driven by vision and carried out with an artistic approach. While today’s technologies will become obsolete in one year, and today’s applications will be replaced in 10 years, true visions – we believe – can last longer than 100 years.

Tangible Bits seeks to realize seamless interfaces between humans, digital information, and the physical environment by giving physical form to digital information and computation, making bits directly manipulatable and perceptible both in the foreground and background of our consciousness (peripheral awareness). Our goal is to invent new design media for artistic expression as well as for scientific analysis, taking advantage of the richness of human senses and skills we develop throughout our lifetime interacting with the physical world, as well as the computational reflection enabled by real-time sensing and digital feedback.

Radical Atoms leaps beyond Tangible Bits by assuming a hypothetical generation of materials that can change form and properties dynamically, becoming as reconfigurable as pixels on a screen. Radical Atoms is the future material that can transform its shape, conform to constraints, and inform users of its affordances. Radical Atoms is a vision for the future of Human-Material Interaction, in which all digital information has a physical manifestation, thus enabling us to interact directly with it.


I will present the trajectory of our vision-driven design research from Tangible Bits towards Radical Atoms, illustrated through a variety of interaction design projects that have been presented and exhibited in Media Arts, Design, and Science communities. These emphasize that the design for engaging and inspiring tangible interactions requires the rigor of both scientific and artistic review, encapsulated by my motto, “Be Artistic and Analytic. Be Poetic and Pragmatic.”

Wataru Yamada

Wataru Yamada was born in 1987. He is a researcher at NTT DOCOMO and a doctoral student at The University of Tokyo. He received an M.A.S degree from The University of Tokyo in 2012, joined NTT DOCOMO the same year, and has since worked in the HCI field. His interests include drones, AR/VR, ubiquitous computing, and machine learning. He is a member of IPSJ and ACM.

Human-Drone Interaction

Technological improvements such as better batteries, greater processing power and advanced sensors have dramatically cut the prices of drones and made them a popular consumer product.

We have been investigating various drones and techniques for interacting with them to create a new spatial platform in the real world. For example, we have developed a spherical drone that can display images in all directions and a safe drone that can fly without a propeller. In this presentation, we introduce these systems, outline the emerging research area of “Human-Drone Interaction,” and discuss its possibilities.

Takeo Igarashi

Takeo Igarashi is a professor in the Department of Creative Informatics at The University of Tokyo. He received a PhD from the Department of Information Engineering, The University of Tokyo, in 2000. His research interest is in user interfaces in general, and his current focus is on interaction techniques for 3D graphics. He is known as the inventor of the sketch-based modeling system Teddy, and he received the Significant New Researcher Award at SIGGRAPH 2006. He is currently directing the JST CREST “HCI for Machine Learning” project.

Human-in-the-loop Computational Design

Computational design leverages computational methods such as optimization to design various forms of artifacts, ranging from visual design to product design. Most systems start with given requirements and then search for designs that satisfy them. However, in some cases the requirements are difficult to articulate at the beginning, or the search space is too large and human guidance is necessary to reach a goal efficiently. We are trying to address these issues by integrating human interaction into the design process. One of our attempts integrates physical simulation and optimization into the manual shape modeling process. Another uses crowd-sourced human computation to solve design problems. We are also developing smart devices that support situated design and fabrication. This talk will present these results and discuss future directions.

Poster and Demo Session 1

1 A Method of Action Recognition in Ego-Centric Videos by using Object-Hand Relations Akihito Matsufuji, Wei-Fen Hsieh, Hao-Ming Hung, Eri Saro-Shimokawara, Toru Yamaguchi, Lieu-Hen Chen
We present a system for integrating neural networks’ inferences by using context and relations for complicated action recognition. In recent years, first-person point-of-view (egocentric) video analysis has drawn high attention for better understanding human activity and for applications in law enforcement, life logging, and home automation. However, action recognition in egocentric video is a fundamental problem that depends on inferring several complex features. To overcome these problems, we propose context-based inference for complicated action recognition. In realistic scenes, people manipulate objects as a natural part of performing an activity, and these object manipulations are an important part of the visual evidence that should be considered as context. Thus, we take account of such context for action recognition. Our system consists of a rule-based architecture of bi-directional associative memory that uses the context of object-hand relationships for inference. We evaluate our method on a benchmark first-person video dataset, and empirical results illustrate the efficiency of our model.
2 Assessment of English Reading Ability using Bio Signals Yasuko Tsuchida, Ryota Yako, Akira Shimoda, Shigehiro Toyama, Yuki Murakami, Keisuke Takebe. N/A
3 Evaluation of Peer Tutoring Session Based on Non-Verbal Interactions Kaisei Tsujimoto, Yasuyuki Sumi
Peer tutors often improve their skills through personal reflection or feedback from other tutors; however, such evaluations can be quite subjective, especially for non-verbal interactions. Thus, we developed a way to explain peer tutoring evaluations from the perspective of quantified non-verbal interactions. In this paper, we discuss the relationship between the non-verbal interactions quantified from recorded sessions and the questionnaire feedback from other tutors. We recorded the peer tutoring sessions of recruited novice tutors and asked tutors from a student organization to evaluate them using questionnaires. Specifically, we asked the tutors to evaluate peer tutoring videos separated into specific scenes. The evaluations collected by the questionnaire survey and the non-verbal interactions in the sessions suggest that tutors share common evaluation criteria and that non-verbal factors affect some of their evaluations.
5 Analysis of Eye Tracking Characteristics in Reading Process Akira Shimoda, Yasuko Tsuchida, Shigehiro Toyama, Keisuke Takebe, Yuki Murakami
This research investigates how the difficulty level of a sentence affects the eye movements of a reader. To observe eye movements while participants read sentences of different difficulty levels, eye-movement data and reading speed were analysed and evaluated in several experiments. As a result, we found that order effects of some sentences need not be considered. In addition, the participants’ reading tendencies and English ability were found to correlate with their eye movements.
6 Analysis of Brain waves and its Characteristics in the Reading Process for Effective Feedback in English Language Learning Ryota Yako, Yasuko Tsuchida, Shigehiro Toyama, Keisuke Takebe, Yuki Murakami. N/A
7 Investigation of Midas-touches in Dwell Time Reduction Technique using Fitts’ Law for Dwell-Based Target Acquisition Toshiya Isomoto, Toshiyuki Ando, Buntarou Shizuki, Shin Takahashi
We conducted a follow-up study to investigate how many Midas-touches are caused by our previous dwell-based target acquisition technique. To investigate Midas-touches in dwell-based techniques, we first designed a movie player modeled on the movie players used in daily life. Since dwell-based techniques face the Midas-touch problem, we restricted the manipulations that users could perform so the study could be conducted smoothly. The results of the follow-up study show that Midas-touches were fewer with our previous technique than with a common dwell-based technique.
8 Gaze Position Estimation with Neural Network Masaki Wada, Shigehiro Toyama, Keisuke Takebe, Yasuko Tsuchida. N/A
9 Estimating Confidence in Voices using Crowdsourcing for Alleviating Tension with Altered Auditory Feedback Kana Naruse, Shigeo Yoshida, Shinnosuke Takamichi, Takuji Narumi, Tomohiro Tanikawa, Michitaka Hirose. N/A
16 PlanT: A Plant-based Ambient Display Visualizing Gradually Accumulated Information Yoshitala Ishihara, Shori Ueda, Yuichi Itoh, Kazuyuki Fujita. N/A
17 Development of VR Lecture System for Speaker and Audience at the Conference Mayo Morimoto, Sawako Mikami, Yosuke Motohashi
The format of lectures (sometimes called presentations, talks, or speeches) has not changed for a long time, especially at business conferences. Yet unsolved issues remain for both speakers and audiences: a speaker cannot give enough information to each audience member and must control the entire lecture alone, while audience members struggle to get enough information in a comfortable environment. Therefore, we developed the VR Lecture System, which provides the necessary information to help the audience understand and supports the speaker during the lecture through a chat bot. The audience can control the lecture and obtain more information, and the speaker can communicate with the chat bot in VR space. This paper explains the implementation and evaluation of the VR Lecture System.
19 Opportunistic Data Exchange Algorithm for Animal Wearable Device through Active Behavior against External Stimuli Keijiro Nakagawa, Atsuya Makita, Miho Nagasawa, Takefumi Kikusui, Kaoru Sezaki, Hiroki Kobayashi. N/A
21 Remote Control Experiment with DisplayBowl and 360-degree-video Shio Miyafuji, Soichiro Toyohara, Toshiki Sato, Hideki Koike
DisplayBowl is a bowl-shaped hemispherical display for showing omnidirectional images. It allows users to observe an azimuthal equidistant projection of an omnidirectional image by looking at the bowl from above. This feature addresses the inability to notice what is happening around a remote vehicle, a problem that occurs with conventional displays such as flat displays and head-mounted displays. In this paper, we report a user study in which we asked participants to control a remote drone via omnidirectional video streaming, comparing the uniqueness and advantages of three displays: a flat panel display, a head-mounted display, and DisplayBowl.
25 Exploring Performance of Thumb Input for Pointing and Dragging Tasks on Mobile Device Sanyan Sarcar, Ayumu Ono, Chaklam Silpauwanchai, Antti Oulasvirta, William Delamare, Xiangshi Ren
Thumb-based interaction is becoming increasingly popular on mobile devices. However, the interaction remains slow, ambiguous, and error-prone. This paper presents results of an exploratory user experiment on one-thumb pointing and dragging performance, based on three factors: mobile device size, target size, and posture (sitting and walking). Besides obvious findings, such as pointing while sitting being faster and less error-prone than the other conditions, we observed some surprising behaviors, such as that most users’ gripping style was casual and did not follow any formal model or structure. We distilled our observations into design implications with respect to mobile device size, posture, and gripping style.
26 PICALA: An Interactive Presentation System to Share Reaction of Audiences with Light Color Tsubasa Yumura, Yuto Lim, Yasuo Tan
In this paper, we propose a feeling-sharing system, called PICALA, which can express the feelings of the audience during a public presentation. We designed and implemented a system that changes the color of lights near the screen when audience members click buttons in a browser. The system uses a series of four-color lights. To express their feelings, each audience member is given a push-button unit with four buttons indicating “heh,” “that’s amazing,” “laugh,” and “question mark.” We conducted demonstration experiments at three workshops: WISS2014, wakate2015, and EC2015. According to subjective assessment via questionnaire surveys, the system’s main purpose of sharing feelings was achieved.
28 Preliminary Evaluation of Stroke-Based Text Entry for Virtual Reality Naoki Yanagihara, Buntarou Shizuki, Shin Takahashi. N/A
30 Math graphs for the visually impaired: Audio presentation of elements of mathematical graphs Jeongyeon Kim, Yoonah Lee, Inho Seo
The sense of sight plays a dominant role in learning mathematical graphs. Most visually impaired students drop out of mathematics because the necessary content is inaccessible. Sonification and auditory graphs have been the primary methods of representing data through sound. However, the representation of the mathematical elements of graphs is still unexplored. The experiments in this paper investigate optimal methods for representing mathematical elements of graphs with sound. The results indicate that the methods designed in this study are effective for describing mathematical elements of graphs, such as axes, quadrants, and differentiability. These findings can help visually impaired learners to be more independent, and also facilitate further studies on assistive technology.
31 MIPOSE: A Micro-Intelligent Platform for Dynamic Human Pose Recognition Zhishuai Han, Xiaokun Wang, Xiaojuan Ban, Jianyu Wu
Giving computers the ability to learn from demonstrations is important for users performing complex tasks. In this paper, we present an intelligent self-learning interface for dynamic human pose recognition. We capture 20 samples of an unknown pose to train a stable generative adversarial network (GAN) for data enhancement, then adopt a threshold isolation method to distinguish relatively similar poses. A few minutes of learning time is sufficient to train the GAN to generate qualified pose samples. Our platform provides a feasible scheme for a micro-intelligent interface, which can greatly benefit human-robot interaction.
32 Player Selection in Cricket Based on Similarity of Playing Conditions Walid Mohammad, Sadia Sharmin. N/A
35 Bridging the Female Entrepreneurs in Bangladesh using ICT Sadia Sharmin. N/A
38 WindyWall: The Findings of Exploring Creative Wind Simulations David Tolley, Nguyen Thi Ngoc Tram, Anthony Tang, Nimesha Ranasinghe, Kensaku Kawauchi, Ching-Chiuan Yen
In this paper we introduce WindyWall, a platform for the creative design and exploration of wind simulations. WindyWall is a three-panel, 90-fan array that surrounds users with 270° of wind coverage. We present 1) a brief description of the design and implementation of the system, and 2) a summary of the significant findings and implications from developing the system and using it in initial pilot studies of users’ ability to perceive different magnitudes of wind stimuli. This paper is a condensed overview of findings from the development and evaluations reported in our paper “WindyWall: Exploring Creative Wind Simulations.”
39 Exploring the Conversational User Experience with a Mental Health Chatbot SoHyun Park, Bongwon Suh
Mental health chatbots are on the rise. They detect user mood, identify signs of treatable issues, and provide solutions. However, there has been limited research on how such chatbots should engage in therapeutic communication with users. This paper summarizes a recent exploratory study with a mental health chatbot prototype, Bonobot, which delivers an automated conversational sequence for a motivational interview. We conducted a qualitative user study with 30 participants on the conversational user experience, with findings as follows: (a) a question-and-feedback sequence can potentially support a counselling conversation; and (b) Bonobot’s questions can encourage self-reflection, potentially evoking coping actions. We discuss implications for integrating a theory-informed approach in designing therapeutic conversations with mental health chatbots.
41 PlayMaker: A Participatory Design Method for Creating Entertainment Application Concepts Using Activity Data Dong Yoon Koh, Ju Yeon Kim, Donghyeok Yun, Youn-kyung Lim. N/A
42 Evaluation of Mobile Applications for Disaster Responses through Personas and Scenarios Heejae Jung, Hyunggu Jung
Disasters constantly affect Asia. To minimize damage, an immediate and effective response is important, which requires the multiple stakeholders in a disaster scenario to communicate. After identifying stakeholders and their needs through personas and scenarios illustrating earthquake situations, we evaluated existing mobile applications against those needs for disaster response, earthquakes in particular. The findings of this study suggest design implications for creating mobile applications that support multiple stakeholders in disaster responses.
43 Cooperative Deployment of Shonabondhu Nova Ahmed, Farzana Islam, Kimia Tuz Zaman. N/A
47 VibEye: Vibration-Mediated Object Recognition for Tangible Interactive Applications Seungjae Oh, Chaeyoung Park, Jinsoo Kim, Gyeore Yun, Seungmoon Choi
We present VibEye, a vibration-based sensing system that enables recognition of objects held in the hand for tangible interaction. When a user holds an object between two fingers wearing VibEye, the system triggers a vibration from one finger. The vibration that has propagated through the object is sensed at the other finger (Figure 1). Through spectrogram analysis of the received vibration (Figure 2), we build a classifier that distinguishes the object with good accuracy within a short duration (0.5 sec). In this submission, we present a tangible interactive application for 3D modeling using 15 cubes of different materials. VibEye enables the cubes to function as haptic proxies with naturally rich sensations in a virtual environment. Further details are provided in the full paper by the same authors in the main CHI proceedings.
49 Indonesian Hospital Knowledge Management Technology Characteristics Yohannes Kurniawan, Fredy Jingga, Natalia Limantara
The critical steps of the knowledge-sharing process in hospitals are converting tacit knowledge into explicit knowledge (externalization) and using that explicit knowledge to improve tacit knowledge (internalization). As we know, technology in the medical field is growing rapidly. Therefore, hospitals will be successful if they consistently create new knowledge, disseminate it to all stakeholders in their organizations, and quickly adopt the latest technologies and services, especially in the medical field.
53 UI Sketching Reskill for UX Researchers Idyawati Hussein, Masitah Chazali, Mumi Mahmud, Aziah Ahmad, Huda Ibrahim
In this paper, we highlight the importance of User Experience (UX) sketching as one of the skillsets required for UX researchers in project development, especially in countries with low participatory design awareness such as Malaysia. Results from UX research activities are often not perceived as impactful by developers, designers, and other stakeholders in digital transformation projects, especially in government, which has suffered from vendor-centric Request for Proposal (RFP) tenders for the past 60 years. Moreover, developers take longer to code from requirements captured by business analysts than from visual representations produced by UX or UI designers, which shorten the requirement-gathering process. In conclusion, we found UX sketching, which produces visual representations of user needs, to be effective, especially in a participatory design approach, and to reduce user frustration.
55 Mind the Gap: Joining Bezel-Separated Lines in Multi-mobile Systems Noris Mohd Norowi, Ong Beng Liang, Teo Rhun Ming, Rahmita Wirza O.K. Rahmat, Azrul Hazri Jantan
This paper presents a multi-mobile system that allows users to come together with their mobile devices in an ad-hoc manner and integrates them into one seamless display surface with multitouch capabilities. Typically, the gaps and bezels between displays cause inherent design problems for the multi-display structure. Two user studies were conducted with two versions of prototype designs by observing groups of students performing an interactive drawing task. A solution for the bezels was implemented in an iterative prototype. The findings show that gaps and disjointed objects were observed in the drawing outcomes; with the implementation of the Continuous Spatial Configuration, the gaps and spaces between the screens were eliminated. We believe these prototype designs can provide the next step in the evolution of collaboration beyond expensive tabletop systems.
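(Background note on entry 7 above: the dwell-time reduction technique draws on Fitts’ law. As general background, and not the authors’ implementation, a minimal Python sketch of the Shannon formulation of Fitts’ law; the constants a and b are illustrative placeholders that would normally be fit to user data.)

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty: ID = log2(D/W + 1), in bits."""
    return math.log2(distance / width + 1)

def predicted_movement_time(distance, width, a=0.2, b=0.15):
    """Fitts' law: MT = a + b * ID (seconds).
    a and b are illustrative constants, not values from the paper."""
    return a + b * index_of_difficulty(distance, width)

# Smaller or more distant targets are harder (higher ID, longer predicted time):
easy = predicted_movement_time(distance=100, width=100)  # ID = 1 bit
hard = predicted_movement_time(distance=700, width=100)  # ID = 3 bits
```

A dwell-based technique could, for example, scale its dwell threshold with the predicted difficulty of the target just acquired.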

Poster and Demo Session 2

4 A Human-Friendly Fluid Measurement Technology using Artificial Fish Eggs as a Tracer Shogo Yamashita, Shunichi Suwa, Takashi Miyaki, Jun Rekimoto. N/A
10 Pressure-Based One-Handed Interaction Technique for Large Smartphones Using Cursor Kyohei Hakka, Toshiyuki Ando, Buntarou Shizuki, Shin Takahashi N/A
11 Design of Embodied Experience in Picture-Book Storytelling Mina Shibasaki, Kouta Minamizawa
We developed a vibrotactile cushion used during picture-book storytelling that grabs children’s attention and helps them focus on the story. This project is a collaboration between a university, a printing company, and a picture-book company, and we designed the system based on user studies at a kids’ space. In empirical field tests, we confirmed that children had various positive reactions to the vibrotactile experience combined with picture-books read aloud. In this paper, we discuss the possibilities and versatility of our system, the effect of the embodied experience with picture-books, and the feedback we obtained.
12 A Design of Eyes-Free Kana Entry Method Utilizing Single Stroke for Mobile Devices Yuta Urushiyama, Takuto Nakamura, Buntarou Shizuki N/A
13 A Sensing Technique for Data Glove Using Conductive Fiber Ryosuke Takada, Junichiro Kadomoto, Buntarou Shizuki
We show a hand-shape estimation and grabbed-tag differentiation technique for a data glove using conductive fiber included in the glove. With this technique, hand shape (the bending of each finger and contact between fingers) is detected and a grabbed tag is differentiated. The electrical resistance of the conductive fiber is used to estimate the bend of each finger; this resistance decreases as the finger bends because the surface of the glove is short-circuited. To detect finger contact, alternating currents of different frequencies are applied to each finger, and the signal propagation between fingers is observed. The same principle is used to differentiate a grabbed tag: each tag carries an alternating current of a unique frequency. We show a prototype data glove using this technique.
14 Smart Maneuvering Assist System by Galvanic Vestibular Stimulation Keisuke Otani, Shigehiro Toyama, Kenji Kamimura, Fujio Ikeda N/A
15 Influence of Individual Difference for Mapping to Represent Human Motion with Humanoid Robot Rio Ito, Shigehiro Toyama, Keisuke Takebe, Fujio Ikeda N/A
18 Touch Interface Design System in Multilayered Urushi Circuit Takuto Nakamura, Koshi Ikegawa, Shogo Tsuchikiri, Keita Saito, Kazushi Kamezawa, Yuki Hashimoto, Buntarou Shizuki N/A
20 Capacitance- and Phase-Based Detection Technique of Finger Bend and Touched Hand Using Ring-Shaped Device Minto Funakoshi, Koshi Ikegawa, Buntarou Shizuki. N/A
22 Perception of Spatial Information of Animated Content on Physically Moving Display Yuki Onishi, Anthony Tang, Yoshiki Kudo, Kazuki Takashima, Yoshifumi Kitamura
The Living Wall display augments content representation by coupling simple physical movements of the screen with spatial elements of the content animation; it could become a new type of expressive 3D display. To explore its characteristics, we first examine the effect of physical screen movements on viewers’ perception of the displayed spatial information (e.g., depth perception). We conducted a perception study in which viewers’ perceptions of object animations, rotation and depth-movement, were compared between two display conditions: the Living Wall display (with physical screen movement) and a static visual display (baseline). Results show that viewers overestimated the degree of content rotation when using the Living Wall display, but no difference between the display types was observed for the other animation. We discuss insights into designing physically moving 3D displays.
23 Stacked-Block Distinction System Based on Resistance Measurement Keita Saito, Toshiyuki Ando, Hirobumi Tomita, Buntarou Shizuki N/A
24 Practical Automatic Information Authoring System for Information Guidance Takenori Hara, Masuhiro Ozawa, Iori Sugahara, Mari Andou
In this paper, we propose a method to automatically assign attributes to guidance information and an automatic authoring method for information according to user attributes. This makes it possible to provide an information guidance system at low cost. We also describe the problem of “reading” that occurs in Japanese and its solution. Our work is not yet complete: we have just finished developing the system and plan to evaluate it next.
27 Augmented Typing: Augmentation of Keyboard Typing Experience by Adding Visual and Sound Effects Tsubasa Yumura, Satoshi Nakamura
People choose a keyboard for various features, such as key arrangement, repulsion strength, stroke depth, and stroke sound. Currently these features depend on the hardware; however, if a physical keyboard could be customized with software, users would be able to arrange it to their preference. In this paper, we propose augmented typing, which augments the typing experience by adding visual and sound effects to a physical keyboard. We implemented a prototype of the proposed system using projection mapping. In addition, we conducted experiments to evaluate the system’s effect and usefulness. The results showed that evaluations of the sound effects varied more than those of the visual effects; users found the visual effects beautiful, whereas the sound effects were often perceived as annoying.
29 Be Bait!: Hammock-based Interaction for Enjoyable Underwater Swimming in VR Shotaro Ichikawa, Yuki Onishi, Daigo Hayashi, Akiyuki Ebi, Isamu Endo, Aoi Suzuki, Anri Niwano, Kazuyuki Fujita, Kazuki Takashima, Yoshifumi Kitamura We present a novel interaction technique for virtual underwater swimming using a hammock-based device and demonstrate its reality, intuitiveness, and enjoyment through an installation called Be Bait!, in which the user becomes bait and lures a shark. With our technique, the user, wearing an HMD, lies over two hammocks and repeatedly sways her body from side to side to swim underwater like a fish. We implemented a mechanism consisting mainly of hammocks and an acceleration sensor, and additionally installed a mechanism for the Be Bait! installation that provides the user with the haptic sensation of being bitten by a shark. We gathered feedback from 157 users at an exhibition event; it shows that most participants easily and intuitively explored and interacted with the virtual sea and enjoyed the immersive and attractive experience.
33 Detecting Negative Emotions during Social Media Use on Smartphones Mintra Ruensuk, Hyunmi Oh, Eunyong Cheon, Hwajung Hong, Ian Oakley Emotions are integral to the social media user experience: we express our feelings, react to posted content, and communicate with emoji. This can lead to emotional contagion and undesirable behaviors such as cyberbullying and flaming. Near-real-time negative emotion detection during social media use could mitigate these behaviors, but existing techniques rely on corpora of aggregated user-generated data such as posted comments or social graph structure. This paper explores how live data extracted from smartphone sensors can predict binary affect, valence, and arousal during the typical social media tasks of browsing content and chatting. Results show that momentary emotion can be predicted from screen-touch and device-motion features, with peak F1-scores of 0.86, 0.86, and 0.88 for affect, valence, and arousal, respectively.
34 Towards a Better Video Comparison: Case Study Atima Tharatipyakul, Hyowon Lee Video comparison is a common task that people engage in when interacting with video content, for instance in video browsing and video editing. However, little guidance exists to prompt and assist designers with comparison tasks when designing user interfaces and interactions for a video application. To develop such guidance, we are synthesizing relevant knowledge from related research areas and from our own prototypes into a theoretical model that captures the key elements, and their relationships, that characterise the interaction of video comparison. Here, we present a design derived from this model and an informal user study that helped us test and refine the model. The case study demonstrates the model's potential both in shedding new light on how applications that support video comparison could be better designed and in offering avenues for future research on applications that depend on video comparison, regardless of domain.
37 Commitment Devices in Online Behavior Change Support Systems Hyunsoo Lee, Hwajung Hong, Uichin Lee Commitment devices (self-imposed contracts that help an individual stick to a plan of action) have been widely used to positively influence behavior change. We analyze commitment contract posts in an online behavior change support system to characterize the types of target behaviors and the effectiveness of different commitment devices for behavior change. We provide several practical implications for designing behavior change support systems, which could inform further research directions in behavioral economics and psychology.
40 Understanding How Patients with Chronic Conditions Make Assumptions on Various Types of Self-Tracking Data Yoojung Kim, Joongseek Lee Owing to the prevalence of self-tracking technologies, patients with chronic conditions can track their everyday activities and test the relationships between them. However, forming testable assumptions for analyzing various types of data is often challenging for laypeople. In this paper, we therefore present the types of assumptions and expressions used to test relationships between varieties of data, based on in-depth interviews with 20 chronic disease patients. Participants made 49 assumptions; the most frequent pair was meals–weight (observed 19 times), followed by weight–activity (6 times). Participants used specific phrases such as "a sudden increase in" (10 times) and "a steady decrease in" (9 times). This study helps in understanding people's interests in various types of data and can thereby contribute to the design of self-experimentation tools for testing self-tracking data.
44 Exploring Traditional Handicraft Learning Mode using WebAR Technology Ji Yi, Tan Peng, Zhou Jiayin, Fu Tieming N/A
45 CarolTree: An Interactive Storytelling Based Music Education Game for Kids Hwarim Hyun, Hyeyoon Kim, Na Hyun Kwon, Chaera Ryu, Soo Youn Lee, Changhoon Oh N/A
48 Towards the Development of Bengali Language Corpus from Public Facebook Pages for Hate Speech Research Alvi Md. Ishmam, Jawad Arman, Sadia Sharmin Detection of online abusive or hateful speech in different languages on Social Networking Sites (SNS) has recently drawn the attention of researchers. Hateful comments on public Facebook pages ignite social mishaps in Bangladesh. In this paper, we discuss the development and annotation of a corpus of hateful speech in the Bengali language based on public Facebook pages. We classify hateful comments into six major classes based on lexicons, following the socio-cultural aspects of Bangladesh. The corpus (4,753 comments) is a maiden contribution as a publicly available data set, which can be extended and utilized for future hate speech research in SNS.
50 An Innovative User Interface under Awfully Rigorous Conditions: Test Question Browser App for Japanese Examinees with Print Disabilities Kazunori Minatani, Yoshimi Matsuzaki, Keita Kusumoki, Toshimitsu Yamaguchi N/A
52 Mobile Learning: Enhancing Social Learning Amongst Millennials Eunice Sari, Adi Tedjasaputra Learning in the digital age potentially enables learners to seize many opportunities to connect with resources, information, peers, and experts in order to learn effectively. However, the conventional way of learning, which focuses on teacher-centered instruction, is still heavily practiced. This conventional learning is no longer suitable for preparing young people for competitive workplaces in the digital age. Digital technology offers many opportunities to reimagine and redefine learning. Mobile technology, for example, is already in the hands of its users and should be exploited as a tool to transform learning practice. This paper reports preliminary results of UX design research conducted in Indonesia, the fourth-biggest market for mobile technology, where we explore how to maximize the use of mobile technology as a learning tool in higher education. The goals of the research are, first, to instigate a change in learning among Indonesian millennials at the higher education level and, second, to develop requirements and a design for a mobile learning ecosystem.
54 Designing Technology to Address Distress of TB Patients' Emotional States: Findings from Contextual Inquiry Haliyana Khalid, Masitah Ghazali This paper introduces our early work on designing a remote application for tuberculosis (TB) treatment. Using contextual inquiry, we obtained information and insights about the Directly Observed Therapy Short Course (DOTS) treatment for TB patients to understand their experiences during treatment, together with their emotional states and feelings. We discuss how these findings help us in designing technological solutions for these patients.