Date: 22 Apr 2018 (Sunday)
Time: 9:00 – 17:00
|9:00 – 9:15||Opening Remarks from Organizers|
|9:15 – 9:30||Icebreaker|
|9:30 – 10:15||Keynote Speech 1 (Aaron Quigley) (See below)|
|10:15 – 10:30||Preparation for Poster Presentation 1|
|10:30 – 11:30||Coffee break 1|
|10:30 – 12:00||
Poster Presentation 1 (13 posters)
10:30 – 11:15 7 posters
11:15 – 12:00 6 posters
|12:00 – 13:00||Lunch and Preparation for Poster Presentation 2|
|13:00 – 14:30||
Poster Presentation 2 (13 posters)
13:00 – 13:45 7 posters
13:45 – 14:30 6 posters
|14:30 – 15:15||Keynote Speech 2 (Youn Kyung Lim) (See below)|
|15:15 – 15:30||Coffee Break 2|
|15:30 – 16:15||Keynote Speech 3 (Yasuyuki Sumi) (See below)|
|16:15 – 16:45||Talk: ACM SIGCHI Event in Asia|
|16:45 – 17:00||Closing|
Professor Aaron Quigley is the Chair of Human Computer Interaction and Director of Impact in Computer Science at the University of St Andrews in Scotland. He is director of SACHI, the St Andrews Computer Human Interaction research group. Aaron’s research interests include novel and on-body interaction, discreet human computer interaction, pervasive and ubiquitous computing, and information visualisation.

Aaron has delivered over 50 invited talks and is a keynote speaker at the IEEE VISSOFT 2018 conference and the Mensch-und-Computer conference in 2019. Aaron is the ACM SIGCHI Vice President of Conferences, a board member of ScotlandIS, and a member of the MobileHCI steering committee.

His research and development has been supported by the EPSRC, AHRC, JISC, SFC, NDRC, EU FP7/FP6, SFI, Smart Internet CRC, NICTA, Wacom, IBM, Intel, Microsoft and MERL. He has published over 170 internationally peer-reviewed publications, including edited volumes, journal papers, book chapters, and conference and workshop papers, and holds 3 patents.

Aaron has held academic and industry appointments in Australia, Japan, the USA, Germany, Ireland and the UK. He is the program co-chair for the ACM IUI 2018 conference, and was the ACM MobileHCI 2014 General Co-Chair, ACM PerDis 2014 Program Chair, ACM UIST 2013 General Co-Chair and ACM ITS 2013 General Co-Chair. In total, Aaron has held chairing roles in thirty international conferences and has served on over ninety conference and workshop program committees.

His research and personal goal is to empower the next billion people with a step change in human-machine interaction, through advanced yet subtle and discreet human interface technologies that bridge the divide between the physical world we live in and the digital world, where the power of computing currently resides.
Interaction is all around us, whether with devices on and around our body or with fixed and mobile systems. Computational interaction is bleeding into the very fabric of our day-to-day lives, which presents many challenges and opportunities for HCI. Today, our points of interaction come in varying form factors, from small smartwatches, head-mounted displays, and mobiles to large automotive, appliance, fixed computing, and public display systems.
Yasuyuki Sumi has been a professor at Future University Hakodate since 2011. Before joining the university, he was a researcher at ATR and an associate professor at Kyoto University. He received a B.Eng. from Waseda University in 1990, and M.Eng. and D.Eng. degrees in Information Engineering from the University of Tokyo in 1992 and 1995, respectively. His research interests include the experience medium, lifelogs, knowledge-based systems, creativity support systems, interface/social agents, ubiquitous/wearable computing, Web intelligence, multimedia processing, and their applications for facilitating human interaction and collaboration.
Experience Medium Situated in Real-World Contexts
This talk presents the notion of an “experience medium,” with which we can capture, interpret, and share our daily experiences embedded in real-world contexts. Every day we communicate and create knowledge and skills through conversation and collaborative work. In order to memorize and utilize such knowledge gained through experience, we focus on contextual information such as actions during the experience, the people participating, and the objects referred to by joint attention or pointing. Context serves not only as a clue for accessing experiential knowledge but also as a prerequisite for knowledge creation. We have developed various application systems for recording and recalling experiential data. The common approach across them is not to rely on semantic interpretation of recorded data, but to structure and recall experiential knowledge through contextual information. In this talk, I will present our projects, e.g., a collaborative experience capture system with wearable/ubiquitous sensors, a town soundscape with situated in-car conversation, a bookshelf conversation system, etc.
Dr. Youn-kyung Lim is an associate professor in the Department of Industrial Design at KAIST in South Korea. Before joining KAIST, she was an assistant professor at the School of Informatics and Computing at Indiana University Bloomington.
Prof. Lim received her Ph.D. at the Institute of Design at the Illinois Institute of Technology (IIT) in Chicago, Illinois, and holds a Master of Design (M.Des.) in Human-centered Design from the same university. She holds a B.S. in Industrial Design from KAIST.
She has served as a technical program co-chair, paper co-chair, organizing committee member, and program committee member for major international conferences in the areas of HCI and design, such as CHI, UbiComp, DIS, DRS, and IASDR. She is also an editorial board member of the Journal of Visual Languages and Computing.
Her research directions include experience-centered design and aesthetics of interaction, as well as prototyping in interaction design, especially for creative interaction design in the domains of CHI, DIS, UbiComp, and CSCW. She is a recipient of the 2009 Microsoft Research New Faculty Award from Microsoft Research Asia (MSRA).
User Experience Design for Smart Products
We are now facing newly emerging forms of smart interactive products in our everyday lives, such as voice-interactive intelligent assistants, chatbots, and IoT devices. We interact with these products in natural language. They proactively learn about us and offer services based on that learning. Their forms and functionalities are not connected in the way we have come to expect from traditional interactive products. For example, agents or robots are not limited to affording only particular or specific functionalities; they can provide any service people may want. Their functions are not fixed: people can create new functions they want through these customizable systems, and the systems learn about their users and evolve over time. In this talk, the speaker will raise questions about how we should rethink people’s experience with these newly emerging forms of smart interactive products, and how we can define and design the user experience of these products, by introducing some of her research outcomes from recent years.
Poster and Demo
|1||Narrative Instructional Creation Toolkit for Interactive Platforms for Learning||Toni-Jan Keith Monserrat, Ferdinand Pitagan, Eliezer Albacea||This paper presents a draft framework and toolkit plan for creating interactive learning modules for tutors and teachers, with the goal of making it easy enough for teachers to use. The interactive learning modules follow the narrative method of teaching.|
|2||Ohmic-Touch: Extending Touch Interaction by Indirect Touch through Resistive Objects||Kaori Ikematsu, Itiro Siio||When an object is interposed between a touch surface and a finger/touch pen, the change in impedance caused by the object can be measured by the driver software. This phenomenon has been used to develop new interaction techniques. Unlike previous works that focused on the capacitance component in impedance, Ohmic-Touch enhances touch input modality by sensing resistance. We implement mechanisms on touch surfaces based on the electrical resistance of the object: for example, to sense the touching position on an interposed object, to identify each object, and to sense light, force, or temperature by using resistors and sensors. (Demo movie)|
|3||Understanding the Effects of Drivers’ Perceived Quality of Information and Suggestions on the Use of Social Navigation Applications||Briane Paul V. Samson, Yasuyuki Sumi||Social navigation applications like Waze have recently gained popularity as more localities experience regular traffic congestion. They use crowd-sourced traffic data to suggest the fastest route to their users. While most literature focuses on designing efficient ways to sense data and developing techniques to reduce sparsity and error, there is not much work investigating how the perceived quality of the provided traffic information and optimal routes affects actual navigation behavior. In this preliminary work, we conducted an initial semi-structured qualitative study with six drivers who use social navigation applications. Apart from interviews, we also recorded some of their trips. Our initial analysis revealed their criteria for choosing a route to follow, changes in navigation behavior, and reliance on the prior knowledge of others. We plan to extend this in future work to derive design considerations for improving the trustworthiness of social navigation systems.|
|4||Emotionally Moving Game Experiences of Japanese Players||Julia Ayumi Bopp||Emotions are key to the player experience, and interest in emotionally challenging digital games has been growing, both among HCI researchers and players. While the CHI community has stressed the importance of taking people’s cultural background into account, studies of Japanese players’ gaming experiences are rare. Considering Japan’s unique and long-standing game culture, studying their emotional gaming experiences may prove insightful. In this paper, I present findings from an ongoing study of Japanese players’ emotionally moving game experiences.|
|5||Robotized Speed Dating: Conversing with Robot Agents instead of People Seeking Romantic Partners||Takuya Iwamoto, Kazushi Nishimoto||Speed dating is a good way of finding a romantic partner. However, it is difficult for shy people, such as the typical Japanese, to converse at a social event with someone they have met for the first time, due to the initial barriers of conversation. This study proposes “robotized speed dating” (RSD), in which robots substitute for people as conversation partners at speed dating events. At an event, a robot can always accompany a participant and talk about the participant instead of the participant talking about themselves. As a result, the human attendees can get to know each other without worrying about initial barriers, such as finding and talking about attractive topics.|
|6||Toward A Crowdsourcing Platform For Real-Time Computing-Based Microtask Scheduling||Susumu Saito, Jeffrey P. Bigham, Teppei Nakano, Tetsunori Kobayashi||Response time control in microtask crowdsourcing is gaining importance among many requesters. Current techniques for such control are only available by always returning real-time answers, which is usually expensive. We present a real-time computing-based crowdsourcing platform for scheduling microtasks by deadlines specified by requesters. Requesters are also provided with an estimated cost for the specified deadlines before their microtasks are posted, so they can iteratively adjust the balance between time requirements and budget until they agree to the suggested conditions. We formalize several challenges in designing our platform architecture and describe possible directions for addressing these problems.|
|7||Integrated Learning System by using Bi-directional Associative Memory||Akihiro Matsufuji, Wei-Fen Hsieh, Eri Sato-Shimokawara, Toru Yamaguchi||We present a system for integrating pre-trained neural networks (NNs). NNs are applicable to a wide range of problems in speech, vision, and language, and pre-trained NN chips are available on devices. To resolve complex tasks, it is necessary to combine specific NNs. However, pre-trained NNs are difficult to combine and modify. Thus, our system integrates NNs using bi-directional associative memory (BAM). The results show that our system is more accurate than NNs trained on all sources.|
|8||Smart City Readiness in Malaysia: The Role of HCI and UX||Masitah Ghazali, Idyawati Hussein, Nor Laila Md Noor, Murni Mahmud||A smart city is not just about equipping the city with sensors to collect and push electronic data to citizens for the sake of keeping up with the Internet of Things trend. What is far more important is to also equip citizens to become smart citizens. In this paper, we describe the journey of empowering Malaysian citizens by considering HCI and UX, which is also part of the bigger plan of the KL ACM Chapter to provide an ecosystem of symbiotic collaborations between academia, government agencies, and industries.|
|9||Case study of a pedaling assist system for piano players with lower leg muscle weakness||Akira Uehara, Hiroaki Kawamoto, Yoshiyuki Sankai||Pedaling is one of the most important operations in playing the piano. However, it is difficult for people who have lower leg muscle weakness to perform stable pedaling. The purpose of this study is to develop a pedaling assist system that can be placed at an arbitrary position around the underfoot area and control the pedaling based on the extent of the player’s effort. The system consists of a master pedal near the affected side and a slave pedal near the pedal of the piano. The master pedal measures the angle as it is operated by the player’s foot, and the slave pedal controls the power unit and operates the piano pedal based on the input from the master pedal. In experiments with an able-bodied person and a player with lower leg muscle weakness, we measured the angle and height of a piano damper. The results showed that the participant was able to operate the pedal stably. In conclusion, we confirmed the feasibility of the system.|
|10||Pokemon Go Influences Where You Go: Analyzing the Effects of Location-based Services for Location Prediction||Keiichi Ochiai, Yusuke Fukazawa, Wataru Yamada, Hiroyuki Manabe, Yutaka Matsuo||Predicting user location is one of the most important topics in the field of location data analysis. While it is reasonable that human mobility is predictable for frequently visited places such as home and the workplace, location prediction for novel places is much more difficult. However, location-based services (LBSs) such as Pokémon Go can influence user destinations, and we can exploit this to achieve more accurate location prediction even for new locations. In this paper, we conduct an experiment that assesses the behavioral differences between Pokémon Go users and non-users. Then we perform a simple machine learning experiment to analyze how Pokémon Go usage impacts location predictability. We assume that users who use the same LBS tend to visit similar locations. We find that the novel location predictability of Pokémon Go users is 53.8% higher than that of non-users.|
|11||Automating Usability Heuristic Evaluation of Websites Using Convolutional Neural Networks||Ryan Austin Fernandez, Jordan Aiko Deja||Heuristic evaluation is an important form of quality assurance and an important phase of UX design. It typically takes a long time since it involves consolidating the opinions of multiple design experts. It is desirable to automate the detection of usability issues in a given user interface design, to lessen the expense and time needed to hire professionals and to focus on “development, review, revision” cycles. This paper proposes a data-driven method using Convolutional Neural Networks (CNNs), which can learn complex patterns and information from images. A computational model using CNNs will be developed to determine usability features from screenshots of user interfaces that have been labeled by UX experts.|
|12||Beautifying Profile Pictures in Online Dating: Dissolving the Ideal-Reality Gap||Takuya Iwamoto, Yuta Miyake, Kazutaka Kurihara||With the drastic expansion of the online dating service market, attractive profile pictures are vital in the competitive world of dating. To attract others with these pictures, photo editors are helpful. However, enhanced profile pictures produce an ideal-reality gap: the more a profile picture is beautified, the wider the gap between the image and the actual person, which can cause discomfort when two users meet in person. A solution to the gap problem is gradually reverting the beautified image to the non-edited image over time, which was supported by our first experiment testing whether subjects could notice gradual changes in given profile pictures over a certain time. Additionally, we conducted an experiment in which one group saw gradual changes to a beautified image while another saw only the beautified image, and the subjects’ willingness to meet the person in the image was compared. This paper discusses both experiments.|
|13||Effect of Varying Combinations of Cutting Skill and Difficulty Level on Practice Efficacy||Takafumi Higashi, Hideaki Kanai||In this paper, we measure both the difficulty level of a paper-cutting task and the cutting skill of participants, and investigate whether practice efficacy can be improved by using different combinations of difficulty and skill. The cutting patterns consisted of straight lines and curves, and we measured their index of difficulty (ID) using a method based on the steering law. To achieve these measurements, we developed a system consisting of a drawing display and a stylus. The system measures cutting skill according to the index of difficulty of the cutting line and the movement time (MT), based on the steering law. We confirmed skill improvements by measuring changes in MT after novices repeatedly practiced with various patterns. Additionally, we measured the effects of practice for novices cutting a specific pattern. Novices who practiced tasks suited to their skill levels showed greater improvement. In contrast, we confirmed that practice with a task that was overly complicated produced only weak improvement.|
|14||Nothing is More Revealing than Body Movement: Measuring the Movement Kinematics in VR to Screen Dementia||Kyoungwon Seo, Hokyoung Ryu||The inability to complete instrumental activities of daily living (IADL) is an early sign of dementia. Questionnaire-based assessments of IADL are easy to use but prone to subjective bias. Here, we describe a novel virtual reality (VR) test to assess two complex IADL tasks: handling financial transactions and using public transportation. While a subject performs the tasks in a VR setting, a motion capture system traces the position and orientation of the dominant hand and head in a three-dimensional Cartesian coordinate system. Kinematic raw data are collected and converted into kinematic performance measures, i.e., motion trajectory, moving distance, and speed. Motion trajectory is the path of a body part (e.g., dominant hand or head) in space. Moving distance refers to the total distance of the trajectory, and speed is calculated as the moving distance divided by the time to completion. Inclusion of these kinematic measures significantly improved the classification of patients with early dementia compared to the healthy control group.|
|15||Emotional Tagging with Lifelog Photos by Sharing Different Levels of Autobiographical Narratives||Ahreum Lee, Hokyoung Ryu||The lifelogging camera continuously captures one’s surroundings, so lifelog photos can form a medium by which to sketch out and share one’s autobiographical memory with others. Frequently, lifelog photos do not convey the context or significance of a situation to those not present when the photos were taken. This paper explores the social value of lifelog photos by proposing different levels of autobiographical narratives within Panofsky’s framework. By measuring the activation level of the lateral prefrontal cortex (LPFC), known to control one’s empathy, and the narrative engagement with a questionnaire, we found that delivering the autobiographical narrative at the iconological level triggers the receiver’s empathetic response and emotional tagging of the sharer’s lifelog photos.|
|16||Dynamic Object Scanning: Object-Based Elastic Timeline for Quickly Browsing First-Person Videos||Seita Kayukawa, Keita Higuchi, Ryo Yonetani, Masanori Nakamura, Yoichi Sato, Shigeo Morishima||This work presents Dynamic Object Scanning (DO-Scanning), a novel interface that helps users browse long and untrimmed first-person videos quickly. The proposed interface offers users a small set of object cues, generated automatically and tailored to the context of a given video. Users choose which cue to highlight, and the interface in turn fast-forwards the video adaptively while keeping scenes with highlighted cues playing at original speed. Our experimental results reveal that DO-Scanning arranges an efficient and compact set of cues, and that this set of cues is useful for browsing a diverse set of first-person videos.|
|17||explorAR: A Collaborative Artifact-based Mixed Reality Game||Muhammad Ramly, Bikalpa Bikash Neupane||explorAR is a project that provides a new way to learn about the world of the past by exploring mixed reality with your phone. In this interactive experience, users engage with the museum and with each other by collecting artifacts, including fossils, paintings, statues, and other historical objects. Users learn how to preserve historical objects by extracting fragments of artifacts, how to collaborate with each other by combining fragments of missing artifacts, how to express their creativity by designing their own virtual gallery, and how to participate in crowdsourced research. We developed the concept using human-centered design approaches, including interviews, personas, prototypes, and user testing.|
|18||Children’s Blocks: Machine Learning and the Analysis of Motion During Play||Xiyue Wang, Miteki Ishikawa, Kazuki Takashima, Tomoaki Adachi, Patrick Finn, Ehud Sharlin, Yoshifumi Kitamura||We are conducting experiments using Machine Learning (ML) to help analyze data gathered from children playing with specially designed toy blocks. Our “TouchBlox” were designed to generate data based on established outcomes from Psychology and tested in collaboration with colleagues from the area. Early results from our work integrating ML and HCI are promising. Owing to the unique nature of the circumstances that gave rise to our work, both the process and the approach fit suggestions for best practice from experts in the integration of Machine Learning in HCI.|
|19||On Building an Emotion-based Music Composition Companion||Jordan Aiko Deja, Rafael Cabredo||The task of music composition demands hard work, discipline, and skill. Several applications have been developed that display artificial creativity in music to assist in the task of composition. However, some of these applications are considered unnatural and synthetic by most musicians. We intend to focus not on application features but on helping musicians compose music itself. In this paper, we present a draft framework and plan describing the considerations in creating an emotion-based augmented companion for musical composers. The goal is to design an interaction that assists novice musical composers by helping them cope with creative blocks. The emotional character of a musical piece is heavily considered in the framework. A description of a prototype tool in its early stages, along with future work, is included.|
|20||Case Study of Content Development Needs for Tele-therapy in an Asian Context||Pin Sym Foong, Nicholas Wong, Yiting Emily Guo, Yen Shih-Cheng, Gerald Koh Choon Huat||Content for Asian tele-therapy contexts is not easily available. Particularly for older adults, who form the bulk of rehabilitation clients, cultural and language appropriateness can promote acceptance of and engagement with digital rehabilitative systems. In this case study, we lay out the specific challenges faced in developing Asian, localized content. Our findings showed difficulties related to the translation of content across linguistic and cultural differences. The challenge of localizing content is worsened by the constraints on distribution introduced by moving a health service online. This study contributes a description of these challenges and concludes with an urgent call for more research focus on supporting efficient, low-resource content development.|
|21||Head Posture Recognition based on Neck Shape Measurement||Takuto Nakamura, Akira Ishii, Buntarou Shizuki||We present a head posture recognition technique based on neck shape measurement. People can change their head posture intentionally with high flexibility, since they can twist their neck left and right and tilt their head forward, back, left, and right. In this study, we focus on the fact that the neck forms a unique shape for each head posture, and we realize a head posture recognition technique using this fact. First, we implemented a neck-mounted device that measures the shape of the neck’s surface (neck shape). Then, we implemented a head posture recognition algorithm. Using the device and the algorithm, it is possible to recognize 7 head postures (e.g., Look Forward, Twist Neck Left, and Tilt Head Right) with an accuracy of 96.7% in an informal study. Moreover, we developed a recipe viewer as an example application that allows the user to operate it without their hands or voice, using the recognized head posture changes.|
|22||User Identification Method based on Air Pressure in Ear Canals||Toshiyuki Ando, Yuki Kubo, Buntarou Shizuki, Shin Takahashi||We present a user identification method based on the variations in air pressure inside the ear canals as the jaw, face, or head moves (face-related movements). We found that these variations in air pressure differ among people, which makes it possible to use measurements of them to identify specific individuals. We evaluated the accuracy of this user identification method by measuring 11 face-related movements of 12 participants using a barometer embedded in an earphone. The average identification accuracy across the face-related movements tested was 90.6%. Moreover, the general identification accuracy, which identified specific participants and combinations of face-related movements, was 78.4%.|
|23||* Shown at the symposium only||Toshiya Isomoto, Akira Ishii, Shuta Nakamae, Buntarou Shizuki||–|
|24||Audio Based Incidental Second Language Vocabulary Learning While Walking||Ari Hautasaari, Takeo Hamada, Shogo Fukushima||Second language (L2) learners often lack opportunities or motivation to dedicate their time to vocabulary learning over other daily activities. In this work, we introduce a mobile application that allows L2 learners to instead leverage their “dead time”, such as when walking to and from school or work, to study new vocabulary items. The application combines audio learning and location-based contextually relevant L1-L2 word pairs to allow L2 learners to “discover” new foreign language words while walking. We report on the evaluation of the approach for beginner-level second language vocabulary acquisition.|
|25||Interactive E-Commerce Recommender System using Dataset of Complaints||Toshinori Hayashi, Yuanyuan Wang, Yukiko Kawai, Kazutoshi Sumiya||In recent years, the use of e-commerce recommender systems has become more widespread, and many kinds of systems have been developed. However, there are only a few recommender systems based on users’ complaints. Therefore, we propose a novel item recommender system based on complaints and reviews. This system recommends items together with complaints about them. Moreover, it recommends items that satisfy users’ requirements based on their click history.|
|26||Double-sided Printed Tactile Display with Electrostimuli and Electrostatic Forces||Kunihiro Kato, Hiroki Ishizuka, Hiroyuki Kajimoto, Homei Miyashita||Humans perceive tactile sensation through multimodal stimuli. To present realistic pseudo-tactile sensations to users, a tactile display is needed that can provide multiple tactile stimuli. In this paper, we present a novel printed tactile display that can provide both an electrical stimulus and an electrostatic force. The circuit patterns for each stimulus were fabricated using the technique of double-sided conductive ink printing. Requirements for the fabrication process were analyzed, and the durability of the tactile display was evaluated. Users’ perceptions of a single tactile stimulus and multiple tactile stimuli were also investigated. The experimental results indicate that the proposed tactile display is capable of exhibiting realistic tactile sensation and can be incorporated into various applications, such as tactile printing of pictorial illustrations and paintings. Furthermore, the proposed hybrid tactile display can contribute to accelerated prototyping and development of new tactile devices.|