HASCA2018

6th International Workshop on Human Activity Sensing Corpus and Applications: Towards Open-Ended Context Awareness

Welcome to HASCA2018

Welcome to the HASCA2018 website!

HASCA2018 is the sixth International Workshop on Human Activity Sensing Corpus and Applications: Towards Open-Ended Context Awareness. The workshop will be held in conjunction with UbiComp2018.

Abstract

The recognition of complex and subtle human behaviors from wearable sensors will enable next-generation human-oriented computing in scenarios of high societal value (e.g., dementia care). This will require large-scale human activity corpora and much-improved methods to recognize activities and the context in which they occur. This workshop deals with the challenges of designing reproducible experimental setups, running large-scale dataset collection campaigns, designing activity and context recognition methods that are robust and adaptive, and evaluating systems in the real world. We wish to reflect on future methods, such as lifelong learning approaches that allow open-ended activity recognition.

New this year, HASCA will welcome papers from participants in the Sussex-Huawei Locomotion and Transportation Recognition Competition in a special session.

"Sussex-Huawei Locomotion and Transportation Recognition Competition": http://www.shl-dataset.org/activity-recognition-challenge/

The objective of this workshop is to share experiences among researchers on the challenges of real-world activity recognition, the role of datasets and tools, and breakthrough approaches towards open-ended contextual intelligence. We expect contributions from the following domains to be relevant to this workshop (but not limited to):

Data collection / Corpus construction

Experiences or reports from data collection and/or corpus construction projects, such as papers describing the formats, styles, or methodologies used for data collection. Crowd-sourced data collection and participatory sensing also fall under this topic.

Effectiveness of Data / Data Centric Research

There is a field of research based on collected corpora, called “Data Centric Research”. We also solicit reports on experiences of using large-scale human activity sensing corpora. When large-scale corpora are combined with machine learning, there is ample room for improving recognition performance.

Tools and Algorithms for Activity Recognition

With appropriate and suitable tools for managing sensor data, activity recognition researchers could focus more on their research themes. However, developing tools or algorithms to share with the research community is rarely rewarded. In this workshop, we solicit reports on tools and algorithms that move the community forward.

Real World Application and Experiences

Activity recognition "in the lab" usually works well. However, the same is not true in the real world. In this workshop, we therefore also solicit experiences from real-world applications. There is a huge gap between the "lab environment" and the "real-world environment", and large-scale human activity sensing corpora will help to bridge it.

Sensing Devices and Systems

Data collection is not performed only with off-the-shelf sensors; special devices sometimes need to be developed to obtain certain kinds of information. Developing and evaluating systems or technologies for data collection is also a relevant research area.

Mobile experience sampling, experience sampling strategies

Advances in experience sampling approaches, for instance intelligently querying the user or using novel devices (e.g., smartwatches), are likely to play an important role in obtaining user-contributed annotations of their own activities.

Unsupervised pattern discovery

Discovering meaningful repeating patterns in sensor data can be fundamental in informing other elements of a system generating an activity corpus, such as querying the user or triggering annotation crowd-sourcing.

Dataset acquisition and annotation through crowd-sourcing, web-mining

An abundance of sensor data is potentially within reach, with users carrying mobile phones and other wearables. Capitalizing on crowd-sourcing to create larger datasets in a cost-effective manner may be critical to open-ended activity recognition. Online datasets could also be used to bootstrap recognition models.

Transfer learning, semi-supervised learning, lifelong learning

The ability to translate recognition models across modalities, or to use minimal supervision, would allow datasets to be reused across domains and reduce the cost of acquiring annotations.


Program

October 12th

HASCA papers: 15 min talk + 5 min questions

8:30--8:40

Opening remarks and SHL introduction
(Kazuya Murao)

8:40--10:00

Session 1:
(Chair: Pekka Siirtola)

A Multi-Sensor Setting Activity Recognition Simulation Tool

Shingo Takeda: Kyushu Institute of Technology; Paula Lago: Kyushu Institute of Technology; Tsuyoshi Okita: Kyushu Institute of Technology; Sozo Inoue: Kyushu Institute of Technology

Understanding how Non-experts Collect and Annotate Activity Data

Michael D. Jones: Brigham Young University; Naomi Johnson: Brigham Young University; Kevin Seppi: Brigham Young University; Lawrence W. Thatcher: Brigham Young University

Fundamental Concept of University Living Laboratory for Appropriate Feedback

Masaki Shuzo: Tokyo Denki University; Motoki Sakai: Tokyo Denki University; Eisaku Maeda: Tokyo Denki University

Exploring the Number and Suitable Positions of Wearable Sensors in Automatic Rehabilitation Recording

Kohei Komukai: Toyohashi University of Technology; Ren Ohmura: Toyohashi University of Technology

10:00--10:30

Coffee break

10:30--12:10

Session 2:
(Chair: Paula Lago)

Study of LoRaWAN Technology for Activity Recognition

Tahera Hossain: Kyushu Institute of Technology; Tahia Tazin: North South University; Yusuke Doi: Kyushu Institute of Technology; Md Atiqur Rahman Ahad: Osaka University; Sozo Inoue: Kyushu Institute of Technology

OpenHAR: Toolbox for easy access to publicly open human activity data sets

Pekka Siirtola: University of Oulu; Heli Koskimäki: University of Oulu; Juha Röning: University of Oulu

Investigating the Capitalize Effect of Sensor Position for Training Type Recognition in a Body Weight Training Support System

Masashi Takata: Nara Institute of Science and Technology; Yugo Nakamura: Nara Institute of Science and Technology/JSPS; Manato Fujimoto: Nara Institute of Science and Technology; Yutaka Arakawa: Nara Institute of Science and Technology/JST PRESTO; Keiichi Yasumoto: Nara Institute of Science and Technology

On Robustness of Cloud Speech APIs: An Early Characterization

Akhil Mathur: Nokia Bell Labs/University College London; Anton Isopoussu: Nokia Bell Labs; Fahim Kawsar: Nokia Bell Labs; Robert Smith: University College London; Nicholas D. Lane: University of Oxford; Nadia Berthouze: University College London

A Wi-Fi Positioning Method Considering Radio Attenuation of Human Body

Shohei Harada: Ritsumeikan University; Kazuya Murao: Ritsumeikan University; Masahiro Mochizuki: Ritsumeikan University; Nobuhiko Nishio: Ritsumeikan University

12:10--13:30

Lunch

13:30--15:30

Session 3:
(Chair: Hristijan Gjoreski)

Activity Recognition: Translation Across Sensor Modalities Using Deep Learning

Tsuyoshi Okita: Kyushu Institute of Technology; Sozo Inoue: Kyushu Institute of Technology

A Case Study for Human Gesture Recognition from Poorly Annotated Data

Mathias Ciliberto: University of Sussex; Lin Wang: University of Sussex; Daniel Roggen: University of Sussex; Ruediger Zillmer: Unilever R&D

SHL Challenge: Introduction

Hristijan Gjoreski: Ss. Cyril and Methodius University, Macedonia, & University of Sussex, UK

SHL Challenge: Summary & Baseline evaluation

Lin Wang, et al., Summary of the Sussex-Huawei Locomotion-Transportation Recognition Challenge. Proc. HASCA2018.

SHL Team 1

SHL Challenge: Posters

J. H. Choi, et al., Confidence-based deep multimodal fusion for activity recognition. Proc. HASCA2018.

P. Widhalm, et al., Top in the lab, flop in the field? Evaluation of a sensor-based travel activity classifier with the SHL dataset. Proc. HASCA2018.

M. Gjoreski, et al., Applying multiple knowledge to Sussex-Huawei locomotion challenge. Proc. HASCA2018.

A. D. Antar, et al., A comparative approach to classification of locomotion and transportation modes using smartphone sensor data. Proc. HASCA2018.

A. Akbari, et al., Hierarchical signal segmentation and classification for accurate activity recognition. Proc. HASCA2018.

H. Matsuyama, et al., Short segment random forest with post processing using label constraint for SHL challenge. Proc. HASCA2018.

Y. Nakamura, et al., Multi-stage activity inference for locomotion and transportation analytics of mobile users. Proc. HASCA2018.

Y. Yuki, et al., Activity Recognition using Dual-ConvLSTM Extracting Local and Global Features for SHL Challenge. Proc. HASCA2018.

J. Wu, et al., A decision level fusion and signal analysis technique for activity segmentation and recognition on smart phones. Proc. HASCA2018.

V. Janko, et al., A new frontier for activity recognition - the Sussex-Huawei locomotion challenge. Proc. HASCA2018.

S. S. Saha, et al., Supervised and semi-supervised classifiers for locomotion analysis. Proc. HASCA2018.

A. Osmani, et al., Hybrid and convolutional neural networks for locomotion recognition. Proc. HASCA2018.

J. V. Jeyakumar, et al., Deep convolutional bidirectional LSTM based transportation mode recognition. Proc. HASCA2018.

T. B. Zahid, et al., A fast resource efficient method for human action recognition. Proc. HASCA2018.

S. Li, et al., Smartphone-sensors based activity recognition using IndRNN. Proc. HASCA2018.

K. Akamine, et al., SHL recognition challenge: Team TK-2 - combining results of multisize instances. Proc. HASCA2018.

M. Sloma, et al., Activity recognition by classification with time stabilization for the Sussex-Huawei locomotion challenge. Proc. HASCA2018.

L. Wang, et al., Benchmarking the SHL recognition challenge with classical and deep-learning pipelines. Proc. HASCA2018.

15:00--15:30

Coffee break and SHL Challenge: Posters

15:30--17:00

Session 4:
(Chair: Tsuyoshi Okita)

SHL Team 2

SHL Team 3

SHL Team 4

SHL Team 5

SHL Challenge: Panel discussion

SHL Challenge: Ranking

17:00--17:05

SHL Challenge Winners Announcement

17:05--17:15

Discussion and Closing