HASCA2018

6th International Workshop on Human Activity Sensing Corpus and Application: Towards Open-Ended Context Awareness

Program

October 12th

HASCA papers: 15-min talk + 5-min Q&A

8:30--8:40

Opening remarks and SHL introduction
(Kazuya Murao)

8:40--10:00

Session 1:
(Chair: Pekka Siirtola)

A Multi-Sensor Setting Activity Recognition Simulation Tool

Shingo Takeda:Kyushu Institute of Technology; Paula Lago:Kyushu Institute of Technology; Tsuyoshi Okita:Kyushu Institute of Technology; Sozo Inoue:Kyushu Institute of Technology;

Understanding how Non-experts Collect and Annotate Activity Data

Michael D Jones:Brigham Young University; Naomi Johnson:Brigham Young University; Kevin Seppi:Brigham Young University; Lawrence W Thatcher:Brigham Young University;

Fundamental Concept of University Living Laboratory for Appropriate Feedback

Masaki Shuzo:Tokyo Denki University; Motoki Sakai:Tokyo Denki University; Eisaku Maeda:Tokyo Denki University;

Exploring the Number and Suitable Positions of Wearable Sensors in Automatic Rehabilitation Recording

Kohei Komukai:Toyohashi University of Technology; Ren Ohmura:Toyohashi University of Technology;

10:00--10:30

Coffee break

10:30--12:10

Session 2:
(Chair: Paula Lago)

Study of LoRaWAN Technology for Activity Recognition

Tahera Hossain:Kyushu Institute of Technology; Tahia Tazin:North South University; Yusuke Doi:Kyushu Institute of Technology; Md Atiqur Rahman Ahad:Osaka University; Sozo Inoue:Kyushu Institute of Technology;

OpenHAR: Toolbox for easy access to publicly open human activity data sets

Pekka Siirtola:University of Oulu; Heli Koskimäki:University of Oulu; Juha Röning:University of Oulu;

Investigating the Capitalize Effect of Sensor Position for Training Type Recognition in a Body Weight Training Support System

Masashi Takata:Nara Institute of Science and Technology; Yugo Nakamura:Nara Institute of Science and Technology/JSPS; Manato Fujimoto:Nara Institute of Science and Technology; Yutaka Arakawa:Nara Institute of Science and Technology/JST PRESTO; Keiichi Yasumoto:Nara Institute of Science and Technology;

On Robustness of Cloud Speech APIs: An Early Characterization

Akhil Mathur:Nokia Bell Labs/University College London; Anton Isopoussu:Nokia Bell Labs; Fahim Kawsar:Nokia Bell Labs; Robert Smith:University College London; Nicholas D. Lane:University of Oxford; Nadia Berthouze:University College London;

A Wi-Fi Positioning Method Considering Radio Attenuation of Human Body

Shohei Harada:Ritsumeikan University; Kazuya Murao:Ritsumeikan University; Masahiro Mochizuki:Ritsumeikan University; Nobuhiko Nishio:Ritsumeikan University;

12:10--13:30

Lunch

13:30--15:00

Session 3:
(Chair: Hristijan Gjoreski)

Activity Recognition: Translation Across Sensor Modalities Using Deep Learning

Tsuyoshi Okita:Kyushu Institute of Technology; Sozo Inoue:Kyushu Institute of Technology;

A case study for human gestures recognition from poorly annotated data

Mathias Ciliberto:University of Sussex; Lin Wang:University of Sussex; Daniel Roggen:University of Sussex; Ruediger Zillmer:Unilever R&D;

SHL Challenge: Introduction

Hristijan Gjoreski: Ss. Cyril and Methodius University, Macedonia, & University of Sussex, UK

SHL Challenge: Summary & Baseline evaluation

Lin Wang, et al., Summary of the Sussex-Huawei Locomotion-Transportation Recognition Challenge. Proc. HASCA2018.

SHL Team 1

SHL Challenge: Posters

J. H. Choi, et al., Confidence-based deep multimodal fusion for activity recognition. Proc. HASCA2018.

P. Widhalm, et al., Top in the lab, flop in the field? Evaluation of a sensor-based travel activity classifier with the SHL dataset. Proc. HASCA2018.

M. Gjoreski, et al., Applying multiple knowledge to Sussex-Huawei locomotion challenge. Proc. HASCA2018.

A. D. Antar, et al., A comparative approach to classification of locomotion and transportation modes using smartphone sensor data. Proc. HASCA2018.

A Akbari, et al., Hierarchical signal segmentation and classification for accurate activity recognition. Proc. HASCA2018.

H. Matsuyama, et al., Short segment random forest with post processing using label constraint for SHL challenge. Proc. HASCA2018.

Y. Nakamura, et al., Multi-stage activity inference for locomotion and transportation analytics of mobile users. Proc. HASCA2018.

Y. Yuki, et al., Activity Recognition using Dual-ConvLSTM Extracting Local and Global Features for SHL Challenge. Proc. HASCA2018.

J. Wu, et al., A decision level fusion and signal analysis technique for activity segmentation and recognition on smart phones. Proc. HASCA2018.

V. Janko, et al., A new frontier for activity recognition - the Sussex-Huawei locomotion challenge. Proc. HASCA2018.

S. S. Saha, et al., Supervised and semi-supervised classifiers for locomotion analysis. Proc. HASCA2018.

A. Osmani, et al., Hybrid and convolutional neural networks for locomotion recognition. Proc. HASCA2018.

J. V. Jeyakumar, et al., Deep convolutional bidirectional LSTM based transportation mode recognition. Proc. HASCA2018.

T. B. Zahid, et al., A fast resource efficient method for human action recognition. Proc. HASCA2018.

S. Li, et al., Smartphone-sensors based activity recognition using IndRNN. Proc. HASCA2018.

K. Akamine, et al., SHL recognition challenge: Team TK-2 - combining results of multisize instances. Proc. HASCA2018.

M. Sloma, et al., Activity recognition by classification with time stabilization for the Sussex-Huawei locomotion challenge. Proc. HASCA2018.

L. Wang, et al., Benchmarking the SHL recognition challenge with classical and deep-learning pipelines. Proc. HASCA2018.

15:00--15:30

Coffee break and SHL Challenge: Posters

15:30--17:00

Session 4:
(Chair: Tsuyoshi Okita)

SHL Team 2

SHL Team 3

SHL Team 4

SHL Team 5

SHL Challenge: Panel discussion

SHL Challenge: Ranking

17:00--17:05

SHL Challenge Winners Announcement

17:05--17:15

Discussion and Closing