SkateFormer: Skeletal-Temporal Transformer for Human Action Recognition

DC Field | Value | Language
dc.contributor.author | Do, JH | ko
dc.contributor.author | Kim, Munchurl | ko
dc.date.accessioned | 2024-12-24T03:00:10Z | -
dc.date.available | 2024-12-24T03:00:10Z | -
dc.date.created | 2024-12-24 | -
dc.date.issued | 2024-10-02 | -
dc.identifier.citation | 2024 European Conference on Computer Vision (ECCV), pp.401 - 420 | -
dc.identifier.issn | 1611-3349 | -
dc.identifier.uri | http://hdl.handle.net/10203/326551 | -
dc.description.abstract | Skeleton-based action recognition, which classifies human actions based on the coordinates of joints and their connectivity within skeleton data, is widely utilized in various scenarios. While Graph Convolutional Networks (GCNs) have been proposed for skeleton data represented as graphs, they suffer from limited receptive fields constrained by joint connectivity. To address this limitation, recent advancements have introduced transformer-based methods. However, capturing correlations between all joints in all frames requires substantial memory resources. To alleviate this, we propose a novel approach called Skeletal-Temporal Transformer (SkateFormer) that partitions joints and frames based on different types of skeletal-temporal relations (Skate-Type) and performs skeletal-temporal self-attention (Skate-MSA) within each partition. We categorize the key skeletal-temporal relations for action recognition into a total of four distinct types. These types combine (i) two skeletal relation types based on physically neighboring and distant joints, and (ii) two temporal relation types based on neighboring and distant frames. Through this partition-specific attention strategy, our SkateFormer can selectively focus on key joints and frames crucial for action recognition in an action-adaptive manner with efficient computation. Extensive experiments on various benchmark datasets validate that our SkateFormer outperforms recent state-of-the-art methods. | -
dc.language | English | -
dc.publisher | European Computer Vision Association | -
dc.title | SkateFormer: Skeletal-Temporal Transformer for Human Action Recognition | -
dc.type | Conference | -
dc.type.rims | CONF | -
dc.citation.beginningpage | 401 | -
dc.citation.endingpage | 420 | -
dc.citation.publicationname | 2024 European Conference on Computer Vision (ECCV) | -
dc.identifier.conferencecountry | EI | -
dc.identifier.conferencelocation | Milan, Italy | -
dc.identifier.doi | 10.1007/978-3-031-72940-9_23 | -
dc.contributor.localauthor | Kim, Munchurl | -
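The abstract's core idea is attention computed within skeletal-temporal partitions rather than over all joint-frame tokens at once. The following is a minimal NumPy sketch of that idea only, not the paper's implementation: the function names, the 2x3 block grouping, and the single-head, unprojected attention are all illustrative assumptions standing in for one of the four Skate-Type partitions (neighboring frames x physically neighboring joints).

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def partition_attention(x, groups):
    """Toy stand-in for partition-specific self-attention (Skate-MSA idea).

    x: (N, C) joint-frame tokens; groups: index arrays covering all N tokens.
    Full N x N attention is replaced by attention restricted to each
    partition, so cost scales with the largest partition, not N**2.
    """
    out = np.zeros_like(x)
    for idx in groups:
        p = x[idx]                                   # tokens in this partition
        scores = p @ p.T / np.sqrt(p.shape[-1])      # scaled dot-product
        out[idx] = softmax(scores) @ p               # attend within partition only
    return out

# Toy skeleton sequence: T frames x V joints, flattened to T*V tokens.
T, V, C = 4, 6, 8
rng = np.random.default_rng(0)
x = rng.normal(size=(T * V, C))

# Hypothetical partition: contiguous 2-frame x 3-joint blocks, mimicking a
# "neighboring frames, neighboring joints" Skate-Type grouping.
tokens = np.arange(T * V).reshape(T, V)
groups = [tokens[t:t + 2, v:v + 3].ravel()
          for t in range(0, T, 2) for v in range(0, V, 3)]

y = partition_attention(x, groups)
```

Distant-joint or distant-frame partitions would use strided rather than contiguous index groups; the attention routine itself is unchanged, which is why a single partition-specific attention operator can serve all four relation types.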
