- Georgia Tech Egocentric Activity Datasets
It subsumes GTEA Gaze+ and comes with HD videos (1280x960), audio, gaze tracking data, frame-level action annotations, and pixel-level hand masks at sampled frames.
- Georgia Tech Egocentric Activities - Stanford University
This dataset contains 7 types of daily activities, each performed by 4 different subjects. The camera is mounted on a cap worn by the subject.
- amitsou/EGTEA_Gaze_Plus_Downloader - GitHub
--gtea_videos: Download GTEA Videos
--gtea_png: Download Uncompressed PNG
--hand_masks_2K: Download Hand Masks (GTEA)
--hand_masks_14K: Download Hand Masks (EGTEA+)
--trimmed_actions: Download Trimmed Actions
--gaze_data: Download Gaze Data
--action_annotations: Download Action Annotations
--gtea_labels_71: Download GTEA Action Labels
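A minimal sketch of driving the downloader from Python, assuming the repository's entry point is a script named `main.py` (the actual script name and flag behavior should be checked against the repository README):

```python
# Hypothetical invocation of the EGTEA_Gaze_Plus_Downloader CLI.
# The script name "main.py" is an assumption; the flags below come
# from the flag list above.
import subprocess

flags = [
    "--gtea_videos",        # GTEA videos
    "--gaze_data",          # gaze tracking data
    "--action_annotations", # frame-level action annotations
]
subprocess.run(["python", "main.py", *flags], check=True)
```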
- Georgia Tech Egocentric Activities - Gaze (+)
Please refer to the following paper when using the datasets, code, or software: Alireza Fathi, Yin Li, James M. Rehg, "Learning to Recognize Daily Actions Using Gaze," ECCV 2012 (PDF). GTEA Gaze Dataset · GTEA Gaze+ Dataset · Code
- Progress-Aware Online Action Segmentation for Egocentric ... - GitHub
Data:
- GTEA: download GTEA data from link1 or link2. Please refer to ms-tcn or CVPR2024-FACT.
- EgoProceL: download EgoProceL data from G-Drive. Please refer to CVPR2024-FACT.
- EgoPER: download EgoPER data from G-Drive. Please refer to EgoPER for the original data.
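For reference, a minimal sketch for reading the GTEA annotations in the ms-tcn-style layout (one action label per frame in groundTruth/*.txt, plus a mapping.txt of index/label pairs); the root path and the example file name are assumptions to verify against the downloaded archive:

```python
# Minimal sketch: read GTEA frame-level labels in the ms-tcn-style layout.
# Directory names (groundTruth/, mapping.txt) follow the common release
# used for temporal action segmentation; verify against your download.
from pathlib import Path

root = Path("data/gtea")  # assumed extraction path

# mapping.txt: each line is "<index> <action_label>"
label2idx = {}
for line in (root / "mapping.txt").read_text().splitlines():
    idx, name = line.split()
    label2idx[name] = int(idx)

# groundTruth/<video>.txt: one action label per frame
video = root / "groundTruth" / "S1_Cheese_C1.txt"  # example file name
frame_labels = [label2idx[l] for l in video.read_text().splitlines()]
print(f"{video.name}: {len(frame_labels)} frames, "
      f"{len(set(frame_labels))} distinct actions")
```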
- dinggd/gtea · Datasets at Hugging Face
GTEA: This is the GTEA dataset used for temporal action segmentation. The dataset card's detail fields (curated by, funded by, shared by, language, license, sources) are all marked "[More Information Needed]".
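If the Hugging Face mirror is convenient, a minimal sketch for loading it with the `datasets` library; the repo id comes from the result above, but split and feature names are undocumented on the card, so inspect the returned object before relying on them:

```python
# Minimal sketch: load the community GTEA mirror from the Hugging Face Hub.
# Split/feature names are not documented on the dataset card, so this only
# prints what is actually there.
from datasets import load_dataset

ds = load_dataset("dinggd/gtea")
print(ds)                  # shows available splits and features
first_split = next(iter(ds))
print(ds[first_split][0])  # inspect one example
```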
- Georgia Tech Egocentric Activity Datasets | Jim Rehg
Summary text for the GTEA datasets. Yin Li, Alireza Fathi, Zhefan Ye, Miao Liu, James M. Rehg