DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kim, Jinho | ko |
dc.contributor.author | Xiang, Yu | ko |
dc.contributor.author | Savarese, Silvio | ko |
dc.date.accessioned | 2023-10-24T06:01:21Z | - |
dc.date.available | 2023-10-24T06:01:21Z | - |
dc.date.created | 2023-10-24 | - |
dc.date.issued | 2014-07 | - |
dc.identifier.citation | 2014 International Conference on Image Processing, Computer Vision, and Pattern Recognition, IPCV 2014, at WORLDCOMP 2014, pp.367 - 370 | - |
dc.identifier.uri | http://hdl.handle.net/10203/313710 | - |
dc.description.abstract | Computer vision has advanced considerably over the last few years. Where the central problem was once object detection, we now face the additional challenge of accurately estimating the pose of detected objects. Evaluating pose estimation algorithms therefore requires high-quality datasets, yet the object datasets available today are designed mainly for object detection, leaving few datasets suitable for testing pose estimation. In this paper we use deformable part models trained with latent SVM to construct a dataset that we hope will serve as a strong benchmark for pose estimation algorithms. | - |
dc.language | English | - |
dc.publisher | CSREA Press | - |
dc.title | Object pose dataset using discriminatively trained deformable part models | - |
dc.type | Conference | - |
dc.identifier.scopusid | 2-s2.0-85072839519 | - |
dc.type.rims | CONF | - |
dc.citation.beginningpage | 367 | - |
dc.citation.endingpage | 370 | - |
dc.citation.publicationname | 2014 International Conference on Image Processing, Computer Vision, and Pattern Recognition, IPCV 2014, at WORLDCOMP 2014 | - |
dc.identifier.conferencecountry | US | - |
dc.identifier.conferencelocation | Las Vegas | - |
dc.contributor.localauthor | Kim, Jinho | - |
dc.contributor.nonIdAuthor | Xiang, Yu | - |
dc.contributor.nonIdAuthor | Savarese, Silvio | - |