The TV-Anytime standards cover a broad range of interactive digital TV services, such as broadcasting, content referencing, and metadata. MPEG-7, another metadata description standard for multimedia, has often been coupled with the TV-Anytime specifications since their early stages to build advanced services, owing to the complementary characteristics of the two standards. With its rich metadata description facilities, MPEG-7 can describe low-level audio-visual features, such as color, shape contour, and sound pitch, as well as high-level descriptions, such as title and author. Among the services that use these descriptors, Query-By-Example (QBE) is a technique that allows the user to search for documents using examples in the form of an image, sound, or movie clip as queries. When QBE is applied to the TV-Anytime system, TV users can search for content of interest that cannot be found through TV-Anytime keyword search, because the QBE method requires no prior human annotation, whereas keyword-based search does. Although the current TV-Anytime standard can support some Search-by-Content (keyword search) through segment annotation, its own components cannot support true QBE services based on multimedia content, because the standard lacks low-level metadata descriptors.
In this thesis, an extended TV-Anytime service that accommodates the QBE technique is proposed. This approach combines the TV-Anytime system with MPEG-7 components, in particular its low-level metadata descriptors. The low-level descriptors are used to describe multimedia content and are stored in the database of an MPEG-7 metadata provider. The MPEG-7 metadata provider supports QBE by offering a similarity-matching service for images. To enable this service, the metadata service discovery part of TV-Anytime is extended to meet the new service requirements, as the discovery part of the current TV-Anytime standard does not support processing metadata...
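To illustrate the kind of similarity matching such a metadata provider would perform, the following is a minimal sketch of QBE image retrieval: stored images are ranked by the distance between their color histograms and the query's histogram. The plain histograms here merely stand in for MPEG-7's low-level color descriptors, and all names and data are illustrative, not part of either standard.

```python
# Minimal QBE sketch: rank stored "images" by color-histogram similarity
# to a query image. Histograms stand in for MPEG-7 low-level color
# descriptors; images are toy lists of RGB pixel tuples.

def histogram(pixels, bins=4):
    """Build a normalized color histogram with `bins` levels per channel."""
    counts = [0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        counts[idx] += 1
    total = len(pixels) or 1
    return [c / total for c in counts]

def l1_distance(h1, h2):
    """L1 (city-block) distance between two normalized histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def qbe_search(query_pixels, database, top_k=3):
    """Return the names of the top_k entries most similar to the query."""
    qh = histogram(query_pixels)
    ranked = sorted(database.items(),
                    key=lambda kv: l1_distance(qh, histogram(kv[1])))
    return [name for name, _ in ranked[:top_k]]

# Toy database: mostly-red, mostly-blue, and half-and-half images.
red_img = [(250, 10, 10)] * 16
blue_img = [(10, 10, 250)] * 16
mixed_img = [(250, 10, 10)] * 8 + [(10, 10, 250)] * 8
db = {"sunset": red_img, "ocean": blue_img, "flag": mixed_img}

print(qbe_search([(240, 20, 20)] * 16, db, top_k=1))  # → ['sunset']
```

Note that, unlike keyword search, no human-supplied annotation is consulted: the ranking is derived entirely from features extracted from the content itself, which is the property that motivates extending TV-Anytime with MPEG-7 descriptors.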