Patent No. 6621939 Scene description generating apparatus and method, object extracting method, and recording medium (Negishi, et al., Sep 16, 2003)
Abstract
A scene description generating apparatus and method, an object extracting method, and a recording medium extract an object from an input image. Positional information on the extracted object is output. Based on the positional information, scene description information about a placement position of the object in a scene is generated. When the object is deformed, reference to the positional information is made, and the scene description information is generated in which the object deformation is reflected. Accordingly, the object is placed at a desirable position in the scene.
Notes:
SUMMARY OF THE INVENTION
Accordingly, it is an object of the present invention to provide a scene description generating apparatus and method and an object extracting method that solve the above problems, that is, that prevent generation of undesirable shifting or distortion in a scene described by a scene description even when an object in an input image or graphic data is deformed, and that reflect movement of the object in the input image or the graphic data in the movement of the object, or of its texture, in the scene.
According to an aspect of the present invention, the foregoing objects are achieved
through provision of a scene description generating apparatus and method including
an object extracting step of extracting an object from an input image and outputting
positional information on the extracted object. Based on the positional information
output in the object extracting step, scene description information about a
placement position of the object in a scene is generated in a scene description
generating step. When the object is deformed, the positional information is
referred to in the scene description generating step and the scene description
information in which the object deformation is reflected is generated.
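As a rough illustration of this flow, the sketch below (in Python, using hypothetical PositionalInfo and ScenePlacement types that are not taken from the patent) derives a placement position from the bounding region reported by the extracting step, so the placement is regenerated whenever the region is deformed.

```python
from dataclasses import dataclass

@dataclass
class PositionalInfo:
    """Hypothetical bounding box of the region containing the extracted object."""
    x: int       # left edge in the input image
    y: int       # top edge in the input image
    width: int
    height: int

@dataclass
class ScenePlacement:
    """Placement of the object in the scene description (e.g. a node translation)."""
    center_x: float
    center_y: float

def generate_placement(info: PositionalInfo) -> ScenePlacement:
    """Derive the placement position from the positional information, so the
    object stays anchored at the intended position even when its bounding
    region is deformed."""
    return ScenePlacement(
        center_x=info.x + info.width / 2.0,
        center_y=info.y + info.height / 2.0,
    )

# Each time the extracting step reports a new (possibly deformed) region,
# the scene description is regenerated from the fresh positional information.
frame1 = PositionalInfo(x=40, y=60, width=120, height=200)
frame2 = PositionalInfo(x=40, y=60, width=140, height=220)   # object deformed
print(generate_placement(frame1), generate_placement(frame2))
```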
According to another aspect of the present invention, the foregoing objects
are achieved through provision of a scene description generating apparatus and
method including an object extracting step of extracting an object from an input
image. In a positional information detecting step, positional information on
the object extracted in the object extracting step is detected. Based on the
positional information detected in the positional information detecting step,
scene description information about a placement position of the object in a
scene is generated in a scene description generating step. When the object is
deformed, the positional information is referred to in the scene description generating
step and the scene description information in which the object deformation is
reflected is generated.
According to another aspect of the present invention, the foregoing objects
are achieved through provision of a recording medium for causing a scene description generating apparatus, which generates scene description information on an object, to execute a computer-readable program. The program includes an object extracting
step of extracting the object from an input image and outputting positional
information on the extracted object. Based on the positional information output
in the object extracting step, the scene description information about a placement
position of the object in a scene is generated in a scene description generating
step. When the object is deformed, the positional information is referred to
in the scene description generating step and the scene description information
in which the object deformation is reflected is generated.
According to the present invention, when placing an object segmented from a
static image signal, a moving image signal, or graphic data by an object extracting
unit/step in a screen and describing a new scene, the object extracting unit,
i.e., a segmentation unit, outputs positional information on a region containing
the object in the input image or the graphic data. Based on the output positional
information, a scene description generating unit/step determines a placement
position of the object. Accordingly, even when the region containing the object
is deformed or shifted, the object is placed at a desirable position in the
scene described by the scene description. When the segmented object is used
as a texture in the scene description, the scene description is generated in
which texture coordinates are transformed based on the positional information
output from the segmentation unit. Therefore, distortion of a texture pasted
in the scene is prevented, and shifting of the object is reflected in the texture.
Alternatively, texture distortion is prevented by changing the size of a scene
object on which the texture is to be pasted or by changing the position of the
texture.
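One plausible way to realize the texture-coordinate transformation described above is sketched below; the (x, y, width, height) region layout and the texture_coords helper are assumptions for illustration, not the patent's actual procedure. The idea is that the coordinates are expressed relative to the current bounding region, so a change in the region's size or position is absorbed by the coordinates rather than distorting the pasted texture.

```python
def texture_coords(points_in_image, region):
    """Map image-space points on the object to [0, 1] texture coordinates
    relative to the current bounding region of the segmented object."""
    x0, y0, w, h = region
    return [((px - x0) / w, (py - y0) / h) for px, py in points_in_image]

# The same image-space contour keeps sampling the same part of the texture
# even after the segmented region is deformed.
contour = [(50, 70), (150, 70), (150, 250), (50, 250)]
coords_a = texture_coords(contour, (40, 60, 120, 200))
coords_b = texture_coords(contour, (40, 60, 140, 220))  # region deformed
print(coords_a, coords_b)
```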
When the positional information on the region containing the object in the image or the graphic data is included in the data of the segmented object, the positional information is made equally available by a positional information detector that receives the object data as input and detects the positional information from it. Hence, undesirable shifting or distortion in the scene is prevented.
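A minimal sketch of such a positional information detector is given below, assuming a purely hypothetical object-data layout in which the region's offset and size are stored as four big-endian 32-bit integers at the start of the data; real encoded-object formats carry equivalent fields in their own syntax.

```python
import struct

def detect_positional_info(object_data: bytes):
    """Hypothetical detector: read the region's x, y, width, and height from
    a 16-byte header preceding the encoded object data."""
    x, y, w, h = struct.unpack(">iiii", object_data[:16])
    return x, y, w, h

header = struct.pack(">iiii", 40, 60, 120, 200)
print(detect_positional_info(header + b"...encoded object..."))
```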
When the region is determined so as to contain the object across the frames of a plurality of images or graphic data and is segmented, the number of changes to the placement position is reduced, or no changes are necessary at all. In particular, when the region containing the object is set to the picture frame of the input image or the graphic data, the placement position never needs to change.
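For instance, a fixed region covering the object in every frame can be computed as the union of the per-frame bounding boxes; the sketch below (hypothetical union_region helper, not from the patent) illustrates the idea.

```python
def union_region(regions):
    """Fixed region containing the object's bounding box in every frame, so
    the placement position in the scene description need not change."""
    x0 = min(x for x, y, w, h in regions)
    y0 = min(y for x, y, w, h in regions)
    x1 = max(x + w for x, y, w, h in regions)
    y1 = max(y + h for x, y, w, h in regions)
    return (x0, y0, x1 - x0, y1 - y0)

per_frame = [(40, 60, 120, 200), (44, 58, 130, 210), (38, 62, 125, 205)]
print(union_region(per_frame))   # one region covering all frames
# Using the full picture frame as the region removes even this computation.
```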