TY - GEN
T1 - Bayesian tactile face
AU - Wang, Zheshen
AU - Xu, Xinyu
AU - Li, Baoxin
PY - 2008/9/23
Y1 - 2008/9/23
AB - Computer users with visual impairment cannot access the rich graphical content in print or digital media without relying on visual-to-tactile conversion, which is performed primarily by human specialists. Automated approaches to this conversion form an emerging research field in which currently only simple graphics, such as diagrams, are handled. This paper proposes a systematic method for automatically converting a human portrait image into its tactile form. We model the face with a deformable Active Shape Model (ASM) [4], enriched by local appearance models in terms of gradient profiles along the shape. The generic face model, including the appearance components, is learnt from a set of training face images. Given a new portrait image, the prior model is updated through Bayesian inference. To facilitate the incorporation of a pose-dependent appearance model, we propose a statistical sampling scheme for the inference task. Furthermore, to compensate for the simplicity of the face model, edge segments of the given image are used to enrich the basic face model when generating the final tactile printout. Experiments are designed to evaluate the performance of the proposed method.
UR - http://www.scopus.com/inward/record.url?scp=51949094606&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=51949094606&partnerID=8YFLogxK
U2 - 10.1109/CVPR.2008.4587374
DO - 10.1109/CVPR.2008.4587374
M3 - Conference contribution
AN - SCOPUS:51949094606
SN - 9781424422432
T3 - 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR
BT - 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR
T2 - 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR
Y2 - 23 June 2008 through 28 June 2008
ER -