Deakin University

Dynamic biometrics fusion at feature level for video-based human recognition

Conference contribution
Posted on 2007-01-01. Authored by Q Wu, L Wang, X Geng, Ming Li, X He
This paper proposes a novel video-based human recognition method that combines face and gait traits
using a dynamic multi-modal biometrics fusion scheme. The Fisherface approach is adopted to extract face
features, while for gait features, Locality Preserving Projection (LPP) is used to obtain a low-dimensional
manifold embedding of the temporal silhouette data derived from the image sequences. Face and gait features
are then fused dynamically at the feature level using a distance-driven fusion method. Encouraging
experimental results are achieved on video sequences containing 20 people, showing that the dynamically
fused features are more discriminative than either individual biometric alone and than features integrated
by common static fusion schemes.
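
The record does not spell out the distance-driven fusion rule, so the sketch below only illustrates one plausible reading of feature-level fusion: each modality's feature vector is weighted by the inverse of its distance to the nearest gallery template before concatenation, so the more confident modality contributes more. The function names, the inverse-distance weighting, and the toy data are assumptions for illustration, not the authors' implementation; random vectors stand in for the Fisherface and LPP outputs.

# Hypothetical sketch of distance-driven feature-level fusion
# (assumed inverse-distance weighting; the paper's exact rule is not given here).
import numpy as np

def nearest_gallery_distance(probe, gallery):
    # Euclidean distance from a probe feature vector to its nearest gallery template.
    return np.min(np.linalg.norm(gallery - probe, axis=1))

def fuse_features(face_feat, gait_feat, face_gallery, gait_gallery, eps=1e-8):
    # Weight each modality by the inverse of its nearest-gallery distance,
    # normalise the two weights, then concatenate the weighted feature vectors.
    d_face = nearest_gallery_distance(face_feat, face_gallery)
    d_gait = nearest_gallery_distance(gait_feat, gait_gallery)
    w_face = 1.0 / (d_face + eps)
    w_gait = 1.0 / (d_gait + eps)
    s = w_face + w_gait
    return np.concatenate([(w_face / s) * face_feat, (w_gait / s) * gait_feat])

# Toy usage: random vectors standing in for Fisherface (face) and LPP (gait) features.
rng = np.random.default_rng(0)
face_gallery = rng.normal(size=(20, 30))   # 20 subjects, 30-D face features
gait_gallery = rng.normal(size=(20, 10))   # 20 subjects, 10-D gait features
fused = fuse_features(face_gallery[0], gait_gallery[0], face_gallery, gait_gallery)
print(fused.shape)                         # (40,) fused feature vector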

History

Event

Image and Vision Computing New Zealand. Conference (2007: Hamilton, N.Z.)

Pagination

152-157

Publisher

Image and Vision Computing NZ

Location

Hamilton, N.Z.

Place of publication

[Hamilton, N.Z.]

Start date

2007-12-05

End date

2007-12-07

ISBN-13

9780473130084

ISBN-10

0473130084

Language

eng

Notes

Reproduced with the specific permission of the copyright owner.

Publication classification

E1 Full written paper - refereed

Copyright notice

2007, Image and Vision Computing NZ

Editor/Contributor(s)

M Cree

Title of proceedings

IVCNZ 2007 : Proceedings of Image and Vision Computing New Zealand
