Among the many clever post-password authentication schemes currently under development is multi-touch gesture analysis. The basic idea is to observe a user's movements on a touchscreen device for some period of time and to come up with a gestural profile unique to that individual. Then, based on this profile, the system can verify a user's identity continuously as they use the device.
The idea sounds fishy, yes. Couldn't a hacker just observe those same gestures and then mimic them to gain access to the system? The answer should be no, because the gestures the system reads are interpreted to compile a biometric profile of the user's hand, wrist, and so on, resulting in a model that can then verify new and different gestures down the line.
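To make that a bit more concrete, here is a minimal sketch of what continuous gesture verification might look like in code. Everything in it is an assumption made for illustration: the features (path length, duration, speed, straightness), the enrollment step, and the acceptance threshold are stand-ins, not the method used by any actual system.

```python
# Illustrative sketch only: a toy version of continuous touch-gesture
# verification. Feature choices, enrollment, and the threshold are
# assumptions for demonstration, not any particular product's method.
import math
from statistics import mean, stdev

def swipe_features(points):
    """Reduce a swipe (list of (x, y, t) samples) to a few summary numbers."""
    xs, ys, ts = zip(*points)
    path_len = sum(math.dist(points[i][:2], points[i + 1][:2])
                   for i in range(len(points) - 1))
    duration = ts[-1] - ts[0]
    direct = math.dist(points[0][:2], points[-1][:2])
    return [
        path_len,                        # total distance travelled by the finger
        duration,                        # how long the swipe took
        path_len / max(duration, 1e-6),  # average speed
        direct / max(path_len, 1e-6),    # straightness (1.0 = perfectly straight)
    ]

def enroll(training_swipes):
    """Build a per-user profile: mean and spread of each feature."""
    feats = [swipe_features(s) for s in training_swipes]
    return [(mean(col), stdev(col) or 1e-6) for col in zip(*feats)]

def verify(profile, swipe, threshold=3.0):
    """Accept the swipe if every feature falls within `threshold` std devs."""
    score = max(abs(f - m) / s for f, (m, s) in zip(swipe_features(swipe), profile))
    return score < threshold
```

The point of the toy is the last function: once the profile exists, any future swipe, even one the system has never seen before, can be scored against it, which is what lets verification run continuously in the background.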
While gestural ID systems are getting a lot of research attention these days thanks to error rates trending toward the low single digits, they also tend to take a rosy view of the security world, one in which hackers attempt to breach such defenses via crude impersonation, e.g. one hacker-user trying to mirror the movements of a target user. This is called a zero-effort attack, and it stands in contrast to an attack by forgery, in which the attacker attempts to recreate (rather than mimic) the target user's gestures.
A DARPA-funded report titled "Robotic Robbery on the Touch Screen," published recently in the journal ACM Transactions on Information and System Security, looks at gestural authentication through the eyes of a more sophisticated hacker. It presents two Lego-driven robotic attacks on a touch-based authentication system: one based on gestural statistics collected over time from a large population of users, and the other based on gestural data stolen directly from the targeted user. Both proved quite effective.
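The population-statistics attack is easier to picture with a toy simulation. The sketch below assumes a simple feature-based verifier and made-up numbers; the actual study drove a Lego robot executing synthesized strokes against real devices, so treat this strictly as an outline of the idea rather than a reproduction of the paper's method.

```python
# Toy simulation of the population-statistics idea: the attacker never sees
# the target's data, only aggregate swipe statistics from many other users.
# The feature model, numbers, and acceptance rule are all assumptions made
# for illustration, not the procedure described in the paper.
import random

random.seed(0)
N_FEATURES = 4

def make_user():
    """Each simulated user's swipes follow their own per-feature mean/spread."""
    return [(random.uniform(0, 10), random.uniform(0.5, 2.0))
            for _ in range(N_FEATURES)]

def sample_swipe(model):
    """Draw one swipe's feature vector from a (mean, spread) model."""
    return [random.gauss(m, s) for m, s in model]

def accepts(profile, swipe, threshold=2.5):
    """Verifier: every feature must fall within `threshold` std devs of the profile."""
    return all(abs(f - m) / s < threshold for f, (m, s) in zip(swipe, profile))

population = [make_user() for _ in range(500)]
target = make_user()

# The attacker's only knowledge: population-wide average statistics.
pop_means = [sum(u[i][0] for u in population) / len(population) for i in range(N_FEATURES)]
pop_spreads = [sum(u[i][1] for u in population) / len(population) for i in range(N_FEATURES)]
attacker_model = list(zip(pop_means, pop_spreads))

# Fire forged swipes at the target's verifier and see how many slip through.
forgeries = [sample_swipe(attacker_model) for _ in range(1000)]
hit_rate = sum(accepts(target, f) for f in forgeries) / len(forgeries)
print(f"Forged swipes accepted as the target: {hit_rate:.1%}")
```

The uncomfortable insight is the same one the paper formalizes: if most people's swipes cluster around similar statistics, an attacker who knows only those aggregates can land inside many users' acceptance regions without ever observing the victim.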