Last year, a group of researchers from MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) showed us just how easy it is to “see” a human heartbeat in ordinary video footage. With a little filtering, a little averaging, and a touch of turn-of-the-century (1900) mathematical analysis, the telltale color changes in the skin associated with the peak pressure pulse of the heart can be seen by anyone. The CSAIL researchers have now rejigged their algorithms to optimize instead for detection of the head motion artifact associated with each beat. Remote detection of the heartbeat is a tremendous convenience that could potentially spot heart abnormalities in those who would otherwise never look twice — if it is accurate enough.
The release from MIT on this work mentions that heart rate variability (HRV), the moment-to-moment deviations from constancy, can be used to diagnose potential heart issues. The researchers note that the true margin of error for their technique could be a few beats per minute. While that may seem pretty good for getting an overall pulse rate to satisfy everyday curiosity, it could be a problem if you want to extract interbeat variability. At issue is the fact that the noise introduced by those few questionable beats (probably missed beats) could easily swamp and confound the real HRV signal.
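To get a feel for how badly a couple of missed beats can distort an HRV estimate, here is a minimal sketch with made-up numbers (none of these figures come from the MIT work): we fake a steady interbeat series with a genuine small wobble, merge two intervals to mimic a missed beat, and compare a common HRV metric, the RMS of successive differences (RMSSD).

```python
def rmssd(intervals_ms):
    """Root mean square of successive differences between beat intervals."""
    diffs = [b - a for a, b in zip(intervals_ms, intervals_ms[1:])]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5

# One minute at ~60 bpm with an illustrative +/-20 ms beat-to-beat wobble
clean = [1000 + (20 if i % 2 else -20) for i in range(60)]

# A missed beat shows up as a single doubled interval in the series
missed = clean.copy()
missed[30] = clean[30] + clean[31]
del missed[31]

print(rmssd(clean))   # reflects only the genuine 40 ms swing
print(rmssd(missed))  # several times larger, driven by one bad interval
```

A single merged interval injects a pair of roughly 1000 ms jumps into the difference series, which squares up to dominate the sum; the "real" variability is buried exactly as the article warns.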
Without getting too bogged down, we will just mention here that there are many ways to derive and characterize HRV. We could measure each beat-to-beat delay, for example, bin the delays into a histogram, and report a standard deviation or RMS (root-mean-square) value. Alternatively, a fancier frequency-domain analysis can be done to generate a power spectrum. No matter which method is used, the potential problem is that errors as low as 2% in the ground-level data can introduce unwanted bias into the HRV calculation. The researchers are working to improve their algorithms and suggest that accuracy may be gained by combining motion and colorimetric imaging.
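The time-domain approach above can be sketched in a few lines. This is purely illustrative, with invented interval values and an arbitrary 50 ms bin width, not anything from the researchers' pipeline:

```python
import statistics
from collections import Counter

def sdnn(intervals_ms):
    """Standard deviation of the beat-to-beat intervals (the classic SDNN metric)."""
    return statistics.stdev(intervals_ms)

def interval_histogram(intervals_ms, bin_ms=50):
    """Bin intervals into bin_ms-wide buckets, a crude histogram of the tachogram."""
    return Counter((iv // bin_ms) * bin_ms for iv in intervals_ms)

# Illustrative interbeat intervals in milliseconds (roughly 75 bpm)
rr = [812, 790, 805, 821, 798, 840, 775, 808, 795, 830]

print(round(sdnn(rr), 1))                    # SDNN in ms
print(sorted(interval_histogram(rr).items()))  # (bin start, count) pairs
```

The frequency-domain variant would instead Fourier-transform the interval series and report power in standard low- and high-frequency bands, but the sensitivity to a few bad beats is the same either way.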
While most of the finesse in these techniques is in the software, successful signal extractions of this kind are ultimately driven by the sensor hardware. If an ordinary smartphone camera can already do a decent job, we can imagine the level of detail that more dedicated remote sensors might be able to gather. Finer-grained spatial detection, along with thermal analysis, might reveal slight asymmetries in perfusion of the face and paint a clearer picture of vascular health. Spectrographic detection of components in sweat might provide significant advances over traditional galvanic skin resistance measurements. A quick and harmless laser pulse to the skin would evaporate hundreds of unique molecules, readily available to be vacuumed up and analyzed. "Taking" such readings more or less covertly need not always be seen as suspicious. They may come to be normal conversational facilitators that amplify the feedback we already give, consciously or not, to indicate receptiveness.
On their own, things like pulse, blood oxygenation, pupil dilation, or skin resistance are of limited use. If someone is exhausted, scared, or startled, these metrics change predictably, and in synchrony. The greater insights may come when these measures are analyzed differentially. In other words, if you note that someone's heart speeds up, but their blood oxygenation has been sitting comfortably at 99%, you might suspect something more cerebral, rather than autonomic, is going on. If your data feeds note that a seemingly consistent right-side bias in the latency of your boss's pupil dilation suddenly flip-flops, or the symmetry of their smile increases, that might be the moment to hit them up for a raise. While most of these things will be of little practical use to us, they will likely be immensely interesting to our avatar and machine creations striving to understand human emotional virtuosity.