Category Archives: Biometrics

New algorithm uses subtle changes to make a face more memorable

Do you have a forgettable face? Many of us go to great lengths to make our faces more memorable, using makeup and hairstyles to give ourselves a more distinctive look.

Now your face could be instantly transformed into a more memorable one without the need for an expensive makeover, thanks to an algorithm developed by researchers in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

Two examples of faces that have been modified from an original photo (center) to look more (right) and less (left) memorable without altering their identity, attractiveness, age, or gender. Photo courtesy of the researchers.

The algorithm, which makes subtle changes to various points on the face to make it more memorable without changing a person’s overall appearance, was unveiled earlier this month at the International Conference on Computer Vision in Sydney.

“We want to modify the extent to which people will actually remember a face,” says lead author Aditya Khosla, a graduate student in the Computer Vision group within CSAIL. “This is a very subtle quality, because we don’t want to take your face and replace it with the most memorable one in our database; we want your face to still look like you.”

More memorable — or less

The system could ultimately be used in a smartphone app to allow people to modify a digital image of their face before uploading it to their social networking pages. It could also be used for job applications, to create a digital version of an applicant’s face that will more readily stick in the minds of potential employers, says Khosla, who developed the algorithm with CSAIL principal research scientist Aude Oliva, the senior author of the paper; Antonio Torralba, an associate professor of electrical engineering and computer science; and graduate student Wilma Bainbridge.

Conversely, it could also be used to make faces appear less memorable, so that actors in the background of a television program or film do not distract viewers’ attention from the main actors, for example.

To develop the memorability algorithm, the team first fed the software a database of more than 2,000 images. Each of these images had been assigned a “memorability score,” based on how well human volunteers were able to remember the pictures. The software could then analyze this information to detect subtle trends in the features of these faces that made them more or less memorable to people.
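In machine-learning terms, this training step is a supervised regression problem: learn a function from face-image features to a human-derived memorability score. The sketch below is not the authors’ code; it uses random stand-in data in place of real face features and scores, purely to illustrate the setup.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

# Stand-in data: in the real pipeline, each row would be a feature vector
# extracted from one of the ~2,000 face photos, and each target would be
# that photo's crowd-sourced memorability score. Random values are used
# here only so the sketch runs; the resulting R^2 is meaningless.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 64))        # 2,000 faces, 64 features each (toy)
y = rng.uniform(0.0, 1.0, size=2000)   # memorability scores in [0, 1] (toy)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
predictor = SVR(kernel="rbf").fit(X_train, y_train)
print("held-out R^2:", predictor.score(X_test, y_test))
```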

The researchers then programmed the algorithm with a set of objectives — to make the face as memorable as possible, but without changing the identity of the person or altering their facial attributes, such as their age, gender, or overall attractiveness. Changing the width of a nose may make a face look much more distinctive, for example, but it could also completely alter how attractive the person is, and so would fail to meet the algorithm’s objectives.

When the system has a new face to modify, it first takes the image and generates thousands of copies, known as samples. Each of these samples contains tiny modifications to different parts of the face. The algorithm then analyzes how well each of these samples meets its objectives.

Once the algorithm finds a sample that succeeds in making the face look more memorable without significantly altering the person’s appearance, it makes yet more copies of this new image, with each containing further alterations. It then keeps repeating this process until it finds a version that best meets its objectives.
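This generate-and-select loop is essentially hill climbing. Below is a minimal sketch of the idea, with toy functions standing in for the learned memorability predictor and the attribute constraints; the real system warps an actual face image rather than a bare feature vector.

```python
import numpy as np

rng = np.random.default_rng(0)

def memorability(face):
    # Toy stand-in for the learned memorability predictor.
    return -float(np.sum((face - 1.0) ** 2))

def attribute_drift(face, original):
    # Toy stand-in for "identity, age, gender, or attractiveness changed":
    # here simply the distance from the original feature vector.
    return float(np.linalg.norm(face - original))

def make_more_memorable(original, n_samples=1000, n_rounds=50,
                        step=0.02, max_drift=1.0):
    """Generate many slightly modified copies, keep the best-scoring one
    that stays close to the original, and repeat until no improvement."""
    best = original.copy()
    for _ in range(n_rounds):
        noise = rng.normal(scale=step, size=(n_samples,) + best.shape)
        candidates = [c for c in best + noise
                      if attribute_drift(c, original) <= max_drift]
        if not candidates:
            break
        top = max(candidates, key=memorability)
        if memorability(top) <= memorability(best):
            break                       # converged: no better sample found
        best = top
    return best

face = rng.normal(size=16)              # toy "face" as a feature vector
print(memorability(face), memorability(make_more_memorable(face)))
```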

“It’s really like applying an elastic mesh onto the photograph that slightly modifies the face,” Oliva says. “So the face still looks like you, but maybe with a bit of lifting.”

The team then selected photographs of 500 people and modified them to produce both a memorable and forgettable version of each. When they tested these images on a group of volunteers, they found that the algorithm succeeded in making the faces more or less memorable, as required, in around 75 percent of cases.

Familiarity breeds likability

Making a face appear familiar can also make it seem more likable, Oliva says. She and Bainbridge have published a complementary paper in the journal Cognitive Science and Social Psychology on the attributes that make a face memorable. The first time we see a face, we tend to “tag” it with attributes based on appearance, such as intelligence, kindness, or coldness. “If we tag a person with familiarity, because we think this is a face we have seen before, we have a tendency to like it more, and for instance to think the person is more trustworthy,” she says.

The team is now investigating the possibility of adding other attributes to their model, so that it could modify faces both to be more memorable and to appear more intelligent or trustworthy, for example. “So you could imagine having a system that would be able to change the features of your face to make you whatever you would wish for, but always in a very subtle way,” Oliva says.

We all wish to use a photo that makes us more visible to our audience, says Aleix Martinez, an associate professor of electrical and computer engineering at Ohio State University. “Painters of the Renaissance knew how to make portraits memorable, but we are clueless on how to take that picture that will give us an edge over others or, at a minimum, show the best side of us,” Martinez says.

Now Oliva and her team have developed a computational algorithm that can do this for us, he says. “Input your preferred picture of your face and it will make it even better,” Martinez says. “This will allow us to gain that advantage we were looking for and hopefully make people remember us more.”

Source: MIT

Facial features to be used to diagnose pain in dementia sufferers

Undiagnosed pain of dementia sufferers could soon be measured by facial recognition technology, as part of an Electronic Pain Assessment Tool (ePAT) being developed at Curtin University.

The ePAT, which is linked to the camera on a smartphone or tablet, is designed to quickly and accurately detect, evaluate and document the severity of pain in non-communicative patients with dementia.

Lead developer Professor Jeff Hughes, from Curtin’s School of Pharmacy, said the new tool would add an automated and innovative facial recognition component to more traditional methods of assessing pain.

It is hoped the technology will eventually be used to measure pain levels in other non-communicative groups, such as babies and infants.

“A significant issue among some dementia sufferers is that they no longer have the communication skills to express the level of pain they are suffering,” Professor Hughes said.

“The seriousness of their pain can often go unrecognised. But automated evaluation technology will allow for the calculation of a pain severity score based in part on a patient’s facial expressions, captured by the smartphone or tablet.

“Combined with the American Geriatrics Society’s other widely accepted pain indicators – vocalisation, behavioural change, psychological change, physiological change and physical change – ePAT will provide an accurate total pain score for the subject.

“The great hope is that dementia sufferers will no longer have to suffer in silence and that treatment stemming from ePAT evaluation can help give them a better quality of life.

“The obvious next step from there would be to use the technology to gauge pain scores in the very young.”
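ePAT’s actual scoring rules have not been published, but the aggregation Professor Hughes describes, an automated facial-expression score combined with the five other pain-indicator domains, might look something like this hypothetical sketch:

```python
from dataclasses import dataclass

@dataclass
class PainAssessment:
    # Hypothetical 0-3 severity ratings per domain; the real ePAT scales
    # and weightings are not public.
    facial_expression: int   # derived automatically from the camera
    vocalisation: int
    behavioural_change: int
    psychological_change: int
    physiological_change: int
    physical_change: int

    def total_score(self) -> int:
        """Simple additive total across the six domains (an assumption)."""
        return (self.facial_expression + self.vocalisation
                + self.behavioural_change + self.psychological_change
                + self.physiological_change + self.physical_change)

obs = PainAssessment(facial_expression=3, vocalisation=1,
                     behavioural_change=2, psychological_change=0,
                     physiological_change=1, physical_change=1)
print("total pain score:", obs.total_score())
```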

Professor Hughes said the growing number of people worldwide living with dementia underlined the potential value of the ePAT.

“Globally there are an estimated 36 million people living with dementia and this figure is predicted to rise to 115 million by 2050. In Australia, there will be one new person diagnosed with dementia every six minutes,” Professor Hughes said.

“With no current electronic tools available to assess pain in people with dementia, ePAT is an innovation that can be of great benefit – not only in terms of pain treatment but also in improved cognitive function and reduced care dependence.

“ePAT’s potential has already been demonstrated by the willingness of Alzheimer’s Australia to invest $50,000 in the project and also fund a PhD student working on it.”

The ePAT innovation was a finalist in the 2013 Curtin Commercial Innovation Awards.

A panel of experts has been set up to aid in the development of the project and consultations are being undertaken with industry around the facial recognition component.

The computing partner for the project is Swiss-based nViso, a specialist in capturing human facial micro-expressions and eye movements, which won the 2013 IBM Beacon Award for smarter computing.

The research team consists of Professor Hughes and his Curtin colleagues Dr Kreshnik Hoti, Mustafa Atee and Professor Moyez Jiwa.

It is hoped testing on ePAT will start in the first half of 2014.

Source: Curtin University

Innovative motion evaluation tool saves patients with back pain X-ray radiation exposure

Those who have undergone extensive back surgery and need repeated X-rays to monitor their progress may soon have access to a new technology that skips the X-rays and the repeated radiation exposure, opting instead for an innovative, noninvasive, non-X-ray device that evaluates spinal movement. The technology was created and patented by two engineering undergraduates who recently formed their own company to market the device.

The paper describing the technology appears in the current special issue of Technology and Innovation – Proceedings of the National Academy of Inventors®, and was presented at the Second Annual Conference of the National Academy of Inventors®, hosted by the University of South Florida on February 21–23, 2013.

“Surgical treatment is inevitable for some of the 80 percent of Americans who at some point in their lives suffer from back pain,” said Kerri Killen of Versor, Inc. who, along with Samantha Music, developed the new technology while they were undergraduate students at Stevens Institute of Technology in New Jersey. “We developed an evaluation device that uses battery powered sensors to evaluate spinal motion in three-dimensions. It not only reduces the amount of X-ray testing patients undergo but also has the potential to save over $5 billion per year nationwide in health care costs.”

According to co-developer Music, there are 600,000 spinal surgeries every year in the U.S., with an annual exposure of 2,250 mrem of radiation per patient before and after surgery. The “electrogoniometer” they developed can be used by surgeons before and after surgery, and by physical therapists to evaluate a patient’s progress. The technology can also be used in other orthopedic specialties to reduce costs and eliminate X-ray exposure.

“The electrogoniometer contains three rotary potentiometers, which are three-terminal resistors with a sliding contact that forms a voltage divider to control electrical devices, such as a rheostat. Each potentiometer measures one of the three spinal movements,” explains Music. “It also contains a transducer (a device that converts a signal from one form of energy to another) to measure the linear displacement of the spine when it curves while bending.”
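In essence, each potentiometer’s wiper voltage maps linearly to a rotation angle. A minimal sketch of that conversion follows; the supply voltage, travel range, and naming of the three spinal movements (commonly flexion/extension, lateral bending, and axial rotation) are assumptions, not specifications of the patented device.

```python
# Hypothetical constants: a battery-powered sensor supply and the
# mechanical travel of a typical rotary potentiometer.
V_SUPPLY = 3.3           # volts
FULL_SCALE_DEG = 300.0   # degrees of rotation across the pot's full travel

def wiper_voltage_to_angle(v_wiper: float) -> float:
    """A voltage divider's wiper voltage is proportional to rotation,
    so the angle follows from the ratio to the supply voltage."""
    return (v_wiper / V_SUPPLY) * FULL_SCALE_DEG

# One reading per potentiometer, i.e., per spinal movement (example values).
readings = {"flexion_extension": 1.10,
            "lateral_bending":   0.55,
            "axial_rotation":    0.30}   # wiper voltages in volts
angles = {axis: wiper_voltage_to_angle(v) for axis, v in readings.items()}
print(angles)   # {'flexion_extension': 100.0, 'lateral_bending': 50.0, ...}
```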

The developers add that the device is “easy to use” and requires minimal training for the health professional end-user. The vest-like attachment to a patient eliminates the need for any other special equipment and can be used during a routine clinical evaluation. “It is comfortable for the patient and efficient, providing immediate and accurate results,” they add.

An additional use for the device, they said, could be measuring spinal movement angles to determine when an injured worker might be able to return to work. By developing new ways to attach the device, different areas of the body, whether hip, shoulder, knee, or wrist, could be evaluated for movement.

When Killen and Music developed the electrogoniometer in their senior design class at Stevens, they also received mentoring and assistance in establishing a small business to market the device.

Source: USF

Image or reality? Leaf study needs photos and lab analysis

Automated remote photography is a convenient, labor-saving research tool for tracking leaf function and doing forest research. But does photography mirror what’s actually happening on the ground? A new study finds photography accurately tracks the timing of red pigments in the fall, but the timing of green in the spring and summer — not so much.

An automated camera on a tower above the forest canopy can record seasonal changes in overall leaf color, but photos might not always correspond to seasonal biochemical changes within leaves themselves. Credit: Marc Mayes/Brown University

Every picture tells a story, but the story digital photos tell about how forests respond to climate change could be incomplete, according to new research.

Scientists from Brown University and the Marine Biological Laboratory have shown that the peak in forest greenness as captured by digital pictures does not necessarily correspond to direct measures of peak chlorophyll content in leaves, which is an indicator of photosynthesis. The study, which focused on a forest on Martha’s Vineyard, has significant implications for how scientists use digital photos to study forest canopies.

The work was led by Xi Yang, a graduate student at Brown and MBL and is published online in the Journal of Geophysical Research: Biogeosciences.

Photography does a better job gathering data on the timing of red pigments in the fall than tracking the greening of spring and summer. Credit: Xi Yang/Brown University

The use of digital photography to study how forests change has increased in recent years. The technology provides an inexpensive way to monitor forest change closely over time, an approach that isn’t labor-intensive. Cameras can be set up, programmed to take pictures at certain intervals, and then left to do their thing for long periods. This type of research has produced significant findings in recent years. Researchers are currently using networks of cameras around the country to monitor the timing of when leaves sprout in the spring and drop in the fall. Both events are expected to be sensitive to climate change.

Using cameras to see when leaves sprout and when they fall off is one thing, but Yang and his colleagues wanted to see how well photos could capture what happens in between. Could cameras be used to tell when leaves reach peak photosynthesis in the summer or when photosynthesis slows in the fall?

“The key question we want to address in our study is what the camera can tell us about the function of the plants,” Yang said.

To find out, Yang placed a camera on a 50-foot tower above the canopy of the Manuel F. Correllus State Forest on Martha’s Vineyard, Mass. The camera took photos every hour between 10 a.m. and 3 p.m. each day from April to November 2011. During that same period, Yang and his colleagues took weekly leaf samples from the forest and tested them in the lab. In spring, they measured levels of chlorophyll, the molecule that helps leaves photosynthesize — turn light into energy — and gives them their green color. In the fall, they looked for the pigments that cause leaves to turn red.

Using careful color analysis, Yang and his colleagues found that the photos reached their peak greenness around June 9. The lab tests, however, showed that chlorophyll measurements didn’t peak until 20 days later.
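The paper’s exact image processing is not detailed here, but a common greenness measure in canopy-camera studies is the green chromatic coordinate, GCC = G / (R + G + B), averaged over each photo; the day with the highest mean value marks peak greenness. A minimal sketch, with random images standing in for the real tower photos:

```python
import numpy as np

def mean_gcc(rgb_image: np.ndarray) -> float:
    """Mean green chromatic coordinate over an (H, W, 3) RGB image."""
    rgb = rgb_image.astype(float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return float(np.mean(g / (r + g + b + 1e-9)))   # epsilon avoids /0

# One representative image per day; toy data in place of canopy photos.
rng = np.random.default_rng(0)
daily_images = [rng.integers(0, 256, size=(120, 160, 3)) for _ in range(30)]
gcc_series = [mean_gcc(img) for img in daily_images]
peak_day = int(np.argmax(gcc_series))   # index of peak canopy greenness
print("peak greenness on day index:", peak_day)
```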

“That means that if you want to use the greenness from the camera as an indicator of plant function in the spring, you might be wrong because they do not match very well,” Yang said. “This is a warning for future study.”

The implications are most significant for studying how forests regulate the amount of carbon dioxide in the atmosphere, Yang says. Plants absorb carbon dioxide as they photosynthesize, and absorption peaks when photosynthesis does. Knowing the timing of the peak can be important for accurate measurements. “If you don’t get it right,” Yang says, “you might miss a lot of the carbon absorption.”

The news wasn’t all bad for the cameras, however. The study showed that peak leaf senescence, the increase in red pigments in the fall, was captured quite well by the cameras. The timing of peak redness in the camera data matched the peak of red pigment found in the lab studies. That’s important for researchers to know, but could also be useful for tourism officials in places like Vermont, where leaf peepers plan trips to take in the fall colors. The researchers also compared their ground-based photos with color data from a satellite looking at the same forest over the same period of time. Those data matched fairly well, suggesting that cameras are a good way to validate data from satellites.

“This is really exciting because we can now take a simple measure of color, tie it quantitatively to plant seasonal activity, and ultimately link this to satellite observations where we think changes in plant cycles due to climate change are being expressed on a global scale,” said Jack Mustard, professor of geological sciences at Brown. Mustard and Jianwu Tang at MBL are Yang’s Ph.D. advisers and authors on the paper.

Yang expects cameras will continue to play a significant role in climate and forest research. But in light of these findings, researchers need to use a bit of caution.

“If you want to use cameras for the study of plant function,” Yang said, “you want to be careful, especially in the spring.”

Source: Brown University

Can walkies tell who’s the leader of the pack?

Dogs’ paths during group walks could be used to determine leadership roles and through that their social ranks and personality traits, say researchers from Oxford University, Eötvös University, Budapest and the Hungarian Academy of Sciences (HAS).

Hedvig Balázs takes her Vizsla dogs for a walk near Budapest, Hungary (Image: Enikő Kubinyi)

Using high-resolution GPS harnesses, scientists tracked the movements of six dogs and their owner across fourteen 30-40 minute walks off the lead. The dogs’ movements were measurably influenced by underlying social hierarchies and personality differences.

‘We showed that it is possible to determine the social ranking and personality traits of each dog from their GPS movement data,’ said study author Dr Máté Nagy of Oxford University’s Department of Zoology, formerly of Eötvös University and HAS. ‘On individual walks it is hard to identify one permanent leader, but over longer timescales it soon becomes clear that some dogs are followed by peers more often than others. Overall, the collective motion of the pack is strongly influenced by an underlying social network.’
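The paper’s full analysis is more involved, but a standard way to extract leader-follower relations from paired trajectories, used in earlier collective-motion studies of pigeon flocks by members of the same group, is the directional correlation delay: if one animal’s heading is best matched by another animal’s heading a few seconds later, the first animal is leading that pair. A minimal sketch, with a synthetic trailing track in place of real GPS data:

```python
import numpy as np

def headings(track: np.ndarray) -> np.ndarray:
    """Unit heading vectors from an (N, 2) array of positions."""
    v = np.diff(track, axis=0)
    return v / (np.linalg.norm(v, axis=1, keepdims=True) + 1e-9)

def leadership_lag(track_a: np.ndarray, track_b: np.ndarray,
                   max_lag: int = 20) -> int:
    """Lag (in samples) maximising the mean alignment of A's headings
    with B's later headings; a positive result means A leads B."""
    ha, hb = headings(track_a), headings(track_b)
    n = min(len(ha), len(hb))

    def alignment(lag: int) -> float:
        if lag >= 0:   # compare A at time t with B at time t + lag
            return float(np.mean(np.sum(ha[:n - lag] * hb[lag:n], axis=1)))
        return float(np.mean(np.sum(ha[-lag:n] * hb[:n + lag], axis=1)))

    return max(range(-max_lag, max_lag + 1), key=alignment)

# Synthetic check: B retraces A's path five samples behind, so A leads.
rng = np.random.default_rng(0)
path = np.cumsum(rng.normal(size=(200, 2)), axis=0)   # one wandering walk
print(leadership_lag(path[5:], path[:-5]))            # expected output: 5
```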

The study, published in PLOS Computational Biology, demonstrates the power of path tracking to measure social behaviour and automatically determine dogs’ personalities. In future, one possible use of the technology would be to assess search and rescue dogs to see which dogs work best together. As dogs are ideal models of human behaviour, the same methods could be used to study social interactions in humans, such as parents walking with their children. The study is part of the European Research Council project COLLMOT, led by Professor Tamás Vicsek (Eötvös University and HAS), which aims to understand the collective motion of a wide variety of organisms in nature.

How dogs behave during walks reveals a lot about traits such as trainability, controllability, aggression, age and dominance. Dogs that consistently took the lead were more responsive to training, more controllable, older and more aggressive than the dogs that tended to follow. Dogs that led more often had higher dominance ranks in everyday situations, assessed by a dominance questionnaire.

‘The dominance questionnaire tells us the pecking order of dog groups by quantifying interactions between pairs,’ said Dr Enikő Kubinyi, senior author of the study from the Hungarian Academy of Sciences. ‘For example, the dogs that bark first and more often when strangers enter the house, eat first at meals and win fights are judged as more dominant. Conversely, dogs that lick other dogs’ mouths more often are less dominant, as this is a submissive display.’

Pack leadership is well-established in wolves, where packs are typically led by a single breeding pair, but there is still much debate as to whether groups of domestic dogs have a social hierarchy.

‘These dogs have no breeding pair,’ said Dr Kubinyi. ‘However, there are dogs who take the lead more often than others. On average, an individual took the role of the leader in a given pair about three-quarters of the time. This ratio is of similar magnitude to that seen in wild wolf packs with several breeding individuals. Using this quantitative data over longer time scales allows us to see the more subtle relationships that might otherwise be missed. Of course, hierarchies are likely to vary across breeds and individual groups, so we hope to use this technology on other animals in future to investigate further.’

The dogs used in this study were of the Vizsla breed, a Hungarian hunting dog known for its good-natured temperament and trainability. It is interesting to note that the leader-follower relationships were always voluntary; dogs chose whom to follow, and the leaders did not compel other dogs to follow them.

The technology used in the study could be applied to other dogs used for search and rescue to provide quantitative data allowing handlers to compare how different dogs work together and pick those with the highest compatibility. Each device weighs only 14 grams and further sensors such as gyroscopes could be used to determine what each animal is doing at a given time.

Source: Oxford University