Category Archives: Robotics

Germ-zapping robots put to the test to combat hospital-acquired infections

A research team led by Keith S. Kaye, M.D., M.P.H., director of clinical research in the Michigan Medicine Division of Infectious Diseases, will put germ-zapping robots to the test at Detroit hospitals.

The $2 million effort, supported by the federal Agency for Healthcare Research and Quality, is the first of its kind to study no-touch room disinfection.

A Xenex Germ-Zapping Robot pulses germicidal ultraviolet light to combat hospital-acquired infections.

Michigan researchers will look at the ability of high intensity ultraviolet light delivered by Xenex Germ-Zapping Robots to protect patients from deadly superbugs, such as Clostridium difficile, found on surfaces.

Patients across the country are vulnerable to hospital-acquired infections – infections they can get while staying at a medical facility. Significant progress has been made in preventing some infection types, but they continue to be a major threat nationwide.

Kaye will work with colleagues at Wayne State University and the Detroit Medical Center to conduct the study in two hospitals covering 16 total hospital units at the DMC.

At the end of two years, researchers will report on rates of hospital-acquired infections in units where pulsed xenon UV light (PX-UV) was added to cleaning routines compared to units where a sham UV disinfection system was added to standard cleaning.

They’ll measure whether cleaning plus PX-UV reduced the number of infections from drug-resistant organisms, including Clostridium difficile (C. diff), vancomycin-resistant enterococci (VRE), Klebsiella pneumoniae, extended-spectrum beta-lactamase (ESBL)-producing Escherichia coli, methicillin-resistant Staphylococcus aureus (MRSA) and Acinetobacter baumannii.

Hospital cleanliness is recognized as a critically important process to help prevent hospital-acquired infections. It involves extensive cleaning and disinfection after a patient has been discharged and before the next patient has been admitted to the room.

PX-UV lamps in the robot produce a flash of germicidal light in millisecond pulses, damaging the cell structure and stopping the DNA repair mechanisms of most pathogens.

The unique design of the study, which is double-blinded and sham-controlled, makes it the first to examine the clinical impact of adding PX-UV to hospital cleaning routines.

Source: University of Michigan Health System

Standardizing Communications for the Internet of Things

The fast-growing Internet of Things (IoT) consists of millions of sensing devices in buildings, vehicles and elsewhere that deliver reams of data online. Yet this far-flung phenomenon involves so many different kinds of data, sources and communication modes that its myriad information streams can be onerous to acquire and process.

GTRI researchers (l-r) Heyward Adams, Andrew Hardin and Greg Bishop examine Internet of Things devices whose output can be integrated using GTRI’s new FUSE software. Image credit: Rob Felt, Georgia Tech

Researchers at the Georgia Tech Research Institute (GTRI) have developed a flexible, generic data-fusion software that simplifies interacting with sensor networks. Known as FUSE, it provides a framework to standardize the diverse IoT world. Its application programming interface (API) lets users capture, store, annotate and transform any data coming from Internet-connected sources.

“The Internet of Things has always been something of a Tower of Babel, because it gathers data from everywhere – from the latest smart-building microcontrollers and driver-assist vehicles to legacy sensors installed for years,” said Heyward Adams, a GTRI research scientist who is leading the FUSE project. “Traditionally, people wanting to utilize IoT information have had to examine the attributes of each individual sensor and then write custom software on an ad-hoc basis to handle it.”

Before FUSE, Adams said, a typical IoT task could require several manual steps. For example, users would acquire data from the Internet by manually finding and setting up the proper communication protocols. Then each data value would have to be assigned to a supporting database. Finally, the user would need to process the data, via approaches such as arithmetic manipulation or statistical evaluation, before it could be fed into a decision algorithm.
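
As an illustration of those manual steps, here is a minimal sketch of such a hand-rolled pipeline. Everything in it is hypothetical and invented for this example: the sensor URL, the JSON field names, the database schema and the decision threshold.

```python
# A minimal sketch of the pre-FUSE workflow described above: manually
# polling one sensor, storing readings, and post-processing them before
# a decision step. URL, JSON fields and threshold are all hypothetical.
import sqlite3
import statistics

import requests  # third-party HTTP client

SENSOR_URL = "http://sensor.example.com/api/temperature"  # hypothetical endpoint

# Step 1: acquire data using whatever protocol this one sensor speaks.
reading = requests.get(SENSOR_URL, timeout=5).json()

# Step 2: assign each value to a supporting database by hand.
db = sqlite3.connect("readings.db")
db.execute("CREATE TABLE IF NOT EXISTS readings (ts TEXT, value REAL)")
db.execute("INSERT INTO readings VALUES (?, ?)",
           (reading["timestamp"], reading["celsius"]))  # hypothetical fields
db.commit()

# Step 3: statistical evaluation before feeding a decision algorithm.
values = [row[0] for row in db.execute("SELECT value FROM readings")]
if statistics.mean(values) > 30.0:  # hypothetical threshold
    print("decision input: overheating suspected")
```

Every new sensor meant repeating all three steps with different protocols, formats and storage conventions.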

“FUSE lets us take a task that used to involve a week or two, and complete it in 10 or 15 minutes,” he said. “It provides a standard way of communicating in the unstandardized world of IoT.”

Adams explained that the technical challenges in creating an Internet of Things framework include not just receiving and transmitting sensor data that use different communication protocols and modalities, but also digesting and processing a variety of data encodings and formats. One particular challenge involves dealing with timing differences between incoming data sources.
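
As a sketch of that timing problem, consider two invented feeds reporting at different rates. One common reconciliation, shown below, is to pair each reading from the faster feed with the most recent earlier reading from the slower one; the sample data are stand-ins for illustration.

```python
# A sketch of one way to reconcile timing differences between two data
# sources that report at different, unpredictable rates: pair each fast
# reading with the latest earlier slow reading. Sample data are invented.
from bisect import bisect_right

fast_feed = [(0.1, 21.4), (0.5, 21.6), (0.9, 21.9)]  # (seconds, temperature)
slow_feed = [(0.0, 55.0), (0.8, 57.0)]               # (seconds, humidity)

slow_times = [t for t, _ in slow_feed]

def latest_before(ts):
    """Return the slow-feed value most recently seen at time ts."""
    i = bisect_right(slow_times, ts) - 1
    return slow_feed[i][1] if i >= 0 else None

aligned = [(ts, temp, latest_before(ts)) for ts, temp in fast_feed]
print(aligned)  # [(0.1, 21.4, 55.0), (0.5, 21.6, 55.0), (0.9, 21.9, 57.0)]
```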

To build their framework, the GTRI team developed advanced algorithms for handling the many different source types, communication modes and data types coming in over the Internet. They also devised methods for managing interactions among data sources that use varying and unpredictable data rates.

The result was FUSE, with capabilities that include:

  • Providing users with online forms that let them define the sources they need in the form of “domains” – abstract descriptions of how the targeted data interrelate;
  • Gathering incoming raw data according to user specifications and mapping them into the specified domains. The data can then be transformed and manipulated using “tasks,” which are user-defined JavaScript functions or legacy software that run inside the FUSE service (a speculative client sketch follows this list);
  • Displaying the processed data to users on-screen via an interactive data visualization, exploration and analysis dashboard that supports most data types including numeric, logical, and text data. Users can also devise their own custom dashboards or other interfaces.
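
A speculative sketch of how a client might drive such a service follows. The base URL, endpoint paths and payload shapes are assumptions made for illustration, since the article does not publish FUSE’s actual interface.

```python
# A speculative sketch of a client using a FUSE-like RESTful API to
# register a domain and attach a task. The base URL, paths and payload
# shapes are assumptions for illustration, not FUSE's documented API.
import requests

BASE = "http://fuse.example.org/api"  # hypothetical FUSE service

# Define a "domain": an abstract description of how the data interrelate.
domain = {
    "name": "building_climate",
    "fields": {"timestamp": "datetime", "room": "string", "celsius": "number"},
}
requests.post(f"{BASE}/domains", json=domain, timeout=5)

# Attach a "task": per the article, tasks are user-defined JavaScript
# functions run inside the FUSE service; here one is uploaded as text.
task = {
    "domain": "building_climate",
    "name": "to_fahrenheit",
    "code": "function run(r) { return {...r, fahrenheit: r.celsius * 9/5 + 32}; }",
}
requests.post(f"{BASE}/tasks", json=task, timeout=5)
```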

FUSE makes extensive use of the generic representational state transfer (REST) data capability. Referred to as RESTful, this widely used Internet standard supports the framework’s ability to receive and transmit divergent data streams.

The FUSE framework is designed to be massively distributable. Using load-balancing techniques, the service can spread IoT workloads across entire computer clusters. FUSE can also operate on small, inexpensive microcontrollers of the type increasingly found in buildings and vehicles performing a variety of smart sensing tasks.

The development team has built a transform layer into FUSE that connects the framework to legacy sensors, allowing integration of older devices that use diverse hardware and software designs. FUSE currently employs the open-source MongoDB program as its storage database, but GTRI researchers are developing adapters that let the service plug into common databases such as Oracle, MySQL and Microsoft SQL Server.
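
One way to picture that transform layer is as a small adapter per device type that normalizes raw payloads into a common record before storage. In this sketch, the comma-separated legacy wire format is invented for illustration.

```python
# A sketch of the transform-layer idea: one small adapter per legacy
# device normalizes its raw output into a common record before storage.
# The "id,unix_ts,value" wire format here is invented for illustration.
from datetime import datetime, timezone

def transform_legacy_csv(raw: bytes) -> dict:
    """Adapt a legacy 'id,unix_ts,value' payload to a normalized record."""
    device_id, unix_ts, value = raw.decode("ascii").split(",")
    return {
        "source": f"legacy:{device_id}",
        "timestamp": datetime.fromtimestamp(int(unix_ts), tz=timezone.utc).isoformat(),
        "value": float(value),
    }

print(transform_legacy_csv(b"42,1466000000,21.5"))
```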

“One of the advantages of FUSE is that it can be broken up and distributed to accommodate any sensor and server architecture,” Adams said. “So it can grow and change as a business, facility or campus changes over time.”

Source: Georgia Tech

World’s Largest Robotic Field Scanner Now in Place

The world’s largest robotic field scanner has been inaugurated at the University of Arizona’s Maricopa Agricultural Center, or MAC, near Phoenix.

Mounted on a 30-ton steel gantry moving along 200-meter steel rails over 1.5 acres of energy sorghum, the high-throughput phenotyping robot senses and continuously images the growth and development of the crop, generating an extremely high-resolution, enormous data stream — about 5 terabytes per day.

The world’s largest robotic field scanner (white steel box) is mounted on a 30-ton steel gantry moving along 200-meter steel rails over 1.5 acres of energy sorghum at the Maricopa Agricultural Center. Image credit: Susan McGinley

The scanner is part of the U.S. Department of Energy’s Advanced Research Projects Agency-Energy, or ARPA-E, program known as Transportation Energy Resources from Renewable Agriculture, or TERRA. The overall goal of the multi-institutional effort that includes the UA is to identify crop physical (phenotypic) traits that are best suited to producing high-energy sustainable biofuels and match those plant characteristics to their genes, greatly speeding up plant breeding to deliver improved varieties to market.

The UA and TERRA hosted a recent field day, which included a demonstration of the “field scanalyzer” and other ground and air-based robotics for plant breeding, along with tractor-based sensors and presentations on data analytics platforms for energy crops.

“The Maricopa Agricultural Center looks like a farm, but really it’s a laboratory. Having the field scanner here is part of our transformation into the next phase of agriculture,” said Shane Burgess, UA vice president for Agriculture, Life and Veterinary Sciences, and Cooperative Extension; dean of the UA College of Agriculture and Life Sciences; and director of the Arizona Experiment Station.

“The LemnaTec Scanalyzer is the largest field crop data acquisition platform in the world,” Burgess said. “It’s the vanguard of systems integrating phenotype with genotype for improving agricultural production.”

The test plots include 176 lines, cultivars and hybrids of sorghum planted in an area about the size of a football field. About 1.25 acres (30,000 to 40,000 plants) are being scanned, with the data feeding into the onsite Maricopa Phenomics Center, a collaboration with the USDA-ARS Arid-Land Agricultural Research Center and MAC. The University of Illinois is handling the big-data analytics.

Scientists expect to see numerous variations in plant height, leaf surface area, biomass, heat tolerance and other responses to local conditions.

“The system was installed in Maricopa because we are the best location in the United States to do drought and heat studies,” said Pedro Andrade-Sanchez, associate professor and precision agriculture specialist at MAC in charge of the field deployment of the sensor systems. “Our climate, the natural conditions of the low desert, is why we are here. We manage the environment to provide the best conditions to image these crop materials.”

The UA’s role is twofold: to provide and maintain the infrastructure (instrumentation, electric power and a very large data pipeline) under Andrade-Sanchez’s direction, and to establish and conduct the plant experiments, which involve a complex experimental design and precise placement of seeds in the ground so they can be georeferenced properly. Mike Ottman, extension agronomist in the UA School of Plant Sciences, is handling the agronomic aspects of growing sorghum.

“We know the genes, but where we’re stumbling is we don’t know the phenotype, meaning the physical characteristics of the crop — height, leaves, how fast it grows,” Ottman said. “In the past, someone with a clipboard and a pencil had to take notes on these things. Now we have several scanning instruments that can track a crop and take our notes for us. It’s called high throughput phenotyping, meaning you can characterize all of these plants in a hurry a couple of times a day, and far more objectively. You can note water stress, varieties that are drought tolerant and then look at the common genes.”

USDA-ARS research plant physiologist Jeff White is using the various remote sensing capabilities of the scanner — 3-D capability, thermal imagery, fluorescence — to measure leaf area and other characteristics. Crop simulation models, combined with the phenotyping data he obtains, will help infer plant/water interactions and transpiration. White’s role is to ensure that the data collected by the scanalyzer correlates with important characteristics of the crop involving growth and development.

Although currently set to measure plant traits best suited for biofuel production, the field scanner is a reference tool that eventually will be scaled down to specific objectives and breeding applications in other crops. Grains, green leafy vegetables and loblolly pines are among the possibilities.

“The conditions allowing us to phenotype for the most important traits for agriculture in Arizona are right here,” said Karen Schumaker, director of the School of Plant Sciences. “Ultimately, we will be able to examine more than just leaf characteristics in a field. At some point, this could be used for seed germination and breeding for below-ground root systems for drought — roots that spread deeper or wider below ground. It’s just the start.”

Source: University of Arizona

Better Together: Interpreting Pathology with the help of Artificial Intelligence

Pathologists have been largely diagnosing disease the same way for the past 100 years, by manually reviewing images under a microscope. But new work suggests that computers can help doctors improve accuracy and significantly change the way cancer and other diseases are diagnosed.

A research team from Harvard Medical School and Beth Israel Deaconess Medical Center recently developed artificial intelligence (AI) methods aimed at training computers to interpret pathology images, with the long-term goal of building AI-powered systems to make pathologic diagnoses more accurate.

“Our AI method is based on deep learning, a machine-learning algorithm used for a range of applications including speech recognition and image recognition,” explained pathologist Andrew Beck, HMS associate professor of pathology and director of bioinformatics at the Cancer Research Institute at Beth Israel Deaconess. “This approach teaches machines to interpret the complex patterns and structure observed in real-life data by building multi-layer artificial neural networks, in a process which is thought to show similarities with the learning process that occurs in layers of neurons in the brain’s neocortex, the region where thinking occurs.”

The Beck lab’s approach was recently put to the test in a competition held at the annual meeting of the International Symposium of Biomedical Imaging, which involved examining images of lymph nodes to decide whether they contained breast cancer. The research team of Beck and his lab’s postdoctoral fellows Dayong Wang and Humayun Irshad and student Rishab Gargya, together with Aditya Khosla of the MIT Computer Science and Artificial Intelligence Laboratory, placed first in two separate categories, competing against private companies and academic research institutions from around the world. The research team today posted a technical report describing their approach to the arXiv.org repository, an open access archive of e-prints in physics, mathematics, computer science, quantitative biology, quantitative finance and statistics.

“Identifying the presence or absence of metastatic cancer in a patient’s lymph nodes is a routine and critically important task for pathologists,” Beck explained. “Peering into the microscope to sift through millions of normal cells to identify just a few malignant cells can prove extremely laborious using conventional methods. We thought this was a task that the computer could be quite good at—and that proved to be the case.”

In an objective evaluation in which researchers were given slides of lymph node cells and asked to determine whether they contained cancer, the team’s automated diagnostic method proved accurate approximately 92 percent of the time, said Khosla, adding, “This nearly matched the success rate of a human pathologist, whose results were 96 percent accurate.”

“But the truly exciting thing was when we combined the pathologist’s analysis with our automated computational diagnostic method, the result improved to 99.5 percent accuracy,” said Beck. “Combining these two methods yielded a major reduction in errors.” In error-rate terms, the pathologist’s 4 percent error rate fell to 0.5 percent when the two methods were combined, an eightfold reduction.

The team trained the computer to distinguish between cancerous tumor regions and normal regions based on a deep, multilayer convolutional network.

“In our approach, we started with hundreds of training slides for which a pathologist has labeled regions of cancer and regions of normal cells,” said Wang. “We then extracted millions of these small training examples and used deep learning to build a computational model to classify them.”
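
As a rough illustration of that patch-classification step, a small convolutional classifier might be trained as below. This is a minimal sketch, not the team’s published architecture, and the patch tensors and labels are random stand-ins for the slide data.

```python
# A minimal sketch of the patch-classification idea: a small
# convolutional network trained to label image patches as tumor or
# normal. Illustrative PyTorch, not the team's published model;
# the patch data here are random stand-ins.
import torch
import torch.nn as nn

model = nn.Sequential(                 # tiny CNN for 3x64x64 patches
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),        # logits: normal vs. tumor
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

patches = torch.randn(8, 3, 64, 64)    # stand-in for extracted slide patches
labels = torch.randint(0, 2, (8,))     # stand-in pathologist labels

for _ in range(3):                     # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(patches), labels)
    loss.backward()
    optimizer.step()
```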

The team then identified the specific training examples for which the computer was prone to making mistakes and retrained the computer using greater numbers of these more difficult examples. In this way, the computer’s performance continued to improve.
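
Retraining on difficult cases like this is often called hard-example mining. A hedged sketch of the idea, continuing the toy model, patches, labels, optimizer and loss from the previous example, might look like this:

```python
# A sketch of the hard-example retraining idea described above: find
# patches the model currently gets wrong and weight them more heavily
# in the next training pass. Reuses model/patches/labels/optimizer/
# loss_fn from the previous sketch.
with torch.no_grad():
    predictions = model(patches).argmax(dim=1)
hard = (predictions != labels).nonzero(as_tuple=True)[0]

if len(hard) > 0:
    # Oversample the difficult examples alongside the full set.
    boosted_x = torch.cat([patches, patches[hard].repeat(4, 1, 1, 1)])
    boosted_y = torch.cat([labels, labels[hard].repeat(4)])
    optimizer.zero_grad()
    loss_fn(model(boosted_x), boosted_y).backward()
    optimizer.step()
```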

“There have been many reasons to think that digitizing images and using machine learning could help pathologists be faster and more accurate in making diagnoses for patients,” Beck added. “This has been a big mission in the field of pathology for more than 30 years. But it’s been only recently that improved scanning, storage, processing and algorithms have made it possible to pursue this mission effectively. Our results in the ISBI competition show that what the computer is doing is genuinely intelligent and that the combination of human and computer interpretations will result in more precise and more clinically valuable diagnoses to guide treatment decisions.”

Jeroen van der Laak, who leads a digital pathology research group at Radboud University Medical Center in the Netherlands and was an organizer for the competition, said, “When we started this challenge, we expected some interesting results. The fact that computers had almost comparable performance to humans is way beyond what I had anticipated. It is a clear indication that artificial intelligence is going to shape the way we deal with histopathological images in the years to come.”

Beck and Khosla recently formed a company (PathAI), with the mission of developing and applying AI technology for pathology.

Source: HMS

Study Sheds Light on Self-Driving Cars and Public Ethics

As the prospect of driverless cars filling city roads draws ever closer to reality, researchers are intensifying their efforts to understand the implications of regulators and car manufacturers pre-setting vehicles with specific safety rules.

Everyone wants more safety on the roads, but are we prepared to put our own lives at risk to minimize casualties? According to a new study, published in the journal Science, the public is currently torn between saying “yes” to utilitarian safety rules being pre-programmed, and “no” to buying such vehicles themselves. Image credit: Steve Jurvetson via Wikimedia.org, CC BY 2.0.

One of the major questions ethicists and the public face is whether these cars should be programmed with utilitarian principles, set to save as many lives as possible, or whether they should protect their passengers regardless of the number of potential casualties.

To find out what the average person thinks about the issue, researchers led by Iyad Rahwan, an associate professor at the MIT Media Lab, conducted six surveys between June 2015 and November 2015, using Amazon’s online Mechanical Turk crowdsourcing platform.

Results show that people are generally in favour of programming vehicles to minimize injury and death on the road, but are not likely to buy such vehicles themselves.

“Most people want to live in a world where cars will minimize casualties,” said Rahwan. “But everybody wants their own car to protect them at all costs.”

For instance, as many as 76 percent of respondents believe that it is more moral for a driverless vehicle, should such a circumstance arise, to sacrifice one passenger to save 10 pedestrians.

But when asked whether they would themselves buy a vehicle pre-programmed with government regulations based on utilitarian ethics, the survey-takers said they’d be only one-third as likely to do so, as opposed to buying a car that could be programmed in any fashion.

For the time being, the authors write in their paper, there seems to be no easy way to design algorithms that reconcile moral values and personal self-interest, an impasse that could paradoxically increase casualties by postponing the adoption of a safer technology.

Two caveats worth mentioning about the study are that the aggregate safety of autonomous vehicles on the road is not yet determined, and that public polling on this issue is still in its infancy – people may very well change their minds as more data emerge in the future.

Having said that, concludes Rahwan, “I think it was important to not just have a theoretical discussion of this, but to actually have an empirically informed discussion”.

Source: phys.org