Artificial Intelligence

Artificial intelligence (AI) and Cognitive Computing

Artificial intelligence (AI) is the science of building intelligent machines that can think and respond like humans.

Artificial intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using rules to reach approximate or definite conclusions) and self-correction. Particular applications of AI include expert systems, speech recognition and machine vision.

AI can be categorized as either weak or strong. Weak AI, also known as narrow AI, is an AI system that is designed and trained for a particular task. Virtual personal assistants, such as Apple’s Siri, are a form of weak AI. Strong AI, also known as artificial general intelligence, is an AI system with generalized human cognitive abilities. When presented with an unfamiliar task, a strong AI system is able to find a solution without human intervention.

While AI tools present a range of new functionality for businesses, the use of artificial intelligence raises ethical questions. This is because deep learning algorithms, which underpin many of the most advanced AI tools, are only as smart as the data they are given in training. Because a human selects what data should be used for training an AI program, the potential for human bias is inherent and must be monitored closely.
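As a rough illustration of that point, the short Python sketch below shows one way a team might audit a training set before it ever reaches a deep learning model: counting how labels are distributed across a demographic attribute. The record structure, field names, and sample data are all hypothetical.

```python
# Minimal sketch (hypothetical data) of auditing a training set for imbalance
# before it is used to train a model.
from collections import Counter

training_records = [
    {"label": "approved", "group": "A"},
    {"label": "approved", "group": "A"},
    {"label": "rejected", "group": "B"},
    {"label": "approved", "group": "B"},
    {"label": "rejected", "group": "B"},
]

# Count (group, label) pairs to surface obvious imbalances a human curator
# may have introduced when selecting the data.
pair_counts = Counter((r["group"], r["label"]) for r in training_records)
group_totals = Counter(r["group"] for r in training_records)

for (group, label), count in sorted(pair_counts.items()):
    share = count / group_totals[group]
    print(f"group={group} label={label}: {count} records ({share:.0%} of group)")
```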

Despite potential risks, there are few regulations governing the use of AI tools, and where laws do exist, they typically pertain to AI only indirectly.

Can We Trust AI?

To invent the modern world, we have had to invent the complex web of laws, regulations, industry practices and societal norms that make it possible to rely on our fellow humans.

So what will it take for you to trust artificial intelligence? To allow it to drive your car? To monitor your child? To analyse your brain scan and direct surgical instruments to extract your tumour? To spy on a concert crowd and zero in on the perpetrator of a robbery five years ago?

We need a control spectrum for AI!
After all, what constitutes “good behaviour” in a social media company’s use of AI? Where is it documented?

We could conceivably come to a set of norms by trial and error – or scandal and response. But in a febrile environment, intellectual coherence is unlikely to emerge by lurching from one crisis of confidence to the next.

Manners in human societies differ, but they are all designed to make individuals feel comfortable through mutual respect. Developers of AI need to keep this in mind, integrating it in their company model from the start and consistently reinforcing it as the reality of doing business.

In human societies, there are consequences for falling short of societal standards. We need the same for AI developers – a way for consumers to recognise and reward ethical conduct.

At the same time, citizens would surely welcome the opportunity to make informed decisions, and to tread the middle path between accepting a free-for-all and excluding AI from their lives.

Artificial Intelligence in Healthcare

The University of California, San Francisco and GE Healthcare are studying how artificial intelligence and machine learning can help doctors and caregivers make faster and smarter clinical decisions. Together, they will be developing deep learning algorithms aimed at delivering information to clinicians faster.

Versions of AI such as those found in Apple’s Siri or Microsoft’s advanced image-recognition system have begun to prove the technology’s capability, “but in healthcare, there has not been nearly as much progress,” says Dr. Michael Blum, associate vice chancellor for informatics and a cardiologist at UCSF. “In medical school, physicians learn to use a stethoscope and to read X-rays to help identify what’s happening inside a patient’s body. Now, we will augment those century-old tools with contemporary technologies including artificial intelligence and machine learning.”

The UCSF and GE Healthcare team will first develop and validate AI algorithms using thousands of anonymized and annotated chest X-rays, many acquired using GE Healthcare equipment. Once the solution is deemed safe and effective, it can then be deployed worldwide on the GE Health Cloud and smart GE Healthcare imaging machines, and will have the ability to analyze large volumes of X-rays for critical abnormalities, such as a collapsed lung or an inappropriately placed feeding tube.
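As a rough, hypothetical sketch of the kind of supervised pipeline this describes (assuming a PyTorch-style setup, not GE's or UCSF's actual code), the example below trains a tiny convolutional network to flag a single critical finding in a chest X-ray as present or absent; random tensors stand in for real, de-identified, annotated images.

```python
# Hypothetical sketch: train a small CNN to flag a critical finding in a chest X-ray.
import torch
import torch.nn as nn

class CriticalFindingNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(nn.Flatten(), nn.Linear(16 * 16 * 16, 1))

    def forward(self, x):
        # Returns a raw logit; apply a sigmoid to get a probability of "critical".
        return self.classifier(self.features(x))

# Stand-in data: 32 single-channel 64x64 "X-rays" with binary annotations.
images = torch.randn(32, 1, 64, 64)
labels = torch.randint(0, 2, (32, 1)).float()

model = CriticalFindingNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(3):  # a real project would also evaluate on a held-out validation set
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss={loss.item():.4f}")
```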

The technology in development aims to make clinical care teams more efficient and to help radiologists more intelligently prioritize their work by pushing cases that the AI algorithms identify as critical to the top of their work list. The long-term goal is to reduce the time it takes to treat patients in acute situations and improve patient outcomes.
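The prioritization idea can be illustrated with a small, assumed sketch (not GE's software): exams that a model scores above a criticality threshold jump to the top of the radiologist's queue, ordered by score, while routine studies keep their arrival order. Exam identifiers and the threshold value are invented for the example.

```python
# Hypothetical worklist prioritization based on a model's criticality score.
from dataclasses import dataclass

@dataclass
class Exam:
    exam_id: str
    arrival_order: int
    criticality_score: float  # assumed model output in [0, 1]

worklist = [
    Exam("CXR-1001", 1, 0.05),
    Exam("CXR-1002", 2, 0.91),  # e.g. possible collapsed lung
    Exam("CXR-1003", 3, 0.12),
    Exam("CXR-1004", 4, 0.78),  # e.g. possible misplaced feeding tube
]

CRITICAL_THRESHOLD = 0.7  # assumed cut-off; a real deployment would tune and validate this

# Critical cases first, highest score first; routine cases keep their arrival order.
critical = sorted((e for e in worklist if e.criticality_score >= CRITICAL_THRESHOLD),
                  key=lambda e: e.criticality_score, reverse=True)
routine = sorted((e for e in worklist if e.criticality_score < CRITICAL_THRESHOLD),
                 key=lambda e: e.arrival_order)

for exam in critical + routine:
    print(exam.exam_id, f"score={exam.criticality_score:.2f}")
```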

Dr. Blum says that without the support of such algorithms, the radiologist’s time is not always effectively utilized. A radiologist, for example, might look at dozens of normal or unchanged chest X-rays before reviewing an exam with a time-sensitive imaging finding. The science behind deep learning enables a radiologist to provide the system with valuable feedback by confirming or rejecting the software’s selection, and continuously feeding it with new imaging data that constantly improves the accuracy of the algorithm.
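A schematic sketch of that feedback loop might look like the following, where the workflow, function name, and exam identifiers are assumptions for illustration: the radiologist's confirmation or rejection of each AI flag becomes a new labelled example that can later be folded back into training.

```python
# Hypothetical sketch of capturing radiologist feedback on AI flags for retraining.
feedback_queue = []

def record_feedback(exam_id: str, model_flagged_critical: bool, radiologist_agrees: bool):
    """Store the radiologist's verdict as a new labelled example."""
    true_label = model_flagged_critical if radiologist_agrees else not model_flagged_critical
    feedback_queue.append({"exam_id": exam_id, "critical": true_label})

# Example: the model flagged CXR-1002 as critical and the radiologist confirmed it,
# but rejected the flag on CXR-1004.
record_feedback("CXR-1002", model_flagged_critical=True, radiologist_agrees=True)
record_feedback("CXR-1004", model_flagged_critical=True, radiologist_agrees=False)

# Periodically, the accumulated examples would be merged into the training set and the
# model retrained, which is how the algorithm's accuracy keeps improving over time.
print(feedback_queue)
```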

GE Healthcare’s AI development roadmap aims to develop a library of algorithms for all diagnostic imaging methods, helping to improve diagnostic accuracy and patient outcomes as well as clinical workflows and productivity.

Dr. Blum says the first algorithms will be developed and tested over the coming six months and will focus on supporting clinicians in their daily practice. “They won’t be making a diagnosis or recommending a treatment initially, but we hope to develop those more sophisticated algorithms as the collaboration progresses. It’s easy to imagine that eventually we will develop algorithms that are numerically as good as the doctors [at making a diagnosis], but there will always be the need for experienced physicians in the complex, emotional undertaking of providing healthcare.”

This statement especially resonates in some global healthcare markets, including emerging markets, where there is a shortage of radiologists and radiology specialists. The future algorithms have the potential to address a lack of clinical resources and ensure providers around the world can access new knowledge and insights delivered through deep learning.

GE Healthcare and UCSF’s collaboration brings together two teams with a storied history in the field of diagnostic imaging. The X-ray was discovered in 1895, GE has built X-ray equipment since the technology’s early days, and UCSF opened one of the first dedicated X-ray facilities in 1912 to instruct all medical students in radiology. Today, their partnership is helping shape the future of patient care.

Note: The technology described here is in development and represents ongoing research and development efforts. These technologies are not products and may never become products. They are not for sale and have not been cleared or approved by the U.S. FDA or any other global regulator for commercial availability.

 
