The use of artificial intelligence (AI) in health care is gaining significant attention and interest. AI is revolutionizing how providers diagnose, treat, and care for patients, but how did we get here? The origins and development of AI in health care are essential to understanding its applications today. Knowledge of key advancements in AI, and of what the future may hold for its use in wound care, is vital for any level of integration into one’s practice.
The introduction of the Turing test by Alan Turing in the 1950s is commonly credited as AI’s inception.1 However, several other facets of AI’s history, such as the development of neural networks (NNs), are essential to understanding today’s predictive AI technologies and those used to assess medical imaging.2
Neural Networks
AI’s ability to recognize complex features within medical images is rooted in the development of neural networks and the concept of spatial invariance. The idea of the artificial neuron was first introduced in 1943 through mathematical models, and in 1958 the first artificial neural network (ANN), the perceptron, was introduced.2 Almost in tandem with this development, neurologists in 1959 described how cells within the human visual cortex achieve pattern recognition, setting the stage for the 1962 discovery of "spatial invariance."3 Complex cells achieve spatial invariance by responding to a visual stimulus regardless of its vertical or horizontal orientation, an ability made possible by the combined effort of simple cells, each of which responds only to stimuli of a specific orientation (eg, the top of an image).3
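For readers curious about what the 1958 perceptron actually computes, the following is a minimal sketch in Python. All function names and the toy data are illustrative, not drawn from any cited system: a single artificial neuron weighs its inputs, fires if the weighted sum crosses a threshold, and nudges its weights after each misclassified example.

```python
# A minimal sketch of a perceptron: one artificial neuron that learns a
# linear decision boundary by adjusting its weights whenever it
# misclassifies a training example. Names and data here are illustrative.

def perceptron_train(samples, labels, epochs=20, lr=0.1):
    """Learn weights w and bias b so that sign(w . x + b) matches each label."""
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            error = y - pred  # -1, 0, or +1
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

def perceptron_predict(w, b, x):
    """Fire (1) if the weighted input sum plus bias crosses zero."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Learn the logical OR function, a linearly separable task a single
# neuron can solve (unlike XOR, whose insolubility by one neuron
# contributed to the first AI winter).
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 1, 1, 1]
w, b = perceptron_train(X, y)
```

A single neuron like this can only separate classes with a straight line, which is precisely the limitation that later, multilayer networks were built to overcome.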
After the first AI winter, the 1980s and 1990s brought further developments related to spatial invariance and ANNs. Nevertheless, researchers soon found themselves in a second AI winter: the accessible data and computing power necessary to create more advanced systems, specifically more complex ANNs, did not yet exist.2
Deep Learning and Convolution
Interest in AI would not be fully revitalized until 1997, when IBM's Deep Blue defeated grandmaster Garry Kasparov in a chess match. That same year, speech recognition software was incorporated into Windows software, illustrating that processing power was finally keeping pace with AI development.4
Complex ANNs rose to prominence in the 2000s as graphics processing unit (GPU) performance caught up with the availability of ever-larger data sets.2 In 2012, AlexNet, a landmark deep learning model built on a convolutional neural network, demonstrated superiority over traditional machine learning technologies, as well as over humans, on designated image recognition tasks.2,4 On the surface, such deep learning neural networks are a positive tool for health care; however, the difficulty of assessing how this software arrives at its conclusions limits its use. Many experts are calling for explainable mechanisms and explainable AI (XAI) to be integrated into systems related to clinical decision-making.2,6
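The two operations that give convolutional networks their name and their spatial invariance can be sketched in plain Python. This is a toy illustration under assumed values, not code from any real model: a small filter slides across an image (convolution), and pooling then summarizes the responses so a feature is detected no matter where in the image it appears.

```python
# Illustrative sketch of convolution and pooling, the core operations
# of a convolutional neural network. The filter and images are toy
# values chosen to show translation (spatial) invariance.

def convolve2d(image, kernel):
    """Slide kernel over image (valid mode), summing elementwise products."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

def global_max_pool(feature_map):
    """Collapse a feature map to its single strongest response."""
    return max(max(row) for row in feature_map)

# A vertical-edge filter produces the same pooled response to an edge
# regardless of where the edge sits in the image.
edge_filter = [[1, -1], [1, -1]]
img_left = [[1, 0, 0, 0] for _ in range(4)]   # bright column on the left
img_right = [[0, 0, 1, 0] for _ in range(4)]  # same feature, shifted right
left_resp = global_max_pool(convolve2d(img_left, edge_filter))
right_resp = global_max_pool(convolve2d(img_right, edge_filter))
assert left_resp == right_resp  # identical response despite the shift
```

Deep networks such as AlexNet stack many learned filters and pooling layers of this kind, which is also why their internal reasoning is hard to inspect: the meaningful "features" live in thousands of such learned filters rather than in explicit rules.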
The 2017 introduction of transformer-based architectures to deep learning models earned them landmark status for audio processing and computer vision tasks.5 They also efficiently assess medical imaging by way of image segmentation, and with operations that are more explainable than those of traditional deep learning models, these architectures have shown greater effectiveness in solving complex tasks. Of note, the General Data Protection Regulation went into effect in the European Union (EU) in 2018, mandating that patients have the right to ask how a clinical decision related to their treatment was made.2,4 Most readers are familiar with transformer models through popular generative tools such as ChatGPT (Chat Generative Pre-trained Transformer).
Scientists began experimenting with AI to augment health care as early as the 1960s.2 The primary goal was to develop AI systems that could aid medical judgment and decision-making. In 1975, researchers at Stanford University developed MYCIN, a rule-based consultation system that suggested which pathogens could be responsible for an infection and then recommended specific antibiotic treatments based on patient information (eg, body weight). Despite its success, MYCIN and similar models were never used in a clinical setting, largely due to concerns about computer-based recommendations and liability.2
In 1987, the first automated diagnosis of carcinoma in skin lesions was tested and showed promising results.7 That same year, the University of Massachusetts developed DXplain, a successful decision support system.8 This system provided physicians with proposed diagnoses, including explanations and access to a knowledge base for differential diagnosis. However, routine capture of large clinical datasets was still limited due to paper charting.2,7,8
Health care facilities began embracing electronic health record (EHR) systems in the early 2000s, initiating the routine capture of large clinical datasets.2 These datasets were integral to improving the accuracy, depth, and breadth of AI’s application in health care decision-making. Since then, the use of AI to analyze EHRs has increased significantly.2,9
Recently, researchers have been rapidly exploring AI’s ability to diagnose illnesses,10 analyze medical images,11 and predict treatment outcomes.12 In 2017, researchers at Stanford University developed a deep convolutional neural network (CNN) capable of classifying skin lesions on par with experts.13
Perhaps the most notable application of AI in health care is its current use in radiology.14 Medical imaging in radiology has evolved with AI-driven techniques that use substantial computing power to detect nuanced differences in body scans. Functional imaging is emerging as a crucial aspect of patient care, particularly in cancer,1 aiding in treatment monitoring and precision. AI-based algorithms aid in diagnosis, predict clinical outcomes, and reduce interpretation time.14
A recent survey reported that 79% of health care professionals anticipate that AI and robotics will enhance the field.15 As AI integration becomes more prevalent, personalized treatment plans are set to become more common. It is estimated that physicians will be able to spend about 17% more time on direct patient care with AI than without it.16 This time will be saved by drawing data from various sources, including patient-generated data, to create databases that physicians can access and consult for more individualized patient care.2,16 Predictive analytical tools that use deep learning may soon become standard, assisting health care professionals in analyzing large amounts of data on chronic medical conditions and in pattern recognition.2
AI technology has seen tremendous progress in health care, from its beginnings in mathematical models to the predictive and assessment models of today. The health care revolution is far from over as we head toward a future where personalized patient care is the norm, deep learning models are commonplace, and AI-enhanced surgical procedures are the order of the day. Wound care professionals must embrace these technological developments and understand their impact to ensure that their patients receive the best level of care possible.
References
The views and opinions expressed in this content are solely those of the contributor and do not represent the views of WoundSource, HMP Global, its affiliates, or subsidiary companies.