Artificial intelligence (AI) is not a new technological development. The idea of intelligent machines has captured the imagination for centuries. The term “artificial intelligence” was coined by John McCarthy in a 1955 proposal for a workshop held at Dartmouth College the following summer (1), and that workshop is widely considered the birthplace of AI research. Modern AI owes much of its existence to an earlier paper by Alan Turing (2), in which he proposed the famous Turing Test to determine whether a machine could exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.
The explosive growth in all things AI over the past few years has evoked strong reactions from the general public. At one end of the spectrum, some people fear AI and refuse to use it, even though they may have unwittingly been using a form of AI in their work for years. At the other extreme, advocates embrace all aspects of AI, regardless of potential ethical implications. Finding a middle ground is not always easy, but it offers the best path forward: taking advantage of the efficiency gains AI can bring while remaining cautious about widespread adoption. It’s worth noting that AI is a broad, general term that covers a wide range of technologies (see sidebar).
For life science researchers, AI has the potential to address many common challenges; a previous post on this blog discussed how AI can help develop a research proposal. AI can help with everyday tasks like literature searches, lab notebook management, and data analysis. It is already making strides on a larger scale in applications for lab automation, drug discovery, and personalized medicine (reviewed in 3–5). Significant medical breakthroughs have resulted from AI-powered research, such as the discovery of novel antibiotic classes (6) and the assessment of atherosclerotic plaques (7). A few examples of AI-driven tools and platforms covering various aspects of life science research are listed here.