Growing Number of Researchers Turn to AI for Scientific Investigations

Researchers are progressively adopting AI technologies to enhance their scientific inquiries and findings.

Global publisher Wiley released a survey earlier this month indicating that researchers are expected to embrace artificial intelligence tools for tasks such as paper preparation, grant application writing, and peer reviews within the next two years.

The survey collected responses from 4,946 researchers across over 70 countries, focusing on their current use of generative AI tools like ChatGPT and DeepSeek, as well as their views on AI’s potential applications.

The majority of respondents believe AI will become a fundamental part of scientific research and publishing. More than half of those surveyed rated AI as superior to humans at more than 20 of the tasks identified, including reviewing extensive literature, summarizing research findings, identifying writing errors, checking for plagiarism, and organizing citations. Furthermore, more than half expect AI to become mainstream in 34 of 43 research-related tasks within two years.

Among the researchers surveyed, 27 percent are early-career professionals. Of the respondents, 45 percent reported using AI in their research, most commonly for translation, proofreading, and manuscript editing. Among these AI users, 81 percent have used OpenAI's ChatGPT for personal or professional purposes, although only one-third were familiar with other generative AI tools, such as Google's Gemini and Microsoft's Copilot.

The survey also highlighted notable differences across disciplines and regions, revealing that computer scientists are the most inclined to incorporate AI into their work.

Additionally, a report published in Nature on January 23 echoes the survey findings, noting that the Chinese-developed large language model, DeepSeek-R1, is garnering interest from scientists as a cost-effective and accessible alternative to "reasoning" models like OpenAI's o1.

The report noted that initial assessments of DeepSeek-R1 found its performance on certain tasks in chemistry, mathematics, and coding to be comparable to that of OpenAI's o1.

The report further suggested that models like DeepSeek-R1 exhibit capabilities that surpass those of early language models, particularly in solving scientific problems, and hold promise for various research applications.

Max Fischer for TROIB News