The bipartisan Senate Working Group on Artificial Intelligence released a U.S. Senate Artificial Intelligence Policy Roadmap that encourages the Senate Appropriations Committee to fund cross-government artificial intelligence research and development projects, including research on biotechnology and artificial intelligence applications that could fundamentally change medicine.
The group recognizes a variety of use cases for artificial intelligence, including in healthcare settings such as improving disease diagnosis, developing new drugs and assisting providers in a variety of capacities.
Relevant committees should consider enacting legislation that would support the deployment of artificial intelligence in healthcare, the senators wrote. They should also implement guardrails and safety measures to protect patients while ensuring regulations do not stifle innovation.
“This includes protecting consumers, preventing fraud and abuse, and promoting the use of accurate and representative data,” the senators wrote.
The legislation should also provide transparency requirements for providers and the public to understand the use of artificial intelligence in health care products and clinical settings, including information used to train artificial intelligence models.
The committee should also support the National Institutes of Health (NIH) in developing and improving artificial intelligence technologies, particularly in data governance and providing data for scientific and machine learning research, while ensuring patient privacy, the roadmap states.
U.S. Department of Health and Human Services (HHS) agencies, such as the Food and Drug Administration (FDA) and the Office of the National Coordinator for Health Information Technology, should also be provided with tools to effectively determine the benefits and risks of AI products so that developers can comply with a predictable regulatory structure.
The committee should also consider “policies that promote innovation in artificial intelligence systems to meaningfully improve health outcomes and the efficiency of health care delivery,” the senators wrote. This should include examining the Centers for Medicare and Medicaid Services’ reimbursement mechanisms and guardrails to ensure accountability, appropriate use and widespread adoption of artificial intelligence across all populations.
The group also encourages companies to conduct rigorous testing to evaluate and understand the potential harmful effects of their artificial intelligence products and not to release products that do not meet industry standards.
The larger trend
In December, digital health leaders told MobiHealthNews they have their own ideas about how regulators should develop rules around the use of artificial intelligence in healthcare.
“First, regulators need to agree on the controls required to safely and effectively integrate artificial intelligence into many aspects of healthcare, taking into account risks and good manufacturing practices,” Welldoc told MobiHealthNews.
“Second, regulators must go beyond controls and provide guidance to industry that enables companies to test and implement in real-world settings. This will help support innovation, discovery and necessary development of artificial intelligence.”
Regulators also need to define and set clear boundaries for data and privacy, said Amit Khanna, Salesforce’s senior vice president and general manager of health.
“Regulators need to ensure that regulations do not create walled gardens/silos in healthcare, but rather minimize risks while allowing AI to reduce the costs of testing, delivering care, and research and development,” Khanna said.
Google’s Chief Clinical Officer Dr. Michael Howell told MobiHealthNews that regulators need to consider a hub-and-spoke model.
“We believe that artificial intelligence is too important not to regulate, and too important not to regulate well. We think this may be counterintuitive, but we believe that good regulation will accelerate innovation rather than hinder it,” Howell said.
“However, there are some risks. The risk is that if we end up with regulations that differ in a meaningful way from state to state or from country to country, that could hinder innovation.”