After a long wait, the Department of Defense has finally released its Responsible Artificial Intelligence (RAI) Strategy and Implementation Pathway, highlighting the importance of having a data-focused organization that emphasizes flexibility and trust among those operating artificial intelligence.
The document comes as breakthroughs in artificial intelligence continue to reshape how the United States approaches its national security. Several of the United States’ chief rivals, including Russia and China, have been investing heavily in AI and its military applications, a trend that concerns the US and global security more broadly, since neither country is seen as bound by the rule book of ethical use.
To keep pace with Russia, China, and other countries investing in artificial intelligence, the Pentagon aims to preserve its military advantage by fully embracing AI technology. At the same time, it must harness that technology lawfully and ethically, hence the new Responsible Artificial Intelligence Strategy and Implementation Pathway.
“The Responsible AI (RAI) Strategy and Implementation (S&I) Pathway illuminates our path forward by defining and communicating our framework for harnessing AI,” Deputy Secretary of Defense Kathleen H. Hicks said. “It helps to eliminate uncertainty and hesitancy – and enables us to move faster. Integrating ethics from the start also empowers the DoD to maintain the trust of our allies and coalition partners as we work alongside them to promote democratic norms and international standards,” she added.
Full Documentation Covering Ethics and Implementation Pathway
The document operationalizes the Department of Defense’s artificial intelligence ethical principles, which were published in 2020. The Defense Innovation Board reportedly took 15 months to develop and agree upon five broad principles for the ethical use of artificial intelligence.
So what are these ethical principles? Here’s a summary, based on a press release published on February 24, 2020:
- Responsible. DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.
- Equitable. The Department will take deliberate steps to minimize unintended bias in AI capabilities.
- Traceable. The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.
- Reliable. The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life-cycles.
- Governable. The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.
These principles were drafted in consultation with leading artificial intelligence experts from several sectors, including commercial industry, academia, and government, to encompass the many facets of AI and its future applications.
According to the new Responsible Artificial Intelligence (RAI) Strategy and Implementation Pathway, the Department of Defense will take a flexible approach to developing and using artificial intelligence, as a narrow set of requirements would not serve its goals. With this in mind, the document outlines several “foundational tenets” to guide the DoD on AI:
- RAI Governance. The DoD will ensure disciplined governance structures and processes across its components for clear-cut accountability, with the overall goal of modernizing the structures and processes that govern the use of AI.
- Warfighter Trust. This tenet covers providing education and training and establishing a framework that “integrates real-time monitoring” of AI systems to ensure they operate as intended. It essentially means that all personnel working with AI should share a baseline level of familiarity and proficiency with the systems in use, so that the AI in question is not used unethically.
- AI Product and Acquisition Lifecycle. Utilizing an engineering and risk management approach, the DoD will develop tools, policies, processes, and systems to guide the RAI implementation throughout the AI’s use. This mitigates the risks associated with AI development as the AI product is properly maintained and cared for.
- Requirements Validation. The DoD must incorporate the RAI into all AI requirements as established and approved by the Joint Requirements Oversight Council. This requirement validation process ensures that the usage of the AI is aligned with the department’s operational needs. This also increases the reliability and safety of the systems prior to being used out in the field.
- Responsible AI Ecosystem. The RAI ecosystem must be robust and must strengthen collaboration across government, industry, and academia, as well as cooperation with allies and partners, to establish “global norms” for the use of AI.
- AI Workforce. The workforce and those using AI must be RAI-ready. As such, they must be educated and trained with regard to AI and RAI so that the workforce may use the technology in an efficient manner.
The newly established Chief Digital and Artificial Intelligence Office will lead the implementation of the new RAI strategy in coordination with the rest of the DoD. Together, they will create a way to track and collect AI-related use cases and mission domains for future reference and use.
“It is imperative that the DoD adopts responsible behavior, processes and objectives and implements them in a manner that reflects the Department’s commitment to its AI Ethical Principles,” the document emphasized. “Failure to adopt AI responsibly puts our warfighters, the public, and our partnerships at risk.”
“The department’s desired end state for RAI is trust,” the document emphasized. “Without trust, warfighters and leaders will not employ AI effectively and the American people will not support the continued use and adoption of such technology.”