Machine learning (ML) continues its rapid ascent across the security landscape, cybersecurity included. Recent commentary in C4ISRNET advocates for greater ML adoption in the military. As Richard Boyd of Tanjo writes regarding recent events:

“When President Biden was recently called onto the carpet to explain the rapid fall of Afghanistan in nine days, he should have had an AI that could at least explain the data, the models and weights that fed the analysis, conclusions and decisions based on the belief that the 300,000 strong Afghan army would be able to hold off the 60,000 Taliban fighters long enough for an orderly withdrawal. Journalists would then be free to question the data sources, the models or the weightings, but not the president, who would be relying on these systems for his judgment. But more importantly, such a system would have certainly predicted this rapid fall in its Monte Carlo distribution of potential outcomes, and would have generated counter measures and cautions. [emphasis added]”

Our military’s capacity for collecting real-time and historical data is unparalleled. But as Boyd points out, we’re also reaching a tipping point where machine learning is more effective, efficient, and useful than strictly human review. Our military must adopt more ML to enhance overall security.
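Boyd’s reference to a “Monte Carlo distribution of potential outcomes” can be illustrated with a toy simulation. Everything here is a made-up placeholder — the daily collapse probability, the time horizon, and the scenario itself are invented for the sketch, not real estimates. The point is only the mechanic: sample many scenarios, then summarize the spread of outcomes rather than a single prediction.

```python
import random

def simulate_holdout(days=90, daily_collapse_prob=0.03, trials=10_000, seed=1):
    """Toy Monte Carlo: how long does a defensive line hold?

    Each day carries an independent (invented) probability of collapse;
    we run many trials and collect the distribution of hold times.
    """
    random.seed(seed)
    hold_times = []
    for _ in range(trials):
        for day in range(1, days + 1):
            if random.random() < daily_collapse_prob:
                hold_times.append(day)
                break
        else:
            hold_times.append(days)  # held for the full planning horizon
    return hold_times

times = sorted(simulate_holdout())
p10, p50, p90 = (times[int(len(times) * q)] for q in (0.10, 0.50, 0.90))
print(f"10th/50th/90th percentile hold time: {p10}/{p50}/{p90} days")
```

A planner reading the 10th percentile — the pessimistic tail, not the median — is exactly the kind of “counter measures and cautions” output Boyd describes.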

Business Case for Machine Learning Use in Cybersecurity

Cybersecurity efforts already embrace ML, and these systems are continuously enhanced and refined. “Artificial intelligence and machine learning can automate and scale security testing methods to the point where they can take on much of the work of cyber defense,” wrote Kevin Tonkin of Rebellion Defense earlier this year in Defense One. ML tools allow data to be accessed, shared, and understood rapidly.

Businesses without this layer of defense are at a significant security disadvantage, and possibly a competitive one, too. ML accelerates everything from threat detection to remediation.
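As one minimal illustration of automated detection outpacing manual review, consider a simple statistical detector that flags anomalous event counts. The hourly login-failure numbers and the z-score threshold below are invented for the sketch; real detection systems use far richer features and learned models, but the speed argument is the same — scoring thousands of events takes milliseconds.

```python
from statistics import mean, stdev

def flag_anomalies(baseline, observed, z_threshold=3.0):
    """Flag observed counts far outside the baseline distribution."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [x for x in observed if abs(x - mu) / sigma > z_threshold]

# Hypothetical hourly login-failure counts for one account
baseline = [4, 6, 5, 7, 5, 6, 4, 5, 6, 5]
observed = [5, 6, 48, 4]   # 48 looks like a brute-force burst

print(flag_anomalies(baseline, observed))  # → [48]
```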

Of course, machines can get it wrong, too. Humans are required to keep the algorithms’ inputs and context in check — and that human input can and should itself be examined.

“[C]ontextual machine learning algorithms can identify when behavior becomes inadvertently or intentionally risky,” guarding against overconfidence in existing defenses, the editorial staff of SC Magazine wrote in January. “We need to create a safety net of security solutions that spot the errors people miss. Contextual machine learning offers us a way to solve this kind of problem.”
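The key word in that quote is “contextual”: the same action can be routine for one user and risky for another. A toy sketch of the idea — the user, hours, and resource names are invented, and real contextual ML uses learned models rather than simple frequency counts — scores an action only relative to that user’s own history:

```python
from collections import defaultdict

class ContextualRiskScorer:
    """Toy 'contextual' scorer: an action is risky only relative to
    what this particular user normally does (hour of day, resource)."""

    def __init__(self):
        self.profile = defaultdict(lambda: defaultdict(int))

    def observe(self, user, hour, resource):
        self.profile[user][(hour, resource)] += 1

    def risk(self, user, hour, resource):
        seen = self.profile[user]
        total = sum(seen.values())
        if total == 0:
            return 1.0  # no baseline yet: maximally uncertain
        # Combinations this user rarely exhibits score high
        return 1.0 - seen[(hour, resource)] / total

scorer = ContextualRiskScorer()
for _ in range(99):
    scorer.observe("alice", 10, "payroll.db")   # routine daytime access
scorer.observe("alice", 3, "payroll.db")        # one 3 a.m. access

print(round(scorer.risk("alice", 10, "payroll.db"), 2))  # → 0.01
print(round(scorer.risk("alice", 3, "payroll.db"), 2))   # → 0.99
```

Identical resource, identical user — only the hour differs, and the score flips. That is the “safety net” behavior the SC Magazine editors describe.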

Advancing Machine Learning within Military Cybersecurity

Already, the Pentagon’s Joint Artificial Intelligence Center is seeking commercial solutions to prepare military data for artificial intelligence.

As C4ISRNET reported in April:

“The services addressed by the DRAID [Data Readiness for Artificial Intelligence Development] span the entire AI data preparation lifecycle, from data ingestion, through labeling, right up to before model training begins,” an April 1 blog post from the JAIC states. “Through access to these services, the DoD will be positioned to effectively prepare AI data to support the full range of AI activities across the DoD and do so in a responsible manner.”
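The lifecycle the JAIC describes — ingestion, then labeling, stopping just before model training — can be sketched in miniature. The record fields, sensor names, and labeling rule below are invented for illustration; DRAID itself is a services contract, not this code.

```python
# Minimal sketch of the data-readiness stages named in the JAIC quote:
# ingestion -> labeling -> handoff just before model training begins.

def ingest(raw_lines):
    """Ingestion: parse raw comma-separated sensor lines into records."""
    records = []
    for line in raw_lines:
        source, value = line.strip().split(",")
        records.append({"source": source, "value": float(value)})
    return records

def label(records, threshold=50.0):
    """Labeling: attach a label so the data is ready for supervised training."""
    for r in records:
        r["label"] = "alert" if r["value"] > threshold else "normal"
    return records

raw = ["radar-07,12.5", "radar-07,81.0", "sigint-02,47.3"]
training_ready = label(ingest(raw))
print(training_ready[1])  # → {'source': 'radar-07', 'value': 81.0, 'label': 'alert'}
```

In practice each stage is a service in its own right — schema validation, human-in-the-loop labeling, provenance tracking — but the handoff point is the same: clean, labeled records, ready for whatever model comes next.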

The goal of interoperability is paramount: all levels of the military should be able to share data quickly and securely. The government is wise to outsource this to the private sector rather than build custom solutions. Yet this raises additional considerations around oversight and monopolization.

We are in the midst of rapid machine learning adoption and use. The military may be among the first, and largest, customers of commercial software for this purpose, but mainstream businesses are following. Is yours?