New York City – Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), working with the machine-learning startup PatternEx, demonstrated a new model named AI2 that predicts cyber attacks at a significantly higher rate than currently used systems. The model, which combines machine learning with input from human experts, was presented at the IEEE International Conference on Big Data Security in New York City last week.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) demonstrated a new model named AI2 that predicts cyber attacks at a significantly higher rate than currently used systems. Photo credit: Zenedge

The security systems normally used today tend to fail because they rely solely on rules created by human experts, and so they miss any attack that simply doesn’t match those rules. Such systems are known as “analyst-driven solutions.”

But systems that rely only on machines usually fail too: they are based on “anomaly detection,” which triggers false positives so often that analysts come to distrust the alerts and are forced to investigate each one by hand.
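To see why pure anomaly detection overwhelms analysts, consider a minimal toy sketch (not AI2’s actual method; the event values and thresholds here are invented for illustration): flagging anything more than two standard deviations from the mean catches the unusual events, but in a large stream of benign traffic it also produces hundreds of false alarms.

```python
import random

random.seed(1)

# Synthetic stream: 10,000 benign events and 5 mildly unusual attacks.
# The numbers are made up; imagine a per-user metric like requests per minute.
benign = [random.gauss(100, 10) for _ in range(10_000)]
attacks = [135, 140, 138, 142, 137]
events = benign + attacks

mean = sum(events) / len(events)
std = (sum((x - mean) ** 2 for x in events) / len(events)) ** 0.5

# Pure anomaly detection: alert on anything more than 2 std devs from the mean.
tp = sum(1 for x in attacks if abs(x - mean) > 2 * std)  # real attacks caught
fp = sum(1 for x in benign if abs(x - mean) > 2 * std)   # false alarms

print(f"{tp} real attacks caught, buried among {fp} false alarms")
```

All five attacks are flagged, but so are several hundred benign events, which is exactly the distrust-inducing flood of false positives the article describes.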

In contrast, researchers at MIT came up with a system that combines the best of humans and machines.

How does the system work?

The name AI2 refers to the merging of artificial intelligence with what the research team calls “analyst intuition.” The system identified 85 percent of attacks, roughly three times more than previous benchmarks, and also reduced the number of false positives by a factor of five. Researchers tested the new system on 3.6 billion pieces of data, known as “log lines,” generated by millions of users over three months.

First, using unsupervised machine learning, AI2 analyzes the data and detects attacks by clustering the information into meaningful patterns. The system then presents the most suspicious activity to human experts, who confirm which events are real attacks; their feedback is fed back into the system for the next data set.
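The loop above can be sketched in a few lines. This is a deliberately simplified stand-in for AI2 (which uses several unsupervised methods over log-line features, not a single z-score, and a real analyst rather than the oracle function assumed here): score events without labels, surface only the top-scoring handful for review, then learn a supervised rule from the analyst’s answers.

```python
import random

random.seed(0)

# Synthetic "log lines" reduced to one feature per event (values invented).
normal = [random.gauss(50, 5) for _ in range(1000)]
attacks = [random.gauss(90, 5) for _ in range(10)]
events = normal + attacks

# --- Unsupervised pass: score every event by how far it sits from the mean.
mean = sum(events) / len(events)
std = (sum((x - mean) ** 2 for x in events) / len(events)) ** 0.5
scores = [(abs(x - mean) / std, x) for x in events]

# Surface only the top-k most anomalous events for human review.
k = 20
flagged = sorted(scores, reverse=True)[:k]

# --- Analyst feedback: label the flagged events. Here a stand-in oracle
# plays the analyst, confirming anything above 75 as a real attack.
labels = [(x, x > 75) for _, x in flagged]

# --- Supervised refinement: learn the simplest possible rule (a threshold)
# from the analyst's labels, to be applied to the next batch of data.
attack_values = [x for x, is_attack in labels if is_attack]
threshold = min(attack_values)

detected = sum(1 for a in attacks if a >= threshold)
print(f"detected {detected} of {len(attacks)} injected attacks")
```

The key design point survives the simplification: the analyst only ever sees k events per round instead of every anomaly, and each round of labels tightens the model used on the next data set.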

“You can think about the system as a virtual analyst,” explained CSAIL research scientist Kalyan Veeramachaneni, who created the system alongside Ignacio Arnaldo, a data scientist at PatternEx. “It continuously generates new models that it can refine in as little as a few hours, meaning it can improve its detection rates significantly and rapidly”, Veeramachaneni added.

Nitesh Chawla, the Frank M. Freimann Professor of Computer Science at the University of Notre Dame, described the paper as one that combines the strengths of machine learning and analyst intuition, resulting in highly efficient performance that drives down both false negatives and false positives.

Chawla affirmed that this research can potentially become a “line of defense” against the major challenges that current consumer-facing systems have to face, including fraud and account takeover.

Source: MIT News