Can deep learning help against online fraud?

For all the stories about AI eventually destroying humanity, we have very few examples of a confrontation between man and machine. One case might be the recent development of deep learning solutions against online fraud.

Fraudsters tend to use automation tools, not really AI (yet). They start with an idea and make money with volume and speed. Experts are moving from relatively slow rule-based models to streaming deep learning. Scanning billions of transactions for anomalies seems to be right up its alley.

So, who is winning?

The defenders' problem is the pressure to be nimble. Every platform promotes frictionless experiences. Guest checkout is a must-have, but it reduces the amount of data available to verify a transaction. Digital delivery and two-day shipping are expected, but they narrow the window to detect and react in time at a reasonable cost. A platform like OpenTable makes small margins on its gift card business. It can't send too many transactions to manual review without driving up costs and losing customers.

To offer protection, experts have developed a full range of solutions based on analytics, rule-based systems, and classification. They help merchants build a model, validate it, and deploy it, which can take anywhere from a few weeks to a few months.
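To make that concrete, here is a minimal sketch of what a rule-based screen might look like: a crude risk score that routes each order to approve, manual review, or decline. The field names and thresholds are invented for illustration, not taken from any vendor's product.

```python
# Hypothetical rule-based screen feeding a "for review" bucket.
# Fields and thresholds are illustrative assumptions, not a real API.

from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    billing_country: str
    shipping_country: str
    account_age_days: int
    is_guest_checkout: bool

def score_with_rules(tx: Transaction) -> float:
    """Return a crude risk score in [0, 1] from hand-written rules."""
    score = 0.0
    if tx.amount > 500:
        score += 0.3
    if tx.billing_country != tx.shipping_country:
        score += 0.3
    if tx.account_age_days < 2:
        score += 0.2
    if tx.is_guest_checkout:
        score += 0.2
    return min(score, 1.0)

def route(tx: Transaction, review_at: float = 0.5, decline_at: float = 0.8) -> str:
    """Route a transaction to approve / review / decline based on the rule score."""
    score = score_with_rules(tx)
    if score >= decline_at:
        return "decline"
    if score >= review_at:
        return "review"
    return "approve"

print(route(Transaction(650.0, "US", "RO", 1, True)))    # high score -> "decline"
print(route(Transaction(40.0, "US", "US", 400, False)))  # low score -> "approve"
```

Every one of those rules has to be thought up, tuned, and maintained by a human, which is exactly why the cycle is slow.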

But a system is never done. Fraudsters try new things all the time. The team needs to continuously test performance and adapt. This adversarial aspect is a serious problem for statistical models, which improve with more stable data. That stability never comes; the other side changes the rules too quickly.

As the fraud expert Noam Naveh says:

On the other side, the fraudsters have many advantages.

It's a big business, worth $48 billion a year worldwide, with specialists in every segment: fake account creation, takeover of legitimate accounts, password hacking, abusive content, and of course fraudulent transactions.

Security is a hard “negative” goal: the defender has to cover everything, while the attacker needs to find only one flaw.

With $48 billion at stake, fraudsters can afford to spend on infrastructure to bypass controls and camouflage their activity. One-star reviews hitting the target get hidden among many reviews of random products. Bots post on Facebook and watch YouTube videos.

These bots can never exactly reproduce the behavior of a real human. With enough data, their oddities are detected, but that means they have to be active for a while. On top of that, RSA reported that 70% of businesses need many days to act on a detected fraud.

All the delays add up and open enough of a window for fraudulent accounts to operate. Then automated systems create another batch of accounts: thousands of profiles a day, while an employee can review at most 300 and maybe remove 150.

If traditional systems are too rigid and too slow, can we expect help on the deep learning side?

The market is booming, with many cloud services offering deep learning solutions, often combined with other techniques.

Examples include Riskified (founded in 2013), Cash Shield (which recently raised $5.5 million), Sift Science (created in 2011), and Simility. Most of them offer API setup, real-time protection, and workflow tools, as well as chargeback guarantees.

Deep learning's promise is speed: a streaming mode instead of the analytics cycle. Ingesting data from all possible sources, the model finds anomalies by itself and can pick up patterns not previously identified.
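As a rough illustration of that idea, the sketch below trains a small autoencoder on synthetic feature vectors of normal transactions, then flags incoming events whose reconstruction error is unusually high. The features, model size, and threshold are assumptions made for the example, not any provider's actual architecture.

```python
# Minimal sketch of streaming anomaly detection with an autoencoder:
# train on mostly-legitimate traffic, flag events that reconstruct badly.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "historical" data: 4 numeric features per transaction (e.g. amount,
# account age, items in cart, hour of day), already normalised to ~[0, 0.5].
normal = torch.rand(5000, 4) * 0.5

model = nn.Sequential(nn.Linear(4, 2), nn.ReLU(), nn.Linear(2, 4))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for _ in range(200):                      # quick offline training pass
    opt.zero_grad()
    loss = loss_fn(model(normal), normal)
    loss.backward()
    opt.step()

# Pick a threshold from the training data (here, the 99th percentile of errors).
with torch.no_grad():
    errors = ((model(normal) - normal) ** 2).mean(dim=1)
    threshold = torch.quantile(errors, 0.99).item()

def flag(event: torch.Tensor) -> bool:
    """Score a single incoming event as it streams in."""
    with torch.no_grad():
        err = ((model(event) - event) ** 2).mean().item()
    return err > threshold

print(flag(torch.rand(4) * 0.5))                  # typical event: usually False
print(flag(torch.tensor([3.0, 2.5, 4.0, 3.0])))   # far out of range: flagged
```

The point is that nobody wrote a rule saying "amounts above 3.0 are suspicious"; the model only learned what normal looks like and reacts to anything that strays from it.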

Fraudsters might be smart, but they have to sleep sometime. If a deep learning system can pick up new anomalies 24/7 as data streams in, fraudsters now have a real challenge.

So is the machine winning? Not yet.

First, deep learning is data hungry: it needs to see a lot of fraud. A model is a big machine, not so nimble. A service will use feeds from all its customers to get enough cases, but variations and cleanup complicate things. Not all chargebacks are fraud; some are human error. Fraud itself comes in so many flavors that data can be sparse for a given case. Patterns in one business may not exist in another.

A sign of potential “hype” here is that many models actually combine several techniques: traditional classification methods (logistic regression, tree-based models, support vector machines, random forests) or clustering to find the “edges”. No matter how well they are programmed, bots show a specific profile, and clustering picks up their similarity; a team at CMU presented good results against this kind of camouflage.
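The sketch below shows that mix in miniature, on synthetic data: a random forest trained on labelled transactions for the supervised side, and a density-based clustering pass (DBSCAN) that groups near-identical, scripted accounts on the unsupervised side. Features, labels, and parameters are made up for the example.

```python
# Rough illustration of the "mix of techniques" point:
# a supervised classifier on labelled fraud plus clustering to find bot-like groups.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)

# Supervised side: feature vectors for past transactions with fraud labels.
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + X[:, 1] > 2).astype(int)          # stand-in fraud label
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("fraud probability:", clf.predict_proba(X[:1])[0, 1])

# Unsupervised side: accounts scripted from one template end up almost identical
# in feature space, so a density-based clustering pass picks them out.
humans = rng.normal(size=(300, 3))                        # diverse behaviour
bots = rng.normal(loc=5.0, scale=0.05, size=(40, 3))      # tight, scripted cluster
accounts = np.vstack([humans, bots])
labels = DBSCAN(eps=0.3, min_samples=10).fit_predict(accounts)
print("accounts in dense clusters:", int((labels >= 0).sum()))
```

In this toy setup the dense cluster is essentially the bot batch: real humans are too varied to pack that tightly.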

The second limit: an entire solution is more than setting up APIs. For the merchant, the goal is to avoid friction with customers and the high costs of the “for review” bucket. That means process definition, performance metrics, and workflow setup. Deep learning may be a powerful engine, but someone still needs to build the car, and, for now, to drive it.
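A small, made-up example of that workflow side: whatever model produces the scores, someone still has to pick the threshold that keeps the manual-review queue, the false declines, and the missed fraud all at an acceptable level. The numbers below are synthetic, chosen only to show the trade-off.

```python
# Illustrative threshold tuning: the workflow decision the merchant still owns.
# Fraud rate and score distributions are invented for the example.

import numpy as np
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(1)
y_true = rng.random(10_000) < 0.02                       # ~2% of orders are fraud
scores = np.where(y_true, rng.beta(5, 2, 10_000),        # fraud tends to score high
                  rng.beta(2, 5, 10_000))                # legit tends to score low

for review_threshold in (0.5, 0.7, 0.9):
    flagged = scores >= review_threshold
    print(
        f"threshold={review_threshold:.1f}"
        f"  review rate={flagged.mean():.1%}"
        f"  precision={precision_score(y_true, flagged):.2f}"
        f"  recall={recall_score(y_true, flagged):.2f}"
    )
```

A low threshold catches most fraud but floods the review team; a high one keeps the queue short but lets more fraud through. No model picks that trade-off by itself.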

The reality of the business is still the experts racing the fraudsters. The fraudsters come up with new ideas every day and use bots to deploy them at scale. The experts build models and use deep learning to add a streaming mode that picks up anomalies faster. But it's still human ingenuity versus human ingenuity.
