“You can’t hide from the future.”

You might remember the line from Spielberg’s 2002 sci-fi epic, Minority Report.

It was a chilling, dystopian fantasy – set in a future where “pre-crime” units arrest potential criminals before they act.

It seemed far-fetched that something like that could ever really happen in America. But today, the line between fiction and reality is blurring.

Take this September 10 headline from Insider:

NYC police have spent millions on a tech company that claims it can use AI to monitor social media and predict future criminals.

Imagine being flagged as a potential threat, not for what you’ve done, but for what a machine thinks you might do.

As I’ll show you today, this story has implications that go beyond investing. It raises fundamental questions about privacy, ethics, and the role of AI in our society.

NYC’s Nod to “Pre-Crime” Dystopianism

The New York City Police Department (NYPD) has poured millions into Voyager Labs. It’s a tech firm that touts its ability to track and foresee crimes via social media.

The Surveillance Technology Oversight Project – a nonprofit championing privacy and countering mass surveillance – unveiled redacted NYPD contracts.

These documents shed light on a deal worth more than $8 million with Voyager Labs, inked in 2018.

Voyager Labs positions itself as a trailblazer in “AI-based investigation solutions.”

According to its official website, it offers products not just to law enforcement, but also to the broader U.S. public sector and to corporate security teams.

Law enforcement dabbling in social media analytics isn’t groundbreaking. But what sets Voyager Labs apart is its claim of predictive prowess.

According to a probe by the Brennan Center for Justice, a respected law and public policy institute, Voyager asserts that its tools can anticipate future criminal activity.

So, how does Voyager do it?

By using algorithms and data analytics to sift through tons of social media content, identifying patterns that signal potential risks.
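To make that general idea concrete, here’s a toy sketch in Python of what keyword-based “risk scoring” of public posts might look like in principle. To be clear, this is purely illustrative: the Post structure, watchlist terms, weights, and threshold are all invented for this example and have nothing to do with Voyager’s actual models, data, or methods.

```python
# Hypothetical illustration only -- NOT Voyager Labs' software or methodology.
# A toy sketch of keyword-based "risk scoring" over social media posts,
# to show in principle how pattern-matching on public text might work.

from dataclasses import dataclass

# Invented watchlist of terms and weights, purely for demonstration.
RISK_TERMS = {
    "threat": 3,
    "attack": 3,
    "weapon": 2,
    "protest": 1,
}

@dataclass
class Post:
    user: str
    text: str

def risk_score(post: Post) -> int:
    """Sum the weights of any watchlist terms found in a post (crude heuristic)."""
    words = post.text.lower().split()
    return sum(weight for term, weight in RISK_TERMS.items() if term in words)

def flag_users(posts: list[Post], threshold: int = 3) -> set[str]:
    """Flag users whose combined score across posts meets a threshold."""
    totals: dict[str, int] = {}
    for post in posts:
        totals[post.user] = totals.get(post.user, 0) + risk_score(post)
    return {user for user, score in totals.items() if score >= threshold}

if __name__ == "__main__":
    sample = [
        Post("alice", "Heading to the protest downtown later"),
        Post("bob", "That movie was an attack on good taste"),
    ]
    print(flag_users(sample))  # flags "bob" -- a false positive from crude matching
```

Notice how blunt this kind of matching can be: in the sample run, a sarcastic movie review gets a user flagged even though nothing threatening was said. That gap between what the text means and what the pattern catches is exactly why handing predictive tools to law enforcement raises the concerns below.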

In a sales pitch to the Los Angeles Police Department, Voyager Labs said:

Voyager Discover takes Voyager Analytics’ abilities a step further, analyzing not only who is most influential but also who is most invested in a given stance: emotionally, ideologically, and personally.

And there’s more. The company reportedly claims that its AI can assess the risk levels of social media users, identifying potential ties to extremism.

Plus, one of its products, VoyagerCheck, is designed to “automatically” flag individuals who may be considered a threat.

Now, here at Inside Wall Street, we’ve written about the many benefits of AI tech – and the massive profit potential for those who get in at the right time.

This doesn’t change any of that. But as these examples show, with any new technology, there is a dark side.

Troubling Trend

The NYPD has reassured the public that it uses these tools only for monitoring, not prediction.

But the mere possession of such capabilities raises pressing questions about privacy and potential misuse.

And the NYPD isn’t the only department embracing these AI capabilities.

In 2020, the Los Angeles Police Department shelled out $1 million to Voyager Labs, aiming to use the software to scan social media and spot potential threats to public safety.

And in 2021, the Chicago Police Department teamed up with another big player in the game, Palantir Technologies, with the goal of pinpointing possible criminals.

We don’t have exact numbers on how much U.S. law enforcement is dropping on AI solutions overall (or the specific tech it’s snagging).

But there’s one clear trend we can’t ignore.

According to a study published by Stanford University, federal government spending on AI contracts has increased nearly 4.2-fold since 2010, reaching $3.3 billion in fiscal year 2022.

That’s what the chart below shows… 

[Chart: Federal government spending on AI contracts, fiscal years 2010–2022]

You can bet that the police have been ramping up their AI spending in line with that growth trajectory too.

The Latest Attack on Your Privacy

Any new technology or system that can gather vast amounts of data is usually defended by the age-old axiom: “If you’ve got nothing to hide, you’ve got nothing to fear.”

But this misses the point. It’s not about what you might have to hide. It’s about the right to privacy.

And as regular readers know, if you live in the U.S., your privacy is already under attack in other ways.

In July, without much fanfare, the financial elites laid the foundation for a major overhaul of our money.

The good news is, Inside Wall Street editor Nomi Prins recently recorded an emergency briefing, Countdown to Chaos, to explain what it is… and her playbook for profiting from it.

History shows that folks who position themselves right now – before this overhaul is a done deal – could make as much as 10 times… 20 times… even 50 times their money, even as most Americans are blindsided. Click here to learn more from Nomi.

Regards,

Lau Vegys
Analyst, Inside Wall Street with Nomi Prins