Welcome to our Friday mailbag edition!

Every week, we receive great questions from readers. And every Friday, I answer as many as I can.

Our questions this week touch on two trends I wrote about here.

I’m talking about the Federal Reserve’s artificial intelligence (AI) “incubator”… And how AI might be integrated into the Fed’s all-digital dollar.

Some of your fellow readers seem to share my concerns about what could happen if these two trends collide…

What do the litigators of privacy say about this use of AI? Are there any Constitutionalist lawyers willing to sue the government to prevent this from happening?

Many have said that technology is moving faster than Congress can pass laws. Our concern needs to be to hinder this technology from starting down this road without strong protective privacy rights to the citizens.

– Tony G.

Hi Tony, thanks so much for your thoughtful question. I share your concern about AI posing a threat to our privacy.

I’m not a constitutional lawyer, but it strikes me that as AI develops, it makes sense to consider how to legally protect a citizen from AI privacy breaches.

There is no explicit mention of the right to privacy in the Constitution.

The Supreme Court has interpreted the Third Amendment as protecting people’s right to privacy in their homes. The amendment arose because the British forced colonists to quarter soldiers in their homes during the Revolutionary era.

The Supreme Court has interpreted the Fourth Amendment to mean citizens have the right to privacy on their person and in their homes from “unreasonable searches and seizures.”

Similarly, it’s reasonable to expect future Supreme Court cases regarding AI’s ability to “invade” one’s personal or home privacy.

There is no Supreme Court case pending on this. That said, Supreme Court Chief Justice John Roberts warned about the risks of AI “invading privacy interests” in his annual year-end report on the federal judiciary. So, the Supreme Court is aware of potential AI dangers.

Meanwhile, privacy litigators are focusing on several types of AI privacy risks.

These include:

  • Facial recognition AI. Facial recognition systems could be used to surveil people without their consent. That could infringe on individual privacy rights and lead to unfair profiling and discrimination.

    Along these lines, a recent suit against Clearview AI succeeded in keeping it from selling its facial recognition software to most private businesses. As a result, Clearview AI also had to let users block their facial data.

  • Lack of transparency in AI models. AI programs make decisions based on analyzing complex data sets with non-transparent algorithms. Major AI companies like Google or Microsoft don’t share their algorithms publicly. That secrecy could potentially violate individuals’ right to understand how their data is collected and used.

  • Automated AI decision-making errors. AI systems could create and then perpetuate biases that discriminate against certain individuals or specific groups of citizens based on their personal data.

I’m a member of a LinkedIn group called “Women and AI.” We focus on discussing and monitoring fair and equal AI data interpretation.

Finally, AI technology is moving forward rapidly. The gap between AI advancements and legal frameworks, therefore, is growing.

And I agree with you that it’s essential that litigators and public officials monitor developments in this area and protect individual privacy.

Meanwhile, you can take some steps to protect your data. You can limit the number of apps for which you use facial recognition. And you can refrain from filling out online surveys that ask for information you’re uncomfortable sharing.

Countries might look at CBDCs but are they going to program user actions or inactions like freezing accounts, or in other words, are they weaponizing CBDCs?

Most people, I think, would like to preserve their privacy and freedoms. Is the WEF and other like-minded organizations like the UN telling the world what should be in their programming?

– Richard S.

Hi Richard, thank you for your question about the impact of Central Bank Digital Currencies (CBDCs) on your privacy and freedom over how you use your money. I also worry about these issues.

Some 134 countries are at some stage of exploring, piloting, or introducing a CBDC.

It’s not a question of if but when and how CBDCs will be deployed everywhere.

And yes, that could entail governments using CBDCs to freeze accounts or otherwise restrict user actions.

Right now, banks have the authority to freeze accounts for various reasons, such as suspected fraud, money laundering, or political actions, using traditional fiat currencies.

CBDCs could amplify that power. That’s because they could be programmed to carry more information in a single place, or block of a blockchain. CBDCs could also embed AI algorithms that block their use under certain circumstances.

These algorithms could potentially predict situations in which CBDC accounts should be frozen, and act preemptively. They can also be wrong. That’s even scarier.
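To make the idea of “programmable” money concrete, here is a minimal, entirely hypothetical sketch in Python. The class, the rule names, and the spending cap are my own illustration of how embedded rules could veto or freeze transactions; no real CBDC is known to work exactly this way.

```python
# Hypothetical sketch of programmable money. All names and rules here are
# illustrative assumptions, not a description of any actual CBDC design.

class ProgrammableWallet:
    def __init__(self, balance, rules=None):
        self.balance = balance
        self.frozen = False
        # Each rule inspects a proposed transaction and may veto it.
        self.rules = rules or []

    def transfer(self, amount, recipient):
        if self.frozen:
            return False, "account frozen"
        for rule in self.rules:
            allowed, reason = rule(amount, recipient)
            if not allowed:
                return False, reason
        if amount > self.balance:
            return False, "insufficient funds"
        self.balance -= amount
        return True, "ok"

# Example embedded rule: block any transfer above a programmed cap.
def cap_rule(limit):
    def rule(amount, recipient):
        if amount > limit:
            return False, "exceeds programmed cap"
        return True, ""
    return rule

wallet = ProgrammableWallet(1000, rules=[cap_rule(500)])
print(wallet.transfer(200, "alice"))  # small transfer passes the rules
print(wallet.transfer(600, "bob"))    # vetoed by the embedded cap rule
wallet.frozen = True                  # the issuer flips one switch...
print(wallet.transfer(100, "alice"))  # ...and every transfer is blocked
```

The point of the sketch is the last three lines: with programmable money, the restrictions live inside the currency itself, so freezing or limiting an account is a code change rather than a legal process.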

The possibility of freezing accounts raises other concerns, such as censorship or restriction of individual freedoms based on financial behaviors or personal choices. Central banks can use CBDCs to track, store, and access detailed information about your transactions.

Now, let me answer the other part of your question.

Organizations such as the World Economic Forum (WEF) and the United Nations (UN) play a role in shaping global financial policies and regulations.

While the WEF recognizes the potential of CBDCs to facilitate quicker cross-border transactions, it also emphasizes the importance of protecting individual privacy.

It commends the Swedish Central Bank for restricting data access for its pilot e-krona to central banks, regulators, and commercial banks.

The UN’s stance on CBDCs is similar to that of the WEF. It warns about protecting CBDCs from data breaches and cyber-security attacks.

Both organizations stress that CBDCs must respect individuals’ privacy and data protection rights. That’s all well and good. But whatever they say about CBDCs, they can’t dictate how they are programmed or what they are used for.

Each country gets to decide how its CBDCs are programmed. There’s no single global set of rules or guidelines. And larger countries could decide to exert more pressure on how other nations use their CBDCs.

The Federal Reserve, for instance, could decide that unless another country’s CBDCs allow for embedded restrictions, it won’t accept them.

In the end, the global use of CBDCs for individuals is still years away. But as we move to a CBDC world, everyone should take precautions to protect their money and financial privacy.

One way to do that is by putting a portion of your money in non-programmable assets. That includes gold, Bitcoin, and other real assets, such as silver and copper.

I hope that helps anyone grappling with these questions. 

And that’s all for this week’s mailbag. Thanks to everyone who wrote in!

If I didn’t get to your question this week, look out for my response in a future Friday mailbag edition.

I do my best to respond to as many of your questions and comments as I can. You can write me at [email protected]. Just remember, I can’t give personal investment advice.

In the meantime, happy investing… and have a fantastic weekend!



Nomi Prins
Editor, Inside Wall Street with Nomi Prins