Book Review: The “Age of Artificial Intelligence” And Your Portfolio

As you may have heard just once or twice, we are living in an age of artificial intelligence.

This is a big and tough topic. "Artificial intelligence" is a term that gets overused and abused so regularly that it risks becoming just another piece of techno jargon and losing all meaning.

We will cover this topic a few different ways and use it as a launching pad for a few new themes in the weeks ahead.

*******

Helpfully, a new book has just come out that tackles the thorny subject and gives the reader a useful framework for thinking through some of the specific use cases right now.

The book in question, The Age of AI and Our Human Future, is co-authored by three heavy-hitting figures: former Secretary of State Henry Kissinger, former Google CEO Eric Schmidt, and MIT Dean of Computing Daniel Huttenlocher.

Unsurprisingly for a book co-written by three titans of three different arenas, there is a lot going on and, frankly, not all of it is written very smoothly or clearly.

But, while it might not always make for an easy read, it is oddly confidence-inspiring that even a former secretary of state can trip over his words or ramble on.

More helpfully, however, the writers start us off with a simple premise:

“Few eras have faced a strategic and technological challenge so complex and with so little consensus about either the nature of the challenge or even the vocabulary necessary for discussing it.”

Hard to disagree and we certainly sympathize - it is a tough subject to grapple with.

The celebrated authors follow up by delivering a pretty stark warning:

“Whether we consider it a tool, a partner, or a rival, [AI] will alter our experience as reasoning beings and permanently change our relationship with reality. The result will be a new epoch.”

It is a striking image and more than a little disconcerting. That feeling of unease might not be too wrong either.

Earlier, Henry Kissinger famously and somewhat darkly predicted that AI will prove to be the end of the Enlightenment epoch and its values, and that, over time, this will fundamentally alter what it means to be human.

And what would replace the Enlightenment?

  • Well, humans are beginning not only to be routinely manipulated by artificial intelligence but also to live, increasingly, in an entire world created and curated by AI. The technology is becoming indistinguishable from our everyday existence and is also determining our perception of reality.

More practically, the authors really focus on one of the key risks of this shift as they see it:

 “AI holds the prospect of augmenting conventional, nuclear and cyber capabilities in ways that make security relationships among rivals more challenging to predict and maintain and conflicts more difficult to limit.”

  • In terms of states, the authors argue that state sovereignty, and especially national security, will be significantly threatened by AI systems that operate using processes, information, and reasoning we cannot understand or predict.

That is obviously problematic. And likely brings us around to the worries that gripped Elon Musk in the quote at the start of this newsletter.

If you are reliant on systems and technology for both your perception of reality and your safety and security, in an era when you can neither fully grasp nor forecast how those systems make decisions, then you can very rapidly either lose control or simply get a nasty surprise.

It is one thing if this surprise comes via conventional weapon systems. That would be problematic enough. But what if the attack came via nuclear or cyber capabilities? And what if our responses are also driven by AI systems that make counterattacks or defensive moves in response?

This predicament could rapidly take things away from being a problem that is difficult to predict and towards one that is difficult to limit.

One good illustration of this trend can already be seen in competitive chess. So many elite grandmasters are so dependent on sophisticated computer systems for their training that there is a real edge to be found in "thinking" like a human rather than like one of the computer programs.

Magnus Carlsen, the current world champion, is renowned for this "new" approach and is justly famed for both his unpredictability and his ability to unlock new methods of playing (and winning).

Carlsen's particular genius is to find the razor thin edge of a winning strategy that will NOT be highly rated by computer systems.

There are no nuclear weapon type moves in chess, however. A single mistake with nuclear weapons and their use could prove fatal.

And this isn't the end of our potential problems with AI.

As the authors of The Age of AI note, AI technology differs from previous eras of scientific advancement in more ways than simply being difficult to predict and understand.

This is important to consider because it suggests that the tenor of the human response not only will be but should be different from the shocks that accompanied previous large scientific advances.

For instance:

  1. AI is clearly dual use. It has tremendous civilian and military value and is being eagerly used by both disciplines.

  2. It is also, of course, able to spread (or be shared) very quickly and easily.

  3. Lastly, AI systems have the potential to inflict enormous harm.

This trifecta is a nasty combination of new and dangerous.

In earlier eras, new and incredibly powerful technologies couldn't achieve this trifecta. They could, at best, manage two of the three.

For instance:

  • In recent history, nuclear weapons and missile technology are dual use and very, very powerful but also hard to either spread or adopt and replicate.

  • In an earlier era, the power of steam and early communications technology such as the telegraph were both dual use and quite easy to share but, while very technologically impressive, didn't hold the ability to destroy (or supplant) human civilization.

So, few if any earlier scientific advances had this very perilous mix of qualities, which does make "this time different."

The question now is, what will nuclear or cyber or even conventional weapons systems be able to achieve with these qualities?

And where does this leave us? In practical terms?

Well, here is one idea…

*******

Have questions? Care to find out more? Feel free to reach out at contact@pebble.finance or join our Slack community to meet more like-minded individuals and see what we are talking about today. All are welcome.
