Discussion about this post

Phil Tanny:

You've touched on an important concept here. So let's expand it beyond just AI.

As the knowledge explosion generates powers of ever greater scale, everybody is further empowered, including the bad guys. As the bad guys accumulate ever greater powers, they represent an ever larger threat to the system as a whole.

As an example, generative AI might be considered a small-potatoes threat compared to emerging genetic engineering technologies like CRISPR, which make genetic engineering ever easier, ever cheaper, and thus ever more accessible to ever more people. Imagine your next-door neighbor cooking up new life forms in his garage workshop. What will the impact on the environment be when millions of relative amateurs are involved in such experimentation?

Discussion of such threats is typically very compartmentalized, with most experts and articles focusing on this or that particular threat. This is a loser's game that the experts play, because an accelerating knowledge explosion will continue to generate new threats faster than we can figure out what to do about existing ones. Seventy-five years after the invention of nuclear weapons, we still don't have a clue what to do about them.

If there is a solution (debatable), it is to switch the focus away from complex details and toward the simple bottom line. The two primary threat factors are:

1) An accelerating knowledge explosion

2) Violent men

If an accelerating knowledge explosion continues to provide violent men with ever more powers of ever greater scale, the miracle of the modern world is doomed. Somehow that marriage has to be broken up.

A key misunderstanding is the wishful notion that the good guys will also be further empowered, and thus can keep the bad guys in check as the knowledge explosion proceeds. That's 19th-century thinking. We should rid ourselves of such outdated ideas as soon as possible.

As nuclear weapons so clearly demonstrate, as the scale of powers grows, the bad guys are increasingly in a position to bring the entire system down before the good guys have a chance to respond. One bad day, game over.

You are right to focus on the scale of power involved in generative AI, and how bad actors will take advantage of it. Let's take that insight and build upon it.

If the above is of interest, here are two follow-on articles:

Knowledge Explosion: https://www.tannytalk.com/p/our-relationship-with-knowledge

Violent Men: https://www.tannytalk.com/s/peace

Scott P Eckley:

Phil (below) writes, "Discussion of such threats is typically very compartmentalized," and Mike says, "There are plenty of other misuses of AI," and invites us to list "others" in the comments.

I've read a number of articles (here on Substack and elsewhere) on the possible pros and likely cons of AI, but they tend to warn us about how "bad" people will trick us into thinking this art is real, or this song, or this news article. We complain about AI interfering with our need to talk to a REAL person at the credit card company or the internet provider.

What I don't hear many people alarmed about is the likely disruption of our election process this November. I was at a meeting with our State Representative, who happens to chair the Technology and Infrastructure Innovation Committee at our capitol. He is more than concerned, in fact convinced, that AI will be injected into the process by people (bad guys) and nations (bad countries). It will not only undermine the integrity of the outcome but also erode the confidence we have in our ability to have a voice in our representative government. There are already too many Americans who have lost that confidence. The misuse of AI by those who feel their ideology is more important than our freedoms has the potential to harm us far more than the issues we tend to compartmentalize.

Mike ends by saying, "We'll likely see a lot more of this stuff." True statement. It's coming, and we need to be aware of the consequences.
