You've touched on an important concept here. So let's expand it beyond just AI.
As the knowledge explosion generates powers of ever greater scale, everybody is further empowered, including the bad guys. As the bad guys accumulate ever greater powers, they represent an ever larger threat to the system as a whole.
As an example, generative AI might be considered a small-potatoes threat compared to emerging genetic engineering technologies like CRISPR, which make genetic engineering ever easier, ever cheaper, and thus ever more accessible to ever more people. Imagine your next-door neighbor cooking up new life forms in his garage workshop. What will the impact on the environment be when millions of relative amateurs are involved in such experimentation?
Discussion of such threats is typically very compartmentalized, with most experts and articles focusing on this or that particular threat. This is a loser's game that the experts play, because an accelerating knowledge explosion will continue to generate new threats faster than we can figure out what to do about existing threats. 75 years after the invention of nuclear weapons we still don't have a clue what to do about them.
If there is a solution (debatable), it is to shift the focus away from complex details and toward the simple bottom line. The two primary threat factors are:
1) An accelerating knowledge explosion
2) Violent men
If an accelerating knowledge explosion continues to provide violent men with ever more powers of ever greater scale, the miracle of the modern world is doomed. Somehow that marriage has to be broken up.
A key misunderstanding is the wishful-thinking notion that the good guys will also be further empowered, and thus can keep the bad guys in check as the knowledge explosion proceeds. That's 19th-century thinking. We should rid ourselves of such outdated ideas asap.
As nuclear weapons so clearly demonstrate, as the scale of powers grows, the bad guys are increasingly in a position to bring the entire system down before the good guys have a chance to respond. One bad day, game over.
You are right to focus on the scale of power involved in generative AI, and how bad actors will take advantage of it. Let's take that insight and build upon it.
If the above is of interest, here are two follow on articles:
Knowledge Explosion: https://www.tannytalk.com/p/our-relationship-with-knowledge
Violent Men: https://www.tannytalk.com/s/peace
This will be the primary question of the coming years. Who can use this technology faster & better, the good guys or bad guys? My takeaway from looking at small scale scams is the bad guys (or low integrity guys) are able to get started much, much faster because they're willing to cut all kinds of corners. I'll read the knowledge explosion piece today. Looks interesting.
Hi Mike, thanks for your reply.
Imho, as the scale of the powers available to us grows, the historic contest between good guys and bad guys becomes increasingly irrelevant. Nuclear weapons illustrate this principle in an easy-to-understand manner. Once any bad guy has the power to bring down the entire system, it doesn't really matter what the good guys have accomplished.
Phil (below) writes, "Discussion of such threats is typically very compartmentalized," and Mike says, "There are plenty of other misuses of AI," and invites us to list "others" in the comments.
I've read a number of articles (here on Substack and other places) on the possible pros and likely cons of AI, but they tend to warn us of how "bad" people will trick us into thinking this art is real, or this song, or this news article. We complain about AI interfering with our need to talk to a REAL person at the credit card company, or internet provider.
What I don't hear much alarm about is the likely disruption of our election process this November. I was at a meeting with our State Representative, who happens to chair the Technology and Infrastructure Innovation Committee at our capitol. He is more than concerned - in fact convinced - that AI will be injected into the process by people (bad guys) and nations (bad countries). It will not only undermine the integrity of the outcome but erode the confidence we have in our ability to have a voice in our representative government. There are already too many Americans who have lost that confidence. The misuse of AI by those who feel their ideology is more important than our freedoms has the potential to harm us far more than those issues we tend to compartmentalize.
Mike ends saying, "We’ll likely see a lot more of this stuff." True statement. It's coming, and we need to be aware of the consequences.
The Trump / Biden election will be wild. Both men already exist as extreme caricatures in popular culture. There will be such a deluge of AI memes that it will not be two men running against each other, but two cultural memes. The strongest meme will win.
And to your point-- I believe there was already some low-level election interference in the Democratic primaries with an AI Joe Biden calling voters and telling them to 'save their vote for November' and to 'not vote in the primaries'.
I think people will just get used to it, and our perception patterns will simply adjust. In a few years, no one will fall for something like this just because the visual materials look super-expensive. And then new scams will rise ;-)
Really, it's not AI that is the problem here...
The scattering of a signal is interesting. An individual file is a historical reference to the ability of someone more talented or able to create a result—the process the file depicts in some way—so that a viewer of the file may be inspired and able to recreate the result and, in a parallel economy to the attention economy, use that result to compute yet another result as an unassuming participant in the programmatic, or perhaps the meta/real, economy.
When AI starts creating files and there is no one at the source and confusion results, it's almost as though the people interfacing with it automatically suffer for not anticipating what happened or not picking up on the cues of this newly recomposed mishmash. The data or file that should have been sought after would only have come to be as proof of having lived and in avoiding that because it is shameful or looks unpolished or amateurish, something soulless and useless remains as a product from the creator and also as the experience of the user.
You can't expect humanist data from non-human technologies. Face filters and catfishing on dating apps in the current times and ideas of cyborg come to mind among those about the future. The pitch about the act of design converging to experiences as products, alongside linguistic and genetic divergence from now, *maybe* converging to the past also seem relevant. Lived experience is something different entirely which can only be described in files using analogous past events, however recent, but it somehow remains beyond recollection.
I am starting to think files, and maybe even products and architecture, are for historical and political records conducive to maintaining narratives, more than they are for the purposes for which one may aspire to create, procreate, and continue living. This divorce of reality from narrative from material can render anyone "bad" if caught disregarding buildings, property, and records, especially those that don't serve him and belong to someone whose decisions actively make his life death—which, unfortunately for cyborgs, would mean the loss of files or material just as much as loss of life.
...a.i. pig butchering call centers (these are not as fun to prank talk to as the humans used to be)...a.i. dating scams...a.i. how-to articles (riddled with errors, and they make actual how-to articles near impossible to find)...a.i. apps/games...a.i. swag...when quantity and speed win out over quality and need, we all get to enjoy $%^& flavored ice cream...a.i.cecream...
I am a little less concerned with small artists being threatened by AI, and more concerned with low-integrity opportunists being able to win with AI.
My current guess is that generative AI will help small artists make bigger stuff with less outside help. That will likely be good for artists, IMO. However, the resulting mega mass of near-infinite 'stuff' will likely be bad for artists who want to commercialize their art. There just won't be a market for it.