I've been dreaming of a "digital equivalence" model of reality since long before the current AI hype; partly informed by Jean Baudrillard's "The Precession of Simulacra", and largely because I was sick of the tedium required to coax still-underwhelming physically accurate detail out of animations rendered in Blender, and of the prohibitive cost of actually producing anything. NeRF comes close.
This whole thing might converge once generative point clouds are commonplace, and it should come back around to a sort of OS-level integration and the thesis I shared with you. Physics and game engines will also likely have progressed to incorporate text-to-product workflows.
Really, we have the machine Nikola Tesla theorized would be used for teaching: a device hooked up to a teacher's imagination, projecting images onto a screen for students to see. This makes me think of reverse-engineering meaning from dreams, which might be another good application for back-propagating through text-to-image models, in conjunction with whatever has become of the work by the Japanese scientists who reportedly built a dream-reading technology.
I'm not sure what category this belongs in, but here's what I'm doing. It seems a pretty accessible methodology (or I wouldn't be doing it).
1) I create characters in DALL-E.
2) I grab the sound from a text-to-speech AI; my current favorite is Pi, though the ElevenLabs GPT is also useful. I've used Murf in the past.
3) I then animate the faces and lip-sync them to the sound in an old piece of Mac software, CrazyTalk. While CrazyTalk has been discontinued, I believe the same tech lives on in another Reallusion product, Cartoon Animator.
4) Then I bring all these parts into an old version of HitFilm, a standard video editor. I LOVE my old version of HitFilm, but the company that makes it has been sold, and I don't know its current status. (A rough sketch of scripting steps 1, 2, and 4 follows below.)
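For anyone who wants to script part of this pipeline, here's a minimal Python sketch of steps 1, 2, and 4. It assumes the OpenAI images API for the DALL-E step, ElevenLabs' public REST text-to-speech endpoint for the voice (the voice ID, API keys, prompt, and filenames are all placeholders), and plain ffmpeg on the PATH for assembly; step 3, the actual lip sync, still needs a dedicated tool like CrazyTalk.

```python
# Hedged sketch: generate a character still, fetch narration audio, and mux
# them into a clip. Step 3 (face animation / lip sync) is not covered here.
import subprocess
import requests
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Step 1: a character portrait from DALL-E via the images API.
img = client.images.generate(
    model="dall-e-3",
    prompt="flat 2D cartoon character, front-facing, neutral expression",
    size="1024x1024",
)
with open("character.png", "wb") as f:
    f.write(requests.get(img.data[0].url).content)

# Step 2: narration from ElevenLabs' text-to-speech REST endpoint.
audio = requests.post(
    "https://api.elevenlabs.io/v1/text-to-speech/YOUR_VOICE_ID",
    headers={"xi-api-key": "YOUR_ELEVENLABS_KEY"},
    json={"text": "Hello from my cartoon character."},
)
with open("narration.mp3", "wb") as f:
    f.write(audio.content)

# Step 4: loop the still over the narration with ffmpeg;
# -shortest ends the clip when the audio runs out.
subprocess.run([
    "ffmpeg", "-y", "-loop", "1", "-i", "character.png",
    "-i", "narration.mp3", "-c:v", "libx264", "-tune", "stillimage",
    "-c:a", "aac", "-pix_fmt", "yuv420p", "-shortest", "clip.mp4",
], check=True)
```

The resulting clip.mp4 then goes into the editor like any other footage.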
The latest example of a finished video can be seen here:
https://hippytoons.com/p/whats-really-happening-at-substack
So this method is perhaps best described as a hybrid: the content is AI-generated, but then manipulated with old-fashioned video editing. I haven't seen any AI video that interests me yet.
What would REALLY help me are more tools for animating the 2D characters I get out of DALL-E. I can animate the faces adequately now, but animating body parts like hands, arms, and legs still seems beyond reach. Well, almost: I can do this in HitFilm to a degree, and I've seen other methods that attempt body animation of 2D characters, but so far none of them seems too impressive.
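For what it's worth, the basic trick behind most 2D body animation is cut-out (puppet) animation: slice the character into layers with transparent backgrounds and rotate each limb around a pivot. Here's a rough Pillow sketch of that idea; body.png, arm.png, and the pivot coordinates are hypothetical stand-ins for layers you'd cut out of a DALL-E render yourself.

```python
# Minimal cut-out animation: swing an arm layer around a shoulder pivot
# and composite it over the body, one frame per step.
import math
from PIL import Image

body = Image.open("body.png").convert("RGBA")  # hypothetical layers,
arm = Image.open("arm.png").convert("RGBA")    # cut out with alpha

SHOULDER = (300, 220)  # pivot on the body, in body-image coordinates
ARM_PIVOT = (20, 20)   # matching pivot inside the arm image

for i in range(24):  # 24 frames of a simple back-and-forth swing
    angle = 25 * math.sin(2 * math.pi * i / 24)  # degrees
    # Rotate the arm in place around its own pivot; pad the arm layer
    # generously so the swing stays inside its canvas.
    rotated = arm.rotate(angle, resample=Image.Resampling.BICUBIC,
                         center=ARM_PIVOT)
    frame = body.copy()
    # Paste so the arm's pivot lands on the body's shoulder point.
    frame.alpha_composite(rotated, dest=(SHOULDER[0] - ARM_PIVOT[0],
                                         SHOULDER[1] - ARM_PIVOT[1]))
    frame.save(f"frame_{i:03d}.png")
```

The numbered frames can then be joined into a clip, e.g. with `ffmpeg -framerate 12 -i frame_%03d.png -pix_fmt yuv420p swing.mp4`, and brought into the editor.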
Cartoon Animator apparently does a good job of animating cartoon-illustration-style characters, but those don't interest me so much.
Any tips, suggestions, or links regarding characters created in DALL-E will be most appreciated.
You don't think the fully synthetic will stabilize, master Gioia? New releases from Google et al. seem to indicate this will be a short-lived problem.
I believe the technology will get there, Commodore Leese, but when everyone has the same god-level moviemaking model, results without a ton of personalized inputs will level out to a common quality that becomes boring. The interesting part of the equation will have to come in the form of human inputs. That's where you'll see truly differentiated, *directed* results. I have a feeling these won't be text inputs, but who knows.
I would add 3D animation to the list. It's also popular with beginners, though most efforts are essentially just animated stills (using Pixar-style images generated in MJ). Mickmumpitz is one example of someone doing much more complex work with Blender and a bunch of AI tools. The workflow isn't easy, but it's probably doable for most. https://www.youtube.com/watch?v=SHbY-6Vy53g&t=181s
Thanks for breaking this down for us!
This is a good breakdown! Now, consider the use of AI for the sound aspect of filmmaking. Would be a great companion to this article 👍
This is an excellent breakdown. I learned a lot.
Thanks JB. It is, of course, just my two cents. My hope is it serves as a helpful taxonomy to help people form their own categories in their heads.
Love this. I'm calling it AIography.
Fantastic!