I've been dreaming of a "digital equivalence" model of reality since long before the current AI hype: partly informed by Jean Baudrillard's "Precession of Simulacra," and largely because I was sick of the tedium required to elicit still-underwhelming physically analogous detail when rendering animations in Blender, and because actually doing anything more ambitious was cost-prohibitive. NeRF comes close.
This whole thing might converge once generative point clouds are commonplace, and it should come back to a sort of OS-level integration and the thesis I shared with you. Physics and game engines will also likely have progressed to incorporate text-to-product workflows by then.
Really, we already have the machine Nikola Tesla theorized would be used for teaching: a device hooked onto the imagination of a teacher, projecting images onto a screen for students to see. This makes me think of reverse-engineering meaning from dreams, which might be another good application for back-propagating through text-to-image models, in conjunction with whatever has become of the work by the Japanese scientists who allegedly created a dream-reading technology.
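To make the back-propagation idea concrete, here's a minimal PyTorch sketch. It assumes some differentiable text-to-image `generator` and a `target_image` already reconstructed from brain-recording data; both names are hypothetical stand-ins, not a real library's API. The idea is to optimize a prompt embedding by gradient descent until the generated image matches the target, which is one plausible reading of "back-propagating through a text-to-image algorithm."

```python
import torch

# Sketch: recover a prompt embedding whose generated image matches a
# target (e.g., a dream reconstruction). `generator` stands in for any
# differentiable text-to-image decoder; its signature is assumed.
def invert_image(generator, target_image, embed_dim=768, steps=500, lr=0.05):
    # Start from a random prompt embedding and make it trainable.
    embedding = torch.randn(1, embed_dim, requires_grad=True)
    optimizer = torch.optim.Adam([embedding], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        rendered = generator(embedding)  # embedding -> image tensor
        loss = torch.nn.functional.mse_loss(rendered, target_image)
        loss.backward()  # gradients flow back through the generator
        optimizer.step()
    # The optimized embedding is a machine-readable "meaning" of the
    # image; mapping it back to words would need a separate
    # embedding-to-text step.
    return embedding.detach()
```

The hard part, of course, is the decoder from brain recordings to a target image; the sketch only covers the inversion step once you have one.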
You don't think the fully synthetic will stabilize, master Gioia? New releases from Google et al. seem to indicate this will be a short-lived problem.
I believe the technology will improve to get there, Commodore Leese, but when everyone has the same god-level moviemaking model, results without a ton of personalized inputs will level out to a common quality that becomes boring. The interesting element of the equation will have to come in the form of human inputs. That's where you'll see truly differentiated, *directed* results. I have a feeling this won't be text inputs, but who knows.
I would add 3D animation to the list. This is also popular with beginners, though most efforts are essentially just animated stills (using Pixar-style images generated in Midjourney). Mickmumpitz is one example of someone doing much more complex work with Blender and a bunch of AI tools. The workflow is not easy, but it's probably doable for most. https://www.youtube.com/watch?v=SHbY-6Vy53g&t=181s
Thanks for breaking this down for us!
This is a good breakdown! Now, consider the use of AI for the sound aspect of filmmaking. That would be a great companion to this article 👍
This is an excellent breakdown. I learned a lot.
Thanks JB. It is, of course, just my two cents. My hope is it serves as a helpful taxonomy to help people form their own categories in their heads.
Love this. I'm calling it AIography.
Fantastic!