AI Integration in my Creative Process.

Creativity, product development, and content distribution are not only resistant to total automation but stand to benefit enormously from thoughtful optimization. As a creative, it's not merely about the tools I employ but how seamlessly I've integrated them into my workflow. Here's an overview of how I've refined my process, always remaining open to further enhancement and discovery.

I began utilizing AI-generated art in 2021, initially creating abstract videos that transported viewers to another realm. The technology has since evolved to such an extent that I wouldn't be surprised if, in a few years, telling AI-generated work from human-made work in an A/B comparison becomes genuinely difficult. However, my core belief remains that ideas are the key to winning wars, customers, and people's hearts. So while these technologies are impressive, they are not replacements. Now, let's delve into the tools and discuss how my extensive hours of testing and learning have shaped my workflow.


First, you'll see images from brainstorming sessions for character designs and environments. After that come visuals from one of my sessions fine-tuning 3D website designs and user interfaces, applicable to both spatial computing UIs and websites, all crafted using Midjourney, DALL·E 3, Disco Diffusion, and Stable Diffusion.

TOOLS:
Kaiber
ComfyUI + Stable Diffusion
Midjourney
DALL·E 3
Disco Diffusion
Runway

My AI journey began with DALL·E during its open beta, where I started creating textures for 3D models. These textures rarely tiled cleanly, but they were extremely effective at giving 3D models a distinctive look. At the time, I still needed Photoshop to create normal maps and other supporting maps that add dimensionality to the material.

I then spent a lot of time in Google Colab with Disco Diffusion, a CLIP-guided diffusion notebook, where I did most of my video exploration. That process was time-consuming and didn't reliably yield iterative results.

Now I use DALL·E 3, Stable Diffusion, and Midjourney, testing each to see which produces the best outcome for a given brief. I then feed the generated images into Kaiber or Runway to animate them. I don't consider these tools production-ready, but they can create captivating visuals for setting a mood in music video projects.
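That normal-map step, which I originally did in Photoshop, can also be sketched in a few lines of NumPy for anyone who prefers to script it. This is a minimal illustration under my own assumptions (the function name, the `strength` parameter, and the OpenGL-style +Y-up encoding are choices I made for the sketch, not part of any particular tool's pipeline):

```python
import numpy as np

def height_to_normal_map(height, strength=1.0):
    """Convert a grayscale height map (2D array, values 0..1) into a
    tangent-space normal map encoded as 0..255 RGB."""
    # Finite differences approximate the surface slope in y and x.
    dy, dx = np.gradient(height.astype(np.float32))
    # Surface normal is proportional to (-dx, -dy, 1/strength); a larger
    # strength exaggerates the bumps by shrinking the z component.
    nz = np.ones_like(height, dtype=np.float32) / max(strength, 1e-6)
    n = np.stack([-dx, -dy, nz], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    # Remap from [-1, 1] into the 0..255 range of an 8-bit RGB texture.
    return ((n * 0.5 + 0.5) * 255).astype(np.uint8)
```

A flat height map produces the familiar uniform lavender-blue normal map, since every normal points straight out of the surface.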