
Redefining Art with Generative AI


Introduction: Redefining Art with Generative AI

When photography began, it wasn’t considered an art form. Although the general public embraced it rapidly, it was perceived as a mechanical way to capture moments of life, a matter of technical skill rather than art.


Despite this, photography changed our relationship with art as we knew it. It liberated painters from the burden of reproducing reality accurately and let them move freely, without ties, within the vast latent space that art offers. They could now focus on the abstract, painting ideas and conveying feelings that photographs fail to capture. They could break the rules and find meaning in questions that exist only within us, exploring the limits of what was possible.

Photography also democratized access to imagery by allowing snapshots of reality to be captured in minutes rather than days. At the same time, it diminished the perceived value of handmade paintings, at least from the artists’ perspective. But technology always finds a way, and today almost everyone agrees that photography is art.


Now it is artificial intelligence’s turn to disrupt visual art. AI will push artists to redefine their relationship with art once again. Instead of competing with it, the point is to embrace it. Together, human creativity and the capabilities of artificial intelligence will bring a brand new form of art that we haven’t seen before.

Transformation: From reading art to creating it

The deep learning explosion of the early 2010s was driven largely by computer vision, which for years remained the main branch of machine learning. Although language has since taken over the spotlight, AI systems continue to improve the way they perceive the world.


AI-generated art has advanced over the last decade with the help of generative models such as GANs (Generative Adversarial Networks). These networks learn to produce images that resemble those in the datasets they were trained on. Artists and researchers realized they could rely on GANs to generate novel imagery. Still, even as they improved the architectures in search of new forms of AI-generated art, they could not find a practical way to condition the models’ final output.
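To make that idea concrete, below is a minimal sketch of the adversarial setup in PyTorch: a generator learns to turn random noise into images while a discriminator learns to tell those images apart from real training data. The network sizes, image resolution, and training details are illustrative assumptions, not taken from any particular paper.

```python
import torch
import torch.nn as nn

LATENT_DIM = 128  # illustrative latent size

generator = nn.Sequential(                   # maps random noise -> flattened image
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, 3 * 64 * 64), nn.Tanh(),  # pixel values in [-1, 1]
)

discriminator = nn.Sequential(                # maps flattened image -> "real" probability
    nn.Linear(3 * 64 * 64, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_images: torch.Tensor) -> None:
    """One adversarial round; real_images is a (batch, 3*64*64) tensor from the dataset."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to separate real images from generated ones.
    fake_images = generator(torch.randn(batch, LATENT_DIM)).detach()
    loss_d = bce(discriminator(real_images), real_labels) + \
             bce(discriminator(fake_images), fake_labels)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    loss_g = bce(discriminator(generator(torch.randn(batch, LATENT_DIM))), real_labels)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

After enough of these rounds, sampling the generator with fresh noise yields images that resemble, but do not copy, the training data.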

In 2021, OpenAI introduced CLIP, a neural network trained on 400 million image-text pairs.

Given an image, CLIP can select the text description that best matches it. By releasing the model’s weights, OpenAI gave artists a piece of a puzzle that they would soon complete.
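As a rough illustration of that matching, the snippet below scores one image against a handful of candidate captions, assuming the publicly released openai/clip-vit-base-patch32 weights loaded through the Hugging Face transformers library; the image path and captions are placeholders.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Publicly released CLIP weights, loaded through Hugging Face transformers.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("painting.jpg")  # placeholder path to any local image
captions = [                        # none of these need to appear in CLIP's training data
    "an oil painting of a stormy sea",
    "a photograph of a city at night",
    "a pencil sketch of a cat",
]

# Encode the image and all candidate captions, then compare them.
inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax turns them
# into a probability distribution over the candidate captions.
probs = outputs.logits_per_image.softmax(dim=-1)
print("CLIP's pick:", captions[probs.argmax().item()])
```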

Researchers Ryan Murdock and Katherine Crowson realized they could use CLIP as a “steering wheel” to guide generative networks to create images based on given text descriptions. 


By feeding a short sentence into the system, they could condition the model to find the image that best represented the visual meaning of that sentence.

Crowson built upon Murdock’s work by combining CLIP with VQGAN, a more powerful generative architecture published in 2020 that uses convolutions and transformers. Compared with the earlier BigGAN+CLIP combination, this one produces images with texture, an almost tangible concreteness.
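The core trick behind these notebooks can be summarized as optimizing a generator’s latent code so that the resulting image scores highly against the text prompt under CLIP. The sketch below assumes generic generator and clip_model objects with the interfaces noted in the comments; it is a simplified outline of the technique, not the authors’ actual code.

```python
import torch
import torch.nn.functional as F

def clip_guided_generation(generator, clip_model, prompt_embedding,
                           steps: int = 300, lr: float = 0.05):
    """Optimize a latent vector so the generated image matches a CLIP text embedding.

    Assumed interfaces: `generator(latent)` decodes a latent tensor into an image
    batch, `generator.latent_dim` gives the latent size, and
    `clip_model.encode_image(images)` returns CLIP embeddings for that batch.
    """
    latent = torch.randn(1, generator.latent_dim, requires_grad=True)
    optimizer = torch.optim.Adam([latent], lr=lr)

    for _ in range(steps):
        image = generator(latent)                         # decode latent -> image
        image_embedding = clip_model.encode_image(image)  # CLIP's view of the image

        # Cosine similarity between image and prompt; minimizing its negative
        # nudges the latent toward "what the prompt looks like" according to CLIP.
        loss = -F.cosine_similarity(image_embedding, prompt_embedding, dim=-1).mean()

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return generator(latent).detach()
```

In practice these notebooks add refinements such as image augmentations and multiple crops before the CLIP comparison, but the loop above captures the “steering wheel” idea.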

Murdock and Crowson developed the GAN+CLIP combination as a hack to approximate DALL·E, the superb text-to-image multimodal AI that OpenAI refused to open-source. But no matter how hard they tried, they could not match the astonishing precision of DALL·E’s outputs.

VQGAN+CLIP found its artistic sweet spot halfway between the unoriginal results of standalone generative networks and DALL·E’s literal pairings of text and image.


As soon as the AI community learned that this was possible, and how to do it, an “emerging art scene” was born. People experimented with several GAN architectures, as well as variations of existing ones. It didn’t take long before they realized that the prompt (as the text description came to be called) has a direct impact on the final output: the model “understands” what you are trying to achieve based on what you write to it.

People began to craft their own styles of art generation in what amounted to an early display of prompt engineering skills.
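For instance, the community quickly learned that appending style modifiers to a prompt can push the output toward a particular look. The prompt strings below are illustrative examples of this kind of variation, not a canonical list.

```python
# Illustrative prompt variations; each modifier pushes a text-guided generator
# such as VQGAN+CLIP toward a different visual style. The wording is made up
# for illustration.
base_prompt = "a lighthouse on a cliff at sunset"

styled_prompts = [
    base_prompt,
    base_prompt + ", oil on canvas",
    base_prompt + ", watercolor illustration",
    base_prompt + ", trending on ArtStation",
    base_prompt + ", unreal engine, 4k, highly detailed",
]
```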


One of CLIP’s key strengths is its zero-shot performance: it doesn’t need to have seen examples of the specific text-image pairs you want it to handle. You can write any sentence and get remarkably accurate results.

Thanks to this zero-shot ability, prompt engineering, and the open-ended possibilities of VQGAN+CLIP, AI-generated art tools flooded the internet. A particularly popular example is Dream, a web browser app by Wombo that lets users create an unlimited number of images from text descriptions.


The images these systems produce can be seen as a reflection of how CLIP “sees” the world, and of how it “thinks” language represents our visual world. The result is, without a doubt, art.

The future of art

Painters and photographers now live and work side by side, creating art that leaves us marveling at the beauty of both.


We feel something when we are surrounded by art, regardless of its form, and there is plenty of room for new art forms that will make us experience new sensations. AI-generated art will fill some of that space, nudging existing forms of art toward new creative boundaries. DALL·E and VQGAN+CLIP, for example, will continue to evolve into highly sophisticated art engines.