Midjourney’s New Website

“Midjourney has launched the alpha version of its own standalone website, with access open to existing users who have generated more than 10,000 images. A switch to a different platform might not sound like a big deal. After all, rivals like OpenAI’s DALL-E and Adobe’s Firefly and most recently Meta’s Imagine already work on dedicated sites. But the move makes one of the most impressive AI image generators in terms of results significantly easier to use, improving its UI and UX and making it possible to add more new features not possible within the limitations of the Discord platform.”

I have experimented extensively on Midjourney, generating over 10,000 images (and deleting about 70% of them), so I am fortunate to have access to the beautiful alpha version of the new website. I still like Discord, though, because the alpha site doesn’t have a delete feature yet. : )


(Image credit: Midjourney)

Read more: Midjourney’s new alpha website

The Absent Moon


Image via Midjourney v6.0 by Dari – Lyrics by Said Leghlid.
Music and voice by Suno.ai. (Suno.ai limits song generations to 1:20 min.)

Was the moon absent? Beautiful!
The sun that hid it, was darkness disguised as Earth’s delight
The moon was not absent, it lurked in the shadow

Of the sun’s light rays, a secret we’ve come to know
The moon was life for Earth, and sea preserving gnomes

Was the moon absent? Beautiful!
The sun that hid it, was darkness disguised as Earth’s delight
The moon was not absent, it lurked in the shadow oooooooh oooooh

From toy to tool: DALL-E 3

“It’s impossible to dismiss the power of AI when it comes to image generation,” says Aurich Lawson, Ars Technica’s creative director. “With the rapid increase in visual acuity and ability to get a usable result, there’s no question it’s beyond being a gimmick or toy and is a legit tool.”

“With the advent of AI image synthesis, it’s looking increasingly like the future of media creation for many will come through the aid of creative machines that can replicate any artistic style, format, or medium. Media reality is becoming completely fluid and malleable. But how is AI image synthesis getting more capable so rapidly—and what might that mean for artists ahead?”

Read more at Ars Technica: From toy to tool: DALL-E 3 is a wake-up call for visual artists—and the rest of us

Does AI contribute to the blandification of design?


Image Creation: Dari – Midjourney v5.2

AI’s impact on design, particularly in terms of aesthetics and creativity, is a topic of ongoing discussion.

The term “blandification” suggests a concern that AI might lead to a homogenization or simplification of design, potentially making it less innovative or unique. Here are some key points to consider:

AI can be used to automate certain design tasks, such as generating templates, layouts, or color schemes. While this can improve efficiency, there is a risk that relying too heavily on AI-generated design elements could lead to a lack of diversity in visual aesthetics.

On the other hand, AI can enhance design by enabling personalization at scale. For example, it can analyze user preferences and behavior to tailor designs to individual tastes. This can make design more engaging and relevant to the end-user.

AI can serve as a creative tool for designers. Designers can use AI algorithms to generate new ideas, explore different design options, and push the boundaries of traditional design. It can be a source of inspiration and innovation.

The use of AI in design should also take into account ethical and cultural considerations. AI algorithms are often trained on existing data, which can embed biases and lead to designs that reinforce stereotypes or cultural insensitivity.

The most effective approach might be a collaboration between human designers and AI. AI can assist in design tasks, offering suggestions and automation, while human designers bring their creativity, critical thinking, and cultural awareness to ensure that the design is both visually appealing and culturally sensitive.

In summary, the impact of AI on design can vary depending on how it is used and integrated. While there is a concern about “blandification,” AI can also be a powerful tool for enhancing creativity, personalization, and efficiency in design. The key is to strike a balance between AI-driven automation and human creativity and oversight to ensure that design remains both aesthetically pleasing and culturally relevant.

How are AI art images created?


Image Creation: Dari – Midjourney v5.2

AI art images are created using a combination of artificial intelligence and machine learning techniques, particularly deep learning. Here’s a general overview of the process:

AI art creation often starts with the collection of a large dataset of images, paintings, or other visual materials. These datasets can include a wide range of styles, subjects, and artistic elements. The collected data is preprocessed to ensure consistency and compatibility. This may involve resizing, cropping, or converting images to a standardized format.

Neural Networks: The core of AI art creation lies in the use of neural networks, particularly Generative Adversarial Networks (GANs) and Convolutional Neural Networks (CNNs). (Today’s leading text-to-image systems, such as Midjourney and DALL-E, build on diffusion models, but GANs and CNNs illustrate the underlying ideas well.)

GANs: GANs consist of two networks – a generator and a discriminator. The generator creates new images, while the discriminator evaluates these images. They work in a competitive manner, with the generator trying to create images that can fool the discriminator.

CNNs: CNNs are commonly used for image style transfer. They can extract style and content features from input images and apply them to generate a new image.

The neural networks are trained on the dataset. During training, GANs, for example, learn to generate images that closely resemble those in the training data. This process typically involves many iterations and can take a substantial amount of computational power and time.
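The adversarial loop described above can be sketched with a toy one-dimensional GAN in plain NumPy. This is purely illustrative (real image GANs use deep convolutional networks and automatic differentiation): here the generator is a single affine map from noise to a sample, the discriminator is a logistic regression, and the gradients are written out by hand. The target distribution, learning rate, and step count are all arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" data: samples from a 1-D normal distribution
def sample_real(n):
    return rng.normal(4.0, 1.25, size=(n, 1))

class Generator:
    """Maps noise z to a sample x via a single affine transform."""
    def __init__(self):
        self.w, self.b = 1.0, 0.0
    def __call__(self, z):
        return self.w * z + self.b

class Discriminator:
    """Logistic regression: D(x) = sigmoid(w*x + b), probability x is real."""
    def __init__(self):
        self.w, self.b = 0.1, 0.0
    def __call__(self, x):
        return 1.0 / (1.0 + np.exp(-(self.w * x + self.b)))

G, D = Generator(), Discriminator()
lr, n = 0.01, 64

for step in range(500):
    # Discriminator update: push D(real) toward 1 and D(fake) toward 0
    real, z = sample_real(n), rng.normal(size=(n, 1))
    fake = G(z)
    d_real, d_fake = D(real), D(fake)
    # Hand-derived binary cross-entropy gradients for logistic regression
    D.w -= lr * (np.mean((d_real - 1) * real) + np.mean(d_fake * fake))
    D.b -= lr * (np.mean(d_real - 1) + np.mean(d_fake))

    # Generator update: try to fool the discriminator (push D(fake) toward 1)
    z = rng.normal(size=(n, 1))
    fake = G(z)
    gx = (D(fake) - 1) * D.w          # dLoss/dx for the generator's samples
    G.w -= lr * np.mean(gx * z)       # chain rule through x = w*z + b
    G.b -= lr * np.mean(gx)

gen_mean = float(G(rng.normal(size=(2000, 1))).mean())
print(f"generated mean ≈ {gen_mean:.2f} (real mean 4.0)")
```

After training, the generator's output distribution has drifted from its starting mean of 0 toward the real data's mean, which is the "competitive" dynamic in miniature: each discriminator improvement gives the generator a better gradient to follow.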

In some AI art creation techniques, style transfer models are employed to apply the style of one image onto the content of another. This results in images that have the content of one image but the artistic style of another.
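One common way style-transfer methods make "style" and "content" concrete is through losses computed on CNN feature maps: the content loss compares feature maps directly, while the style loss compares their Gram matrices (channel-to-channel correlations). The sketch below uses random arrays as stand-ins for real CNN activations, and the alpha/beta weighting is an illustrative choice, not a canonical value.

```python
import numpy as np

def gram_matrix(features):
    """Style representation: correlations between channels of a feature map.
    features has shape (channels, height, width)."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)

def content_loss(gen, content):
    # Generated image should reproduce the content image's features
    return np.mean((gen - content) ** 2)

def style_loss(gen, style):
    # Generated image should reproduce the style image's Gram matrix
    return np.mean((gram_matrix(gen) - gram_matrix(style)) ** 2)

rng = np.random.default_rng(0)
# Random stand-ins for CNN activations of the generated, content, and style images
gen = rng.normal(size=(8, 16, 16))
content = rng.normal(size=(8, 16, 16))
style = rng.normal(size=(8, 16, 16))

# The total loss a style-transfer optimizer would minimize w.r.t. `gen`
alpha, beta = 1.0, 100.0  # relative weight of content vs. style
total = alpha * content_loss(gen, content) + beta * style_loss(gen, style)
print(f"content={content_loss(gen, content):.3f} "
      f"style={style_loss(gen, style):.3f} total={total:.3f}")
```

In an actual style-transfer pipeline, `gen` would be iteratively updated (by gradient descent) to drive this combined loss down, yielding an image with one picture's content and another's style.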

Artists and developers often fine-tune the AI model to achieve specific artistic effects or to control the output more precisely. This can involve adjusting parameters, altering the model architecture, or using different loss functions during training.

Once the AI model is trained and fine-tuned, it can be used to generate new art images. Users can input specific parameters or constraints to influence the style and content of the generated images. The generated images may undergo post-processing to enhance their visual quality or apply additional artistic effects. Not all generated images are equally appealing or artistic. Humans typically evaluate the generated images and select the ones that meet the desired artistic criteria.

AI art creation can take various forms, such as generating entirely new art, transforming photos into artwork, or creating art that mimics the style of famous painters. It’s important to note that the quality and creativity of AI art can vary significantly based on the sophistication of the AI model, the size and quality of the training dataset, and the expertise of the individuals involved in the process.

Pika: Text-to-Video Magic


What you’ll learn from this video:
– How to get started with Pika and join its Discord server.
– The basics of creating and animating videos in Pika.
– Pro tips on aspect ratios, camera movement, and more.
– How to make longer videos by stitching together short clips.