AI art is 2022’s hottest trend, driven by models like DALL-E and Stable Diffusion that can generate eerily realistic images from text prompts. Stable Diffusion 2 has now been officially released, bringing several improvements, though users say it has been "nerfed" in other respects.
Stable Diffusion 2’s biggest improvements have been neatly summarized by Stability AI: in short, you can expect more accurate interpretation of text prompts and more realistic images. The text-to-image models are trained with a new text encoder (OpenCLIP), and they can output images at both 512×512 and 768×768 resolution.
The companion models have improved as well. The upscaler can now produce much sharper high-resolution images, and the depth-to-image model can generate new images guided by both a text prompt and the structure of an existing image. There’s also an inpainting model that can swap out selected parts of an image to produce a brand-new result.
However, the update has drawbacks. Users have complained that the new version makes it harder to generate NSFW content, as well as art that imitates a specific artist’s style, leading some to claim the model has been "nerfed." Given the heavy criticism AI art has drawn for its ability to produce convincing fake images, it wouldn’t be surprising if the model deliberately steers away from outputs that could cause trouble.
If you want to try Stable Diffusion 2 yourself, the code and model details are available on GitHub.