When the first AI-generated images started cropping up online, many looked like surrealist paintings, depicting humans with extra fingers. These immediately made Kamb think of morphogenesis: “It smelled like a failure you’d expect from a [bottom-up] system,” he said.

AI researchers knew by that point that diffusion models take a couple of technical shortcuts when generating images. The first is known as locality: They pay attention to only a single group, or “patch,” of pixels at a time. The second is a strict rule the models follow: If you shift an input image by a couple of pixels in any direction, the system automatically makes the same shift in the image it generates. This feature, called translational equivariance, is the model’s way of preserving coherent structure; without it, it’s much more difficult to create realistic images.
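For intuition, here is a toy sketch (not from the paper) of both properties at once: a “denoiser” that replaces each pixel with the mean of its local patch. Because each output pixel depends only on a small neighborhood, it is local; because the same rule applies at every position, shifting the input shifts the output identically.

```python
import numpy as np

def local_denoise(img, kernel_size=3):
    """Toy local operation: each output pixel is the mean of its patch.
    Locality: the output depends only on a small neighborhood."""
    pad = kernel_size // 2
    padded = np.pad(img, pad, mode="wrap")  # wrap edges so shifts are exact
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + kernel_size, j:j + kernel_size].mean()
    return out

rng = np.random.default_rng(0)
img = rng.random((8, 8))

# Translational equivariance: shifting the input by (2, 1) pixels,
# then denoising, gives the same result as denoising first and
# shifting the output by (2, 1).
shifted_in = np.roll(img, shift=(2, 1), axis=(0, 1))
out_then_shift = np.roll(local_denoise(img), shift=(2, 1), axis=(0, 1))
shift_then_out = local_denoise(shifted_in)
print(np.allclose(out_then_shift, shift_then_out))  # True
```

Real diffusion models use learned convolutional layers rather than a fixed mean filter, but convolutions have the same two properties, which is why the shortcuts arise in the first place.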

In part because of these features, diffusion models don’t pay any attention to where a particular patch will fit into the final image. They just focus on generating one patch at a time and then automatically fit them into place using a mathematical model known as a score function, which can be thought of as a digital Turing pattern.
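A rough way to picture this assembly process, again as a hypothetical toy rather than any model’s actual score function: compute a purely local “score” for each patch, then update every pixel along the score assembled from the patches that contain it. The `patch_score` rule below (nudging each patch toward a target mean) is invented purely for illustration.

```python
import numpy as np

def patch_score(patch, target=0.5):
    # Hypothetical local score: pushes the patch toward a target mean.
    return target - patch.mean()

def global_score(img, k=3):
    """Assemble a full-image score from purely local patch scores.
    Each pixel's update depends only on its own small patch."""
    pad = k // 2
    padded = np.pad(img, pad, mode="wrap")
    score = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            score[i, j] = patch_score(padded[i:i + k, j:j + k])
    return score

# One denoising step: nudge the noisy image along the assembled score.
rng = np.random.default_rng(1)
noisy = rng.random((8, 8))
step = 0.5
denoised = noisy + step * global_score(noisy)
```

Iterating such steps is the essence of score-based denoising: no patch “knows” where it sits in the final image, yet the pointwise updates collectively pull the whole image toward coherent structure.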

Researchers long regarded locality and equivariance as mere limitations of the denoising process, technical quirks that prevented diffusion models from creating perfect replicas of images. They didn’t associate them with creativity, which was seen as a higher-order phenomenon.

They were in for another surprise.

Kamb started his graduate work in 2022 in the lab of Surya Ganguli, a physicist at Stanford who also has appointments in neurobiology and electrical engineering. OpenAI released ChatGPT the same year, causing a surge of interest in the field now known as generative AI. As tech developers worked on building ever-more-powerful models, many academics remained fixated on understanding the inner workings of these systems.


Mason Kamb (pictured) started his graduate work in 2022 in the lab of Surya Ganguli.

Photograph: Charles Yang


Surya Ganguli is a physicist at Stanford University.

To that end, Kamb eventually developed a hypothesis that locality and equivariance lead to creativity. That raised a tantalizing experimental possibility: If he could devise a system to do nothing but optimize for locality and equivariance, it should then behave like a diffusion model. This experiment was at the heart of his new paper, which he wrote with Ganguli as his coauthor.

Kamb and Ganguli call their system the equivariant local score (ELS) machine. It is not a trained diffusion model, but rather a set of equations that analytically predicts the composition of denoised images based solely on the mechanics of locality and equivariance. They then took a series of images that had been converted to digital noise and ran them through both the ELS machine and a number of powerful trained diffusion models built on standard architectures, including ResNets and UNets.
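To convey the flavor of such an analytical construction, here is a loose sketch, emphatically not the paper’s actual equations: predict each pixel directly from training patches, pooled across all spatial positions (equivariance) and matched only on a small local window (locality). All function names, the Gaussian weighting, and the test data below are illustrative assumptions.

```python
import numpy as np

def els_denoise(noisy, train_imgs, k=3, sigma=0.3):
    """Loose, hypothetical sketch of an equivariant-local estimator:
    each output pixel is a similarity-weighted average of training-patch
    centers, with patches gathered from every position of every image."""
    pad = k // 2
    # Equivariance: pool patches from ALL locations, ignoring position.
    patches = []
    for t in train_imgs:
        tp = np.pad(t, pad, mode="wrap")
        h, w = t.shape
        for i in range(h):
            for j in range(w):
                patches.append(tp[i:i + k, j:j + k])
    patches = np.array(patches)          # (P, k, k)
    centers = patches[:, pad, pad]       # center pixel of each patch

    out = np.zeros_like(noisy, dtype=float)
    qpad = np.pad(noisy, pad, mode="wrap")
    h, w = noisy.shape
    for i in range(h):
        for j in range(w):
            q = qpad[i:i + k, j:j + k]
            # Locality: match on the k-by-k window only.
            d2 = ((patches - q) ** 2).sum(axis=(1, 2))
            wts = np.exp(-d2 / (2 * sigma ** 2))
            out[i, j] = (wts * centers).sum() / wts.sum()
    return out

# Illustrative check: denoising a corrupted training image.
rng = np.random.default_rng(2)
train = np.tile(np.array([0.0, 1.0]), (8, 4))  # vertical stripes
noisy = train + 0.1 * rng.standard_normal(train.shape)
cleaned = els_denoise(noisy, [train])
```

Because position is discarded when pooling patches, an estimator like this can also recombine local structure in ways no training image contains, which is the intuition behind linking these constraints to creativity.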

The results were “shocking,” Ganguli said: Across the board, the ELS machine matched the outputs of the trained diffusion models with an average accuracy of 90 percent—a result that’s “unheard of in machine learning,” he added.


Source: Wired.

