Nvidia's deep learning technologies continue to do wonderful and weird things. Just a few weeks ago we saw how the company can use AI to automatically match voice lines to 3D animated faces. This is the kind of cool tech that can help people create great things with ease, or, in the case of Nvidia's latest unveiling, potentially horrible things, but still with ease.
We first saw Nvidia's GauGAN a few years back, when it was demonstrated turning basic doodles into photorealistic images with just a few clicks. It's pretty neat stuff, and definitely worth playing with. Now GauGAN2 is out, and it doesn't even need your sketches to make highly detailed landscape images.
Instead, the deep learning model can turn text into surprisingly complex images. It's been trained on 10 million landscape images, so it seems to know what it's doing. Nvidia's example shows the image changing as more text is added, implying that changes are easily made on the fly.
GauGAN2 works by combining the tech we saw in GauGAN, which turns shapes into images, with text-to-image generation. This means a mix of drawings and words can be used to create these images. Users can start with a general statement to get a broader image, then go in and alter the little details by hand.
If you want to play with some of this tech, you can try it out on Nvidia's interactive AI demos, though we had trouble getting it to work. Or, if you have an Nvidia RTX GPU, you can download Nvidia Canvas for free. It offers a nice taste of painting with AI, and it's impressive to see it work in real time with your own sketches.
As for this technology's implications for games, deep learning of almost any kind is likely to cut down on development time. Developers could mock up test scenes in seconds. Haven't quite figured out how the lookout from the cliff of your alien world should look, or need a reference image you can't find online? It has plenty of practical uses for anyone looking to create.
Though personally, I'm hoping someone works this tech into a cool pseudo-text adventure where you write the visuals into life. It could even tie in with Nvidia's planned AI haptics somehow, to be even more wonderful. Or disturbing.