Currently, you can download all of the iterations of a single image. To do this, click into an image, click Download Single Image, and then click Download All Iterations. This makes it easy to compile a video showing the progress of training.
We do not have the functionality to download every image from every iteration at this time.
In the Creative Morph process, the Inspirations control the shapes and contours of the results. This means that the first results begin very similarly to the Inspirations and slowly morph into the Influences. As training continues, these shapes often lose their form or their contours become more obscured.
In the Freeform process, the process interprets the Inspirations and does its best to replicate their subject matter. This means that every shape you input will affect the shape of the amalgamated result.
In the Creative Morph and Style Transfer processes, the Influences’ color, textural, and stylistic qualities will directly affect the resulting images.
In the Freeform process, the Inspirations directly affect every aspect of the newly generated results, including color.
The process learns from a blank slate every time you start training a new project. It should take just a few minutes to get the first few results, but it takes significantly longer for the process to produce truly intriguing results. We will also always give a recommendation for training time, based on the many tests we’ve done.
Currently you are unable to see another artist’s collections and results, but we’re working on this! We want to build a community in which you can share your collections and results.
For now, we would love for you to share your results on Slack or in the Explore feed.
Sometimes the server can’t pull the image down from the cloud; to fix this, just refresh the page! If it keeps happening, there may be an issue with our server or your internet connection.
Your results have likely converged, meaning training has reached a point where further iterations produce little noticeable change.
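One rough way to picture convergence (this is a toy sketch, not Playform’s actual criterion) is to compare consecutive snapshots: once the average per-pixel difference between them falls below a small threshold, further training is changing very little.

```python
# Toy sketch of detecting convergence between training snapshots.
# Images are modeled as flat lists of grayscale pixel values in [0, 1].
# The metric and the threshold are illustrative assumptions, not
# Playform's actual convergence test.

def mean_abs_diff(snapshot_a, snapshot_b):
    """Average per-pixel difference between two snapshots."""
    return sum(abs(a - b) for a, b in zip(snapshot_a, snapshot_b)) / len(snapshot_a)

def has_converged(snapshots, threshold=0.01):
    """True if the last two snapshots differ by less than `threshold`."""
    if len(snapshots) < 2:
        return False
    return mean_abs_diff(snapshots[-2], snapshots[-1]) < threshold

early = [[0.1, 0.5, 0.9], [0.3, 0.6, 0.7]]          # still changing a lot
late = [[0.40, 0.61, 0.80], [0.401, 0.609, 0.800]]  # barely changing

print(has_converged(early))  # False: snapshots still differ noticeably
print(has_converged(late))   # True: changes are below the threshold
```

In practice you would compare many result images per snapshot, but the idea is the same: diminishing change between snapshots is the signal that more training is no longer buying you much.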
They are a process’s interpretation of the inputs given, meaning that their understanding of faces will be different than how we understand them. Different processes will have differing results on faces, with some processes specifically made for generating new faces. However, in order to focus on processes that can adapt to a wider range of inputs from artists, those specific ones are currently unavailable on Playform.
Traditionally, it takes dramatically longer to train a model to create larger results. Because of this, we have optimized the process to create 512px images so you can experiment more quickly. You can also upscale your images in the single-image view.
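As a back-of-envelope illustration of why larger results cost more (the exact scaling depends on the model, so treat the 4x figure as a lower bound, not a measured Playform benchmark): doubling the output resolution quadruples the pixel count, and per-image compute grows at least that fast.

```python
# Rough arithmetic: pixel count grows quadratically with side length,
# so per-image compute grows at least as fast. Illustrative only.

def pixel_count(side):
    return side * side

for side in (256, 512, 1024):
    ratio = pixel_count(side) / pixel_count(512)
    print(f"{side}px: {pixel_count(side):>9,} pixels "
          f"({ratio:.2f}x the pixels of 512px)")
```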
This will not challenge the Creative Morph process, so you will end up getting exactly the images you inputted. Since you do not want to spend resources without getting new results, we will alert you if you have redundant collections.
The more the better! The minimum is 30, and you can upload as many as 5000. The images should be at least 256 x 256 px in resolution for the 256px Freeform and at least 1024 x 1024 px for the Hi Res Freeform. We accept PNGs and JPGs. You can upload lower-resolution images, but the results will likely be degraded.
We find that a well-curated, large image set works really well. This image collection of Mark Zuckerberg photos is a good example. It has consistent cropping, and the eyes are "registered", meaning that in each image, the eyes are in the same place. However, the image collection would be better if it were higher resolution. The more consistent the images, the better the results.
You will get 50 resulting images per snapshot, regardless of the number of images you inputted during training. For the Freeform model, you can also generate more resulting images using Mix, which uses the trained model from your most recent snapshot.
This is called Mode Collapse, and it's a well-documented phenomenon that Generative Adversarial Networks (GANs) exhibit. Though it's a consequence of the mathematical properties of the AI model, we are looking into ways of decreasing its prevalence. It is more likely to happen if there is not enough variation in the inspiration images or if the model has been trained for a very long time. Let us know if you find ways of decreasing the amount of mode collapse in your results. To learn more, check out this technical blog post: https://aiden.nibali.org/blog/2017-01-18-mode-collapse-gans/
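As a toy numeric illustration (not a real GAN, and not how Playform measures anything), mode collapse can be pictured as a generator that ignores its random input and maps every noise sample to nearly the same output, so the diversity of its samples collapses toward zero:

```python
# Toy illustration of mode collapse: a "collapsed" generator ignores its
# noise input, so the spread (stdev) of its outputs is near zero, while a
# healthy generator's outputs stay diverse. Pure toy math, not a real GAN.
import random
import statistics

def healthy_generator(z):
    # Uses the noise, so different z values give different outputs.
    return 0.5 + 0.4 * z

def collapsed_generator(z):
    # Nearly ignores the noise: every sample lands on almost the same point.
    return 0.7 + 0.0001 * z

random.seed(0)
noise = [random.uniform(-1, 1) for _ in range(1000)]

healthy_spread = statistics.stdev(healthy_generator(z) for z in noise)
collapsed_spread = statistics.stdev(collapsed_generator(z) for z in noise)

print(f"healthy sample spread:   {healthy_spread:.4f}")
print(f"collapsed sample spread: {collapsed_spread:.6f}")  # near zero
```

This is also why low-variation inspiration sets make collapse more likely: if the training images themselves have little spread, the generator loses little by producing one repeated output.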
The results should look more and more like your input images as you train more snapshots/iterations. However, after a few hundred snapshots, additional training has very little effect, and the images may look largely the same from one snapshot to the next.