When generating a new person (or whatever it is that does not exist), how can you know that it isn't similar to any of the images that went into the training data set?
How likely is it that the generated face actually exists after all?
Some of the backgrounds are surprisingly consistent, but some are distorted in ways that can only be done by a deep NN that sort-of knows what it's doing. The fact that the face is (almost always) still completely believable seems like good evidence against overfitting.
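One way to actually test the overfitting worry is a nearest-neighbour search: embed the generated face and every training face with some feature extractor, and check whether the closest training image is suspiciously close. A minimal sketch, assuming you already have embeddings (the function name and the random stand-in data are illustrative, not part of any real pipeline):

```python
import numpy as np

def nearest_neighbor_distance(generated_emb, training_embs):
    """Cosine distance from a generated face to its closest training face.

    A distance near zero would suggest the generator memorized
    a training image; a large distance is evidence against it.
    """
    g = generated_emb / np.linalg.norm(generated_emb)
    t = training_embs / np.linalg.norm(training_embs, axis=1, keepdims=True)
    sims = t @ g              # cosine similarity to every training face
    return 1.0 - sims.max()   # distance to the single nearest neighbour

# Stand-ins for real embeddings: 70k training faces, one generated face.
rng = np.random.default_rng(0)
train = rng.normal(size=(70_000, 512))
gen = rng.normal(size=512)
print(nearest_neighbor_distance(gen, train))
```

With real embeddings you would run this for many generated samples and look at the distribution of distances, not a single value.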
The model has 18x512 = 9216 (roughly 9000) parameters, and each parameter is a 32-bit float.
Even assuming only 16 bits of randomness for each parameter (to keep the values small enough that the faces don't get too wild; the center at 0 is a pretty homogenized face), (2^16)^9000 = 2^144000 is a staggering number of permutations of faces.
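The arithmetic behind that count is easy to check directly with Python's arbitrary-precision integers (the parameter count and bit width are the assumptions from the comment above, not anything measured):

```python
# Back-of-the-envelope count of distinct latent codes, assuming each of
# the 18x512 = 9216 parameters carries 16 independent bits of randomness.
# The independence assumption is the contested part, not the arithmetic.
n_params = 18 * 512          # 9216
bits_per_param = 16
codes = 2 ** (bits_per_param * n_params)   # (2^16)^9216 = 2^147456

print(n_params)
print(len(str(codes)))       # tens of thousands of decimal digits
```

Even if correlations between parameters wiped out 99% of that entropy, the remaining count would still dwarf the 70k training images by an absurd margin.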
They will, of course, borrow in varying degrees from all of the 70k Flickr faces that power StyleGAN.
> Even assuming only 16 bits of randomness for each parameter
Well, this is the question: how do you know you can make that assumption? And even though you have ~9000 parameters, they could be highly correlated, so the effective number of independent degrees of freedom may be far smaller.
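The dependence worry can be illustrated with a toy experiment: if the latent vectors actually live on a low-dimensional subspace, PCA will reveal far fewer effective dimensions than the raw coordinate count. A sketch on synthetic data (real StyleGAN w-vectors would replace the synthetic samples; the 50-factor setup is just to make the effect visible):

```python
import numpy as np

# Synthetic latents: 512 coordinates, but generated from only 50
# underlying factors, so the raw dimension overstates the entropy.
rng = np.random.default_rng(1)
hidden = rng.normal(size=(10_000, 50))   # 50 true degrees of freedom
mixing = rng.normal(size=(50, 512))
samples = hidden @ mixing                # 512-dim vectors of rank ~50

# PCA via SVD: how many components capture 99% of the variance?
centered = samples - samples.mean(axis=0)
s = np.linalg.svd(centered, compute_uv=False)
var = s**2 / (s**2).sum()
effective_dims = int(np.searchsorted(np.cumsum(var), 0.99)) + 1
print(effective_dims)                    # near 50, nowhere near 512
```

Running the same analysis on sampled StyleGAN latents would give an empirical upper bound on how much the "16 bits per parameter" assumption overcounts.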