
I have a question -

When generating a new person (or whatever it is that does not exist) can you know that it isn’t like any of the images that went into the training data set?

How likely is it for it to actually exist after all?



Some of the backgrounds are surprisingly consistent, but some are distorted in ways that can only be done by a deep NN that sort-of knows what it's doing. The fact that the face is (almost always) still completely believable seems like good evidence against overfitting.


Just from a dozen refreshes I'm pretty sure Zuckerberg is in their training set.


The model's latent input has 18×512 = 9216 parameters (roughly 9000), all 32-bit floats.

Even assuming only 16 bits of randomness per parameter (to keep the values small, so you don't stray too far from the center near 0, which is a fairly homogenized face), (2^16)^9000 is an enormous number of permutations of faces.

They will, of course, borrow in varying degrees from all of the 70k Flickr faces that StyleGAN was trained on.
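A back-of-the-envelope sketch of that count, taking the thread's assumptions at face value (18×512 latent dimensions, 16 bits of usable randomness each; these figures come from the comments above, not from the StyleGAN paper):

```python
# Count of distinct latent codes under the commenter's assumptions.
# dims and bits_per_dim are the thread's numbers, not verified specs.
dims = 18 * 512          # 9216 latent dimensions
bits_per_dim = 16        # assumed usable entropy per dimension
total_codes = 2 ** (bits_per_dim * dims)  # (2^16)^9216 = 2^147456

# Exact digit count via arbitrary-precision integers
print(len(str(total_codes)))  # 44389 digits, i.e. ~10^44388
```

So even under a deliberately conservative entropy assumption, the nominal code space is around 10^44388 points, which is the combinatorial point being made, though (as the replies note) it says nothing about how much of that space is actually reachable.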


> Even assuming only 16 bits of randomness for each parameter

Well, this is the question. How do you know you can make that assumption? Also, even though you have ~9000 parameters, they could be highly dependent.
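A toy illustration of the dependence point: if the nominal dimensions are all driven by a handful of shared factors, the covariance of sampled codes has low rank, so the effective entropy is far below bits-per-dimension times dimension count. All numbers here (512 nominal dims, 8 true factors) are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 512 nominal latent dimensions, but every sample
# is a linear mix of only 8 independent Gaussian factors.
nominal_dims, true_factors, n_samples = 512, 8, 2000
mixing = rng.standard_normal((nominal_dims, true_factors))
samples = rng.standard_normal((n_samples, true_factors)) @ mixing.T

# The sample covariance exposes the dependence: its rank equals the
# number of independent factors, not the nominal dimensionality.
rank = np.linalg.matrix_rank(np.cov(samples.T))
print(rank)  # 8, far below 512
```

In that regime the "(2^16) per parameter" counting argument collapses: the number of genuinely distinct codes is governed by the 8 underlying factors, not the 512 coordinates.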


Could it by chance recreate an input image, though?

Is it not possible, or just incredibly unlikely?

I also wonder how similar to an original person it needs to be before you would believe it's them anyway.


You could use this argument to say that GANs can't overfit if you provide them with, say, 16 bits of randomness per parameter, which is patently not true.



