I appreciate the author acknowledging -- if belatedly -- the bias inherent in the training set:
> Training on photos of celebrities of Hollywood means that our model will be very good at generating photos of a predominantly white and attractive demographic... If we were to deploy this as an application, we would want to make sure that we have augmented our initial dataset to take into account the diversity of our users.
Could this system generate credible faces of people of color? If it has a "gender" axis, could it have a "melanin" adjustment axis? Or various ethnic axes?
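For what it's worth, such axes are usually found by latent-space arithmetic: encode examples with and without an attribute, take the difference of the mean latent codes, and slide along that direction. A minimal sketch of the idea, with made-up toy vectors standing in for real encoder outputs (the names and numbers here are hypothetical, not from the article):

```python
# Sketch of the "attribute axis" idea behind a gender/melanin slider.
# Assumes some encoder maps images to latent vectors; here latents are
# plain Python lists of floats, and the toy data below is invented.

def mean_vec(vectors):
    """Element-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def attribute_axis(with_attr, without_attr):
    """Latent direction for an attribute: mean(with) - mean(without)."""
    a, b = mean_vec(with_attr), mean_vec(without_attr)
    return [x - y for x, y in zip(a, b)]

def shift(z, axis, strength):
    """Move a latent code along the axis; decoding z' would render the edit."""
    return [zi + strength * ai for zi, ai in zip(z, axis)]

# Toy 3-D latents standing in for encoder outputs of two groups:
group_a = [[0.9, 0.1, 0.2], [0.8, 0.0, 0.3]]
group_b = [[0.1, 0.2, 0.1], [0.2, 0.1, 0.2]]
axis = attribute_axis(group_a, group_b)
edited = shift([0.5, 0.5, 0.5], axis, strength=1.0)
```

Of course, this only works if the training set actually contains enough examples on both ends of the axis -- which circles back to the author's point about dataset diversity.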