As mentioned in another comment, the problem is that "Open Source" does not necessarily cover every aspect of a model. Open code gives everyone access to the "source" of an application; it does not mean the information the code stores, when used, is also open to viewing.
In models, the training data (dataset) is frequently "closed", i.e., not open to viewing. That's simply the default when publishing models: you don't need the dataset to use the model. The weights or tensors may be "open" in the sense that we can see them, but they are of little use to inspect if we don't know the nature of the relationships between them.
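To make that concrete, here is a toy sketch (the layer names and shapes are made up for illustration) of what a published checkpoint amounts to: a mapping from names to arrays of numbers that anyone can enumerate, yet which reveal little on their own.

```python
import random

random.seed(0)

def tensor(rows, cols):
    """Toy stand-in for a weight tensor: a list of lists of floats."""
    return [[random.gauss(0, 1) for _ in range(cols)] for _ in range(rows)]

# A hypothetical released checkpoint. Real checkpoint formats are
# essentially this -- a mapping from layer names to numeric arrays,
# with no dataset and no training code attached.
checkpoint = {
    "embedding.weight": tensor(100, 16),
    "layer0.attention.weight": tensor(16, 16),
    "output.weight": tensor(16, 100),
}

# The weights are "open": every value can be enumerated and read...
for name, mat in checkpoint.items():
    values = [v for row in mat for v in row]
    print(f"{name}: shape=({len(mat)}, {len(mat[0])}), "
          f"mean={sum(values) / len(values):+.4f}")

# ...but without the dataset and the training code, the raw numbers
# say almost nothing about what the model learned or how to reuse it.
```

You can read every number, compute statistics over them, and still know nothing about where they came from, which is exactly the gap the rest of this comment is about.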
If we were able to work out the relationships between the tensors while the dataset remained closed, there might be a debate over whether certain uses of that extracted or "transferred" knowledge are allowed.
For a "model" to be fully "open", it must publish the data it was trained on and the code used to train it, and its tensors or weights must be unencrypted, with nothing preventing analysis of the relationships within them.