• chebra@mstdn.io · 7 months ago

    @vrighter @ylai
    That is a really bad analogy. If the “compilation” takes six months on a farm of 1000 GPUs and the results are random, then the dataset is basically worthless compared to the model. Datasets are, and always have been, easily available, but whoever invests that kind of effort in the training doesn’t want to let others use the resulting model as open source. That is exactly why we want open-source models, and not “openwashed” ones that are called “open” but restricted to non-commercial use, with no modifications and no redistribution allowed.