• stewsters@lemmy.world · 1 year ago

You can run smaller models locally, and they can get the job done, but they are not as good as the huge models that would not fit on your graphics card.

If you are technically adept and can run Python, you can try using this:

https://gpt4all.io/index.html
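
If you'd rather call it from Python directly, there is also a gpt4all pip package. A minimal sketch (the model filename here is just an example; substitute whichever model you want, and it downloads the weights on first run):

```python
from gpt4all import GPT4All

# Example model file; swap in any model you have downloaded.
model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # downloads on first run

with model.chat_session():
    reply = model.generate("Name three uses for a local LLM.", max_tokens=128)
    print(reply)
```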

It has a front end, and I can run queries against it in the same API format I would use to send them to OpenAI.
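
For example, if you enable the local API server in the GPT4All desktop app (it listens on localhost:4891 by default, I believe, but check your settings), you can point the standard OpenAI Python client at it. A rough sketch, with the model name as a placeholder:

```python
from openai import OpenAI

# Point the regular OpenAI client at the local GPT4All server.
# Port 4891 is the assumed default; the key is ignored locally
# but the client library requires one anyway.
client = OpenAI(base_url="http://localhost:4891/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="mistral-7b-instruct",  # placeholder; use a model you actually have loaded
    messages=[{"role": "user", "content": "Summarize this thread in one sentence."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```

The nice part is that any code you already wrote against OpenAI's chat completions endpoint should work unchanged, aside from swapping the base URL and model name.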