• sunzu2
    -19 points · 1 day ago

    Normies get AI slop, prosumers use local LLMs…

    Not sure about social media… Normie is allergic to reading anything beyond daddy’s propaganda slop. If it ain’t rage bait, he ain’t got time for it

    • @jim3692@discuss.online
      1 point · 11 hours ago

      So prosumers, running computers that aren’t optimized for AI workloads and limited to models that are typically inferior to the commercial ones, are wasting even more energy for even more slop?

      • sunzu2
        5 points · 11 hours ago

        That’s the price of privacy that I am willing to pay. As for electricity, I pay my bills at the consumer rate while subsidizing corporate parasites who pay lower rates and get state aid on top of it.

        • @jim3692@discuss.online
          1 point · 1 hour ago

          That’s the price of privacy I am currently paying.

          There was, however, a video from The Hated One that presents a different perspective on this. Maybe privacy is more environmentally friendly than we think.

          A lot of energy is wasted on data collection and analysis for advertising. Devices running modified firmware, like LineageOS and GrapheneOS, do not collect that data, which reduces the load on the analysis servers.

    • TheOneCurly
      11 points · 23 hours ago

      Home-grown slop is still slop. The lying machine can’t make anything else.

      • sunzu2
        3 points · 20 hours ago

        At least my idiocy ain’t training the enemy.

        Also, AI ain’t there to be correct. AI is there to help you get something done when you already mostly know the outcome.

        It can really turbocharge a Linux experience, for example.

        Also, local is way less censored and can be tweaked ;)

      • sunzu2
        2 points · 20 hours ago

        https://ollama.com/

        You can pick a model that fits your GPU’s VRAM. Works well on Apple silicon too. My favs right now are the Qwen3 series; prolly the best performance for a local single-GPU setup.

        Will work on CPU/RAM too, but slower.
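
        Rough rule of thumb for the “fits your GPU” part (my own numbers, nothing official): the weights take roughly params × bits ÷ 8 bytes at a given quantization, plus some overhead for context. Quick sketch:

        ```python
        # Back-of-envelope VRAM estimate for a quantized model.
        # Rule of thumb only: real usage varies with context length and runtime.
        def approx_vram_gb(params_b: float, bits: int = 4, overhead: float = 1.2) -> float:
            """params_b billion weights at `bits` per weight, plus ~20% for KV cache etc."""
            return params_b * bits / 8 * overhead

        # e.g. an 8B model at 4-bit quant: ~4.8 GB, so it fits an 8 GB card
        print(f"{approx_vram_gb(8):.1f} GB")
        ```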

        If you’re on Linux, I would put it in a Docker container. That might be too much for a first try, though; there are easier options, I think.
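
        Either way, Ollama listens on localhost:11434 by default, so you can script against it. Minimal sketch (the model tag is just an example; `ollama pull` whatever fits your card first):

        ```python
        import requests

        # Ollama serves a local HTTP API on port 11434 by default.
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={
                "model": "qwen3",  # example tag; use whatever you pulled
                "prompt": "Explain what chmod 755 does, briefly.",
                "stream": False,  # one JSON reply instead of a token stream
            },
            timeout=300,
        )
        print(resp.json()["response"])
        ```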

        • @Jakeroxs@sh.itjust.works
          3 points · 19 hours ago

          I use oobabooga; a few more options in the GGUF space than Ollama, but not as easy to use IMO. It does support an OpenAI API connection though, so you can plug other services into it; see the sketch below.
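
          A minimal sketch of that, assuming the webui was launched with its --api flag (OpenAI-compatible endpoint on port 5000 by default):

          ```python
          from openai import OpenAI

          # The key is ignored locally, but the client library insists on one.
          client = OpenAI(base_url="http://127.0.0.1:5000/v1", api_key="sk-local")

          reply = client.chat.completions.create(
              model="local-model",  # name is largely ignored; the loaded GGUF answers
              messages=[{"role": "user", "content": "What is GGUF, in one sentence?"}],
          )
          print(reply.choices[0].message.content)
          ```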