I think sometimes when we ask people something we’re not just seeking information. We’re also engaging with other humans. We’re connecting, signaling something, communicating something with the question, and so on. I use LLMs when I literally just want to know something, but I also try to remember the value of talking to other human beings as well.
You should assume that anything a chatbot says is far more likely to be false than human-written content, which makes it effectively useless for your stated purpose.