By learning I do not mean translating long sentences or anything like that; I mean things like: “Is this Japanese sentence correctly written? 雲の日が好きです” In this example it answered: "
Yes, the Japanese sentence “雲の日が好きです” is correctly written. It translates to “I like cloudy days” in English. Here’s a breakdown of the sentence:
- 雲 (くも/kumo): This means “cloud.”
- の (no): This is a possessive particle that indicates the relationship between “cloud” and “day” in this context.
- 日 (ひ/hi): This means “day.”
- が (ga): This is a subject marker particle that marks “cloudy day” as the subject of the sentence.
- 好きです (すきです/suki desu): This means “like” or “to like” and is a polite way of expressing one’s preference. The verb 好き (suki) means “to like,” and です (desu) is a polite copula used to indicate politeness and formality.
Therefore, the sentence “雲の日が好きです” correctly conveys the meaning “I like cloudy days” in Japanese.
"
Is this correct? Do you consider it reliable?
Sometimes I also ask questions like: "What function does ‘こと’ grammatically fulfill in this sentence? … "
In my experience, ChatGPT gives wrong or fabricated information when asked about the specifics of just about any subject.
For example, if you ask ChatGPT about the “Piano Puzzle” in Silent Hill 1, it will reply that the puzzle exists. Interestingly, when I asked it to recite the poem that accompanies this puzzle by its name alone, “A Tale of Birds Without A Voice,” I was surprised to see it recite the poem mostly correctly. However, it omitted the lines at the end of the poem about the Crow: it provided the answer to the riddle in the correct order, but left the Crow out.
It also invented steps to solve the puzzle. It claimed the piano keys are numbered and simply said to “press the key corresponding to [bird name],” which is no help with the actual solution. The piano keys are not, in fact, numbered; the puzzle requires the player to find which keys make no sound and press those silent keys in the order of the birds described in the poem.
Next, I asked ChatGPT to analyze the specific language used in the riddle poem. The poem requires the reader to have prior knowledge of the color of each bird’s feathers to work out which piano key each bird refers to. I asked ChatGPT whether requiring this prior knowledge was bad riddle design, and it responded with “It can be a drawback, here are points to consider.” It then listed four points (accessibility, fairness, clarity and guidance, immersion and engagement) and explained how each relates to puzzle design in a generic sense, not specific to this puzzle. It concluded in classic AI fashion by rewording its first paragraph and then saying “Ideally, puzzles should provide players with the necessary information or clues within the game world to encourage exploration, deduction, and problem-solving. This allows for a more inclusive and engaging puzzle experience,” which is basically buzzword diarrhea.
All in all, I think that while ChatGPT is a great tool for creative exercises and for suggestions in worldbuilding and other creative pursuits, anything that needs to be factual or correct should not rely on it. It simply provides too much wrong information too often. And while it sometimes gets things right, a lie is best told between two truths.
Lol, did you make this reply with ChatGPT, or is “All in all” just a nod toward text generated by it? Either way, your comment is hilarious XD
I didn’t use it to write the comment, nor was the phrasing intentional, but I will concede that it does sound like something ChatGPT might say. The main point I was making still stands, though: ChatGPT is really bad as a reliable source of information, and anyone expecting it to give factual answers about everything they ask it is a moron.