DO NOT POST CHATGPT

 

cinefactus

Censor

  • Censor

  • Patronus

Location:
litore aureo
Don't think you are being smart. ChatGPT is still wrong. Comparing our answers with ChatGPT is incredibly insulting. You WILL be banned.

The SOLE exception to this rule is if you post a picture of a tattoo you have made using ChatGPT. We can then use you as an awful warning.

The problem with ChatGPT is that it lies. A lawyer is being threatened with sanctions after he did his research on ChatGPT.

This example is about as simple as it gets. Someone in their first week of Latin could point out that the sentence is wrong.
cinefactus dixit:
What does puer puellae amat mean?
ChatGPT dixit:
The phrase "puer puellae amat" translates to "The boy loves the girl" in Latin.
ChatGPT is completely untrustworthy with comparisons. The second sentence is completely wrong.
cinefactus dixit:
What is the difference between "puella rosam amat" and "puella rosa amat"
ChatGPT dixit:
So, the main difference lies in the case of the noun "rosa."
This distinction affects the interpretation of the sentence, specifying whether the girl loves a specific rose (first sentence) or any rose in general (second sentence).
and wrong again...
cinefactus dixit:
What does, "puer puellis amant" mean?
ChatGPT dixit:
To make it grammatically correct and have proper agreement, the phrase should be "pueri puellis amant."
Actually to be grammatically correct it should be pueri puellas amant.
 
 

Noctua

Member

Location:
Saxonia Inferior, Germania
I see two main issues:

1. The answers provided by ChatGPT always look convincing. You have to examine them very carefully to be sure there are no mistakes (or rather: to find the mistakes that are almost always there). And if you need to do that: why ask ChatGPT at all?

2. ChatGPT does not reveal its sources, so it's impossible to tell whether it bases its replies on reliable information. It's just a mix of words arranged in a way that looks like an answer.

However: whenever you point out a mistake, it reacts very politely :love: .... and continues in the same style.
 

kizolk

Civis Illustris

  • Civis Illustris

Location:
Bourgogne, France
Apparently some people have started using ChatGPT for writing things like rough outlines of presentations they have to give on a subject they're experts in: not for the production of actual content, but rather for its organization, i.e. something whose "validity" they can check by themselves. This could be an interesting use case, but at this point in time, trusting ChatGPT to teach you anything new is indeed a leap of faith.
 

Pacifica

grammaticissima

  • Aedilis

Location:
Belgium
cinefactus dixit:
Actually to be grammatically correct it should be pueri puellas amant.
In that sentence it at least corrected one of the two mistakes!


But yeah, generally speaking, I wouldn't advise anyone to trust ChatGPT overmuch regarding anything, and certainly not for Latin. ChatGPT can perhaps be helpful in some cases if used critically, and is probably right sometimes on certain topics, but the examples above suggest it has a poor knowledge of Latin.

If anyone bombards us with its suggestions, it's just a waste of everyone's time.
 
 

cinefactus

Censor

  • Censor

  • Patronus

Location:
litore aureo
Pacifica dixit:
But yeah, generally speaking, I wouldn't advise anyone to trust ChatGPT overmuch regarding anything, and certainly not for Latin.
It is not just Latin.

cinefactus dixit:
Can suxamethonium be used for burns debridement?
ChatGPT dixit:
Yes, suxamethonium (succinylcholine) can be used for burns debridement in certain cases....
...During burns debridement, suxamethonium may be administered intravenously to relax the patient's muscles and facilitate better access to the burn area. This allows the medical team to perform the debridement more effectively and efficiently.
The first statement would be an instant fail in an anaesthetic exam, or an instant cardiac arrest in actual practice. The second is just complete bullshit.

It lies so plausibly that unless you know the subject well, you would never pick up on the mistakes.
 

Pacifica

grammaticissima

  • Aedilis

Location:
Belgium
I guess many people can try this for themselves: ask the AI precise questions about a topic you know very well (be it your profession or your favorite TV series or video game), and see what happens. For instance, if you ask it something like "what happened right after character X said this and that to character Y in [insert your favorite TV series]", at least if the question is obscure enough (i.e. it isn't about a super-duper famous passage of a super-duper famous series), chances are the AI will just make up a story.
 

Pacifica

grammaticissima

  • Aedilis

Location:
Belgium
Pacifica dixit:
I guess many people can try this for themselves: ask the AI precise questions about a topic you know very well (be it your profession or your favorite TV series or video game), and see what happens. For instance, if you ask it something like "what happened right after character X said this and that to character Y in [insert your favorite TV series]", at least if the question is obscure enough (i.e. it isn't about a super-duper famous passage of a super-duper famous series), chances are the AI will just make up a story.
... and then imagine how it would feel if you were helping someone with a project on the topic you know well, and they kept questioning your answers/advice with "but ChatGPT said that blablabla..." XD
 