Translation: I hoped that he had told the truth.
Wouldn't the correct Portuguese version of "I hoped he had told the truth" be "Eu esperei que ele tivesse dito a verdade"? And wouldn't the correct English translation of "Eu esperava que ele tivesse dito a verdade" accordingly be "I USED TO hope that he had told the truth"?
As a native speaker of English, I think "spoken the truth" should absolutely be accepted. "To speak the truth" is a set phrase in English, just as "to tell the truth" is. There is a slight difference in meaning, but it should not affect the acceptance of one or the other as a correct translation: "to tell the truth" is more about clearing up a discrepancy, while "to speak the truth" is more about giving an honest assessment or judgement of an action or situation.
The English translation sounds wrong to me. When I see "had told" or "had given", I expect the recipient of the action to immediately follow (e.g., "had told you" or "had given them"). However, removing the "had" from the answer also sounds fine to me (e.g., "I hoped he told the truth"). Does this "had" structure sound strange to anyone else? I'm a native American English speaker.
Without knowing which letter you got wrong, the algorithm that scores these responses is limited in scope. But there is an argument to be made for this result. Consider this: if the target language is English and the right sentence is "He put on his shirt," and the student mistranslates it as "He put on his skirt," I think any grading algorithm should mark that wrong. And there's a worse one-letter example, I think you know the one. Just saying...
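The shirt/skirt point can be made concrete: pure edit distance can't tell a harmless typo from a different real word, so a typo-tolerant grader also needs a word list. Here is a rough sketch of that idea; the function names and the toy lexicon are my own illustrations, not anything from Duolingo's actual code.

```python
# Sketch: why a one-letter slip can't always be forgiven.
# Edit distance alone treats "skirt" vs "shirt" the same as
# "shirrt" vs "shirt", so we also check a dictionary.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Toy lexicon, just for the example.
ENGLISH_WORDS = {"he", "put", "on", "his", "shirt", "skirt"}

def is_forgivable_typo(answer_word: str, expected_word: str) -> bool:
    """One edit away AND not itself a real word -> probably a typo."""
    return (levenshtein(answer_word, expected_word) == 1
            and answer_word not in ENGLISH_WORDS)

print(is_forgivable_typo("shirrt", "shirt"))  # True: not a real word
print(is_forgivable_typo("skirt", "shirt"))   # False: a different real word
```

So "shirrt" can be waved through, but "skirt" has to be marked wrong, even though both are exactly one letter away from the expected answer.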
Verdada instead of verdade. There are actually many minor slips like this in the later lessons, including dicidir vs. decidir, esperava vs. esparava, haverem vs. haveram, and homens vs. homems.
You see, I get what you are saying, and it totally makes sense when you say "os mulheres vs as mulheres, or esta vs este, cadeira vs carteira, but not carteira vs cartera." Yes, sometimes changing one letter produces a completely different word, and I agree that those cases should be marked wrong.
Actually, your criticism is that the algorithm employed by Duo is not robust. It accepts some extremely clear typos, but not many, and it should accept more. Let me explain this without being too technical. Any given sentence has to be read in full, then compared against a range of standard sentences, including mistyped ones. After a process of comparison, it calculates a probability of correctness through a model (a sigmoid function) and votes right or wrong. This is how the algorithm is trained. If it's not trained enough, the result is rejecting perfectly acceptable responses like the ones you listed above. The solution? More training on a wider selection of deviations, and users reporting the rejections as wrong, since those reports get fed back into the model.
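The compare-then-sigmoid-then-vote process described above can be sketched roughly like this. To be clear, this is a minimal illustration of the general technique, not Duolingo's actual model: the similarity measure, weight, bias, and 0.5 threshold are all assumptions I picked for the demo.

```python
# Sketch: score a learner's response against the accepted answers,
# squash the best similarity through a sigmoid, and vote right/wrong.
import math
from difflib import SequenceMatcher

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def accept_probability(response: str, accepted: list[str],
                       weight: float = 10.0, bias: float = -8.5) -> float:
    """Best similarity to any accepted answer, squashed to (0, 1).

    weight and bias are illustrative; in a trained model they would
    be learned from graded examples (including reported rejections).
    """
    best = max(SequenceMatcher(None, response.lower(), a.lower()).ratio()
               for a in accepted)
    return sigmoid(weight * best + bias)

accepted = ["I hoped that he had told the truth",
            "I hoped he had told the truth"]

# A vote of "right" means the probability clears 0.5.
print(accept_probability("I hoped he had told the truth", accepted) > 0.5)
print(accept_probability("He put on his skirt", accepted) > 0.5)
```

Training, in this picture, just means adjusting the weight and bias (and the set of comparison sentences) so the vote agrees with human graders more often; user reports of wrong rejections supply exactly those correction examples.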
Sadly, it is not Duo. It is the contributors who approve or deny the acceptable range of right and wrong answers, and there are not enough contributors in this course to make a difference. Sadly. I just like to report it and write it here in case anyone else has a similar problem.