Suggestion: use A/B testing, backed by a more sophisticated wiki model, to improve the quality of translations in Immersion.
The idea is that the translation you see is not necessarily the most recent one.
Specifically, if the first translation is A and it is then modified to produce B, then A/B testing is used to determine whether A or B is better. This is done by having some users see A as the most recent translation while other users see B as the most recent translation.
Of course, things get more complicated when A or B is then modified further, but I think it's possible to come up with a reasonable algorithm to handle all this variation by controlling which translation is shown as the most recent one in a more sophisticated way.
Moreover, the previous translations shown in the history need not even have occurred in that order; some might actually be more recent than the one shown as the most recent.
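A minimal sketch of the assignment step described above, in Python. All names here (the function, the id parameters) are hypothetical; the key idea is just to hash the user and sentence ids so that each user consistently sees the same candidate as "the most recent translation" while the population is split roughly evenly between candidates:

```python
import hashlib

def assigned_variant(user_id: str, sentence_id: str, variants: list) -> str:
    """Deterministically pick which candidate translation this user sees.

    Hashing user_id together with sentence_id means the same user always
    sees the same variant for a given sentence, but different users are
    spread roughly evenly across the variants.
    """
    digest = hashlib.sha256(f"{user_id}:{sentence_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# Some users are shown translation A as "most recent", others translation B.
shown = assigned_variant("user42", "sentence7", ["A", "B"])
```

One could then compare, say, how often each variant gets re-edited or flagged, and promote the one that survives better.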
What you propose seems a bit complicated to me, but I do agree the translation grading could be improved. Sometimes you have a translation A, then it is changed to B, then to C = A (worst case scenario: rewritten from scratch), then to D = B again... It happens when there really are 2 (or more) ways to translate something, and people do not seem to agree, so they change it over and over again. Maybe we could come up with a system where you cannot have, let's say, more than 5 translations; after that, you have to vote for one of the existing ones?...
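The cap-then-vote idea in the reply could look something like this. This is only a sketch under assumed names (`Sentence`, `submit`, `best` are all hypothetical): resubmitting an existing translation counts as a vote for it, and once the cap is reached, brand-new variants are no longer accepted:

```python
MAX_VARIANTS = 5  # assumed cap from the reply ("more than 5 translations")

class Sentence:
    def __init__(self):
        self.variants = []  # distinct translations, oldest first
        self.votes = {}     # translation text -> vote count

    def submit(self, text: str) -> None:
        """Add a new translation, or count a vote for an existing one."""
        if text in self.variants:
            # Re-submitting A after B (the A -> B -> A loop) becomes a vote
            # for A instead of another edit.
            self.votes[text] += 1
        elif len(self.variants) < MAX_VARIANTS:
            self.variants.append(text)
            self.votes[text] = 1
        # Otherwise the cap is reached: new variants are rejected and
        # contributors must vote for one of the existing translations.

    def best(self) -> str:
        """The translation to display: the one with the most votes."""
        return max(self.variants, key=lambda t: self.votes[t])

s = Sentence()
s.submit("A")
s.submit("B")
s.submit("A")  # the disagreement loop turns into a vote for A
```

This would stop the endless A/B/A/B edit wars the reply describes, since the history stays bounded and disagreement surfaces as a vote count instead.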