The way I've learned my second language has made me process it very differently from my native one.
So I'm doing the German reverse tree, and I came across the sentence "We did not play in the park." That's an easy one to translate, but the audio sounded different: what was written as "We did not play in the park" sounded like "We do not play in the park." I typed in "Wir spielen nicht im Park", the present tense version (and got it wrong).
Now I think this is interesting, because even though I could clearly see that it said "did" and not "do", I went with what I heard instead of what I saw. As a native speaker of English I've of course had tons of speaking/listening practice; in fact, that's how I learned my first words. So it seems I've developed an audio preference: if I'm hearing something and seeing the words at the same time, my brain will focus on the audio and just use the written words as a backup.
And this is probably why I (and many German learners) have a lot of trouble listening to German without subtitles. My learning has been mostly reading/writing based, and although I do hear every new word I learn, I see it written first, which may be giving me a visual preference for German. I can definitely see this happening: when I try watching German-dubbed TV shows and the subtitles don't match the audio, I get very confused, while if the same thing happens in English I barely even notice.
So is there anything to this observation or is there a better explanation?
I don't know if there have been any studies or anything formal, but I find myself frequently trying to picture how a word is printed/spelled in French so that I can understand what I'm hearing. Seeing the word (even in my mind's eye) helps me understand what it is I'm listening to.
I think you've hit on a very important point here: nearly all our second-language learning now is primarily visual, based on writing/reading plus grammar 'rules', so that conversational language (which is what most of us want, and which is how we all learnt our mother tongue) is very much demoted to second place, or not taught at all. Which means that when we hear speech, we try to translate it into writing in order to understand it, instead of just simply understanding! Sorry, this is a bit muddled, but it was interesting to find someone else having the same experience.
It's an interesting thing to think about. I learned a good bit of my Spanish vocabulary from listening/speaking (usually with taxi drivers in Costa Rica), but most of my grammar in written form on Duolingo. I definitely find that the vocabulary "sticks" better when I learn from listening/speaking, but I still have trouble hearing words correctly, even if someone's carefully teaching a word to me. When you're a small child learning your first language, your brain programs itself to "hear" some sounds and distinguish them from others, while lumping two or more sounds together: R and L are the same sound to Japanese speakers, and most English speakers are only minimally aware that there are two sounds written as TH. I find that I have a hard time hearing the difference between final As and Os in Spanish (the typical feminine and masculine endings), I suspect because the distinction is seldom important in English.
We learn our first language before we learn to read or write. We on Duolingo are also people who are comfortable online (as witnessed by the fact that we're here), which means that in the past few years more of our lives have been written rather than spoken. Our native languages are thoroughly enough ingrained to resist the switch from spoken to written, but a new language will probably adapt to our current written-word-dominated lives.
Sorry, this isn't the most coherent response. I guess the summation is that I wonder whether it's the medium of instruction (written vs. spoken) or the life stage we're learning at, no longer pre-literate with the brain plasticity of childhood.
I think you uncovered an excellent point!
When we learn our first, native language (or languages, when they are two or more) as babies, we do it through the sounds and gestures of our mother, father and siblings, mainly.
As someone else said above, during the first 10-18 months of our lives, our brains "decide" which sounds are important for this first, native language (based on what the baby hears constantly from its mother/father/siblings when they are speaking). The brain becomes really good at "separating" those crucial sounds (there are special areas in the brain that compute exactly that), whereas "the rest" of the sounds, not used or not that important for oral communication, are processed mostly by other areas of our brains.
When we are adults, and start learning a new language here on Duolingo, we do it mainly through visual data. Reading and writing.
We can become really good at reading and at writing (especially reading) with enough practice. I have experienced it myself. Eighteen months ago I could not read, not even a single phrase, in any language (bar English and Spanish).
Now, eighteen months later, I can read almost any book in Italian, French, Portuguese and Catalan, almost as fast and comfortably as in English or Spanish. I can also write rather properly without much effort in those four languages (though I sometimes make errors when writing).
This is probably because the areas of our brains that allow this (the computation/processing of all the relevant visual data necessary for reading comprehension and writing production) are really efficient in adult brains.
But of those four "new" languages (Italian, French, Portuguese and Catalan), when listening, I understand Italian much better, and then Catalan, Portuguese and finally French (even if I can read and write equally well in all of them).
Why is there such a difference in my listening ability with respect to those four languages?
Well, first of all, 99% of the time I spend on those languages is here on Duolingo and reading books, so it is reading and writing. Sometimes I listen to the radio or watch videos in those four languages too, but not enough by any means.
So given this limited listening practice, the one I understand best is Italian, probably because its sounds are very similar to Spanish sounds (and they are two very phonetic languages).
Another interesting point is the structure of the language. I can also read German, though I have to make a much more conscious effort and I read more slowly than when reading French, Italian, Catalan or Portuguese.
When listening to German, I only understand (and not everything, by any means) if they speak really slowly and clearly (like the audiobook "Café in Berlin", for example).
So even though I feel the German sounds are maybe a bit easier for me than the French sounds, the completely different sentence structure makes it harder for me to understand spoken German than spoken French. So familiarity with the structure of the language is also a big factor, not only when you are reading but also when listening.
I also think that learning a language through sounds (listening to people, and talking to people) is probably much more direct and much faster than what we do here (essentially processing visual data).