How about treating the size of a feature vector as its ‘resolution’, in the same way as the number of pixels used to represent an image? The more you have, the closer you get to human perception (e.g. some number of megapixels for photographic accuracy).

For word vectors, one might be able to discover the threshold of vector size/complexity that corresponds to what humans use (and, of course, go beyond it).
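The analogy can be made concrete: in embedding models the vector size is just a hyperparameter, and shrinking it discards detail much as downsampling an image does. A minimal, hypothetical sketch (the words and values below are made up, and truncating a vector is only a crude stand-in for actually training at a lower dimension):

```python
import math

def truncate(vec, dim):
    """Keep only the first `dim` components, like lowering resolution."""
    return vec[:dim]

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 6-dimensional "embeddings" for two words (illustrative values only).
cat = [0.9, 0.1, 0.4, -0.2, 0.7, 0.05]
dog = [0.8, 0.2, 0.5, -0.1, 0.6, -0.3]

# Compare similarity at full "resolution" versus a heavily reduced one.
full = cosine(cat, dog)
low = cosine(truncate(cat, 2), truncate(dog, 2))
print(f"full-dim similarity: {full:.3f}, 2-dim similarity: {low:.3f}")
```

In real toolkits this knob appears directly, e.g. as the `vector_size` parameter when training word2vec-style embeddings; the open question in the note is where raising it stops adding perceptually meaningful detail.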

*

(Still hoping to go through the Stanford Course for Deep NLP.)

*

(New discovery: Andrew Ng, an important figure in machine learning, is offering a course specialisation on deep learning on Coursera, which is also available on YouTube.)