Thanks Doug.
It is funny that you should mention that. It is very hard to convince
people that just because words are somehow related, that does not tell
us how they are related. This is especially true when they are handed
the results of a shallow neural net that took a research team
considerable effort to produce.
You may already know this, but just be very careful. Embeddings are
useful, but people often think of them as detecting synonyms when they
really just encode contexts. For example, antonyms and words with
similar functions are often seen as similar. There are also issues
with terms that occur only sparsely in the corpus.
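If it helps, here is a minimal sketch of what I mean, assuming gensim
and a word2vec-format vector file (vectors.bin is a hypothetical path).
Antonym pairs often score nearly as high as synonym pairs, because both
appear in the same contexts:

    from gensim.models import KeyedVectors

    # Load any word2vec-format vector file (the path is hypothetical).
    vectors = KeyedVectors.load_word2vec_format("vectors.bin", binary=True)

    # Cosine similarity: antonyms often score almost as high as
    # synonyms, since both occur in the same contexts.
    print(vectors.similarity("hot", "warm"))  # near-synonyms
    print(vectors.similarity("hot", "cold"))  # antonyms, often comparably high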
Oh very cool. I will have to look into this more. This is something
up-and-coming, I take it?
Thanks,
~Ben
On Tue, Oct 30, 2018 at 4:36 PM Alexandre Rafalovitch wrote:
Simon Hughes' presentation from the just-finished Activate may be relevant:
https://www.slideshare.net/SimonHughes13/vectors-in-search-towards-more-semantic-matching
The video will be available in a couple of weeks, I am guessing, from
the LucidWorks channel.
Related repos:
*) https://github.com/DiceTechJobs/
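For anyone curious before the video is up, the rough idea in that talk
(as I understand it) is to quantize each dense vector into discrete
tokens so an ordinary inverted index can approximate nearest-neighbour
search. A minimal sketch with scikit-learn, not the actual repo code;
all names and parameters here are illustrative:

    import numpy as np
    from sklearn.cluster import KMeans

    # Toy document vectors; in practice these come from doc2vec, GloVe, etc.
    doc_vectors = np.random.rand(1000, 100)

    # Learn centroids so each vector can be mapped to cluster-id "words".
    kmeans = KMeans(n_clusters=64, random_state=0).fit(doc_vectors)

    def vector_to_tokens(vec, top_k=3):
        # Emit the ids of the top_k nearest centroids as indexable tokens.
        dists = np.linalg.norm(kmeans.cluster_centers_ - vec, axis=1)
        return ["cluster_%d" % c for c in np.argsort(dists)[:top_k]]

    # Index these tokens in a plain Solr text field; tokenize query
    # vectors the same way, so token overlap approximates vector similarity.
    print(vector_to_tokens(doc_vectors[0]))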
Hello all,
We came up with a fascinating question. For our corpora, we actually
have word2vec, doc2vec, and GloVe results. Is it possible to use these
datasets
within the search engine? If so, could you please point me to documentation
on how to get Solr to use them?
Thank you so much,
~Ben