
Commit a5ea4b4

add tf-idf user guide
1 parent 8c5a222 commit a5ea4b4

1 file changed: +79 -3 lines changed

docs/mllib-feature-extraction.md

Lines changed: 79 additions & 3 deletions
@@ -7,9 +7,87 @@ displayTitle: <a href="mllib-guide.html">MLlib</a> - Feature Extraction
* Table of contents
{:toc}

## TF-IDF

[Term frequency-inverse document frequency (TF-IDF)](http://en.wikipedia.org/wiki/Tf%E2%80%93idf) is a feature
vectorization method widely used in text mining to reflect the importance of a term to a document in the corpus.
Denote a term by `$t$`, a document by `$d$`, and the corpus by `$D$`.
Term frequency `$TF(t, d)$` is the number of times that term `$t$` appears in document `$d$`,
while document frequency `$DF(t, D)$` is the number of documents that contain term `$t$`.
If we only use term frequency to measure importance, it is very easy to over-emphasize terms that
appear very often but carry little information about the document, e.g., "a", "the", and "of".
If a term appears very often across the corpus, it carries little information specific to any particular document.
Inverse document frequency is a numerical measure of how much information a term provides:
`\[
IDF(t, D) = \log \frac{|D| + 1}{DF(t, D) + 1},
\]`
where `$|D|$` is the total number of documents in the corpus.
Since the logarithm is used, a term that appears in every document gets an IDF value of 0.
Note that a smoothing term is applied to avoid dividing by zero for terms outside the corpus.
The TF-IDF measure is simply the product of TF and IDF:
`\[
TFIDF(t, d, D) = TF(t, d) \cdot IDF(t, D).
\]`
There are several variants on the definition of term frequency and document frequency.
In MLlib, we separate TF and IDF to make them flexible.
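
As a quick illustration of the formula above (the numbers are ours, assuming the natural logarithm): in a corpus of `$|D| = 100$` documents, a term that appears in a single document has
`\[
IDF = \log \frac{100 + 1}{1 + 1} = \log 50.5 \approx 3.9,
\]`
while a term that appears in every document has `$IDF = \log \frac{101}{101} = 0$`, so its TF-IDF is 0 no matter how frequent the term is within any single document.
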
Our implementation of term frequency utilizes the
[hashing trick](http://en.wikipedia.org/wiki/Feature_hashing).
A raw feature is mapped into an index (term) by applying a hash function.
Then term frequencies are calculated based on the mapped indices.
This approach avoids computing a global term-to-index map, which can be expensive for a large corpus,
but it suffers from potential hash collisions, where different raw features may become the same term after hashing.
To reduce the chance of collision, we can increase the target feature dimension, i.e.,
the number of buckets of the hash table.
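
For intuition, here is a minimal sketch of the idea, not MLlib's actual implementation; the helper name and the use of `hashCode` are illustrative assumptions:

{% highlight scala %}
// Simplified illustration of the hashing trick: hash each term into one of
// numFeatures buckets and count occurrences per bucket.
// MLlib's HashingTF works similarly but returns a sparse Vector.
def hashedTermFrequencies(doc: Seq[String], numFeatures: Int): Map[Int, Double] = {
  doc
    .map(term => ((term.hashCode % numFeatures) + numFeatures) % numFeatures) // non-negative bucket index
    .groupBy(identity)
    .map { case (index, hits) => (index, hits.size.toDouble) }
}

// Different terms can land in the same bucket (a collision); increasing
// numFeatures makes collisions less likely.
{% endhighlight %}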

**Note:** MLlib doesn't provide tools for text segmentation.
We refer users to the [Stanford NLP Group](http://nlp.stanford.edu/) and
[scalanlp/chalk](https://github.com/scalanlp/chalk).

<div class="codetabs">
<div data-lang="scala" markdown="1">

TF and IDF are implemented in [HashingTF](api/scala/index.html#org.apache.spark.mllib.feature.HashingTF)
and [IDF](api/scala/index.html#org.apache.spark.mllib.feature.IDF).
`HashingTF` takes an `RDD[Iterable[_]]` as the input.
Each record could be an iterable of strings or other types.

{% highlight scala %}
import org.apache.spark.rdd.RDD
import org.apache.spark.SparkContext
import org.apache.spark.mllib.feature.HashingTF
import org.apache.spark.mllib.linalg.Vector

val sc: SparkContext = ...

// Load documents (one per line).
val documents: RDD[Seq[String]] = sc.textFile("...").map(_.split(" ").toSeq)

// Hash each document into a term-frequency vector with 1,000,000 features.
val numFeatures = 1000000
val hashingTF = new HashingTF(numFeatures)
val tf: RDD[Vector] = hashingTF.transform(documents)
{% endhighlight %}

While applying `HashingTF` only needs a single pass over the data, applying `IDF` needs two passes:
first to compute the IDF vector and second to scale the term frequencies by IDF.

{% highlight scala %}
import org.apache.spark.mllib.feature.IDF

// ... continue from the previous example
tf.cache()  // IDF makes two passes over tf, so cache it
val idf = new IDF().fit(tf)
val tfidf: RDD[Vector] = idf.transform(tf)
{% endhighlight %}
</div>
</div>

## Word2Vec

[Word2Vec](https://code.google.com/p/word2vec/) computes distributed vector representation of words.
The main advantage of the distributed
representations is that similar words are close in the vector space, which makes generalization to
novel patterns easier and model estimation more robust. Distributed vector representation is
shown to be useful in many natural language processing applications such as named entity
@@ -69,5 +147,3 @@ for((synonym, cosineSimilarity) <- synonyms) {
{% endhighlight %}
</div>
</div>
