From 1dc5b2c9fbbac7a3d0debc2cc2a3dc20795bfbc3 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Dar=C3=ADo=20Here=C3=B1=C3=BA?= <magallania@gmail.com>
Date: Mon, 13 May 2019 23:43:52 -0300
Subject: [PATCH] Syntax issue on line 495

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 1098d5c7..f7570980 100644
--- a/README.md
+++ b/README.md
@@ -492,7 +492,7 @@ Main Functions
 * The `jieba.cut` function accepts three input parameters: the first parameter is the string to be cut; the second parameter is `cut_all`, controlling the cut mode; the third parameter is to control whether to use the Hidden Markov Model.
 * `jieba.cut_for_search` accepts two parameters: the string to be cut and whether to use the Hidden Markov Model. This cuts the sentence into short words suitable for search engines.
 * The input string can be a unicode/str object, or a str/bytes object encoded in UTF-8 or GBK. Note that using GBK encoding is not recommended because it may be unexpectedly decoded as UTF-8.
-* `jieba.cut` and `jieba.cut_for_search` returns an generator, from which you can use a `for` loop to get the segmentation result (in unicode).
+* `jieba.cut` and `jieba.cut_for_search` return a generator, from which you can use a `for` loop to get the segmentation result (in unicode).
 * `jieba.lcut` and `jieba.lcut_for_search` return a list.
 * `jieba.Tokenizer(dictionary=DEFAULT_DICT)` creates a new customized Tokenizer, which enables you to use different dictionaries at the same time. `jieba.dt` is the default Tokenizer, to which almost all global functions are mapped.
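
For reference, a minimal sketch exercising the calls described in the list above, assuming a standard `jieba` install; the sample sentence is illustrative and the exact tokens printed depend on the bundled dictionary:

```python
# Minimal sketch of the API described above; assumes `pip install jieba`.
import jieba

sentence = "我来到北京清华大学"

# Accurate mode (cut_all=False) with the HMM enabled: returns a generator.
print("/".join(jieba.cut(sentence, cut_all=False, HMM=True)))

# Full mode: emits every word the dictionary can find in the sentence.
print("/".join(jieba.cut(sentence, cut_all=True)))

# Search-engine mode: additionally splits long words into shorter ones.
print("/".join(jieba.cut_for_search(sentence)))

# lcut / lcut_for_search return plain lists instead of generators.
print(jieba.lcut(sentence))

# An independent Tokenizer instance, so multiple dictionaries can coexist;
# jieba.dt is the default instance behind the module-level functions.
tokenizer = jieba.Tokenizer()
print(tokenizer.lcut(sentence))
```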