Emphasis with CJK punctuation #650
This and the issues above are caused by the change in #618. It was merged into the spec only as of v0.30. https://spec.commonmark.org/0.30/changes
The definition of left- and right-flanking emphasis for * and ** must use ASCII punctuation characters instead of Unicode ones. … does not cause such a problem, so remark, on which MDX v2+ depends, is affected. |
Again, there is no change in 618. That PR is just about words, terminology. MDX 1 did not follow CM correctly and had other bugs. Can you please read what I say, and please stop spamming, and actually contribute? |
The extension by MDX is not the culprit: https://codesandbox.io/s/remark-playground-wmfor?file=/package.json It is not reproduced in the latest Prettier.
This means that the real credit for the change is that it has become clear that this part of the specification is a terrible one that should be revised. |
https://spec.commonmark.org/0.29/
You are right. I'm sorry. I will look for another version. |
I finally found that the current broken definition sentences were introduced in 0.14. https://spec.commonmark.org/0.14/changes https://spec.commonmark.org/0.13/ I will investigate why they were introduced. |
https://github.com/commonmark/commonmark-spec/blob/0.14/changelog.spec.txt
http://talk.commonmark.org/t/903/6
Note: I replaced the link with a Wayback Machine cache. I conclude that this problem was caused by a lack of consideration for Chinese and Japanese. |
I would like to ask them why they included non-ASCII punctuation characters and why ASCII punctuation characters alone are not sufficient. |
I will blame https://github.com/vfmd/vfmd-spec/blob/gh-pages/specification.md later. The test cases in vfmd considered only ASCII punctuation. |
I found the commit containing the initial definition in the spec of vfmd:
|
@tats-u dude, here and in your comments on #618 you come off as arrogant and very disrespectful. You make absolutist claims and then frequently correct yourself because it turns out you didn't do your homework. You need to have the humility to realize that your perception that "something broke or is broken" might have to do with you not understanding one or more of the following (I don't have the time to figure out which ones; the responsibility is on you):
A more reasoned, respectful and helpful approach would be to have a discussion with other people who are affected by what you claim is broken, including the makers and other users of the downstream tool that you claim is now broken. Diagnose the problem with them, assuming they agree with you that there is a problem, before making a claim that the source of the problem is upstream in CommonMark. If it turns out that you are alone in this, that should tell you something. |
@tats-u This issue is still open, so indeed it is looking for a solution. It is also something I have heard from others. However, it is not easy to solve. There are also legitimate cases where you do want to use an asterisk or underscore but don’t want it to result in emphasis/strong. Also in East-Asian languages. One idea I have, that could potentially help emphasis/strong, is the Unicode line breaking algorithm: https://unicode.org/reports/tr14/. |
@vassudanagunta I got too angry at that time. Looking back now, I do think it went over the line.
Let me say this is not a bug in any single framework. The problem can be reproduced in the most widely used JS Markdown frameworks, remark (unified) and markdown-it. Remark-related issues that I raised were closed immediately on the grounds that the behavior follows the spec.
I never have. That is why I have now looked into the background and the impact of my proposed changes.
It looks like a lot of work to study the impact of breaking changes and decide whether or not to apply them.
Due to this problem, it became necessary for me (us) to tell all Japanese (and some Chinese) Markdown writers to refrain from surrounding whole sentences with `**`. <!-- How would you feel if Markdown did not recognize the ** here as <strong> once you removed 4 or 5 spaces? -->
**Don't surround the whole sentence with the double-asterisk without adding extra spaces!** The Foobar language, which is spoken by most CommonMark maintainers, uses as many as 6 spaces to split sentences.
This is what I have now looked into by digging through the Git history, changelogs, and test cases.
It is not surprising that the maintainers and you give this problem a lower priority, since it does not affect any European language, all of which put spaces next to punctuation or parentheses.
I strongly doubt this. @wooorm I apologize again for my anger and for being too militant in my remarks. My humble suggestions, and my comments on them:
I know. It is the background of this problem.
I have looked for such cases and how frequent they are. Escaping them does not modify the rendered content itself, but I am disgusted at having to modify the content by adding extra spaces or by depending on an inline raw JSX tag.
I will look into it later. (I do not expect it either) |
Checking the general Unicode categories Pc, Pd, Pe, Pf, Pi, Po and Ps: U+3001 Ideographic Comma and U+3002 Ideographic Full Stop are of course included in what CommonMark considers punctuation marks, which are all treated alike. For its definitions of flanking, CM could start to handle Open/Start and Close/End punctuation differently. Possibly affected examples are, for instance: 363, 367+368, 371+372, 376 and 392–394. |
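For anyone who wants to check characters against those categories, here is a minimal sketch (TypeScript, using Unicode property escapes; not taken from any implementation discussed here):

```ts
// Checks whether a single character falls in the general categories listed
// above (Pc, Pd, Pe, Pf, Pi, Po, Ps) — i.e. what the spec lumps together
// as "Unicode punctuation".
function isUnicodePunctuation(ch: string): boolean {
  return /^[\p{Pc}\p{Pd}\p{Pe}\p{Pf}\p{Pi}\p{Po}\p{Ps}]$/u.test(ch);
}

console.log(isUnicodePunctuation("、")); // true  — U+3001 Ideographic Comma
console.log(isUnicodePunctuation("。")); // true  — U+3002 Ideographic Full Stop
console.log(isUnicodePunctuation("あ")); // false — ordinary letter
```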
I checked the test cases you raised. 367 is the most affected among them. However, there are some that were not raised but are more important. I am not convinced by test case 378.
Does it not mean that …? FYI, according to https://hypestat.com/info/github.com, one in six visitors to GitHub lives in China or Japan. That percentage cannot be ignored or underestimated. |
The “Permitted content: Phrasing content” bit allows it for both.
I don’t think anybody is underestimating that. Practically, this is also open source, which implies that somebody has to do the work for free here, probably because they think it’s fun or important to do. And then folks working on markdown parsers need to do it too. To illustrate, GitHub hasn’t really done anything in the last 3 years (just security vulnerabilities / the new fancy footnotes feature). |
Getting emphasis right in markdown (especially nested emphasis) is very difficult. Changing the existing rules without messing up cases that currently work is highly nontrivial. For what it's worth, my rationalized syntax djot has simpler rules for emphasis, gives you what you want in the above Japanese example, and allows you to use braces to clarify nesting in cases where it's unclear, e.g. |
This is technically possible but not practical or necessary. It is much easier and faster to type "「" & "」" from the keyboard directly, and you cannot copy these brackets in
Almost all introductions to Markdown for newbies, including the following, say that surrounding text with `**` makes it bold.
I do not know of any SaaS in Japan that customizes the style of `<strong>`. The current behavior of CommonMark forces newbies in China or Japan to try to decipher its spec, which is written for developers of Markdown parsers, not for users other than experts. CommonMark has now grown to the point where it can steer the largest Markdown implementations (remark, markdown-it, goldmark (used by Hugo), commonmarker (possibly used by GitHub), and so on) from behind the scenes. We may well lobby to revise its specification. (Unenforceable, of course!) It would not be difficult to create a new specification of Markdown, but it is difficult to give it sufficient influence. This is why I had tried to abolish left- and right-flanking, but I have recently found a convincing alternative plan. Under my plan we only have to change the following:
We do not have to change anything else. I hope most Chinese and Japanese users can be convinced by it. Also, you can continue to nest emphasis as before.
I am a little relieved to hear that. I apologize for the misunderstanding.
It would affect too many documents if the left- and right-flanking rules were abolished. However, the new plan will not affect most existing documents, except for ones that abuse the fine details of the spec. Do you mean that those are also included in "all existing" ones? I suggest the new terms "punctuation run preceded by space" and "punctuation run followed by space".
(2a) and (2b) would be changed along the following lines:
This change treats punctuation characters that are not adjacent to a space as normal letters. To see how it works, consider the following Japanese examples:

**これは太字になりません。**ご注意ください。
カッコに注意**(太字にならない)**文が続く場合に要警戒。
**[リンク](https://example.com)**も注意。(画像も同様)
先頭の**`コード`も注意。**
**末尾の`コード`**も注意。

Also, we can parse even the following English as intended:

You should write “John**'s**” instead.

In practice we do not chain together many punctuation characters, so we only have to search a dozen or so (e.g. 16) punctuation characters before or after the target delimiter run when looking for a space. To check whether a delimiter run consists of "the last characters in a punctuation run preceded by space" (without using a cache) — a rough code sketch follows the flowchart:

```mermaid
flowchart TD
Next{"Is the<br>next character<br>an Unicode punctuation<br>chracter?"}
Next--> |YES| F["<code>return false</code>"]
Next--> |NO| Init["<code>current =</code><br>(previous character)<br><code>n =</code><br>(Length of delimiter run)"]
Init--> Exceed{"<code>n >= 16</code>?"}
Exceed--> |YES| F
Exceed --> |NO| Previous{"What type is <code>current</code>?"}
Previous --> |Not punctuation or space| F
Previous --> |Space| T["<code>return true</code>"]
Previous --> |Unicode punctuation| Iter["<code>n++<br>current =</code><br>(previous character)"]
Iter --> Exceed
```
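Here is a rough TypeScript transcription of the flowchart, as a sketch only: the function name is mine, the treatment of the start of input is an assumption the flowchart leaves implicit, and it walks UTF-16 code units rather than code points for brevity.

```ts
// "Unicode punctuation character" in the current spec's sense (general category P).
const UNICODE_PUNCT = /\p{P}/u;

function isLastOfPunctuationRunPrecededBySpace(
  text: string,
  runStart: number, // index of the first character of the delimiter run
  runEnd: number,   // index just past the last character of the delimiter run
): boolean {
  // If the character after the run is itself punctuation, the run is not the
  // *last* part of the punctuation run.
  const next = text[runEnd];
  if (next !== undefined && UNICODE_PUNCT.test(next)) return false;

  let n = runEnd - runStart; // length of the delimiter run
  let pos = runStart - 1;    // previous character
  for (;;) {
    if (n >= 16) return false; // bounded backward search, as in the flowchart
    const current = pos >= 0 ? text[pos] : undefined;
    if (current === " ") return true;
    // Start of input is treated as "not punctuation or space" here; the real
    // rule might treat it like a space instead.
    if (current === undefined || !UNICODE_PUNCT.test(current)) return false;
    n++;   // count this punctuation character and keep scanning backward
    pos--;
  }
}
```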
In the current spec, the behavior looks arbitrary to non-advanced users, especially in China or Japan. |
0.31 changes the wording slightly, but as far as I can tell this does not change flanking behavior at all.
|
The change made the situation even worse.
The only minor improvements are that it is easier to explain the condition to beginners (we can now use the single word “symbols”) and that it is more consistent with ASCII punctuation characters. |
This particular change was not intended to address this issue; it was just intended to make things more consistent. @tats-u I am sorry, I have not yet had time to give your proposal proper consideration. |
I guessed as much, but as a result it did cause a breaking change and broke some documents (far fewer than those affected by 0.14, though), which is exactly the kind of regression you have most feared and cared about. In the first place, we cannot easily find convincing, practical examples that show how legitimate the controversial parts of the spec and its changes are; what we can easily find are examples designed only for testing, with no real meaning (e.g. …). What is needed is something like:

Price: **€**10 per month (note: you cannot pay in US$!)
FYI, you do not have to evaluate how optimized the algorithm in the above flowchart is; it is naive and can be optimized later. All I want you to evaluate first is how acceptable the breaking changes brought by my revision are. It might be better for me to make a PoC to make that easier. |
To be honest, I didn't anticipate these breaking changes, and I would have thought twice about the change if I had. Having a parser to play with that implements your idea would make it easier to see what its consequences would be. (Ideally, a minimally altered cmark or commonmark.js.) It's also important to have a plan that can be implemented without significantly degrading the parser's performance. But my guess is that if it's just a check that has to be run once for each delimiter + punctuation run, it should be okay. |
I am testing a minimal change to cmark, as discussed above:
I've added some tests based on @tats-u 's list above: Currently 26 of these fail, but I haven't yet had a chance to analyze why.
Obviously this needs refinement. Feel free to look at this branch if you like. |
I've not been able to fix the above yet, but I found we've forgotten Bopomofo Extended (U+31A0–U+31BF).
I believe libraries MAY (optionally, not mandatorily) treat these unused regions as CJK, to simplify the conditional expression that determines whether a character is CJK by reducing the number of product terms. U+323B0–U+3FFFF is currently unassigned, but only CJK characters are likely to ever be assigned there, since the plane is named the Tertiary Ideographic Plane. |
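As an illustration only (the authoritative list of ranges is whatever the cjk branch defines; the ranges below are a partial selection, and the last one deliberately rounds up to the whole Tertiary Ideographic Plane as suggested above):

```ts
// Partial, illustrative range check — not the actual list used by any PR.
function looksLikeCJK(codePoint: number): boolean {
  return (
    (codePoint >= 0x3040 && codePoint <= 0x30ff) ||   // Hiragana, Katakana
    (codePoint >= 0x31a0 && codePoint <= 0x31bf) ||   // Bopomofo Extended
    (codePoint >= 0x3400 && codePoint <= 0x4dbf) ||   // CJK Unified Ideographs Extension A
    (codePoint >= 0x4e00 && codePoint <= 0x9fff) ||   // CJK Unified Ideographs
    (codePoint >= 0x20000 && codePoint <= 0x2ffff) || // Supplementary Ideographic Plane
    (codePoint >= 0x30000 && codePoint <= 0x3ffff)    // Tertiary Ideographic Plane, incl. unassigned tail
  );
}
```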
With @tats-u 's PR + some fixes to the expected test output, we are down to two failures:
EDIT:
So maybe that's okay as it is? In Example 85 I'm not sure what is being tested. |
It's an SVS test, so it must be fixed: 塚︀ (before the left parenthesis) = 塚 (after "or") + U+FE00 (VS1) |
The last 塚 is intended to be 塚 (U+FA10), but was unintentionally normalized to 塚 (U+585A; the most common form).
Current form:
|
We have forgotten the SVS (U+FE0E; VS15) that prevents the previous character from being rendered as emoji. U+303D: 〽 (without U+FE0E/U+FE0F) / 〽︎ (with U+FE0E) / 〽️ (with U+FE0F) Note: not included in commonmark/cmark#556; will be submitted as another PR |
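A minimal sketch of the kind of handling I mean (names and shape are mine, not any PR's): when looking at the character just before a delimiter run, a trailing VS-15/VS-16 should be skipped so the check applies to the base character.

```ts
const VS15 = 0xfe0e; // text presentation selector
const VS16 = 0xfe0f; // emoji presentation selector

// Given the code points preceding a delimiter run, return the one a CJK (or
// punctuation) check should actually look at: skip a trailing variation
// selector so that e.g. 〽︎ (U+303D U+FE0E) is judged by U+303D, not by U+FE0E.
function significantPrecedingCodePoint(preceding: number[]): number | undefined {
  let i = preceding.length - 1;
  if (i >= 0 && (preceding[i] === VS15 || preceding[i] === VS16)) i--;
  return i >= 0 ? preceding[i] : undefined;
}
```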
We've ignored Yijing Hexagram Symbols. (易経記号 / 易經六十四卦符號) |
With @tats-u 's latest PR, all tests pass. |
So, this minor change to the spec seems promising on the basis of our tests.
(We'd have to define CJK as well.) |
Agreed. The current implementation of cmark (without CJK support) seems to pass some of the tests in the cjk branch.
You think the cjk_emphasis tests should be removed? Why? (EDIT: To clarify: right now these are just tests in cmark. If we do change the spec, we'd need at least some examples in the spec to illustrate the CJK emphasis rules, and these would perhaps be a selection of the tests in cjk_emphasis. But we could still keep those tests around in cmark.) |
Not all. Just some redundant ones.
I don't know what these tests assure. |
Sure - want to submit a PR for the branch to remove these? |
I forgot the existence of U+200B before the space, but there is |
I submitted a PR to the cjk branch of micromark (see #650 (comment)). I think it's easier to try than cmark. |
I found Yijing symbols other than Yijing Hexagram Symbols. |
Also, the next character must be VS-16 (U+FE0F) for it to be recognized as an emoji there.
|
Want to create a PR for this?
I'd love to avoid making the rules too complex. This suggestion goes in that direction. |
Sure, reverting is sufficient.
It is very unlikely that we would have to look ahead more than two code points; e.g. 1️⃣ is U+0031 (ASCII digit 1) + U+FE0F (VS-16) + U+20E3 (combining enclosing keycap ⃣ ).
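For illustration, the two-code-point look-ahead could look roughly like this (the function and constant names are mine, not from any parser discussed here):

```ts
const VS16 = 0xfe0f;                       // emoji presentation selector
const COMBINING_ENCLOSING_KEYCAP = 0x20e3; // U+20E3

// Does the code point at index i start a keycap sequence
// (digit / '#' / '*', optional VS-16, then U+20E3)?
function startsKeycapSequence(cps: number[], i: number): boolean {
  const base = cps[i];
  const isBase =
    (base >= 0x30 && base <= 0x39) || base === 0x23 /* '#' */ || base === 0x2a /* '*' */;
  if (!isBase) return false;
  let j = i + 1;
  if (cps[j] === VS16) j++; // at most one extra code point to skip here
  return cps[j] === COMBINING_ENCLOSING_KEYCAP;
}
```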
I found a spec bug caused by the combination of a keycap emoji and
https://unicode.org/Public/emoji/latest/emoji-test.txt Of course we can hotfix this by adding |
@tats-u because this doesn't have to do with CJK specifically, perhaps you should raise it in a separate issue. |
It’s #646 |
Both of the keycap of
I see. We might be able to fix both at the same time. |
Looks like the keycap is more difficult due to |
Hi, I encountered some strange behavior when using CJK full-width punctuation and trying to add emphasis.
Original issue here
Example punctuation that causes this issue:
。!?、
To my mind, all of these should work as emphasis, but some do and some don't:
I'm not sure if this is the spec as intended, but in Japanese, as a general rule there are no spaces in sentences, which leads to the following kind of problem when parsing emphasis.
In English, this is emphasized as expected:
This is **what I wanted to do.** So I am going to do it.
But the same sentence emphasized in the same way in Japanese fails:
これは**私のやりたかったこと。**だからするの。
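For anyone who wants to reproduce this quickly, here is a small sketch using markdown-it (just one CommonMark-compliant parser among several; the exact rendered output may vary by version):

```ts
import MarkdownIt from "markdown-it";

const md = new MarkdownIt();

// English: the closing ** is followed by a space, so it closes as expected.
console.log(md.render("This is **what I wanted to do.** So I am going to do it."));

// Japanese: the closing ** is preceded by "。" (punctuation) and followed by a
// letter with no space, so under the current flanking rules it cannot close,
// and the asterisks come out literally.
console.log(md.render("これは**私のやりたかったこと。**だからするの。"));
```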