Emphasis with CJK punctuation #650

Open · ptmkenny opened this issue May 26, 2020 · 189 comments

@ptmkenny

Hi, I encountered some strange behavior when using CJK full-width punctuation and trying to add emphasis.

Original issue here

Example punctuation that causes this issue:

。!?、

To my mind, all of these should work as emphasis, but some do and some don't:

**テスト。**テスト

**テスト**。テスト

**テスト、**テスト

**テスト**、テスト

**テスト?**テスト

**テスト**?テスト

[screenshot: CommonMark rendering of the examples above]

I'm not sure whether this is the spec working as intended, but in Japanese, as a general rule, there are no spaces in sentences, which leads to the following kind of problem when parsing emphasis.

In English, this is emphasized as expected:

This is **what I wanted to do.** So I am going to do it.

But the same sentence emphasized in the same way in Japanese fails:

これは**私のやりたかったこと。**だからするの。

[screenshot: rendering of the Japanese example above]

@tats-u commented Nov 13, 2023

This and the above issues are caused by the change in #618; it is only incorporated as of the v0.30 spec.

https://spec.commonmark.org/0.30/changes

A left-flanking delimiter run is a delimiter run that is (1) not followed by Unicode whitespace, and either (2a) not followed by a Unicode punctuation character, or (2b) followed by a Unicode punctuation character and preceded by Unicode whitespace or a Unicode punctuation character. For purposes of this definition, the beginning and the end of the line count as Unicode whitespace.

A right-flanking delimiter run is a delimiter run that is (1) not preceded by Unicode whitespace, and either (2a) not preceded by a Unicode punctuation character, or (2b) preceded by a Unicode punctuation character and followed by Unicode whitespace or a Unicode punctuation character. For purposes of this definition, the beginning and the end of the line count as Unicode whitespace.

  1. A single * character can open emphasis iff (if and only if) it is part of a left-flanking delimiter run.
  2. A single * character can close emphasis iff it is part of a right-flanking delimiter run.
  3. A double ** can open strong emphasis iff it is part of a left-flanking delimiter run.
  4. A double ** can close strong emphasis iff it is part of a right-flanking delimiter run.
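
As an aside, these rules are mechanical enough to sketch in a few lines. The following TypeScript is only an illustration of the quoted definitions (not cmark's actual code; isPunct/isSpace approximate the spec's character classes), and it shows why the closing ** in **テスト。**テスト is rejected:

// Sketch of the quoted flanking rules. isPunct approximates "Unicode
// punctuation character" (ASCII punctuation plus the P* categories);
// "" stands for the beginning/end of the line, which counts as whitespace.
const ASCII_PUNCT = "!\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}~";
const isPunct = (ch: string) => ch !== "" && (ASCII_PUNCT.includes(ch) || /\p{P}/u.test(ch));
const isSpace = (ch: string) => ch === "" || /\s/u.test(ch);

function flanking(before: string, after: string) {
  const left  = !isSpace(after)  && (!isPunct(after)  || isSpace(before) || isPunct(before));
  const right = !isSpace(before) && (!isPunct(before) || isSpace(after)  || isPunct(after));
  return { left, right };
}

// The closing "**" in "**テスト。**テスト" is preceded by "。" (punctuation) and
// followed by "テ" (neither space nor punctuation), so clause (2b) fails:
console.log(flanking("。", "テ")); // { left: true, right: false } → cannot close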

The definition of left- and right-flanking emphasis for * and ** must use ASCII punctuation characters instead of Unicode ones.

https://v1.mdxjs.com/

does not exhibit this problem, so the remark versions that MDX v2+ depends on are what is affected.

@wooorm (Contributor) commented Nov 13, 2023

Again, there is no change in #618. That PR is just about wording and terminology.

MDX 1 did not follow CM correctly and had other bugs.

Can you please read what I say, and please stop spamming, and actually contribute?

@tats-u commented Nov 13, 2023

MDX 1 did not follow CM correctly and had other bugs.

The extension by MDX is not the culprit.

https://codesandbox.io/s/remark-playground-wmfor?file=/package.json


This problem does not reproduce with remark-parse v7, either.

https://prettier.io/playground/#N4Igxg9gdgLgprEAuEAqVhT00DTmg8qMHYMgZtGBSKoOGmgQAzrEl6A7EYM2xZIANCBAA4wCW0AzsqAEMATkIgB3AArCEfFAIA2YgQE8+LAEZCBYANZwYAZQEBbOABlOUOMgBmCnnA1bd+g222WA5shhCAro4gDsacPv6BPF7ycACKfhDwtvaBAFY8AB4GUbHxiUh28g4sAI65cBKibLIgAjwAtFZwACbNzCC+ApzyXgDCEMbGAsg18vJtkVCe0QCCML6c6n7wEnBCFlZJhYEAFjDG8gDq25zwPO5gcAYyJ5wAbifKw2A8aiC3AQCSUC2wBmBCnA402+BhgymimyKIDYogcBy0bGGMLgDiEt2sLEsqJgFQEnkGkMC7iEqOGgyEOia4igbRhlhgB04TRg22QAA4AAwsIRwUqcHm4-FDfLJFgwATqRnM1lIABMLD8DgAKhLZAUoXBjOpmi0mmYBJM-Hi4AAxCBCQZzLzDARLCAgAC+DqAA

It is not reproduced in the latest Prettier (which uses remark-parse v8), either.

That PR is just about wording and terminology.

What the change did accomplish is make it clear that this part of the specification is a terrible one that should be revised. Old versions of remark-parse were based on an older, more ambiguous specification and consequently avoided this problem.

@tats-u commented Nov 13, 2023

https://spec.commonmark.org/0.29/

A punctuation character is an ASCII punctuation character or anything in the general Unicode categories Pc, Pd, Pe, Pf, Pi, Po, or Ps.

You are right. I'm sorry. I will look for another version.

@tats-u commented Nov 13, 2023

I finally found that the current broken definitions were introduced in 0.14.

https://spec.commonmark.org/0.14/changes

https://spec.commonmark.org/0.13/

I will investigate why they were introduced.

@tats-u commented Nov 13, 2023

https://github.com/commonmark/commonmark-spec/blob/0.14/changelog.spec.txt

  • Improved rules for emphasis and strong emphasis. This improves parsing of emphasis around punctuation. For background see http://talk.commonmark.org/t/903/6. The basic idea of the change is that if the delimiter is part of a delimiter clump that has punctuation to the left and a normal character (non-space, non-punctuation) to the right, it can only be an opener. If it has punctuation to the right and a normal character (non-space, non-punctuation) to the left, it can only be a closer. This handles cases like
**Gomphocarpus (*Gomphocarpus physocarpus*, syn. *Asclepias physocarpa*)**

and

**foo "*bar*" foo**

http://talk.commonmark.org/t/903/6

There are some good ideas here. It looks hairy, but if I understand correctly, the basic idea is fairly simple:

  1. Strings of * or _ are divided into “left flanking” and “right flanking,” based on two things: the character immediately before them and the character immediately after.
  2. Left-flanking delimiters can open emphasis, right flanking can close, and non-flanking delimiters are just regular text.
  3. A delimiter is left-flanking if the character to the left has a lower rank than the character to the right, according to the following ranking: spaces and newlines are 0, punctuation (unicode categories Pc, Pd, Ps, Pe, Pi, Pf, Po, Sc, Sk, Sm or So) is 1, the rest 2. And similarly a delimiter is right-flanking if the character to the left has a higher rank than the character to the right.

Note

I replaced the link with a cached copy from the Wayback Machine.
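
For illustration, the rank rule quoted above fits in a few lines of TypeScript (a sketch of the quoted description, not vfmd's or cmark's real code):

// Rank per the quoted rule: 0 = spaces/newlines (and line edges),
// 1 = punctuation (P* plus Sc/Sk/Sm/So), 2 = everything else.
function rank(ch: string): 0 | 1 | 2 {
  if (ch === "" || /\s/u.test(ch)) return 0;
  if (/[\p{P}\p{S}]/u.test(ch)) return 1;
  return 2;
}
const leftFlanking  = (before: string, after: string) => rank(before) < rank(after);
const rightFlanking = (before: string, after: string) => rank(before) > rank(after);

// In これは**私のやりたかったこと。**だからするの。 the second "**" sits between
// "。" (rank 1) and "だ" (rank 2), so it can only open and never close:
console.log(rightFlanking("。", "だ")); // false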

I conclude that this problem was caused by a lack of consideration for Chinese and Japanese by @jgm and the author of vfmd (@roop, or possibly @akavel).

@tats-u commented Nov 13, 2023

I would like to ask them why they included non-ASCII punctuation characters and why ASCII punctuation characters alone were not sufficient.

@tats-u commented Nov 13, 2023

I found the commit containing the initial definition in the spec of vfmd:

vfmd/vfmd-spec@7b53f05

@roop seems to live in India, which may be why he added non-ASCII punctuation characters, but the trouble is that I do not know Hindi at all. I wonder whether a space is always adjacent to punctuation characters in that language, as in European ones.

@vassudanagunta

@tats-u dude, here and in your comments on #618 you come off as arrogant and very disrespectful. You make absolutist claims and then frequently correct yourself because it turns out you didn't do your homework. You need to have the humility to realize that your perception that "something broke or is broken" might have to do with you not understanding one or more of the following (I don't have the time to figure out which ones, the responsibility is on you):

  • your specific perspective, which may not be universal, which may miss the forest for the single tree that you are most focused on
  • the problem, if there actually is one, might be downstream of CommonMark, in the tool you are using
  • if CommonMark is involved:
    • the facts, the history, or the priorities of CommonMark
    • the impossible expectation that CommonMark can be all things to all people.
    • the difficulty in maintaining a spec where many users expect it to work how they want it without understanding

A more reasoned, respectful and helpful approach would be to have a discussion with other people who are affected by what you claim is broken, including the makers and other users of the downstream tool that you claim is now broken. Diagnose the problem with them, assuming they agree with you that there is a problem, before making a claim that the source of the problem is upstream in CommonMark.

If it turns out that you are alone in this, that should tell you something.

@wooorm (Contributor) commented Nov 14, 2023

@tats-u This issue is still open, so indeed it is looking for a solution. It is also something I have heard from others.

However, it is not easy to solve.
Many languages do use whitespace.
No languages use only ASCII.
Not using Unicode would harm many users, too.

There are also legitimate cases where you do want to use an asterisk or underscore but don’t want it to result in emphasis/strong. Also in East-Asian languages.

One idea I have, that could potentially help emphasis/strong, is the Unicode line breaking algorithm: https://unicode.org/reports/tr14/.
It has to be researched, but it might come up with line breaking points that are better indicators than solely relying on whitespace/punctuation.
It might also be worse.

@tats-u commented Nov 14, 2023

@vassudanagunta I got too angry at that time, and I now see it went over the line. I wish GitHub provided a draft-comment feature out of the box so that I could post everything at once instead of editing or adding comments afterwards.

the problem, if there actually is one, might be downstream of CommonMark, in the tool you are using

Let me say the problem is not specific to any one framework. It can be reproduced in the most popular JS Markdown frameworks, remark (unified) and markdown-it. The remark-related issues I have raised were closed immediately on the grounds that the behavior follows the spec.


the impossible expectation that CommonMark can be all things to all people.

I never expected that. That is why I am now looking into the background and the impact of my proposed changes.

the difficulty in maintaining a spec where many users expect it to work how they want it without understanding

It looks like a lot of work to study the impact of breaking changes and decide whether or not to apply them.

many users expect it to work how they want it without understanding

Due to this problem, it has become necessary for me (us) to tell all Japanese (and some Chinese) Markdown writers to refrain from surrounding whole sentences with **, to use the JSX <strong> instead, or to compromise by adding an extra space after full-width punctuation marks when further sentences follow.

<!-- What would you feel if Markdown would not recognize ** here as <strong> if you remove 4 or 5 spaces?   -->
**Don't surround the whole sentence with the double-asterisk without adding extra spaces!**      The Foobar language, which is spoken by most CommonMark maintainers, uses as many as 6 spaces to split sentences.

the facts, the history

This is what I have been looking into, digging through the Git history, changelogs, and test cases.

the priorities of CommonMark

It is not surprising that you and the maintainers give this problem a lower priority, since it does not affect any language of the European family, all of which put spaces next to punctuation and parentheses.
I got angry because, given the background of this problem, I assumed that Japanese and Chinese were not seen as even third-class citizens in the Markdown world. (The change that causes this problem assumes that every language puts spaces next to punctuation and parentheses.)

If it turns out that you are alone in this, that should tell you something.

I strongly doubt that.
You should know that many users of specific languages (and they are not minor ones!) suffer, or are going to suffer, from this problem.


@wooorm I apologize again at this time for my anger and for being too militant in my remarks.


My humble suggestions and comments on them:

  • Revert the concept of left- and right-flanking to its pre-0.14 state (not including 0.14 itself)
    • The old remark v8 used in Prettier, which is said to violate the CM 0.14+ spec, correctly parses the cases presented in the CM v0.14 changelog.
    • I would like to know, and will have to investigate, the impact of this change, because it is a breaking change.
  • Keep left- and right-flanking, but based on ASCII punctuation only (Unicode punctuation can still be used in other parts)
    • In addition to the issues you mentioned, the combination with links, as in **[製品ほげ](./product-foo)**と**[製品ふが](./product-bar)**をお試しください, still cannot be parsed as expected. A compromise solution.
  • Keep left- and right-flanking, but exclude Chinese- and Japanese-related punctuation from the list
    • Some users use ( ) without adjacent spaces. A compromise solution.

Many languages do use whitespace.

I know. That is exactly the background of this problem.

There are also legitimate cases where you do want to use an asterisk or underscore but don’t want it to result in emphasis/strong. Also in East-Asian languages.

I have looked for such cases and estimated their frequency. Escaping the markers does not modify the rendered content itself, but I am fed up with having to modify the content by adding extra spaces, or with depending on the raw inline JSX tag (<strong>), to avoid this problem; it shackles Markdown's expressive power.

Unicode line breaking algorithm

I will look into it later. (Though I do not expect much from it either.)

@Crissov (Contributor) commented Nov 15, 2023

Checking the general Unicode categories Pc, Pd, Pe, Pf, Pi, Po and Ps: U+3001 Ideographic Comma and U+3002 Ideographic Full Stop are of course included in what CommonMark considers punctuation marks, all of which are treated alike.

For its definitions of flanking, CM could start to handle Open/Start Ps (e.g. () and Initial Pi (“) differently than Close/End Pe ()) and Final Pf (”), and both differently than the rest: Connector Pc (_), Dash Pd (-) and Other Po. However, this could only (somewhat) help with brackets and quotation marks or in contexts where they are present, since the characters in question are all part of that last category Po, which is the largest and most diverse by far.

Possibly affected Examples are, for instance: 363, 367+368, 371+372, 376 and 392–394.

@tats-u commented Nov 26, 2023

@Crissov

Possibly affected Examples are, for instance: 363, 367+368, 371+372, 376 and 392–394.

I checked the test cases you raised. 367 is the most affected of them.
I wonder how many Markdown writers use nested <em> in the kind of casual documents Markdown is suited for, and whether we can ask users who want to nest <em> to combine * and _ or to use a raw <em> powered by MDX.
CJK languages do not use italics. They use emphasis marks (https://en.wikipedia.org/wiki/Emphasis_mark), brackets (「」), or quotes (“”) to emphasize words.
Emphasizing the parentheses as well in that case may be less natural for humans, but it makes for a simpler specification whose behavior is easier to predict.
Japanese and Chinese do not use the _ syntax because it has too many restrictions, so 371 does not matter. You can keep the current behavior of _.
The other cases you raised are not affected.

However, there are some cases that were not raised but are more important. I am not convinced by test case 378 (a**"foo"** → rendered as-is).
We may as well treat the ** in it as <strong>.
Making text bold is popular even in Chinese and Japanese, and ** is used much more frequently than *.
MDN says that <em> can be nested but does not say that <strong> is also nested.
I would appreciate it if the behavior of ** were changed first. It is the highest priority for Chinese and Japanese.

handle Open/Start Ps (e.g. () and Initial Pi (“) differently than Close/End Pe ()) and Final Pf (”), and both differently than the rest of Connector Pc (_), Dash Pd (-) and Other Po.

Doesn't that mean that the ** in 単語と**[単語と](word-and)**単語 would be treated as <strong> under that change?


FYI, according to https://hypestat.com/info/github.com, one in six GitHub visitors lives in China or Japan. This percentage cannot be ignored or underestimated.

@wooorm (Contributor) commented Dec 4, 2023

CJK languages do not use italic.

<em> elements have a default styling in HTML (italic), but you can change that. You can add 「」 before/after if you want, with CSS. Markdown does not dictate italic.

MDN says that <em> can be nested but does not say that <strong> is also nested.

The “Permitted content: Phrasing content” bit allows it for both.

This percentage cannot be ignored or underestimated.

I don’t think anybody is underestimating that.
You can’t ignore all existing markdown users either, though, and break them.

Practically, this is also open source, which implies that somebody has to do the work for free here, probably because they think it’s fun or important to do. And then folks working on markdown parsers need to do it too. To illustrate, GitHub hasn’t really done anything in the last 3 years (just security fixes and the fancy new footnotes feature).

@jgm (Member) commented Dec 4, 2023

Getting emphasis right in markdown (especially nested emphasis) is very difficult. Changing the existing rules without messing up cases that currently work is highly nontrivial.

For what it's worth, my rationalized syntax djot has simpler rules for emphasis, gives you what you want in the above Japanese example, and allows you to use braces to clarify nesting in cases where it's unclear, e.g. {*foo{*bar*}*}. It might be worth a look.

@tats-u commented Dec 11, 2023

<em> elements have a default styling in HTML (italic), but you can change that. You can add 「」 before/after if you want, with CSS.

This is technically possible but not practical or necessary. It is much easier and faster to type 「 and 」 directly from the keyboard, and brackets added via ::before and ::after cannot be copied along with the text.

Markdown does not dictate italic.

Almost every description of Markdown for newcomers says that * is for italics.

I do not know of any SaaS in Japan that customizes the style of <em>.

The current behavior of CommonMark forces newcomers in China or Japan to try to decipher its spec, which is written for developers of Markdown parsers, not for users (except experts).

CommonMark has now grown to the point where it can steer the largest Markdown implementations (remark, markdown-it, goldmark (used by Hugo), commonmarker (possibly used by GitHub), and so on) from behind the scenes. We may well lobby for revising its specification. (Unenforceable, of course!)

It would not be difficult to create a new Markdown specification, but it would be difficult to give it sufficient influence.

This is why I had tried to abolish the left- and right-flanking rules, but I have recently found a more convincing plan.

Under my plan, we only have to change:

  • The definitions of (2a) & (2b) in the left- and right-flanking delimiter run
  • Examples 352 & 379, whose patterns should not occur in English and the many other languages unaffected by this problem, because punctuation there is usually adjacent to a space

Getting emphasis right in markdown (especially nested emphasis) is very difficult. Changing the existing rules without messing up cases that currently work is highly nontrivial.

We do not have to change anything else. I hope most Chinese and Japanese users can be convinced by it. You can also continue to nest <em> and <strong> in languages other than Chinese and Japanese exactly as you can today (we rarely need that feature in these languages). This will not break existing documents unless they abuse the finer details of the spec.

I don’t think anybody is underestimating that.

I am a little relieved to hear that. I apologize for the misunderstanding.

You can’t ignore all existing markdown users either, though, and break them.

It would affect too many documents if the left- and right-flanking rules were abolished outright. The new plan, however, will not affect most existing documents, only ones that abuse the details of the spec. Do you mean those are also included in "all existing" ones?
In the first place, this feature is little more than an Easter egg, so a small modification to it should be acceptable. If you have time, I would appreciate links to well-known sites that describe Markdown for intermediate-level readers and mention nesting <em> & <strong>; I could not find one.

I suggest new terms "punctuation run preceded by space" & "punctuation run followed by space".

  • "... preceded ..." means: a sequence of Unicode punctuation characters preceded by Unicode whitespace
  • "... followed ..." means: a sequence of Unicode punctuation characters followed by Unicode whitespace

(2a) and (2b) would be changed as follows:

  • A left-flanking delimiter run is a delimiter run that is (1) not followed by Unicode whitespace, and either (2a) preceded by Unicode whitespace, or (2b) not the first characters of a punctuation run followed by space. For purposes of this definition, the beginning and the end of the line count as Unicode whitespace.
  • A right-flanking delimiter run is a delimiter run that is (1) not preceded by Unicode whitespace, and either (2a) followed by Unicode whitespace, or (2b) not the last characters of a punctuation run preceded by space. For purposes of this definition, the beginning and the end of the line count as Unicode whitespace.

This change treats punctuation characters that are not adjacent to a space as normal letters. To see whether a "**" works as intended, one need only check the nearest whitespace and the punctuation characters around it. It makes it possible to parse all of the following as intended:

**これは太字になりません。**ご注意ください。

カッコに注意**(太字にならない)**文が続く場合に要警戒。

**[リンク](https://example.com)**も注意。(画像も同様)

先頭の**`コード`も注意。**

**末尾の`コード`**も注意。

Also, we can parse even the following English as intended:

You should write “John**'s**” instead.

We do not concatenate very many punctuation characters in practice, so we never have to scan more than a dozen or so (e.g. 16) punctuation characters, looking for a space, before or after the target delimiter run.


To check whether the delimiter run is "the last characters of a punctuation run preceded by space" (without using a cache):

flowchart TD
    Next{"Is the<br>next character<br>an Unicode punctuation<br>chracter?"}
    Next--> |YES| F["<code>return false</code>"]
    Next--> |NO| Init["<code>current =</code><br>(previous character)<br><code>n =</code><br>(Length of delimiter run)"]
    Init--> Exceed{"<code>n >= 16</code>?"}
    Exceed--> |YES| F
    Exceed --> |NO| Previous{"What type is <code>current</code>?"}
    Previous --> |Not punctuation or space| F
    Previous --> |Space| T["<code>return true</code>"]
    Previous --> |Unicode punctuation| Iter["<code>n++<br>current =</code><br>(previous character)"]
    Iter --> Exceed
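
For clarity, here is a direct TypeScript transcription of the flowchart (deliberately naive, as noted below; it ignores surrogate pairs for brevity, and the 16-character cap is the illustrative limit suggested above):

// text is the line; start/length delimit the delimiter run.
const LIMIT = 16;
const isUniPunct = (ch: string | undefined) => ch !== undefined && /\p{P}/u.test(ch);
const isUniSpace = (ch: string | undefined) => ch === undefined || /\s/u.test(ch); // line edges count as whitespace

function isLastOfPunctRunPrecededBySpace(text: string, start: number, length: number): boolean {
  if (isUniPunct(text[start + length])) return false; // punctuation follows the run
  let pos = start - 1; // current = (previous character)
  let n = length;      // n = (length of delimiter run)
  for (;;) {
    if (n >= LIMIT) return false;
    const current = text[pos];
    if (isUniSpace(current)) return true;   // found the space: run is "preceded by space"
    if (!isUniPunct(current)) return false; // a normal letter ends the search
    n++;                                    // keep walking left through the punctuation run
    pos--;
  }
}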

Under the current spec, to non-expert users, especially in China and Japan, "*" and "**" sometimes appear to abandon their duties. We must not make non-expert users write Markdown in fear of this hidden behavior.

@Crissov (Contributor) commented Feb 1, 2024

0.31 changes the wording slightly, but as far as I can tell this does not change flanking behavior at all.

A Unicode punctuation character is …

  • old:

    an [ASCII punctuation character] or anything in the general Unicode categories Pc, Pd, Pe, Pf, Pi, Po, or Ps.

  • new:

    a character in the Unicode P (punctuation) or S (symbol) general categories.

@tats-u commented Feb 4, 2024

The change made the situation even worse.
The following sentences can no longer be parsed properly.

税込**¥10,000**で入手できます。

正解は**④**です。

The only improvements are that the condition is now easier to explain to beginners (we can simply say "symbols") and that it is more consistent with the ASCII punctuation characters.

@jgm (Member) commented Feb 4, 2024

This particular change was not intended to address this issue; it was just intended to make things more consistent.

@tats-u I am sorry, I have not yet had time to give your proposal proper consideration.

@tats-u commented Feb 5, 2024

This particular change was not intended to address this issue; it was just intended to make things more consistent.

I guessed as much, but as a result it did cause a breaking change and broke some documents (far fewer than those affected by 0.14, though), which is exactly the kind of regression you have most feared and cared about.
This change can serve as a baseline for determining what kinds of breaking changes will be acceptable in the future.

In the first place, we cannot easily get at convincing, practical examples that show how legitimate the controversial parts of the specification and its changes are; what we can easily find are examples designed purely for testing that carry no meaning (e.g. *$*a. and *$*alpha.).

What is needed is like:

Price: **¥10** per month (note: you cannot pay in US$!)

I have not yet had time to give your proposal proper consideration.

FYI, you do not have to evaluate how optimized the algorithm in the above flowchart is; it is naive and can be optimized later. All I want you to do first is evaluate how acceptable the breaking changes brought by my revision are. It might be better for me to build a PoC to make that easier.

@jgm (Member) commented Feb 5, 2024

To be honest, I didn't anticipate these breaking changes, and I would have thought twice about the change if I had.

Having a parser to play with that implements your idea would make it easier to see what its consequences would be. (Ideally, a minimally altered cmark or commonmark.js.) It's also important to have a plan that can be implemented without significantly degrading the parser's performance. But my guess is that if it's just a check that has to be run once for each delimiter + punctuation run, it should be okay.

@jgm (Member) commented Aug 17, 2024

I am testing a minimal change to cmark, as discussed above:

Make left/right flankingness depend on CJK characters.

If either the character before or the character after is CJK
the delimiter run is counted as both left- and right-flanking.

The intent is to better support emphasis in languages that
do not use spaces. See #650.

I've added some tests based on @tats-u 's list above:
https://github.com/commonmark/cmark/blob/cjk/test/cjkemphasis.txt

Currently 26 of these fail, but I haven't yet had a chance to analyze why.

Internal ctest changing into directory: /Users/jgm/src/cmark/build
Test project /Users/jgm/src/cmark/build
    Start 10: cjkemphasistest_executable
1/1 Test #10: cjkemphasistest_executable .......***Failed    0.18 sec
Example 3 (lines 18-22) CJK emphasis
単語と**[単語と](word-and)**単語

--- expected HTML
+++ actual HTML
@@ -1 +1 @@
-<p>単語と<strong><a href="word-and">単語と</a></strong>単語</p>
+<p>単語と**<a href="word-and">単語と</a>**単語</p>

Example 4 (lines 25-29) CJK emphasis
**これは太字になりません。**ご注意ください。

--- expected HTML
+++ actual HTML
@@ -1 +1 @@
-<p><strong>これは太字になりません。</strong>ご注意ください。</p>
+<p>**これは太字になりません。**ご注意ください。</p>

Example 6 (lines 39-43) CJK emphasis
**[リンク](https://example.com)**も注意。(画像も同様)

--- expected HTML
+++ actual HTML
@@ -1 +1 @@
-<p><strong><a href="https://example.com">リンク</a></strong>も注意。(画像も同様)</p>
+<p>**<a href="https://example.com">リンク</a>**も注意。(画像も同様)</p>

Example 7 (lines 46-50) CJK emphasis
先頭の**`コード`も注意。**

--- expected HTML
+++ actual HTML
@@ -1 +1 @@
-<p>先頭の<strong><code>コード</code>も注意。</strong></p>
+<p>先頭の**<code>コード</code>も注意。**</p>

Example 8 (lines 53-57) CJK emphasis
**末尾の`コード`**も注意。

--- expected HTML
+++ actual HTML
@@ -1 +1 @@
-<p><strong>末尾の<code>コード</code><strong>も注意。</p>
+<p>**末尾の<code>コード</code>**も注意。</p>

Example 13 (lines 88-92) CJK emphasis
太郎は​**「こんにちわ」**​といった

--- expected HTML
+++ actual HTML
@@ -1 +1 @@
-<p>太郎は​<strong>「こんにちわ」<strong>​といった</p>
+<p>太郎は​**「こんにちわ」**​といった</p>

Example 14 (lines 95-99) CJK emphasis
太郎は​ **「こんにちわ」**​ といった

--- expected HTML
+++ actual HTML
@@ -1 +1 @@
-<p>太郎は​ <strong>「こんにちわ」<strong>​ といった</p>
+<p>太郎は​ **「こんにちわ」**​ といった</p>

Example 15 (lines 102-106) CJK emphasis
太郎は**「こんにちわ」**といった

--- expected HTML
+++ actual HTML
@@ -1 +1 @@
-<p>太郎は<strong>「こんにちわ」<strong>といった</p>
+<p>太郎は**「こんにちわ」**といった</p>

Example 16 (lines 109-113) CJK emphasis
太郎は**"こんにちわ"**といった

--- expected HTML
+++ actual HTML
@@ -1 +1 @@
-<p>太郎は<strong>&quot;こんにちわ&quot;<strong>といった</p>
+<p>太郎は**&quot;こんにちわ&quot;**といった</p>

Example 18 (lines 123-127) CJK emphasis
太郎は**「Hello」**といった

--- expected HTML
+++ actual HTML
@@ -1 +1 @@
-<p>太郎は<strong>「Hello」</strong>といった</p>
+<p>太郎は**「Hello」**といった</p>

Example 19 (lines 130-134) CJK emphasis
太郎は**"Hello"**といった

--- expected HTML
+++ actual HTML
@@ -1 +1 @@
-<p>太郎は<strong>&quot;Hello&quot;</strong>といった</p>
+<p>太郎は**&quot;Hello&quot;**といった</p>

Example 21 (lines 144-148) CJK emphasis
太郎は**「Oh my god」**といった

--- expected HTML
+++ actual HTML
@@ -1 +1 @@
-<p>太郎は<strong>「Oh my god」</strong>といった</p>
+<p>太郎は**「Oh my god」**といった</p>

Example 22 (lines 151-155) CJK emphasis
太郎は**"Oh my god"**といった

--- expected HTML
+++ actual HTML
@@ -1 +1 @@
-<p>太郎は<strong>&quot;Oh my god&quot;</strong>といった</p>
+<p>太郎は**&quot;Oh my god&quot;**といった</p>

Example 27 (lines 186-190) CJK emphasis
Go**「初心者」**を対象とした記事です。

--- expected HTML
+++ actual HTML
@@ -1 +1 @@
-<p>Go<strong>「初心者」</strong>を対象とした記事です。</p>
+<p>Go**「初心者」**を対象とした記事です。</p>

Example 28 (lines 193-197) CJK emphasis
**[リンク](https://example.com)**も注意。

--- expected HTML
+++ actual HTML
@@ -1 +1 @@
-<p><strong><a href="https://example.com">リンク</a></strong>も注意。</p>
+<p>**<a href="https://example.com">リンク</a>**も注意。</p>

Example 64 (lines 445-449) CJK emphasis
私は**⻲田太郎**と申します

--- expected HTML
+++ actual HTML
@@ -1 +1 @@
-<p>私は<strong>⻲田太郎</strong>と申します</p>
+<p>私は**⻲田太郎**と申します</p>

Example 70 (lines 487-491) CJK emphasis
太郎は**「こんにちわ」**といった。

--- expected HTML
+++ actual HTML
@@ -1 +1 @@
-<p>太郎は<strong>「こんにちわ」</strong>といった。</p>
+<p>太郎は**「こんにちわ」**といった。</p>

Example 73 (lines 508-512) CJK emphasis
ハイパーテキストコーヒーポット制御プロトコル**(HTCPCP)**

--- expected HTML
+++ actual HTML
@@ -1 +1 @@
-<p>ハイパーテキストコーヒーポット制御プロトコル<strong>(HTCPCP)</strong></p>
+<p>ハイパーテキストコーヒーポット制御プロトコル**(HTCPCP)**</p>

Example 76 (lines 529-533) CJK emphasis
㐧**(第の俗字)**

--- expected HTML
+++ actual HTML
@@ -1 +1 @@
-<p>㐧<strong>(第の俗字)</strong></p>
+<p>㐧**(第の俗字)**</p>

Example 83 (lines 578-582) CJK emphasis
葛󠄀**(こちらが正式表記)**城市

--- expected HTML
+++ actual HTML
@@ -1 +1 @@
-<p>葛󠄀<strong>(こちらが正式表記)</strong>城市</p>
+<p>葛󠄀**(こちらが正式表記)**城市</p>

Example 84 (lines 585-589) CJK emphasis
禰󠄀**(こちらが正式表記)**豆子

--- expected HTML
+++ actual HTML
@@ -1 +1 @@
-<p>禰󠄀<strong>(こちらが正式表記)</strong>豆子</p>
+<p>禰󠄀**(こちらが正式表記)**豆子</p>

Example 85 (lines 592-596) CJK emphasis
**(U+317DB)**

--- expected HTML
+++ actual HTML
@@ -1 +1 @@
-<p><strong>(U+317DB)</strong></p>
+<p>**(U+317DB)**</p>

Example 86 (lines 599-603) CJK emphasis
阿寒湖アイヌシアターイコㇿ**(Akanko Ainu Theater Ikor)**

--- expected HTML
+++ actual HTML
@@ -1 +1 @@
-<p>阿寒湖アイヌシアターイコㇿ<strong>(Akanko Ainu Theater Ikor)</strong></p>
+<p>阿寒湖アイヌシアターイコㇿ**(Akanko Ainu Theater Ikor)**</p>

Example 87 (lines 606-610) CJK emphasis
あ𛀙**(か)**よろし

--- expected HTML
+++ actual HTML
@@ -1 +1 @@
-<p>あ𛀙<strong>(か)</strong>よろし</p>
+<p>あ𛀙**(か)**よろし</p>

Example 88 (lines 613-617) CJK emphasis
𮹝**(simplified form of 龘 in China)**

--- expected HTML
+++ actual HTML
@@ -1 +1 @@
-<p>𮹝<strong>(simplified form of 龘 in China)</strong></p>
+<p>𮹝**(simplified form of 龘 in China)**</p>

Example 89 (lines 620-624) CJK emphasis
大塚︀**(or 大塚 / 大塚)**

--- expected HTML
+++ actual HTML
@@ -1 +1 @@
-<p>大塚︀<strong>(or 大塚 / 大塚)</strong></p>
+<p>大塚︀**(or 大塚 / 大塚)**</p>

63 passed, 26 failed, 0 errored, 0 skipped


0% tests passed, 1 tests failed out of 1

Total Test time (real) =   0.18 sec

The following tests FAILED:
	 10 - cjkemphasistest_executable (Failed)

Obviously this needs refinement. Feel free to look at this branch if you like.

@tats-u commented Aug 18, 2024

I've not been able to fix the above yet, but I found we've forgotten Bopomofo Extended (U+31A0–U+31BF).
Also there are 3 unused regions in Unicode:

  • U+2FE0–U+2FEF (Between Kangxi Radicals and Ideographic Description Characters)
  • U+2EBF0–U+2F7FF (Between CJK Unified Ideographs Extension F and CJK Compatibility Ideographs Supplement)
  • U+2FA20–U+2FFFF (Between CJK Compatibility Ideographs Supplement and CJK Unified Ideographs Extension G)

I believe libraries MAY (optionally, not mandatorily) treat these unused regions as CJK in order to optimize the conditional expression that determines whether a character is CJK, by reducing the number of product terms.

U+323B0–U+3FFFF is currently unassigned, but it is very likely that only CJK characters will be assigned there in the future, since its plane is named the Tertiary Ideographic Plane.
(U+31350–U+323AF: CJK Unified Ideographs Extension H)
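
As a sketch of this range-merging idea (a hand-picked subset for illustration, not the full list from any PR; a real implementation also needs kana, Hangul, the main U+4E00 block, and so on):

// Absorbing the unused gaps lets adjacent blocks collapse into single intervals.
const CJK_RANGES: ReadonlyArray<[number, number]> = [
  [0x2e80, 0x303f],   // Radicals Supplement through CJK Symbols and Punctuation, U+2FE0–U+2FEF absorbed
  [0x31a0, 0x31bf],   // Bopomofo Extended (the forgotten block above)
  [0x20000, 0x3ffff], // planes 2–3 as one interval, U+2EBF0–U+2F7FF and U+2FA20–U+2FFFF absorbed
];
const isCJK = (cp: number) => CJK_RANGES.some(([lo, hi]) => cp >= lo && cp <= hi);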

@jgm (Member) commented Aug 18, 2024

With @tats-u 's PR + some fixes to the expected test output, we are down to two failures:

Example 85 (lines 592-596) CJK emphasis
**(U+317DB)**

--- expected HTML
+++ actual HTML
@@ -1 +1 @@
-<p><strong>(U+317DB)</strong></p>
+<p>**(U+317DB)**</p>

Example 89 (lines 620-624) CJK emphasis
大塚︀**(or 大塚 / 大塚)**

--- expected HTML
+++ actual HTML
@@ -1 +1 @@
-<p>大塚︀<strong>(or 大塚 / 大塚)</strong></p>
+<p>大塚︀**(or 大塚 / 大塚)**</p>

87 passed, 2 failed, 0 errored, 0 skipped

EDIT:
In Example 89 the actual behavior is just like what we see with Latin text.

 % commonmark
a**(or b)**c
<p>a**(or b)**c</p>

So maybe that's okay as it is?

In Example 85 I'm not sure what is being tested.

@tats-u commented Aug 18, 2024

@jgm

So maybe that's okay as it is?

It's an SVS test, so it must be fixed.

塚︀ (before the left parenthesis) = 塚 (after "or") + U+FE00 (VS1)

@tats-u commented Aug 18, 2024

The last 塚 is intended to be 塚 (U+FA10) but was unintentionally normalized to 塚 (U+585A; the most common form).
Old form:

  • U+585A U+FE00 (not normalized)
  • U+FA10 (can be normalized to the current form)

Current form:

  • U+585A

@tats-u commented Aug 18, 2024

We have forgotten the SVS (U+FE0E; VS15), which prevents the preceding character from being rendered as an emoji.
It must be treated the same way as U+FE00–U+FE02.

U+303D: 〽 (without U+FE0E/U+FE0F) / 〽︎ (with U+FE0E) / 〽️ (with U+FE0F)
U+3297: ㊙ (without U+FE0E/U+FE0F) / ㊙︎ (with U+FE0E) / ㊙️ (with U+FE0F)

Note: not included in commonmark/cmark#556; will be submitted as another PR

@tats-u commented Aug 18, 2024

We've also ignored the Yijing Hexagram Symbols (易経記号 / 易經六十四卦符號, i.e. the 64 hexagram signs).
They are symbols, but they are not used outside East Asia.
Their block sits between CJK Unified Ideographs Extension A and CJK Unified Ideographs, so if we accept it as a CJK block, we can drop one more product term from (i.e. further simplify) the conditional expression that checks whether a character is CJK.

@jgm (Member) commented Aug 18, 2024

With @tats-u 's latest PR, all tests pass.

@jgm (Member) commented Aug 18, 2024

So, this minor change to the spec seems promising on the basis of our tests.

If either the character before or the character after is CJK
the delimiter run is counted as both left- and right-flanking.

(We'd have to define CJK as well.)
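
Expressed as code, the tweak is a thin layer over the ordinary classifier (this reuses the flanking and isCJK sketches from earlier comments; an illustration, not the cjk branch's actual code):

// If a CJK character is adjacent on either side, the run counts as both
// left- and right-flanking; otherwise fall back to the ordinary rules.
function flankingWithCJK(before: string, after: string) {
  const cjk = (ch: string) => ch !== "" && isCJK(ch.codePointAt(0)!);
  if (cjk(before) || cjk(after)) return { left: true, right: true };
  return flanking(before, after);
}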

@tats-u commented Aug 23, 2024

So, this minor change to the spec seems promising on the basis of our tests.

Agree.

The current implementation of cmark (without CJK support) seems to pass some of the tests in the cjk branch.
I think those should be removed because they're redundant.

@jgm (Member) commented Aug 23, 2024

You think the cjk_emphasis tests should be removed? Why?

(EDIT: To clarify: right now these are just tests in cmark. If we do change the spec, we'd need at least some examples in the spec to illustrate the CJK emphasis rules, and these would perhaps be a selection of the tests in cjk_emphasis. But we could still keep those tests around in cmark.)

@tats-u commented Aug 24, 2024

Not all. Just some redundant ones.

  • 太郎は​ **「こんにちわ」**​ といった
  • 太郎は\ **「こんにちわ」**\ といった
  • 先頭の**
  • も注意。**

I don't know what these tests assure.
They pass even when the letters and symbols in them are not CJK ones.

@jgm (Member) commented Aug 24, 2024

Sure - want to submit a PR for the branch to remove these?

@tats-u commented Aug 24, 2024

@jgm

commonmark/cmark#561

太郎は​ **「こんにちわ」**​ といった

I forgot about the existence of U+200B before the space, but there is also 太郎は[[U+200B]]**「こんにちわ」**[[U+200B]]といった.

@tats-u commented Aug 25, 2024

I submitted a PR to the cjk branch (see #650 (comment)) of micromark.

micromark/micromark#179

I think it's easier to try out than cmark.

@tats-u commented Sep 23, 2024

I found Yijing symbols outside the Yijing Hexagram Symbols block:
U+2630 (☰)–U+2637 (☷) (the Yijing trigram symbols) in the Miscellaneous Symbols block.
To keep consistency with the trigram symbols, I want to remove the hexagram symbols from the CJK list.

@tats-u commented Sep 23, 2024

Also, if the character after * is one of U+3030 (〰︎), U+303D (〽︎), U+3297 (㊗︎), or U+3299 (㊙︎), we should look at the character after that one to check whether it is an emoji or a CJK text symbol.

VS-16 (U+FE0F) is required for it to be recognized as an emoji there.

3030 FE0F     ; Basic_Emoji                  ; wavy dash                                                      # E0.6   [1] (〰️)
303D FE0F     ; Basic_Emoji                  ; part alternation mark                                          # E0.6   [1] (〽️)
3297 FE0F     ; Basic_Emoji                  ; Japanese “congratulations” button                              # E0.6   [1] (㊗️)
3299 FE0F     ; Basic_Emoji                  ; Japanese “secret” button                                       # E0.6   [1] (㊙️)

https://unicode.org/Public/emoji/16.0/emoji-sequences.txt
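
A sketch of that lookahead (a hypothetical helper written against the quoted emoji-sequences data, not code from any of the PRs):

// These four code points are CJK text symbols by default and only take
// emoji presentation when followed by VS-16 (U+FE0F).
const AMBIGUOUS = new Set([0x3030, 0x303d, 0x3297, 0x3299]);
const VS16 = 0xfe0f;

// cps = the code points following the "*"; true when the first one should
// count as CJK for flanking purposes (false means: decide by other rules).
function ambiguousSymbolIsCJK(cps: number[]): boolean {
  return AMBIGUOUS.has(cps[0]) && cps[1] !== VS16;
}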

@jgm (Member) commented Sep 23, 2024

To keep consistency with the trigram symbols, I want to remove the hexagram symbols from the CJK list.

Want to create a PR for this?

if the character after * is one of U+3030 (〰︎), U+303D (〽︎), U+3297 (㊗︎), or U+3299 (㊙︎), we should look at the character after that one to check whether it is an emoji or a CJK text symbol

I'd love to avoid making the rules too complex. This suggestion goes in that direction.

@tats-u commented Sep 23, 2024

Want to create a PR for this?

Sure, reverting is sufficient.

I'd love to avoid making the rules too complex. This suggestion goes in that direction.

It is very unlikely that we would have to look more than two code points past * to determine whether a character is an emoji. Unicode and emoji have simply become too complex; I resent that emoji were folded into Unicode's normal text symbols.

e.g. 1️⃣ (U+0031 (ASCII digit 1) + U+FE0F (VS-16) + U+20E3 (combining enclosing keycap)):
This emoji is very irregular (it starts with an ASCII digit), but you still only have to look as far as the second code point to check whether it is an emoji.

@tats-u commented Sep 23, 2024

commonmark/cmark#564

@tats-u commented Oct 5, 2024

I found a spec bug caused by the combination of a keycap emoji and * in Markdown:

まず、*️⃣を押してください。番号を入力したら、もう一度*️⃣を押してください。
<p>まず、<em>️⃣を押してください。番号を入力したら、もう一度</em>️⃣を押してください。</p>

https://spec.commonmark.org/dingus/?text=%E3%81%BE%E3%81%9A%E3%80%81*%EF%B8%8F%E2%83%A3%E3%82%92%E6%8A%BC%E3%81%97%E3%81%A6%E3%81%8F%E3%81%A0%E3%81%95%E3%81%84%E3%80%82%E7%95%AA%E5%8F%B7%E3%82%92%E5%85%A5%E5%8A%9B%E3%81%97%E3%81%9F%E3%82%89%E3%80%81%E3%82%82%E3%81%86%E4%B8%80%E5%BA%A6*%EF%B8%8F%E2%83%A3%E3%82%92%E6%8A%BC%E3%81%97%E3%81%A6%E3%81%8F%E3%81%A0%E3%81%95%E3%81%84%E3%80%82

002A FE0F 20E3                                         ; fully-qualified     # *️⃣ E2.0 keycap: *
002A 20E3                                              ; unqualified         # *⃣ E2.0 keycap: *

https://unicode.org/Public/emoji/latest/emoji-test.txt
https://www.unicode.org/emoji/charts/emoji-style.txt

Of course we can hotfix this by adding \ before *️⃣, but it's a pitfall for non-experts.
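
A sketch of detecting that sequence, so a parser could skip the leading * as a delimiter candidate (my illustration based on the emoji-test lines quoted above, not an existing parser's code):

// "*" keycap: U+002A U+FE0F U+20E3 (fully-qualified) or U+002A U+20E3
// (unqualified). All three are BMP code points, so plain indexing is safe.
const KEYCAP = 0x20e3;
const VS16_UNIT = 0xfe0f;
function isKeycapAsterisk(text: string, i: number): boolean {
  if (text.charCodeAt(i) !== 0x2a) return false; // not "*"
  const next = text.charCodeAt(i + 1);
  return next === KEYCAP || (next === VS16_UNIT && text.charCodeAt(i + 2) === KEYCAP);
}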

@jgm (Member) commented Oct 5, 2024

@tats-u because this doesn't have to do with CJK specifically, perhaps you should raise it in a separate issue.

@wooorm (Contributor) commented Oct 7, 2024

It’s #646

@tats-u commented Oct 7, 2024

because this doesn't have to do with CJK specifically

Both the * keycap and the CJK emoji symbols require a lookahead of up to two characters after * to convince us that they are not a keycap or CJK text symbol.

  • * ㊙︎ VS16 (*㊙️)
  • * VS16 Keycap (*️⃣)

It’s #646

I see. We might be able to fix both at the same time.

@tats-u commented Oct 13, 2024

Looks like the keycap is more difficult, because of **foo**️⃣.
The 3rd * (just before *️⃣) should be right-flanking only, but the last (4th) * (the first code point of *️⃣) should be neither-flanking.
A VS-16 two characters after a * does not break the concept of the flanking delimiter run, unlike the keycap. (Update: can we simply exclude such a * from the delimiter run?)
Either way, we need the ability to peek at the next two characters to fix either of these problems.
