Remove the cross-domain phrase from README.md
The part about `Cross-domain Ajax and Flash` from the `README.md` file
isn't accurate, as by default:

 * the `crossdomain.xml` file doesn't grant a web client — such as Adobe
   Flash Player, Adobe Reader, etc. — permission to handle data across
   multiple domains

 * the Apache server configs do not allow cross-origin access to all
   resources, unless the user enables that behavior (both defaults are
   sketched below)
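The two defaults in question look roughly like the following sketches. These are illustrative assumptions about the shipped files, not their verbatim contents. A restrictive `crossdomain.xml` denies web clients any cross-domain permissions via its `site-control` policy:

```xml
<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
    <!-- Assumed default: "none" grants web clients (Adobe Flash
         Player, Adobe Reader, etc.) no permission to handle data
         across domains. -->
    <site-control permitted-cross-domain-policies="none"/>
</cross-domain-policy>
```

And in the Apache configs, cross-origin access is opt-in: a directive along these lines ships commented out, and the user has to uncomment it (and ideally scope `"*"` down to a specific origin) to enable it:

```apache
# Assumed shape of the opt-in; it does nothing until the user
# uncomments it.
#
# <IfModule mod_headers.c>
#     Header set Access-Control-Allow-Origin "*"
# </IfModule>
```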
alrra committed Sep 20, 2014
1 parent f134545 commit 131697f
Showing 3 changed files with 2 additions and 3 deletions.
1 change: 0 additions & 1 deletion README.md
@@ -45,7 +45,6 @@ Choose one of the following options:
 * An optimized Google Analytics snippet.
 * Apache server caching, compression, and other configuration defaults for
   Grade-A performance.
-* Cross-domain Ajax and Flash.
 * "Delete-key friendly." Easy to strip out parts you don't need.
 * Extensive inline and accompanying documentation.

2 changes: 1 addition & 1 deletion dist/doc/misc.md
@@ -166,7 +166,7 @@ If you want to disallow certain pages you will need to specify the path in a
 `Disallow` directive (e.g.: `Disallow: /path`) or, if you want to disallow
 crawling of all content, use `Disallow: /`.

-The '/robots.txt' file is not intended for access control, so don't try to
+The `/robots.txt` file is not intended for access control, so don't try to
 use it as such. Think of it as a "No Entry" sign, rather than a locked door.
 URLs disallowed by the `robots.txt` file might still be indexed without being
 crawled, and the content from within the `robots.txt` file can be viewed by
2 changes: 1 addition & 1 deletion src/doc/misc.md
@@ -166,7 +166,7 @@ If you want to disallow certain pages you will need to specify the path in a
 `Disallow` directive (e.g.: `Disallow: /path`) or, if you want to disallow
 crawling of all content, use `Disallow: /`.

-The '/robots.txt' file is not intended for access control, so don't try to
+The `/robots.txt` file is not intended for access control, so don't try to
 use it as such. Think of it as a "No Entry" sign, rather than a locked door.
 URLs disallowed by the `robots.txt` file might still be indexed without being
 crawled, and the content from within the `robots.txt` file can be viewed by
