
Commit

Merge pull request kubernetes#163 from pengqinglan/fix-some-markdown-formats

fix some markdown formats
chrislovecnm authored Dec 20, 2016
2 parents 1f17402 + f7bb640 commit f55aa71
Showing 20 changed files with 111 additions and 137 deletions.
88 changes: 30 additions & 58 deletions disk-accounting.md
@@ -181,7 +181,7 @@ Another option is to expose disk usage of all images together as a first-class f

##### Overlayfs and Aufs

-####### `du`
+###### `du`

We can list all the image layer specific directories, excluding container directories, and run `du` on each of those directories.
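
For instance, a minimal sketch of this approach, assuming aufs layer data lives under `/var/lib/docker/aufs/diff` (the path and the filtering step are illustrative):

```shell
# Apparent disk usage per image layer; container-specific layer ids would be
# excluded by cross-checking against the container list.
for layer in /var/lib/docker/aufs/diff/*; do
  du -sk "$layer"
done
```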

@@ -200,7 +200,7 @@ We can list all the image layer specific direct
* Can block container deletion by keeping file descriptors open.


-####### Linux gid based Disk Quota
+###### Linux gid based Disk Quota

The [Disk quota](https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/ch-disk-quotas.html) feature provided by the Linux kernel can be used to track the usage of image layers. Ideally, we need `project` support for disk quota, which lets us track usage of directory hierarchies using `project ids`. Unfortunately, that feature is only available for xfs filesystems. Since most of our distributions use `ext4` by default, we will have to use either `uid` or `gid` based quota tracking.
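
A rough sketch of the gid-based scheme, assuming a reserved tracking group id 9000 and an illustrative layer path:

```shell
# Charge an image layer directory to a reserved tracking group.
chgrp -R 9000 /var/lib/docker/aufs/diff/<layer-id>
chmod g+s /var/lib/docker/aufs/diff/<layer-id>   # new files inherit gid 9000
quota -g 9000 -v                                 # blocks charged to the group
```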

@@ -417,15 +417,12 @@ Tested on Debian jessie

8. Check usage using quota and group ‘x’

-```shell
-$ quota -g x -v
-
-Disk quotas for group x (gid 9000):
-
-Filesystem **blocks** quota limit grace files quota limit grace
-
-/dev/sda1 **10248** 0 0 3 0 0
-```
+```shell
+$ quota -g x -v
+Disk quotas for group x (gid 9000):
+Filesystem blocks quota limit grace files quota limit grace
+/dev/sda1 10248 0 0 3 0 0
+```

Using the same workflow, we can add new sticky group IDs to emptyDir volumes and account for their usage against pods.
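
Applied to an emptyDir volume, the same sketch might look like this (pod UID, volume name, and per-pod gid are illustrative):

```shell
voldir=/var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~empty-dir/scratch
chgrp 9001 "$voldir"
chmod g+s "$voldir"   # files written into the volume inherit gid 9001
quota -g 9001 -v      # usage attributable to the pod
```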

@@ -484,29 +481,24 @@ Overlayfs works similar to Aufs. The path to the writable directory for containe
* Check quota before and after running the container.

-```shell
-$ quota -g x -v
-
-Disk quotas for group x (gid 9000):
-
-Filesystem blocks quota limit grace files quota limit grace
-
-/dev/sda1 48 0 0 19 0 0
-```
+```shell
+$ quota -g x -v
+Disk quotas for group x (gid 9000):
+Filesystem blocks quota limit grace files quota limit grace
+/dev/sda1 48 0 0 19 0 0
+```

* Start the docker container

* `docker start b8`

-* ```shell
-quota -g x -v
-Disk quotas for group x (gid 9000):
-Filesystem **blocks** quota limit grace files quota limit grace
-
-/dev/sda1 **10288** 0 0 20 0 0
-```
+Notice the **blocks** has changed
+```sh
+$ quota -g x -v
+Disk quotas for group x (gid 9000):
+Filesystem blocks quota limit grace files quota limit grace
+/dev/sda1 10288 0 0 20 0 0
+```

##### Device mapper

@@ -518,70 +510,50 @@ These devices can be loopback or real storage devices.

The base device has a maximum storage capacity. This means that the total storage space occupied by images and containers cannot exceed this capacity.

By default, all images and containers are created from an initial filesystem with a 10GB limit.

A separate filesystem is created for each container as part of start (not create).

It is possible to [resize](https://jpetazzo.github.io/2014/01/29/docker-device-mapper-resize/) the container filesystem.
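
A condensed sketch of the flow from the linked post (device name, pool device, and device id are illustrative):

```shell
dev=docker-8:1-268480-<container-id>
dmsetup table "$dev"                    # e.g. "0 20971520 thin 254:0 <dev-id>" (10GB in 512B sectors)
echo "0 41943040 thin 254:0 <dev-id>" | dmsetup load "$dev"   # load a 20GB table
dmsetup resume "$dev"
resize2fs /dev/mapper/"$dev"            # grow the filesystem to match
```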

-For the purposes of image space tracking, we can
-
-####Testing notes:
-
-* ```shell
+For the purposes of image space tracking, we can
+
+#### Testing notes:
+Notice the **Pool Name**
+```shell
$ docker info
...
Storage Driver: devicemapper
-Pool Name: **docker-8:1-268480-pool**
+Pool Name: docker-8:1-268480-pool
Pool Blocksize: 65.54 kB
Backing Filesystem: extfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 2.059 GB
Data Space Total: 107.4 GB
Data Space Available: 48.45 GB
Metadata Space Used: 1.806 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.146 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.99 (2015-06-20)
```

```shell
-$ dmsetup table docker-8\:1-268480-pool
-0 209715200 thin-pool 7:1 7:0 **128** 32768 1 skip_block_zeroing
+$ dmsetup table docker-8\:1-268480-pool
+0 209715200 thin-pool 7:1 7:0 128 32768 1 skip_block_zeroing
```

128 is the data block size

Usage reported by the kernel for the primary block device:

```shell
-$ dmsetup status docker-8\:1-268480-pool
-0 209715200 thin-pool 37 441/524288 **31424/1638400** - rw discard_passdown queue_if_no_space -
+$ dmsetup status docker-8\:1-268480-pool
+0 209715200 thin-pool 37 441/524288 31424/1638400 - rw discard_passdown queue_if_no_space -
```

Used/Total data blocks - 31424/1638400
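
As a cross-check (arithmetic only, using the numbers above): each data block is 128 sectors of 512 B = 64 KiB, so the used/total block counts convert to bytes as:

```shell
echo $((31424 * 64 * 1024))     # 2059403264 bytes ~= 2.059 GB, matching "Data Space Used"
echo $((1638400 * 64 * 1024))   # 107374182400 bytes ~= 107.4 GB, matching "Data Space Total"
```
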
22 changes: 11 additions & 11 deletions downward_api_resources_limits_requests.md
@@ -141,7 +141,7 @@ Volumes are pod scoped, so a selector must be specified with a container name.
Full JSON path selectors will use the existing `type ObjectFieldSelector`
to extend the current implementation for resource requests and limits.

-```
+```go
// ObjectFieldSelector selects an APIVersioned field of an object.
type ObjectFieldSelector struct {
APIVersion string `json:"apiVersion"`
@@ -154,7 +154,7 @@ type ObjectFieldSelector struct {

These examples show how to use full selectors with environment variables and the volume plugin.

-```
+```yaml
apiVersion: v1
kind: Pod
metadata:
@@ -178,7 +178,7 @@ spec:
fieldPath: spec.containers[?(@.name=="test-container")].resources.limits.cpu
```
-```
+```yaml
apiVersion: v1
kind: Pod
metadata:
@@ -221,7 +221,7 @@ relative to the container spec. These will be implemented by introducing a
`ContainerSpecFieldSelector` (json: `containerSpecFieldRef`) to extend the current
implementation for `type DownwardAPIVolumeFile struct` and `type EnvVarSource struct`.

-```
+```go
// ContainerSpecFieldSelector selects an APIVersioned field of an object.
type ContainerSpecFieldSelector struct {
APIVersion string `json:"apiVersion"`
@@ -300,7 +300,7 @@ Volumes are pod scoped, the container name must be specified as part of

These examples show how to use partial selectors with environment variables and the volume plugin.

-```
+```yaml
apiVersion: v1
kind: Pod
metadata:
@@ -337,7 +337,7 @@ spec:
resources:
requests:
memory: "64Mi"
cpu: "250m"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "500m"
@@ -388,7 +388,7 @@ For example, if requests.cpu is `250m` (250 millicores) and the divisor by defau
exposed value will be `1` core. This is because 250 millicores converted to cores is 0.25, and
the ceiling of 0.25 is 1.
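
For illustration, inside the container the rounded values could be consumed directly (the variable names follow the examples below):

```shell
echo "cpu limit in cores: $CPU_LIMIT"     # 1 for a 250m limit with the default divisor
echo "memory limit in Mi: $MEMORY_LIMIT"  # 128 for a 128Mi limit with divisor 1Mi
```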

-```
+```go
type ResourceFieldSelector struct {
// Container name
ContainerName string `json:"containerName,omitempty"`
@@ -462,7 +462,7 @@ Volumes are pod scoped, the container name must be specified as part of

These examples show how to use the magic keys approach with environment variables and the volume plugin.

-```
+```yaml
apiVersion: v1
kind: Pod
metadata:
@@ -493,7 +493,7 @@ spec:
In the above example, the exposed values of CPU_LIMIT and MEMORY_LIMIT will be 1 (in cores) and 128 (in Mi), respectively.
-```
+```yaml
apiVersion: v1
kind: Pod
metadata:
@@ -578,7 +578,7 @@ in a shell script, and then export `JAVA_OPTS` (assuming your container image su
and GOMAXPROCS environment variables inside the container image. The spec file for the
application pod could look like:

-```
+```yaml
apiVersion: v1
kind: Pod
metadata:
@@ -609,7 +609,7 @@ spec:
Note that the value of divisor by default is `1`. Now inside the container,
the HEAP_SIZE (in bytes) and GOMAXPROCS (in cores) could be exported as:

-```
+```sh
export JAVA_OPTS="$JAVA_OPTS -Xmx:$(HEAP_SIZE)"
and
8 changes: 4 additions & 4 deletions enhance-pluggable-policy.md
@@ -141,7 +141,7 @@ accepts creates. The caller POSTs a SubjectAccessReview to this URL and he gets
a SubjectAccessReviewResponse back. Here is an example of a call and its
corresponding return:

-```
+```json
// input
{
"kind": "SubjectAccessReview",
@@ -172,7 +172,7 @@ only accepts creates. The caller POSTs a PersonalSubjectAccessReview to this URL
and he gets a SubjectAccessReviewResponse back. Here is an example of a call and
its corresponding return:

-```
+```json
// input
{
"kind": "PersonalSubjectAccessReview",
@@ -202,7 +202,7 @@ accepts creates. The caller POSTs a LocalSubjectAccessReview to this URL and he
gets a LocalSubjectAccessReviewResponse back. Here is an example of a call and
its corresponding return:

-```
+```json
// input
{
"kind": "LocalSubjectAccessReview",
@@ -353,7 +353,7 @@ accepts creates. The caller POSTs a ResourceAccessReview to this URL and he gets
a ResourceAccessReviewResponse back. Here is an example of a call and its
corresponding return:

-```
+```json
// input
{
"kind": "ResourceAccessReview",
4 changes: 2 additions & 2 deletions expansion.md
@@ -275,7 +275,7 @@ func (r *objectRecorderImpl) Event(reason, message string) {
}

func ObjectEventRecorderFor(object runtime.Object, recorder EventRecorder) ObjectEventRecorder {
return &objectRecorderImpl{object, recorder}
}
```

@@ -367,7 +367,7 @@ No other variables are defined.
| `"--$($($($($--"` | `"--$($($($($--"` |
| `"$($($($($--foo$("` | `"$($($($($--foo$("` |
| `"foo0--$($($($("` | `"foo0--$($($($("` |
| `"$(foo$$var)` | `$(foo$$var)` |
| `"$(foo$$var)"` | `"$(foo$$var)"` |

#### In a pod: building a URL

24 changes: 12 additions & 12 deletions external-lb-source-ip-preservation.md
@@ -194,27 +194,27 @@ The cases we should test are:

1. Core Functionality Tests

1.1 Source IP Preservation

Test the main intent of this change, source IP preservation: use the all-in-one network tests container
with new functionality that responds with the client IP. Verify the container sees the external IP
of the test client (see the sketch after this list).

1.2 Health Check responses

Test cases use pods explicitly pinned to nodes, with pods deleted and added to nodes randomly. Validate that
healthchecks succeed and fail on the expected nodes as endpoints move around. Gather LB response times to
endpoint changes (time from when a pod declares ready until the cloud LB declares the node healthy, and vice versa).

2. Inter-Operability Tests

Validate that internal cluster communications are still possible from nodes without local endpoints. This change
is only for externally sourced traffic.

3. Backward Compatibility Tests

Validate that old and new functionality can simultaneously exist in a single cluster. Create services with and without
the annotation, and validate datapath correctness.
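
A hand-check for the source IP case (1.1) might look like this; the load-balancer IP and echo endpoint are illustrative:

```shell
# From a host outside the cluster; the test container echoes the caller's address.
curl http://<external-lb-ip>:<port>/clientip
```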

# Beta Design

Expand Down