
Commit

spell fixes
murali-reddy committed Nov 1, 2017
1 parent 74e0ddc commit ff3f2ec
Showing 5 changed files with 14 additions and 14 deletions.
2 changes: 1 addition & 1 deletion index.html
Original file line number Diff line number Diff line change
@@ -109,7 +109,7 @@ <h2 class="post-title">Kube-router: Highly-available and scalable ingress for ba

<div class="post-entry">

Over the years many webscale companies have desinged massivley scalable and highly available services using loadbalancer solutions based on commodity Linux servers. Traditional middleboxes are completley replaced with software loadbalancers. In this blog we will see common building blocks across Microsoft&rsquo;s Ananta, Google&rsquo;s Maglev, Facebook&rsquo;s Shiv, Github GLB and Yahoo L3 DSR. We will see how Kube-router has implemented some of these building blocks for Kuberentes, and how you can leverage them to build a highly-available and scalable ingress in bare-metal deployments.
Over the years many webscale companies have designed massively scalable and highly available services using loadbalancer solutions based on commodity Linux servers. Traditional middleboxes are completely replaced with software loadbalancers. In this blog we will see common building blocks across Microsoft&rsquo;s Ananta, Google&rsquo;s Maglev, Facebook&rsquo;s Shiv, Github GLB and Yahoo L3 DSR. We will see how Kube-router has implemented some of these building blocks for Kubernetes, and how you can leverage them to build a highly-available and scalable ingress in bare-metal deployments.
<a href="https://cloudnativelabs.github.io/post/2017-11-01-kube-high-available-ingress/" class="post-read-more">[Read More]</a>

</div>
2 changes: 1 addition & 1 deletion index.xml
@@ -18,7 +18,7 @@
<pubDate>Wed, 01 Nov 2017 00:00:00 +0000</pubDate>
<author>[email protected] (Cloudnative Labs)</author>
<guid>https://cloudnativelabs.github.io/post/2017-11-01-kube-high-available-ingress/</guid>
<description>Over the years many webscale companies have desinged massivley scalable and highly available services using loadbalancer solutions based on commodity Linux servers. Traditional middleboxes are completley replaced with software loadbalancers. In this blog we will see common building blocks across Microsoft&amp;rsquo;s Ananta, Google&amp;rsquo;s Maglev, Facebook&amp;rsquo;s Shiv, Github GLB and Yahoo L3 DSR. We will see how Kube-router has implemented some of these building blocks for Kuberentes, and how you can leverage them to build a highly-available and scalable ingress in bare-metal deployments.</description>
<description>Over the years many webscale companies have designed massively scalable and highly available services using loadbalancer solutions based on commodity Linux servers. Traditional middleboxes are completely replaced with software loadbalancers. In this blog we will see common building blocks across Microsoft&amp;rsquo;s Ananta, Google&amp;rsquo;s Maglev, Facebook&amp;rsquo;s Shiv, Github GLB and Yahoo L3 DSR. We will see how Kube-router has implemented some of these building blocks for Kubernetes, and how you can leverage them to build a highly-available and scalable ingress in bare-metal deployments.</description>
</item>

<item>
20 changes: 10 additions & 10 deletions post/2017-11-01-kube-high-available-ingress/index.html
@@ -8,9 +8,9 @@
<title>Kube-router: Highly-available and scalable ingress for baremetal Kubernetes clusters</title>
<meta property="og:title" content="Kube-router: Highly-available and scalable ingress for baremetal Kubernetes clusters" />
<meta name="twitter:title" content="Kube-router: Highly-available and scalable ingress for baremetal …" />
<meta name="description" content="Over the years many webscale companies have desinged massivley scalable and highly available services using loadbalancer solutions based on commodity Linux servers. Traditional middleboxes are completley replaced with software loadbalancers. In this blog we will see common building blocks across Microsoft&rsquo;s Ananta, Google&rsquo;s Maglev, Facebook&rsquo;s Shiv, Github GLB and Yahoo L3 DSR. We will see how Kube-router has implemented some of these building blocks for Kuberentes, and how you can leverage them to build a highly-available and scalable ingress in bare-metal deployments.">
<meta property="og:description" content="Over the years many webscale companies have desinged massivley scalable and highly available services using loadbalancer solutions based on commodity Linux servers. Traditional middleboxes are completley replaced with software loadbalancers. In this blog we will see common building blocks across Microsoft&rsquo;s Ananta, Google&rsquo;s Maglev, Facebook&rsquo;s Shiv, Github GLB and Yahoo L3 DSR. We will see how Kube-router has implemented some of these building blocks for Kuberentes, and how you can leverage them to build a highly-available and scalable ingress in bare-metal deployments.">
<meta name="twitter:description" content="Over the years many webscale companies have desinged massivley scalable and highly available services using loadbalancer solutions based on commodity Linux servers. Traditional middleboxes are …">
<meta name="description" content="Over the years many webscale companies have designed massively scalable and highly available services using loadbalancer solutions based on commodity Linux servers. Traditional middleboxes are completely replaced with software loadbalancers. In this blog we will see common building blocks across Microsoft&rsquo;s Ananta, Google&rsquo;s Maglev, Facebook&rsquo;s Shiv, Github GLB and Yahoo L3 DSR. We will see how Kube-router has implemented some of these building blocks for Kubernetes, and how you can leverage them to build a highly-available and scalable ingress in bare-metal deployments.">
<meta property="og:description" content="Over the years many webscale companies have designed massively scalable and highly available services using loadbalancer solutions based on commodity Linux servers. Traditional middleboxes are completely replaced with software loadbalancers. In this blog we will see common building blocks across Microsoft&rsquo;s Ananta, Google&rsquo;s Maglev, Facebook&rsquo;s Shiv, Github GLB and Yahoo L3 DSR. We will see how Kube-router has implemented some of these building blocks for Kubernetes, and how you can leverage them to build a highly-available and scalable ingress in bare-metal deployments.">
<meta name="twitter:description" content="Over the years many webscale companies have designed massively scalable and highly available services using loadbalancer solutions based on commodity Linux servers. Traditional middleboxes are …">
<meta name="author" content="Cloudnative Labs"/>
<meta name="twitter:card" content="summary" />
<meta name="twitter:site" content="@cloudnativelabs" />
@@ -124,7 +124,7 @@ <h1>Kube-router: Highly-available and scalable ingress for baremetal Kubernetes
<article role="main" class="blog-post">


<p>Over the years many webscale companies have desinged massivley scalable and highly available services using loadbalancer solutions based on commodity Linux servers. Traditional middleboxes are completley replaced with software loadbalancers. In this blog we will see common building blocks across Microsoft&rsquo;s <a href="http://conferences.sigcomm.org/sigcomm/2013/papers/sigcomm/p207.pdf">Ananta</a>,
<p>Over the years many webscale companies have designed massively scalable and highly available services using loadbalancer solutions based on commodity Linux servers. Traditional middleboxes are completely replaced with software loadbalancers. In this blog we will see common building blocks across Microsoft&rsquo;s <a href="http://conferences.sigcomm.org/sigcomm/2013/papers/sigcomm/p207.pdf">Ananta</a>,
Google&rsquo;s <a href="https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/44824.pdf">Maglev</a>,
Facebook&rsquo;s <a href="https://www.usenix.org/conference/srecon15europe/program/presentation/shuff">Shiv</a>, Github <a href="https://githubengineering.com/introducing-glb/">GLB</a> and Yahoo <a href="https://nanog.org/meetings/nanog51/presentations/Monday/NANOG51.Talk45.nanog51-Schaumann.pdf">L3 DSR</a>. We will see how Kube-router has implemented some of these building blocks for Kubernetes,
and how you can leverage them to build a highly-available and scalable ingress in bare-metal deployments.</p>
@@ -135,32 +135,32 @@ <h2 id="network-desgin">Network Design</h2>

<p><img src="/img/webscale-ingress.png" alt="Network requirements" /></p>

<p>Below are some of the standard mechanisams used.</p>
<p>Below are some of the standard mechanisms used.</p>

<h3 id="use-of-bgp-ecmp">Use of BGP + ECMP</h3>

<p>You have second tier fleet of L4 directors, each of which is a BGP speaker and advertising service VIP to the BGP router. Routers has equal cost mutliple paths to the VIP through the L4 directors.
<p>You have a second-tier fleet of L4 directors, each of which is a BGP speaker advertising the service VIP to the BGP router. Routers have equal-cost multiple paths to the VIP through the L4 directors.
Running the BGP protocol on the L4 director provides automatic failure detection and recovery. If an L4 director fails or shuts down unexpectedly, the router detects this failure via the BGP
protocol and automatically stops sending traffic to that L4 director. Similarly, when the L4 director comes up, it can start announcing the routes and the router will start forwarding traffic to it.</p>
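The announce/withdraw behavior described above can be sketched as a toy model (all names and addresses here are hypothetical; a real deployment would run a BGP daemon on each director and enable ECMP on the router):

```python
# Toy model of BGP + ECMP failover: the router keeps one next-hop per
# L4 director currently announcing the VIP; a route withdrawal (or BGP
# session loss) immediately removes that director from the ECMP set.

class Router:
    def __init__(self):
        self.ecmp_paths = {}  # vip -> set of next-hop director IPs

    def announce(self, vip, director):
        self.ecmp_paths.setdefault(vip, set()).add(director)

    def withdraw(self, vip, director):
        self.ecmp_paths.get(vip, set()).discard(director)

    def next_hops(self, vip):
        return sorted(self.ecmp_paths.get(vip, set()))

router = Router()
for d in ("10.0.0.1", "10.0.0.2", "10.0.0.3"):
    router.announce("192.0.2.10", d)       # three directors announce the VIP

router.withdraw("192.0.2.10", "10.0.0.2")  # a director fails; route withdrawn
print(router.next_hops("192.0.2.10"))      # traffic spreads over the survivors
```

When the failed director recovers and re-announces the route, `announce()` puts it back into the ECMP set and the router resumes forwarding to it.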

<h3 id="l3-l4-network-load-balancing">L3/L4 network load balancing</h3>

<p>Since the router has multiple paths to the advertised VIP, it can perform ECMP load balancing. When the router does L3 balancing, it distributes the traffic across the tier-2 L4 directors.
The router can also do hash-based load balancing (on the packet source and destination IP, port, etc.), where traffic belonging to the same flow always gets forwarded to the same L4 director. Even if there are
more than one router (for redundency) even then traffic can get forwarded to same L4 director by both the routers if consistent hashing is used.</p>
more than one router (for redundancy), traffic can still be forwarded to the same L4 director by both routers if consistent hashing is used.</p>
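A minimal sketch of this flow-stickiness property, using rendezvous (highest-random-weight) hashing as the consistent-hashing scheme (the flow tuples and director IPs are illustrative):

```python
# Rendezvous (HRW) hashing over the flow 5-tuple: every router that
# shares the same director list computes the same score per
# (flow, director) pair, so a given flow always lands on the same L4
# director -- and removing one director only remaps the flows that
# were mapped to it.
import hashlib

def hrw_pick(flow, directors):
    def score(d):
        return hashlib.sha256((repr(flow) + d).encode()).digest()
    return max(directors, key=score)

directors = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
flows = [("198.51.100.%d" % i, 40000 + i, "192.0.2.10", 80, "tcp")
         for i in range(100)]

before = {f: hrw_pick(f, directors) for f in flows}
directors.remove("10.0.0.2")                  # one director fails
after = {f: hrw_pick(f, directors) for f in flows}

# Only flows that were on the failed director get remapped:
moved = [f for f in flows if before[f] != after[f]]
assert all(before[f] == "10.0.0.2" for f in moved)
```

With a plain `hash(flow) % len(directors)` scheme, by contrast, removing one director would remap roughly two thirds of all flows, breaking their in-flight connections.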

<h3 id="l4-director">L4 director</h3>

<p>An L4 director does not proxy the connection but simply forwards the packets to a selected endpoint, so the L4 director is stateless. Directors can shard traffic using consistent hashing so that each L4 director selects the same endpoint for a particular flow; even if an L4 director goes down, traffic still ends up at the same endpoint. Linux&rsquo;s LVS/IPVS is commonly used as an L4 director.</p>
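How stateless directors can all agree on the endpoint for a flow is illustrated by a Maglev-style lookup table, sketched minimally below (table size, backend names, and the hash salts are illustrative simplifications of the scheme in the Maglev paper cited above):

```python
# Minimal Maglev-style consistent hashing: each backend fills a fixed-
# size lookup table via its own permutation. Any stateless L4 director
# that builds the table from the same backend list maps a given flow
# to the same endpoint, without sharing any per-connection state.
import hashlib

M = 13  # lookup table size, prime (production tables are much larger)

def _h(s, salt):
    return int.from_bytes(
        hashlib.sha256((salt + s).encode()).digest()[:8], "big")

def maglev_table(backends):
    # Each backend gets a full permutation of table slots.
    perms = {}
    for b in backends:
        offset, skip = _h(b, "o") % M, _h(b, "s") % (M - 1) + 1
        perms[b] = [(offset + j * skip) % M for j in range(M)]
    table = [None] * M
    next_idx = {b: 0 for b in backends}
    filled = 0
    while filled < M:
        for b in backends:           # round-robin: near-equal shares
            while True:
                slot = perms[b][next_idx[b]]
                next_idx[b] += 1
                if table[slot] is None:
                    table[slot] = b
                    filled += 1
                    break
            if filled == M:
                break
    return table

backends = ["pod-a", "pod-b", "pod-c"]
table = maglev_table(backends)
flow = ("198.51.100.7", 40000, "192.0.2.10", 80)
endpoint = table[_h(repr(flow), "f") % M]
# A second director building the same table picks the same endpoint:
assert endpoint == maglev_table(backends)[_h(repr(flow), "f") % M]
```

Because the table depends only on the backend list, two directors never need to exchange connection state to stay consistent, which is exactly what makes the L4 tier horizontally scalable.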

<h3 id="direct-server-return">Direct server return</h3>

<p>In typical load balancer acting as proxy, packets are DNAT&rsquo;ed to real server IP. Return traffic must go through the same loadbalancer so that packets gets SNAT&rsquo;ed (to VIP as source IP). This hinders scale-out approach particulalry when routers are sharding traffic across the L4 directors. To overcome the limitation, as mentioned above L4 director simply forward the packet. It also does tunnel the packets so that original packet is delivered to the service point as is. Various solution are
available (IPVS/LVS DR mode, use of GRE/IPIP tunnels etc) to send the traffic to endpoint. Since endpoint when it recives the packets, it sees the traffic destined to the VIP (ofcourse endoint needs to be setup to accept traffic to VIP) from the original client. Return traffic is directly sent to the client.</p>
<p>In a typical load balancer acting as a proxy, packets are DNAT&rsquo;ed to the real server IP. Return traffic must go through the same loadbalancer so that packets get SNAT&rsquo;ed (with the VIP as source IP). This hinders the scale-out approach, particularly when routers are sharding traffic across the L4 directors. To overcome this limitation, as mentioned above, the L4 director simply forwards the packet. It also tunnels the packets so that the original packet is delivered to the endpoint as is. Various solutions are
available (IPVS/LVS DR mode, GRE/IPIP tunnels, etc.) to send the traffic to the endpoint. When the endpoint receives the packets, it sees traffic destined to the VIP (of course, the endpoint needs to be set up to accept traffic to the VIP) from the original client. Return traffic is sent directly to the client.</p>
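The packet addressing in DSR can be sketched as a toy model (packets are modeled as dicts; the addresses are illustrative, and real setups would use IPIP/GRE encapsulation plus a VIP configured on the endpoint, e.g. on a loopback alias):

```python
# Toy model of direct server return: the director encapsulates the
# untouched packet toward the endpoint; the endpoint, configured to
# accept the VIP, decapsulates and replies straight to the client with
# the VIP as the source address, bypassing the director entirely.
VIP, CLIENT, ENDPOINT = "192.0.2.10", "198.51.100.7", "10.1.0.5"

def dsr_forward(pkt):
    # Director adds only an outer header; no DNAT of the inner packet.
    return {"outer_dst": ENDPOINT, "inner": pkt}

def endpoint_reply(tunneled):
    inner = tunneled["inner"]
    assert inner["dst"] == VIP                 # endpoint accepts the VIP
    return {"src": VIP, "dst": inner["src"]}   # reply skips the director

request = {"src": CLIENT, "dst": VIP}
reply = endpoint_reply(dsr_forward(request))
assert reply == {"src": VIP, "dst": CLIENT}
```

Since no SNAT is needed on the return path, reply traffic (usually the bulk of the bytes) never transits the L4 tier, which is what lets the directors scale out freely.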

<h3 id="l4-l7-split-design">L4/L7 split design</h3>

<p>Above basic mechanisams can be extended to implement application load balancing. Whats is called L4/L7 split design as shown below.</p>
<p>The above basic mechanisms can be extended to implement application load balancing, in what is called the L4/L7 split design, as shown below.</p>

<p><img src="/img/webscale-ingress-l4-l7-split.png" alt="Network requirements" /></p>

2 changes: 1 addition & 1 deletion post/index.html
@@ -113,7 +113,7 @@ <h2 class="post-title">Kube-router: Highly-available and scalable ingress for ba
</p>
<div class="post-entry">

Over the years many webscale companies have desinged massivley scalable and highly available services using loadbalancer solutions based on commodity Linux servers. Traditional middleboxes are completley replaced with software loadbalancers. In this blog we will see common building blocks across Microsoft&rsquo;s Ananta, Google&rsquo;s Maglev, Facebook&rsquo;s Shiv, Github GLB and Yahoo L3 DSR. We will see how Kube-router has implemented some of these building blocks for Kuberentes, and how you can leverage them to build a highly-available and scalable ingress in bare-metal deployments.
Over the years many webscale companies have designed massively scalable and highly available services using loadbalancer solutions based on commodity Linux servers. Traditional middleboxes are completely replaced with software loadbalancers. In this blog we will see common building blocks across Microsoft&rsquo;s Ananta, Google&rsquo;s Maglev, Facebook&rsquo;s Shiv, Github GLB and Yahoo L3 DSR. We will see how Kube-router has implemented some of these building blocks for Kubernetes, and how you can leverage them to build a highly-available and scalable ingress in bare-metal deployments.
<a href="https://cloudnativelabs.github.io/post/2017-11-01-kube-high-available-ingress/" class="post-read-more">[Read More]</a>

</div>
2 changes: 1 addition & 1 deletion post/index.xml
@@ -18,7 +18,7 @@
<pubDate>Wed, 01 Nov 2017 00:00:00 +0000</pubDate>
<author>[email protected] (Cloudnative Labs)</author>
<guid>https://cloudnativelabs.github.io/post/2017-11-01-kube-high-available-ingress/</guid>
<description>Over the years many webscale companies have desinged massivley scalable and highly available services using loadbalancer solutions based on commodity Linux servers. Traditional middleboxes are completley replaced with software loadbalancers. In this blog we will see common building blocks across Microsoft&amp;rsquo;s Ananta, Google&amp;rsquo;s Maglev, Facebook&amp;rsquo;s Shiv, Github GLB and Yahoo L3 DSR. We will see how Kube-router has implemented some of these building blocks for Kuberentes, and how you can leverage them to build a highly-available and scalable ingress in bare-metal deployments.</description>
<description>Over the years many webscale companies have designed massively scalable and highly available services using loadbalancer solutions based on commodity Linux servers. Traditional middleboxes are completely replaced with software loadbalancers. In this blog we will see common building blocks across Microsoft&amp;rsquo;s Ananta, Google&amp;rsquo;s Maglev, Facebook&amp;rsquo;s Shiv, Github GLB and Yahoo L3 DSR. We will see how Kube-router has implemented some of these building blocks for Kubernetes, and how you can leverage them to build a highly-available and scalable ingress in bare-metal deployments.</description>
</item>

<item>
