<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>proxy</title>
	<atom:link href="https://crafthub.events/tag/proxy/feed/" rel="self" type="application/rss+xml" />
	<link></link>
	<description>The craft that you need!</description>
	<lastBuildDate>Mon, 24 Apr 2023 13:26:42 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://crafthub.events/wp-content/uploads/favicon-1.svg</url>
	<title>proxy</title>
	<link></link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>eBPF for Service Mesh? Yes, but Envoy Proxy is here to stay</title>
		<link>https://crafthub.events/ebpf-for-service-mesh-yes-but-envoy-proxy-is-here-to-stay/</link>
		
		<dc:creator><![CDATA[Christian Posta]]></dc:creator>
		<pubDate>Tue, 29 Mar 2022 07:12:01 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[craftconference]]></category>
		<category><![CDATA[softwaredevelopment]]></category>
		<category><![CDATA[proxy]]></category>
		<category><![CDATA[servicemesh]]></category>
		<category><![CDATA[networkingoverlays]]></category>
		<category><![CDATA[techconference]]></category>
		<guid isPermaLink="false">https://crafthub.events/?post_type=blog&#038;p=89367</guid>

					<description><![CDATA[<p>Read through the newest article on the CraftHub website from Christian Posta who is going to be a speaker at [&#8230;]</p>
<p>The post <a href="https://crafthub.events/ebpf-for-service-mesh-yes-but-envoy-proxy-is-here-to-stay/">eBPF for Service Mesh? Yes, but Envoy Proxy is here to stay</a> appeared first on <a href="https://crafthub.events">CraftHub</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Read through the newest article on the <a href="https://crafthub.events/">CraftHub</a> website from <a href="https://craft-conf.com/speaker/ChristianPosta">Christian Posta</a>, who is going to be a speaker at this year&#8217;s <a href="https://craft-conf.com/">Craft Conference</a> in May. One of the biggest tech conferences in Europe is waiting for you in the heart of Budapest.</p>
<h1><span style="font-weight: 400;">eBPF for Service Mesh? Yes, but Envoy Proxy is here to stay</span></h1>
<p>&nbsp;</p>
<p><span style="font-weight: 400;">Our goal here at</span><a href="http://solo.io/"><span style="font-weight: 400;"> Solo.io</span></a><span style="font-weight: 400;"> is to bring valuable solutions </span><a href="https://www.solo.io/customers/"><span style="font-weight: 400;">to our customers</span></a><span style="font-weight: 400;"> around </span><a href="https://www.solo.io/products/gloo-mesh/"><span style="font-weight: 400;">application networking and service connectivity.</span></a> <a href="https://servicemeshconna21.sched.com/event/mH1h"><span style="font-weight: 400;">Back in October</span></a><span style="font-weight: 400;">, we announced our plans to enhance our enterprise service-mesh product (Gloo Mesh Enterprise) with eBPF to optimize the functionality around networking, observability, and security. To what extent can eBPF play a role in a service mesh? How does the role of the service proxy change? In this blog, we will dig into the role of eBPF in the service mesh data plane and some of the tradeoffs of the various data-plane architectures.</span></p>
<h2><span style="font-weight: 400;">Goodbye to the service proxy?</span></h2>
<p><span style="font-weight: 400;">A service mesh provides complex application-networking behaviors for services such as service discovery, traffic routing, resilience (timeout/retry/circuit breaking), authentication/authorization, observability (logging/metrics/tracing) and more. Can we rewrite all of this functionality into the Kernel with eBPF?</span></p>
<p><span style="font-weight: 400;">The short answer: this would be quite difficult and may not be the right approach. eBPF is an event-handler model that has some constraints around how it runs. You can think of the eBPF model as “functions as a service” for the Kernel. For example, eBPF execution paths must be fully known and verified before safely executing in the Kernel. eBPF programs cannot contain arbitrary loops, because the verifier would have no way to prove that the program terminates. In short, eBPF is Turing incomplete.</span></p>
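<p>The loop constraint can be illustrated with a plain-C sketch. This is not a loadable BPF program (a real one would be compiled to BPF bytecode and checked by the in-kernel verifier); the point is the shape of the code the verifier accepts: a compile-time bound (the hypothetical <code>MAX_VLAN_DEPTH</code> below) that proves termination.</p>

```c
#include <stddef.h>
#include <stdint.h>

/* Userspace-only sketch of a verifier-friendly packet walk. All names
 * and constants here are illustrative assumptions, not kernel APIs. */

#define MAX_VLAN_DEPTH 4        /* compile-time bound the verifier can check */
#define ETH_P_8021Q    0x8100   /* 802.1Q VLAN ethertype */

struct vlan_hdr { uint16_t proto; };

/* O(1) work: inspect at most MAX_VLAN_DEPTH headers, then stop. An
 * unbounded `while` loop here would be the kind of construct the
 * verifier rejects, since it cannot prove the program halts. */
int count_vlan_tags(const struct vlan_hdr *hdrs, size_t n)
{
    int tags = 0;
    for (size_t i = 0; i < MAX_VLAN_DEPTH && i < n; i++) {
        if (hdrs[i].proto != ETH_P_8021Q)
            break;
        tags++;
    }
    return tags;
}
```

<p>Parsing HTTP/2 frames or gRPC messages, by contrast, requires state machines and data-dependent iteration over the payload, which is exactly what is hard to express under these constraints.</p>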
<p><span style="font-weight: 400;">Layer 7 handling (like various protocol codecs, retries, header manipulations, etc) can be very complex to implement in eBPF alone, without better native support from the Kernel. This support may eventually come, but that is likely years off and wouldn’t be available on older kernel versions. In many ways, eBPF is ideal for O(1) complexity (such as inspecting a packet, manipulating some bits, and sending it on its way). Implementing complex protocols like HTTP/2 and gRPC can be O(n) complexity and very difficult to debug. So where could these L7 functionalities reside?</span></p>
<p><a href="https://www.solo.io/blog/getting-started-with-envoy-proxy-in-15-minutes/"><span style="font-weight: 400;">Envoy proxy</span></a><span style="font-weight: 400;"> has become the de-facto proxy for service mesh implementations and has very good support for Layer 7 capabilities that most of our customers need. Although eBPF and the Kernel can be used to improve the execution of the network (short circuiting optimal paths, offloading TLS/mTLS, observability collection, etc), complex protocol negotiations, parsing, and user-extensions can remain in user space. For the complexities of Layer 7, Envoy remains the data plane for the service mesh. </span></p>
<h2><span style="font-weight: 400;">One shared proxy vs sidecar proxies?</span></h2>
<p><span style="font-weight: 400;">Another consideration when attempting to optimize the data path for a service mesh is whether to run a sidecar per workload or to use a single, shared proxy per node. For example, when running massive clusters with hundreds of pods and thousands of nodes, a shared-proxy model can deliver optimizations around memory and configuration overhead. But is this the right approach for everyone? Absolutely not. For many enterprise users, some memory overhead is worth the tenancy and workload-isolation gains that come with sidecar proxies.</span></p>
<p><span style="font-weight: 400;">Both architectures come with their benefits and tradeoffs around memory and networking overhead, tenancy, operations, and simplicity, and both can equally benefit from eBPF-based optimization. These two are not the only architectures, however. Let’s dig into the options we have along the following dimensions:</span></p>
<ul>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Memory / CPU overhead – Configuring the routing and cluster details for an L7 proxy consists of proxy-specific configurations which can be verbose; the more services with which a particular workload needs to communicate, the more configurations it will need.</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Feature isolation – Applications are finicky and tend to need per-workload optimizations of connection pools, socket buffers, retry semantics/budgets, external-auth, and rate limiting. We see a lot of need for customizing the data path which is why we’ve introduced Wasm extensions. Debugging these features and behaviors also becomes demanding. We need to figure out a way to isolate these features between workloads.</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Security granularity – A big part of the zero-trust philosophy is to establish trust to peers at runtime based on current context; scoping these trust boundaries as small as possible is usually desirable.</span></li>
<li style="font-weight: 400;" aria-level="1"><span style="font-weight: 400;">Upgrade impact – A service mesh is incredibly important infrastructure since it’s on the request path; we need to have very controlled upgrades of service-mesh data-plane components to minimize outages.</span></li>
</ul>
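<p>The memory-overhead dimension above lends itself to back-of-the-envelope arithmetic. The sketch below uses purely hypothetical numbers and function names to show why a shared proxy amortizes configuration cost while sidecars multiply it:</p>

```c
/* Hypothetical per-node proxy memory model. cfg_mb is the assumed
 * footprint of one fully configured L7 proxy; pods is the number of
 * workload pods scheduled on the node. */

/* Sidecar model: every pod carries its own proxy, each holding its own
 * copy of the routing and cluster configuration. */
long sidecar_node_mem_mb(long pods, long cfg_mb)
{
    return pods * cfg_mb;
}

/* Shared-proxy-per-node model: one proxy amortizes the configuration
 * across every pod on the node. */
long shared_node_mem_mb(long pods, long cfg_mb)
{
    (void)pods; /* cost is independent of pod count */
    return cfg_mb;
}
```

<p>With, say, 50 pods per node and a 60 MB proxy footprint, sidecars cost 3000 MB per node where a shared proxy costs 60 MB; what you give up in exchange is the isolation and security granularity discussed above.</p>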
<p><span style="font-weight: 400;">Let’s look at four possible architectures where eBPF is used to optimize and short-circuit the network paths and leverage Envoy proxy for Layer 7 capabilities. For each architecture, we evaluate the benefits and tradeoffs of </span><i><span style="font-weight: 400;">where</span></i><span style="font-weight: 400;"> to run the Layer 7 proxy along the lines of overhead, isolation, security, and upgrades.</span></p>
<h3><span style="font-weight: 400;">Sidecar proxy (service proxy)</span></h3>
<p><span style="font-weight: 400;"><img fetchpriority="high" decoding="async" class="alignnone size-full wp-image-89368" src="https://crafthub.events/wp-content/uploads/eBPF-blog.png" alt="" width="2195" height="646" srcset="https://crafthub.events/wp-content/uploads/eBPF-blog.png 2195w, https://crafthub.events/wp-content/uploads/eBPF-blog-300x88.png 300w, https://crafthub.events/wp-content/uploads/eBPF-blog-1024x301.png 1024w, https://crafthub.events/wp-content/uploads/eBPF-blog-768x226.png 768w, https://crafthub.events/wp-content/uploads/eBPF-blog-1536x452.png 1536w, https://crafthub.events/wp-content/uploads/eBPF-blog-2048x603.png 2048w" sizes="(max-width: 2195px) 100vw, 2195px" />In this model, we deploy a sidecar proxy with each application instance. The sidecar has all of the configurations it needs to route traffic on behalf of the workload and can be tailored to the workload.</span></p>
<p>&nbsp;</p>
<p><span style="font-weight: 400;">With many workloads and proxies, this configuration is duplicated across workload instances and can present a “sub-optimal” amount of resource overhead. </span></p>
<p><span style="font-weight: 400;">This model does give the best feature isolation to reduce the blast radius of any noisy neighbors. Misconfigured or app-specific buffers/connection-pooling/timeouts are isolated to a specific workload. Extensions using Lua or Wasm (that could potentially take down a proxy) are also constrained to specific workloads. </span></p>
<p><span style="font-weight: 400;"><img decoding="async" class="alignnone size-full wp-image-89369" src="https://crafthub.events/wp-content/uploads/eBPF-blog-1.png" alt="" width="1368" height="645" srcset="https://crafthub.events/wp-content/uploads/eBPF-blog-1.png 1368w, https://crafthub.events/wp-content/uploads/eBPF-blog-1-300x141.png 300w, https://crafthub.events/wp-content/uploads/eBPF-blog-1-1024x483.png 1024w, https://crafthub.events/wp-content/uploads/eBPF-blog-1-768x362.png 768w" sizes="(max-width: 1368px) 100vw, 1368px" />From a security perspective, we originate and terminate connections directly with the applications. We can use the mTLS capabilities of the service mesh to prove the identity of the services on both ends of the connections, scoped down to the level of the application process. We can then write fine-grained authorization policies based on this identity. Another benefit of this model is that if a single proxy falls victim to an attacker, the compromise is isolated to a specific workload; the blast radius is limited. On the downside, however, since sidecars must be deployed with the workload, there is the possibility that a workload opts not to inject the sidecar, or worse, finds a way to work around the sidecar. </span></p>
<p><span style="font-weight: 400;">Lastly, in this model, upgrades can be done per workload and follow a canary approach that affects only specific workloads. For example, we can upgrade Pod A’s data plane to a new version without affecting any of the other workloads on the node. The downside is that injecting the sidecar is still tricky, and if there are changes between versions, it could affect the app instance.</span></p>
<h3><span style="font-weight: 400;">Shared proxy per node</span></h3>
<p><span style="font-weight: 400;"><img decoding="async" class="alignnone size-full wp-image-89370" src="https://crafthub.events/wp-content/uploads/eBPF-blog-2.png" alt="" width="2195" height="629" srcset="https://crafthub.events/wp-content/uploads/eBPF-blog-2.png 2195w, https://crafthub.events/wp-content/uploads/eBPF-blog-2-300x86.png 300w, https://crafthub.events/wp-content/uploads/eBPF-blog-2-1024x293.png 1024w, https://crafthub.events/wp-content/uploads/eBPF-blog-2-768x220.png 768w, https://crafthub.events/wp-content/uploads/eBPF-blog-2-1536x440.png 1536w, https://crafthub.events/wp-content/uploads/eBPF-blog-2-2048x587.png 2048w" sizes="(max-width: 2195px) 100vw, 2195px" />The shared-proxy-per-node model introduces optimizations that make sense for large clusters where memory overhead is a top concern and amortizing the cost of that memory is desirable. In this model, instead of each sidecar proxy being configured with the routes and clusters needed to route traffic, that configuration is shared across all workloads on a node in a single proxy. </span></p>
<p><img decoding="async" class="alignnone size-full wp-image-89372" src="https://crafthub.events/wp-content/uploads/eBPF-blog-5.png" alt="" width="1365" height="641" srcset="https://crafthub.events/wp-content/uploads/eBPF-blog-5.png 1365w, https://crafthub.events/wp-content/uploads/eBPF-blog-5-300x141.png 300w, https://crafthub.events/wp-content/uploads/eBPF-blog-5-1024x481.png 1024w, https://crafthub.events/wp-content/uploads/eBPF-blog-5-768x361.png 768w" sizes="(max-width: 1365px) 100vw, 1365px" /></p>
<p><span style="font-weight: 400;">From a feature isolation perspective, you end up trying to solve all of the concerns for all of the workload instances in one process (one Envoy proxy) and this can have drawbacks. For example, could application configurations across multiple apps conflict with each other or have offsetting behaviors in the proxy? Can you safely load secrets or private-keys that must be separated for regulatory reasons? Can you deploy Wasm extensions without the risk of affecting the behavior of the proxy for other applications? Sharing a single proxy for a bunch of applications has isolation concerns that could potentially be better solved with separate processes/proxies. </span></p>
<p><span style="font-weight: 400;">Security boundaries also become shared in the shared-proxy per-node model. For example, workload identity is now handled at the node level and not the actual workload. What happens for the “last mile” between the proxy and the workload? Or worse, what happens if a shared proxy representing multiple workload identities (hundreds?) is compromised? </span></p>
<p><span style="font-weight: 400;">Lastly, upgrading a shared proxy per node could affect all of the workloads on the node if the upgrade has issues such as version conflicts, configuration conflicts, or extension incompatibilities. Any time shared infrastructure handling application requests is upgraded, care must be taken. On the plus side, upgrading a shared-node proxy does not have to account for any of the complexities of injecting a sidecar.</span></p>
<h3><span style="font-weight: 400;">Shared proxy per service account (per node)</span></h3>
<p><span style="font-weight: 400;"><img decoding="async" class="alignnone size-full wp-image-89373" src="https://crafthub.events/wp-content/uploads/eBPF-blog-6.png" alt="" width="2195" height="839" srcset="https://crafthub.events/wp-content/uploads/eBPF-blog-6.png 2195w, https://crafthub.events/wp-content/uploads/eBPF-blog-6-300x115.png 300w, https://crafthub.events/wp-content/uploads/eBPF-blog-6-1024x391.png 1024w, https://crafthub.events/wp-content/uploads/eBPF-blog-6-768x294.png 768w, https://crafthub.events/wp-content/uploads/eBPF-blog-6-1536x587.png 1536w, https://crafthub.events/wp-content/uploads/eBPF-blog-6-2048x783.png 2048w" sizes="(max-width: 2195px) 100vw, 2195px" />Instead of using a single shared proxy for the whole node, we can isolate proxies to a specific service account per node. In this model, we deploy a “shared proxy” per service account/identity, and any workload under that service account/identity uses that proxy. We can avoid some of the complexities of injecting a sidecar with this model.</span></p>
<p><span style="font-weight: 400;"><img decoding="async" class="alignnone size-full wp-image-89374" src="https://crafthub.events/wp-content/uploads/eBPF-blog-7.png" alt="" width="1407" height="657" srcset="https://crafthub.events/wp-content/uploads/eBPF-blog-7.png 1407w, https://crafthub.events/wp-content/uploads/eBPF-blog-7-300x140.png 300w, https://crafthub.events/wp-content/uploads/eBPF-blog-7-1024x478.png 1024w, https://crafthub.events/wp-content/uploads/eBPF-blog-7-768x359.png 768w" sizes="(max-width: 1407px) 100vw, 1407px" />This model tries to save memory in scenarios where multiple instances of the same identity are present on a single node and maintains some level of feature and noisy-neighbor isolation. This model has the same workload-identity advantages as a sidecar; however, it does come with the drawbacks of a shared proxy: what happens to the last-mile connections? How is authentication established all the way back to the workload instance? One thing we can do to improve this model is use a smaller “micro proxy” that lives with the application workload instances and can facilitate end-to-end mTLS down to the instance level. Let’s see that in the next pattern. </span></p>
<h3><span style="font-weight: 400;">Shared remote proxy with micro proxy</span></h3>
<p><span style="font-weight: 400;"><img decoding="async" class="alignnone size-full wp-image-89375" src="https://crafthub.events/wp-content/uploads/ch8.jpg" alt="" width="1138" height="433" srcset="https://crafthub.events/wp-content/uploads/ch8.jpg 1138w, https://crafthub.events/wp-content/uploads/ch8-300x114.jpg 300w, https://crafthub.events/wp-content/uploads/ch8-1024x390.jpg 1024w, https://crafthub.events/wp-content/uploads/ch8-768x292.jpg 768w" sizes="(max-width: 1138px) 100vw, 1138px" />In this model, a smaller, lightweight “micro proxy” (uProxy) that handles only mTLS (no L7 policies, smaller attack surface) is deployed as a sidecar with the workload instances. When Layer 7 policies need to be applied, traffic is directed from the workload instance through the Layer 7 (Envoy) proxy. The Layer 7 proxy can run as a shared-node proxy, per service account, or even a remote proxy. This model also allows completely bypassing the Layer 7 proxy when those policies may not be needed (but keeping mTLS origination/negotiation/termination with the application instances).</span></p>
<p><span style="font-weight: 400;"><img decoding="async" class="alignnone size-full wp-image-89376" src="https://crafthub.events/wp-content/uploads/ch9.jpg" alt="" width="1123" height="405" srcset="https://crafthub.events/wp-content/uploads/ch9.jpg 1123w, https://crafthub.events/wp-content/uploads/ch9-300x108.jpg 300w, https://crafthub.events/wp-content/uploads/ch9-1024x369.jpg 1024w, https://crafthub.events/wp-content/uploads/ch9-768x277.jpg 768w" sizes="(max-width: 1123px) 100vw, 1123px" />This model reduces the configuration overhead of Layer 7 policies you see in sidecars but could introduce more hops. These hops may (or may not) contribute to more call latency. It’s possible that, for some calls, the L7 proxy is not even in the data path which would improve call latency. </span></p>
<p><span style="font-weight: 400;">This model combines the sidecar proxy benefits of feature isolation and security since the uProxy is still deployed with the workload instances. </span></p>
<p><span style="font-weight: 400;">From an upgrade standpoint, we can upgrade the L7 proxy transparently to the application; however, we now have more moving pieces. We also need to coordinate the upgrade of the uProxy, which has some of the same drawbacks as the sidecar architecture we discussed in the first pattern. </span></p>
<h2><span style="font-weight: 400;">Parting thoughts</span></h2>
<p><span style="font-weight: 400;">As discussed in “</span><a href="https://www.youtube.com/watch?v=bmf0JQtDJL4"><span style="font-weight: 400;">The truth about the service mesh data plane</span></a><span style="font-weight: 400;">” back at Service Mesh Con 2019, architectures representing the data plane can vary and have different tradeoffs. At Solo.io, we see eBPF as a powerful way to optimize the service mesh, and we see Envoy proxy as the cornerstone of the data plane. Working with our many customers (of various sizes, including some of the largest deployments of service mesh in the world), we are in a unique position to help balance the tradeoffs between optimizations, features, extensibility, debuggability, and user experience. </span></p>
<p>&nbsp;</p>
<p><em>You can read more from the author on <a href="https://www.solo.io/">solo.io</a> or check out the original article <a href="https://www.solo.io/blog/ebpf-for-service-mesh/">here</a>.</em></p>
<p>&nbsp;</p>
<p>The post <a href="https://crafthub.events/ebpf-for-service-mesh-yes-but-envoy-proxy-is-here-to-stay/">eBPF for Service Mesh? Yes, but Envoy Proxy is here to stay</a> appeared first on <a href="https://crafthub.events">CraftHub</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
