In recent years, with the rise of containerization and microservices architecture, software systems have grown increasingly complex, and container orchestration tools are commonly used to deploy and manage containers automatically. The open-source Kubernetes is currently the most widely used container orchestration system. Synchronous communication between services within a cluster has traditionally relied on HTTP/1.1 (Hypertext Transfer Protocol Version 1.1). With the release of HTTP/2 (Hypertext Transfer Protocol Version 2), many systems have switched to HTTP/2 for inter-service communication, expecting to improve transmission efficiency through persistent connections. However, kube-proxy, the internal load balancer of Kubernetes, is a network load balancer that operates only at L4, while HTTP/2 implements persistent connections at L7. Once a connection to a Pod is established over HTTP/2, subsequent traffic on that connection cannot be redirected to other Pods, which defeats the purpose of autoscaling. To solve this problem, many articles suggest using a service mesh, which balances traffic at L7 through its sidecar proxy. This paper designs several experiments to compare the load-balancing performance of Kubernetes with kube-proxy against Kubernetes with a service mesh. Using a service mesh to balance HTTP/2 traffic in Kubernetes improves system efficiency only in suitable scenarios.
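To make the described problem concrete, the following is a minimal sketch in Go of an h2c (HTTP/2 over cleartext) client that multiplexes many requests over a single TCP connection to a Kubernetes ClusterIP Service. The Service name example-service and the /whoami endpoint are hypothetical, introduced only for illustration; the point is that kube-proxy routes the connection to a backend Pod once, at L4, so every request multiplexed over that persistent connection is served by the same Pod.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net"
	"net/http"

	"golang.org/x/net/http2"
)

func main() {
	// h2c transport: speaks HTTP/2 without TLS and reuses one TCP
	// connection for all requests (the persistent connection the
	// abstract refers to).
	client := &http.Client{
		Transport: &http2.Transport{
			AllowHTTP: true,
			DialTLS: func(network, addr string, _ *tls.Config) (net.Conn, error) {
				return net.Dial(network, addr) // plain TCP, no TLS
			},
		},
	}

	// Hypothetical ClusterIP Service; kube-proxy chooses a backend Pod
	// only when this TCP connection is first established.
	url := "http://example-service.default.svc.cluster.local:8080/whoami"

	for i := 0; i < 10; i++ {
		resp, err := client.Get(url)
		if err != nil {
			panic(err)
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		// All responses report the same Pod, because each request is
		// multiplexed over the connection kube-proxy already routed;
		// an L7 (service mesh) load balancer can instead distribute
		// individual HTTP/2 requests across Pods.
		fmt.Printf("request %d served by: %s\n", i, body)
	}
}
```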