
OSPP-2025 Automating Documentation and Release Workflows for Kmesh

· 5 min read
Yash Israni
Kmesh Contributor

Introduction

Hello everyone! I’m Yash Israni, an open-source enthusiast passionate about automation, DevOps practices, and building tools that eliminate repetitive manual work.

This summer, I had the privilege of participating in the Open-Source Promotion Plan (OSPP) 2025, where I collaborated with the Kmesh community to automate documentation and release workflows. Over the course of three months, I designed and implemented GitHub Actions pipelines that keep the Kmesh website always up-to-date, properly versioned, and reviewed for language quality.

In this blog, I’ll share my journey—from acceptance to project execution, the technical decisions I made, and the lessons I learned along the way.

OSPP-2025 Completing eBPF Unit Tests for Kmesh

· 6 min read
Wu Xi
Kmesh Contributor

Introduction

Hello everyone! I'm Wu Xi, an open source enthusiast with deep interests in kernel networking, eBPF, and test engineering.

This summer, I had the privilege to participate in Open Source Promotion Plan (OSPP) 2025 and collaborate with the Kmesh community, focusing on eBPF program UT enhancement. Over three months, I primarily completed unit testing work for Kmesh eBPF programs. I wrote and successfully ran UT test code for sendMsg and cgroup programs, and supplemented testing documentation based on this work. Kmesh community developers can now verify eBPF program logic without depending on real kernel mounting and traffic simulation, significantly improving development efficiency. In this blog, I'll share my complete experience—from acceptance to project execution, technical choices, and lessons learned along the way.

Experience of LFX Mentorship - Kmesh Tcp Long Connection Metrics

· 3 min read
Yash Patel
Kmesh Member

Introduction

Hello readers, I am Yash, a final-year student from India. I love building cool stuff and solving real-world problems. I’ve been working in the cloud-native space for the past three years, exploring technologies like Kubernetes, Cilium, Istio, and more.

I successfully completed my mentorship with Kmesh during the LFX 2025 Term-1 program, which was an enriching and invaluable experience. Over the past three months, I gained significant knowledge and hands-on experience while contributing to the project. In this blog, I’ve documented my mentorship journey and the work I accomplished as a mentee.

LFX Mentorship Program – Overview

The LFX Mentorship Program, run by the Linux Foundation, is designed to help students and early-career professionals gain hands-on experience in open source development by working on real-world projects under the guidance of experienced mentors.

Participants contribute to high-impact projects hosted by foundations like CNCF, LF AI, LF Edge, and more. The program typically runs in 3 terms throughout the year, each lasting about three months.


My Acceptance

I am a regular open-source contributor and love contributing to open source; my interests align heavily with cloud-native technologies. I was familiar with popular mentorship programs like LFX and GSoC, which are designed to help students get started in the open source world. I had made up my mind to apply for LFX 2025 Term-1 and began exploring projects in early February. The projects under CNCF for LFX are listed in the cncf/mentoring GitHub repository. There I came across Kmesh, a newly added CNCF sandbox project participating in LFX for the first time. I found Kmesh particularly exciting because of the problem it addresses—providing a sidecarless service mesh data plane. This approach can greatly benefit the community by improving performance and reducing overhead.

Kmesh offered four projects in Term-1. I selected the long-connection-metrics project, as it allowed me to work with eBPF, with which I already had prior experience.

I began exploring the Kmesh project by reading the documentation and contributing to Good First Issues. As I became more involved, the mentors started to take notice. I also submitted a proposal for the long connection metrics project.

In late February, I received an email from LFX notifying me of my selection.

Project Workthrough

The TCP long connection metrics project aims to implement access logs and metrics for TCP long connections, developing a continuous monitoring and reporting mechanism that captures detailed, real-time data throughout the lifetime of long-lived TCP connections.

eBPF hooks are used to collect connection stats such as bytes sent and received, packets lost, retransmissions, and more.



Mentorship Experience

The Kmesh maintainers were always available to help me with any doubts, whether on Slack or GitHub. Additionally, a community meeting is held every Thursday, where I could ask questions and discuss various topics. Over these three months, I learned a lot from them, including how to approach problems effectively and how to consider edge cases during development.

Based on my contributions and active involvement, the Kmesh community recognized my efforts and promoted me to a member of the organization. This acknowledgment was truly encouraging and motivated me to continue contributing to Kmesh and help the project grow.

Kmesh V1.1.0 Officially Released!

· 6 min read

We are delighted to announce the release of ​​Kmesh v1.1.0​​, a milestone achieved through the collective efforts of our global community over the past three months. Special recognition goes to the contributors from the ​​LFX program​​, whose dedication has been pivotal in driving this release forward.

Building on the foundation of v1.0.0, this release introduces significant enhancements to Kmesh’s architecture, observability, and ecosystem integration. The official Kmesh website has undergone a comprehensive redesign, offering an intuitive interface and streamlined documentation to empower both users and developers. Under the hood, we’ve refactored the DNS module and added metrics for long connections, providing deeper insights into more traffic patterns.

In Kernel-Native mode, we’ve reduced invasive kernel modifications, and we now use global variables in place of the BPF config map to simplify the underlying complexity. Compatibility with ​​Istio 1.25​​ has been rigorously validated, ensuring seamless interoperability with the latest Istio version. Notably, the persistent TestKmeshRestart E2E test flakiness—a long-standing issue—has been resolved through long-term investigation and reconstruction of the underlying BPF program, marking a leap forward in runtime reliability.

Main Features

Website overhaul

The Kmesh official website has undergone a complete redesign, offering an intuitive user experience with improved documentation, a reorganized content hierarchy, and streamlined navigation. In addressing feedback from the previous iteration, we focused on key areas where the user experience could be enhanced. The original interface presented some usability challenges that occasionally led to navigation difficulties. Our blog module in particular required attention, as its content organization and visual hierarchy impacted content discoverability and readability. From an engineering perspective, we recognized opportunities to improve the code structure through better component organization and more systematic styling approaches, as the existing implementation had grown difficult to maintain over time.

To address these problems, we shifted to React with Docusaurus, a modern documentation framework that's much more developer-friendly. This allowed us to create modular components, eliminating redundant code through reusability. Docusaurus provides built-in navigation systems specifically designed for documentation and blogs, plus version-controlled documentation features. We've implemented multilingual support with both English and Chinese documentation, added advanced search functionality, and completely reorganized the content structure. The result is a dramatically improved experience that makes the Kmesh site more accessible and valuable for all users.

Long connection metrics

Before this release, Kmesh provided access logs at the establishment and termination of a TCP connection, with detailed information about the connection such as bytes sent and received, packets lost, RTT, and retransmits. Kmesh also provided workload- and service-specific metrics, such as bytes sent and received, packets lost, minimum RTT, and total connections opened and closed by a pod. These metrics were only updated after a connection closed.

In this release, we implement access logs and metrics for TCP long connections: a continuous monitoring and reporting mechanism that captures detailed, real-time data throughout the lifetime of long-lived TCP connections. Access logs are reported periodically with information such as the reporting time, connection establishment time, bytes sent and received, packet loss, RTT, retransmits, and connection state. Metrics such as bytes sent and received, packet loss, and retransmits are also reported periodically for long connections.

DNS refactor

Previously, the DNS process included the CDS refresh process. As a result, DNS was deeply coupled with kernel-native mode and could not be used in dual-engine mode.


In release 1.1 we refactored the DNS module of Kmesh. The data looped through the DNS refresh queue is now a domain rather than a structure containing CDS data, so the DNS module no longer needs to care about the Kmesh mode and only has to be given the hostname to resolve.


BPF config map optimization

Kmesh has eliminated the dedicated kmesh_config_map BPF map, which previously stored global runtime configurations such as BPF logging level and monitoring toggle. These settings are now managed through global variables. Leveraging global variables simplifies BPF configuration management, enhancing runtime efficiency and maintainability.

Optimize Kernel-Native mode to reduce intrusive kernel modifications

The kernel-native mode requires a large number of intrusive kernel changes to implement HTTP-based traffic control. Some of these modifications may have a significant impact on the kernel, which makes kernel-native mode difficult to deploy and use in a real production environment. To resolve this problem, we have modified the kernel used in kernel-native mode, along with the involved ko and eBPF programs, synchronously. Through the optimizations in this release, the kernel modifications are limited to four on kernel 5.10 and reduced to only one on kernel 6.6. This last one will be eliminated as far as possible, with the goal of eventually running kernel-native mode on a native 6.6+ kernel.


Adopt Istio 1.25

Kmesh has verified compatibility with Istio 1.25 and added the corresponding E2E test to CI. The Kmesh community maintains CI verification for three Istio versions, so the E2E test for Istio 1.22 has been removed from CI.

Critical Bug Fix

kmeshctl install waypoint error (#1287)

root analysis:

An extra v was being prepended to the version number when building the waypoint image; it has been removed.

TestKmeshRestart flaky (#1192)

root analysis:

This issue is actually not related to Kmesh restart; it can also be reproduced in non-restart scenarios.

The root cause is that sk is not an appropriate key for the map map_of_orig_dst: the sk can be reused, so the map value gets incorrectly overwritten. As a result, metadata is not encoded when it should be in the connection sent to the waypoint, which produces the reset error in this issue.

TestServiceEntrySelectsWorkloadEntry flaky (#1352)

root analysis:

Before this test case runs, the test TestServiceEntryInlinedWorkloadEntry generates two workload objects, for example Kubernetes/networking.istio.io/ServiceEntry/echo-1-21618/test-se-v4/10.244.1.103 and ServiceEntry/echo-1-21618/test-se-v6/10.244.1.103.

In the current test case, WorkloadEntry generates the workload object Kubernetes/networking.istio.io/WorkloadEntry/echo-1-21618/test-we.

If the test case runs fast enough, the removal operation of the first two workload objects will be aggregated with the creation operation of the latter object.

Kmesh processes the new object first and then removes the old resources.

The IP addresses of these three objects are the same, which eventually makes the IP address unresolvable in the Kmesh workload cache, causing auth failures and connection timeouts.

Acknowledgment

Kmesh v1.1.0 includes 118 commits from 14 contributors. We would like to express our sincere gratitude to all contributors:

@hzxuzhonghu @LiZhenCheng9527 @YaoZengzeng @silenceper
@weli-l @sancppp @Kuromesi @yp969803
@lec-bit @ravjot07 @jayesh9747 @harish2773
@Dhiren-Mhatre @Murdock9803

We have always developed Kmesh with an open and neutral attitude, and continue to build a benchmark solution for the Sidecarless service mesh industry, serving thousands of industries and promoting the healthy and orderly development of service mesh. Kmesh is currently in a stage of rapid development, and we sincerely invite people with lofty ideals to join us!

From Contributor to Maintainer: My LFX Mentorship Journey

· 5 min read
Jayesh Savaliya
Kmesh Maintainer

Introduction

Hi everyone! I'm Jayesh Savaliya, a B.Tech student at IIIT Pune passionate about backend technologies and open source. Over the last two years, I've been selected for the C4GT program twice (2024 & 2025) - yes, they let me back in - and recently completed LFX Mentorship 2025 (Term 1), where I somehow went from fixing typos to being responsible for reviewing other people's code at Kmesh.

In this blog, I'll share my journey and the strategies that actually worked (no generic "just be passionate" advice, I promise).


My Background

When I applied to LFX, I wasn't starting from scratch. I had already battle-tested myself with:

  • Sunbird (EkStep Foundation) via C4GT, where I learned that education tech is harder than it looks
  • Mifos, a GSoC organization focused on financial services (because debugging payment systems at 2 AM builds character)
  • Various backend projects where I definitely didn't break production. Much.

Choosing Kmesh

I shortlisted projects from the LFX portal based on three key criteria:

  1. Tech stack relevance - Technologies I wanted to master
  2. Learning potential - Projects that would challenge and grow my skills
  3. Active maintainers - Communities with responsive, helpful mentors

I chose Kmesh, a high-performance service mesh data plane built on eBPF and programmable kernel technologies. Kmesh's sidecarless architecture eliminates proxy overhead, resulting in better performance and lower resource consumption.

Honestly? It had "eBPF" in the description and I wanted to sound cool at tech meetups. But it turned out to be genuinely fascinating work with a great community.


How to Succeed in Open Source Programs

Here's my three-step approach that worked for LFX:

1. Make Meaningful Contributions

Start small and scale up gradually. Don't be the person who says "I'll rewrite the entire architecture!" on day one.

Instead:

  • Weeks 1-2: Fix typos, improve logs, update documentation
  • Weeks 3-4: Fix small bugs, add tests
  • Week 5+: Work on core features and refactoring

This progression shows mentors you're not just throwing random PRs at the wall hoping something sticks.

2. Write a Strong Proposal

Your proposal should be:

  • Clear: Explain your approach in straightforward language
  • Structured: Include a realistic timeline with milestones
  • Convincing: Demonstrate why you're the right person for the project

Make sure your proposal reflects genuine engagement with the project, not just surface-level research.

3. Be Actively Involved

Stay engaged in project channels (Slack, Discord, mailing lists). Communicate regularly with mentors, ask thoughtful questions, and contribute to discussions.

But also: don't be that person who asks questions Google could answer or pings everyone at 3 AM with "quick question." Balance is everything.

The Formula: Consistent contributions + Strong proposal + Active communication = Standing out


The Path to Maintainership

Becoming a maintainer wasn't planned. It happened naturally through sustained engagement after the mentorship period ended.

Consistency

I continued contributing regularly after my initial PRs were merged:

  • Fixing overlooked bugs
  • Adding requested features
  • Refactoring code for better maintainability

Learning Mindset

I embraced every learning opportunity, even when I had no idea what I was doing. eBPF concepts? Started clueless, ended slightly less clueless. Performance optimization? Learned by making things slower first. CI/CD improvements? Broke the build a few times, but now I own it.

Patience & Feedback

Code reviews can be humbling (read: brutal). I learned to take feedback seriously even when it stung, iterate quickly, and stay patient when things inevitably broke.

Taking Initiative

I started acting like a maintainer before having the title:

  • Suggesting project improvements
  • Writing comprehensive tests (because flaky tests are the worst)
  • Automating repetitive tasks (laziness is a virtue in programming)
  • Reviewing other contributors' work

By the end of my mentorship, the trust I built with the team led to being granted maintainer access. Going from "hey, can I fix this typo?" to "you're now responsible for reviewing PRs" was equal parts surreal and terrifying.


Key Takeaways

Here's what I learned that might help you:

Start small, stay consistent - Begin with simple contributions and build from there. Consistency matters more than individual genius.

Focus on learning - Getting selected is great, but learning enough to make real contributions is what counts.

Communicate effectively - Ask questions, share progress, and be helpful. Respectful, clear communication goes a long way.

Suggest improvements - If you see something that could be better, speak up. Good ideas are always welcome.

Embrace feedback - Your first PR won't be perfect. Nobody's is. Take feedback as learning opportunities, iterate, and move on. Arguing about semicolons is not a productive use of anyone's time.

You don't need to be a genius. You just need to show up, contribute meaningfully, and improve consistently.


Final Thoughts

The LFX Mentorship taught me more than just technical skills. I learned how to work with distributed teams across timezones, think critically about production software (logs are your friends!), and grow into a leadership role in an open source community.

If you're considering applying to LFX or any open source program, take the leap. With consistent effort and genuine engagement, you can make a real impact. If I can go from nervous first-time contributor to maintainer, so can you.


Connect With Me

Feel free to reach out if you want to discuss open source, eBPF, or systems programming:

Thanks for reading, and see you in the next PR!

Using Kmesh as the Data Plane for Alibaba Cloud Service Mesh (ASM) Sidecarless Mode

· 7 min read

Overview

Alibaba Cloud Service Mesh (ASM) supports both Sidecar and Sidecarless modes. The Sidecar mode, where a proxy runs alongside each service instance, is currently the most widely adopted and stable solution. However, this architecture introduces latency and resource overhead. To address the latency and resource consumption inherent in the Sidecar mode, various Sidecarless mode solutions have emerged in recent years, such as Istio Ambient. Istio Ambient deploys a ztunnel on each node to perform layer-4 traffic proxying for the Pods running on the node, and deploys waypoints for layer-7 traffic proxying. While the Sidecarless mode can reduce latency and resource consumption, its stability and completeness in functionality still require improvement.

Kmesh: Metrics and Accesslog in Detail

· 8 min read
lizhencheng
Kmesh Maintainer
Yash Patel
Kmesh Member

Introduction

Kmesh is a kernel-native, sidecarless service mesh data plane. It sinks traffic governance into the OS kernel with the help of eBPF and a programmable kernel, reducing the resource overhead and network latency of the service mesh.

The traffic data can be obtained directly in the kernel and passed to user space via BPF maps. This data is used to build metrics and access logs.

Kmesh Joins CNCF Cloud Native Landscape

· 4 min read

CNCF Landscape helps users understand specific software and product choices in each cloud-native practice phase. Kmesh joins CNCF Landscape and becomes a part of CNCF's best practice in building a cloud-native service mesh.


Kmesh: Kernel-Level Traffic Management Engine, Bring Ultimate Performance Experience

· 8 min read

Kmesh is a brand new kernel-level traffic management engine, which helps users build high-performance communication infrastructure in cloud-native scenarios through basic software innovation. Users can deploy Kmesh[1] with one click using helm in a service mesh environment, seamlessly connecting to Istiod. By sinking the traffic management down to the OS, Kmesh achieves more than a 50% reduction in forwarding latency compared to the Istio Sidecar solution, providing applications with an ultimate forwarding performance experience.

Kmesh: High-performance service mesh data plane

· 8 min read

What is a Service Mesh

The concept of a service mesh was introduced in 2016 by Buoyant, the company behind the development of the Linkerd software. William Morgan, the CEO of Buoyant, provided the initial definition of a service mesh:

A service mesh is a dedicated layer for handling service-to-service communication. It’s responsible for the reliable delivery of requests through the complex topology of services that comprise a modern, cloud-native application. In practice, the service mesh is typically implemented as an array of lightweight network proxies that are deployed alongside application code, without the application needing to be aware.

In simple terms, a service mesh is a layer that handles communication between services. It ensures transparent and reliable network communication for modern cloud-native applications through an array of lightweight network proxies.

The essence of a service mesh is to address the challenge of how microservices can communicate effectively. By implementing governance rules such as load balancing, canary routing, and circuit breaking, the service mesh orchestrates traffic flow to maximize the capabilities of the service cluster. It is a product of the evolution of service governance.