mastouille.fr is one of the many independent Mastodon servers you can use to participate in the fediverse.
Mastouille is a sustainable, open Mastodon instance hosted in France.

Server statistics: 689 active accounts

#k8s

2 posts, 2 participants, 1 post today
www.linkedin.com | Adam Danko | 24 comments

The Kubernetes default that cost us performance:

In Kubernetes, 'ndots:5' is set by default in '/etc/resolv.conf'. This means every hostname with fewer than 5 dots goes through every search domain before the actual FQDN is even tried. So when your app tries to resolve 'example.com', it might actually generate multiple DNS queries like:

'example.com.svc.cluster.local'
'example.com.google.internal'
'example.com.cluster.local'
'example.com'

Each failed lookup = latency, DNS noise, and pointless retries.

🔍 As Tim Hockin (Kubernetes co-founder) explained back in 2016: “This is a tradeoff between automagic and performance.” The default 'ndots:5' isn’t about optimization - it’s about making things “just work” across SRV records, multi-namespace service lookups, and what were then called PetSets (now StatefulSets). Even if it means triggering multiple DNS lookups before hitting the actual domain. So yes - it comes at a performance cost.

✅ What are the possible workarounds?
- Use FQDNs ("my-service.my-namespace.svc.cluster.local.") - don't forget the trailing dot to skip search paths
- Lower the 'ndots' value with dnsConfig at the pod level, or at a wider level using policy engines (Gatekeeper, Kyverno)
- Reduce unnecessary search domains in your cluster setup

🔎 Real-world impact: After lowering ndots, we saw a clear drop in both conntrack table usage and application latency - confirming the reduction in DNS query volume and retries. (Image attached - green, yellow, and blue lines are the nodes with kube-dns on them.)

The impact is most noticeable if your workloads involve:
- Low-latency demands
- Constant DNS resolution

👉 Have you tuned your DNS settings - or just lived with the default? What other Kubernetes defaults have surprised you in production?

(Source of Tim's comment: https://lnkd.in/dBVDeCCD)

#kubernetes #devops #dns #observability #performance #networking #kubedns #coredns #openshift
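For the pod-level workaround mentioned above, a minimal sketch; the pod name and the ndots value of 2 are illustrative assumptions, not taken from the post:

```yaml
# Illustrative pod spec: lower ndots via dnsConfig so external names are tried
# as absolute names before the cluster search domains are appended.
apiVersion: v1
kind: Pod
metadata:
  name: ndots-demo            # hypothetical name
spec:
  containers:
    - name: app
      image: nginx
  dnsConfig:
    options:
      - name: ndots
        value: "2"            # names with 2+ dots (e.g. api.example.com) skip the search list first
```

Policy engines such as Gatekeeper or Kyverno can inject the same dnsConfig block cluster-wide instead of per pod, as the post suggests.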

I don't use containerization ( #docker, #k8s or whatever) on my servers; I only use distro packages or the sources of the app I want to install... the old way.
Do dockerized applications need more resources, or is the overhead insignificant?
Usually, I set up small servers.

Man, Prometheus is a pain to recover once its data store is in any way out of shape. It did NOT help that it was buried inside Kubernetes, inside a PVC.

Thankfully it was only the Dev environment today, but if this ever pages on Prod, we're losing data as it stands.

I'll write something up for a runbook, but eesh.
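A rough sketch of the kind of runbook step this implies, with assumed names throughout (Prometheus scaled down first, a PVC called prometheus-data): attach a throwaway pod to the same PVC so the TSDB, and in particular the wal/ directory, can be inspected or cleaned up before Prometheus is restarted.

```yaml
# Hypothetical rescue pod: mounts the existing Prometheus PVC so its TSDB can
# be inspected (e.g. corrupt WAL segments moved aside) while Prometheus is stopped.
apiVersion: v1
kind: Pod
metadata:
  name: prometheus-data-rescue    # illustrative name
spec:
  restartPolicy: Never
  containers:
    - name: shell
      image: busybox
      command: ["sleep", "3600"]  # keep the pod alive for a manual session
      volumeMounts:
        - name: tsdb
          mountPath: /prometheus
  volumes:
    - name: tsdb
      persistentVolumeClaim:
        claimName: prometheus-data  # assumed claim name; match your release
```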

What if we told you that #GitOps isn't just about putting what you deploy into #Git, but also a philosophy and a set of concepts!

I'll have the honor of talking about it at @devoxxfr 2025! 🚀

And joining me, the cutest of all robots 🤖! #astro

link.davinkevin.fr/AstroGitOps

If you have questions and/or experience to share, don't hesitate to ping me 😇!
And don't forget to add the talk to your favorites ⭐

#Kubernetes #k8s #IaC

Is Cilium's native routing mode supposed to publish pod IPs on the interfaces in the host network namespace?

That would make sense to me, since it would use the native layer 2/3 network routing.

Or am I required to turn on SNAT using the IP masquerading feature?

Pods are getting valid IPv6 GUAs in the LAN/host subnet, but of course nothing can return to them...

#kubernetes #k8s #Cilium
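For context, a hedged sketch of the Cilium Helm values this question revolves around; the option names follow Cilium's Helm chart, but the CIDR is a placeholder and the exact keys should be checked against the chart version in use:

```yaml
# Sketch of Helm values for IPv6 native routing without SNAT (placeholders,
# verify against your Cilium chart version).
routingMode: native                        # route pod traffic via the underlying network, no tunnel
ipv6:
  enabled: true
ipv6NativeRoutingCIDR: "2001:db8:42::/64"  # placeholder: prefix the fabric can route back to nodes
enableIPv6Masquerade: false                # keep pod GUAs as source addresses instead of SNATing
autoDirectNodeRoutes: true                 # install per-node routes for pod CIDRs on a shared L2
```

With masquerading off, the surrounding network still has to know how to route the pod prefix back to the right node, e.g. via static routes or BGP; native routing mode by itself does not advertise pod IPs on the host interfaces.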

CoreDNS + Kubernetes question:

CoreDNS, in its stock configuration, assumes/uses the default service created for the Kubernetes API.

However, this gets a ClusterIP from the cluster's Service IP range as part of normal IPAM.

This IP is not known to the operating system or during cluster setup, so it isn't in the IP SANs for the TLS certificate. This causes CoreDNS to error out trusting the Kubernetes API when trying to watch services.

The default Kubernetes service IP is roughly well-known, since it's the bottom of the service IP range + 1, but that still feels... odd.

I looked into automatic in-cluster certificate management and rotation, but that seems to be more about kubelet client certificates for the API server, not the serving TLS certificates themselves. Which kinda makes sense, because otherwise you'd have cyclic dependencies.
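If the cluster happens to be bootstrapped with kubeadm (an assumption; the post doesn't say), one hedged way to handle this is to put the default service ClusterIP into the API server certificate SANs explicitly; the CIDR and IP below are the common defaults, not values from the post:

```yaml
# Hypothetical kubeadm ClusterConfiguration excerpt: add the ClusterIP of the
# default `kubernetes` Service (first usable IP of the service CIDR) to the
# API server serving certificate SANs so in-cluster clients such as CoreDNS
# can verify it.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  serviceSubnet: "10.96.0.0/12"       # assumed default service CIDR
apiServer:
  certSANs:
    - "10.96.0.1"                     # ClusterIP of the default `kubernetes` Service
    - "kubernetes.default.svc.cluster.local"
```

kubeadm already derives this SAN from serviceSubnet by default, so the explicit entry mostly matters for clusters whose certificates are generated by other tooling.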

kubernetes.io/docs/tasks/admin

Kubernetes docs, "Customizing DNS Service": This page explains how to configure your DNS Pod(s) and customize the DNS resolution process in your cluster.

A little over 6 months into my current role as a DevSecOps Engineer. Some days are the most uncomfortable I've felt in my life. This picture is a pretty accurate description of what it feels like learning some of the tools we use 😅. But the knowledge and experience I'm gaining? 100% worth it.

"A comfort zone is a beautiful place – but nothing ever grows there."

#DevOps #k8s #learnk8s