Kubernetes Deployment vs. Service: Managing Your Pods
Originally published by New Context.
It is hard to argue that the use of Kubernetes is not increasing. In fact, the Cloud Native Computing Foundation (CNCF) reports that 83% of respondents to its 2020 survey run Kubernetes in production, up from 23% in 2016. The novelty of Kubernetes (often abbreviated K8s) has subsided, and its benefits over other container orchestration options are becoming clear, including the ability to combine multiple containers into a single pod.
A pod is the smallest deployable computing unit in Kubernetes: one or more containers that share a single IP address and are deployed and managed together. How best to manage these pods is a primary question that must be answered when utilizing Kubernetes. The answer may be as simple as a Kubernetes deployment vs. service decision. Or maybe not. In this article, we take a look at the options to help you decide whether a deployment alone, a service alone, or both is the best option for your cloud management.
What is a Kubernetes Deployment?
A Kubernetes deployment provides a means of declaring and changing the desired state of your pods, whether that is a single pod of one or more running containers or a group of identical pods managed by a ReplicaSet. Using a deployment allows you to easily keep a group of identical pods running with a common configuration. Once you have applied your deployment, Kubernetes works to ensure that all pods managed by the deployment meet whatever requirements you have set.
Example of Kubernetes deployment | The New Stack
As shown above, when a pod's desired state changes, the deployment rolls out updated replicas to match. Deployment use cases include:
- Running multiple instances of an application
- Scaling the number of instances of an application up or down
- Updating every running instance of an application
- Rolling back all running instances of an application to another version
While deployments do define how your applications will run, they do not guarantee where your applications will live within your cluster. For example, if your application requires an instance of a pod on every node, you will want to use a DaemonSet. For stateful applications, a StatefulSet will provide unique network identifiers, persistent storage, and ordered deployment/scaling.
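As a concrete sketch, a minimal Deployment manifest might look like the following. The name `web-app` and the `nginx:1.25` image are illustrative, not from the original article:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app            # hypothetical application name
spec:
  replicas: 3              # keep three identical pods running
  selector:
    matchLabels:
      app: web-app         # manage pods carrying this label
  template:                # the pod template the ReplicaSet stamps out
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: nginx:1.25   # illustrative container image
          ports:
            - containerPort: 80
```

Applying this with `kubectl apply -f deployment.yaml` covers the use cases above: `kubectl scale deployment/web-app --replicas=5` scales it, changing the image triggers a rolling update, and `kubectl rollout undo deployment/web-app` rolls back.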
Always understand your application's desired behavior before deciding how to run it in a cluster. While deployments define how your application runs, they don't tell anything else in the cluster how to find or communicate with the resources they manage. This is where Kubernetes services come in.
What is a Kubernetes Service?
Every pod is assigned its own IP address, but that address changes whenever the pod is replaced, so it cannot be relied on directly. A service provides a stable point of access and automatically routes traffic to the correct pod, as the example below shows.
Example of Kubernetes service | kubernetes.io
When a service is created, it publishes its own virtual address either as environment variables injected into every pod or, if your cluster runs CoreDNS, as a DNS entry any pod can resolve. If the number of available pods changes, the service is updated and begins directing traffic accordingly, with no manual action required.
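A minimal Service manifest that fronts the pods of a deployment might look like this (the `web-app` name and label are illustrative assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-app        # resolvable in-cluster as web-app.<namespace>.svc.cluster.local
spec:
  selector:
    app: web-app       # route to any ready pod carrying this label
  ports:
    - port: 80         # port the service exposes
      targetPort: 80   # port the pods listen on
```

Because routing is driven by the label selector, pods can come and go freely; the service tracks the current set of ready pods on its own.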
Services are not just for pods. They can abstract access to databases, external hosts, or even other services. In some of these cases you may need an Endpoints object, but for internal communication this is not required.
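For the external-host case, a sketch might pair a selector-less service with a manually managed Endpoints object. The `external-db` name, port, and IP are hypothetical placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db    # hypothetical name; no selector, so no pods are matched
spec:
  ports:
    - port: 5432       # illustrative database port
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db    # must match the service name exactly
subsets:
  - addresses:
      - ip: 10.0.0.50  # illustrative IP of a database outside the cluster
    ports:
      - port: 5432
```

In-cluster clients then connect to `external-db` as if it were any other service, and the backing address can be changed in one place.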
Combining Kubernetes Deployment vs. Service
Deployments and services are often used in tandem: deployments define the desired state of the application, while services keep communication between resources and the rest of the cluster stable and adaptable. Most workloads should use both, but depending on application behavior that may not always make sense. Here is an overview of what would happen if you chose not to run a deployment or a service for your application.
Without a deployment
Running a pod without a deployment can be done, but it is generally not recommended. For very simple testing it may be an effective way to increase velocity, but for anything of importance this approach has a number of flaws.
Without a deployment, pods can still be created directly or run through unmanaged ReplicaSets. While you will still be able to scale your application, you lose much of the base functionality deployments provide and drastically increase your maintenance burden. Kubernetes now recommends running almost all pods in deployments rather than using custom ReplicaSets.
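For comparison, a bare pod manifest is all of a few lines, which is why it tempts for quick tests (the name and `busybox` image here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: debug-pod      # hypothetical name for a one-off test pod
spec:
  containers:
    - name: shell
      image: busybox:1.36       # illustrative image
      command: ["sleep", "3600"]  # keep the container alive for an hour
```

The catch: if this pod crashes or its node fails, nothing recreates it, and there is no built-in way to scale, update, or roll it back.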
Without a service
Running a pod or deployment without a service is very possible, and in some cases it will be perfectly fine. If your workloads do not require communication with other resources, either within or outside of the cluster, there is no need to use a service. However, for anything that will need to communicate with other resources, a service should be strongly considered.
Without a service, pods are still assigned IP addresses that allow access from within the cluster. Other pods can hit those addresses and communication happens as normal. However, if a pod dies, its replacement comes online with a new IP address, and anything trying to communicate with the dead pod somehow needs to learn the new address. There are ways of working around this issue without services, but they require a lot of manual configuration and management, which will only cause more problems as the number of pods increases.
Keep in mind that services can also be used to abstract things like database connections, which can make them invaluable when trying to figure out how to lay out the networking within your Kubernetes cluster.
When deciding on how best to utilize Kubernetes there are many choices that must be made to take full advantage of the platform. At the end of the day services and deployments are some of the foundational tools Kubernetes provides to help manage your applications effectively. Always make sure to understand how your applications will be functioning, and for larger projects it may be important to determine whether you should seek a development partner to ensure that your implementation is optimized, cost-efficient, and secure.