Kubernetes Blog

How Weave built a multi-deployment solution for Scope using Kubernetes

December 12 2015

Today we hear from Peter Bourgon, Software Engineer at Weaveworks, a company that provides software for developers to network, monitor and control microservices-based apps in Docker containers. Peter tells us what was involved in selecting and deploying Kubernetes.

Earlier this year at Weaveworks we launched Weave Scope, an open source solution for visualization and monitoring of containerised apps and services. Recently we released a hosted Scope service into an Early Access Program. Today, we want to walk you through how we initially prototyped that service, and how we ultimately chose and deployed Kubernetes as our platform.

A cloud-native architecture 

Scope already had a clean internal line of demarcation between data collection and user interaction, so it was straightforward to split the application on that line, distribute probes to customers, and host frontends in the cloud. We built out a small set of microservices in the 12-factor model, which includes:

  • A users service, to manage and authenticate user accounts 
  • A provisioning service, to manage the lifecycle of customer Scope instances 
  • A UI service, hosting all of the fancy HTML and JavaScript content 
  • A frontend service, to route requests according to their properties 
  • A monitoring service, to introspect the rest of the system 

All services are built as Docker images, FROM scratch where possible. We knew that we wanted to offer at least 3 deployment environments, which should be as near to identical as possible. 

  • An “Airplane Mode” local environment, on each developer’s laptop 
  • A development or staging environment, on the same infrastructure that hosts production, with different user credentials 
  • The production environment itself 

These were our application invariants. Next, we had to choose our platform and deployment model.

Our first prototype 

There is a seemingly infinite set of choices, with an infinite set of possible combinations. After surveying the landscape in mid-2015, we decided to build a prototype with:

  • Amazon EC2 as our cloud platform, including RDS for persistence 
  • Docker Swarm as our “scheduler” 
  • Consul for service discovery when bootstrapping Swarm 
  • Weave Net for our network and service discovery for the application itself 
  • Terraform as our provisioner 

This setup was fast to define and fast to deploy, so it was a great way to validate the feasibility of our ideas. But we quickly hit problems. 

  • Terraform’s support for Docker as a provisioner is barebones, and we uncovered some bugs when trying to use it to drive Swarm. 
  • Largely as a consequence of the above, managing a zero-downtime deploy of Docker containers with Terraform was very difficult. 
  • Swarm’s raison d’être is to abstract the particulars of multi-node container scheduling behind the familiar Docker CLI/API commands. But we concluded that the API is insufficiently expressive for the kind of operations that are necessary at scale in production. 
  • Swarm provides no fault tolerance in the case of e.g. node failure. 

We also made a number of mistakes when designing our workflow.

  • We tagged each container with its target environment at build time, which simplified our Terraform definitions, but effectively forced us to manage our versions via image repositories. That responsibility belongs in the scheduler, not the artifact store (see the sketch after this list). 
  • As a consequence, every deploy required artifacts to be pushed to all hosts. This made deploys slow, and rollbacks unbearable. 
  • Terraform is designed to provision infrastructure, not cloud applications. The process is slower and more deliberate than we’d like. Shipping a new version of something to prod took about 30 minutes, all-in. 
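
To make the first point concrete, here's a hedged sketch of the anti-pattern versus what we moved towards; the image name is illustrative, not one of our actual repositories.

# Anti-pattern: bake the target environment into the image tag, so "promoting"
# a build means re-building and re-pushing images for every environment.
docker build -t weaveworks/users:prod .
docker push weaveworks/users:prod

# Better: immutable, versioned artifacts, with the scheduler (not the registry)
# deciding which version runs in which environment.
docker build -t weaveworks/users:1.2.3 .
docker push weaveworks/users:1.2.3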

When it became clear that the service had potential, we re-evaluated the deployment model with an eye towards the long-term.

Rebasing on Kubernetes 

It had only been a couple of months, but a lot had changed in the landscape.

  • HashiCorp released Nomad 
  • Kubernetes hit 1.0 
  • Swarm was soon to hit 1.0 

While many of our problems could be fixed without making fundamental architectural changes, we wanted to capitalize on the advances in the industry, by joining an existing ecosystem, and leveraging the experience and hard work of its contributors. 

After some internal deliberation, we did a small-scale audition of Nomad and Kubernetes. We liked Nomad a lot, but felt it was just too early to trust it with our production service. Also, we found the Kubernetes developers to be the most responsive to issues on GitHub. So, we decided to go with Kubernetes.

Local Kubernetes 

First, we would replicate our Airplane Mode local environment with Kubernetes. Because we have developers on both Mac and Linux laptops, it’s important that the local environment is containerised. So, we wanted the Kubernetes components themselves (kubelet, API server, etc.) to run in containers.

We encountered two main problems. First, and most broadly, creating Kubernetes clusters from scratch is difficult, as it requires deep knowledge of how Kubernetes works, and quite some time to get the pieces to fall into place. local-cluster-up.sh seemed like a Kubernetes developer’s tool and didn’t leverage containers, and the third-party solutions we found, like Kubernetes Solo, require a dedicated VM or are platform-specific.

Second, containerised Kubernetes is still missing several important pieces. Following the official Kubernetes Docker guide yields a barebones cluster without certificates or service discovery. We also encountered a couple of usability issues (#16586, #17157), which we resolved by submitting a patch and building our own hyperkube image from master.
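
For reference, the guide’s approach boils down to running the kubelet itself in a container and letting it start the rest of the control plane from a manifest directory. A minimal sketch follows; the image tag and flags are illustrative and have shifted between releases.

# Run the kubelet in a container; it then launches the API server, scheduler
# and controller-manager from the static manifests shipped in the image.
docker run -d \
    --name=kubelet \
    --net=host --pid=host --privileged \
    --volume=/:/rootfs:ro \
    --volume=/sys:/sys:ro \
    --volume=/var/run:/var/run:rw \
    --volume=/var/lib/docker/:/var/lib/docker:rw \
    gcr.io/google_containers/hyperkube:v1.1.1 \
    /hyperkube kubelet \
        --containerized \
        --hostname-override=127.0.0.1 \
        --api-servers=http://localhost:8080 \
        --config=/etc/kubernetes/manifests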

In the end, we got things working by creating our own provisioning script. It needs to do things like generate the PKI keys and certificates and provision the DNS add-on, which took a few attempts to get right. We’ve also learned of a commit to add certificate generation to the Docker build, so things will likely get easier in the near term.
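
For the curious, here is a rough sketch of the kind of steps such a script has to cover; the file names and certificate subjects are illustrative, not lifted from our actual script.

# Generate a CA and an API server certificate signed by it.
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -days 365 -out ca.crt -subj "/CN=kube-ca"
openssl genrsa -out apiserver.key 2048
openssl req -new -key apiserver.key -out apiserver.csr -subj "/CN=kube-apiserver"
openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -out apiserver.crt -days 365

# Once the API server is up, provision the DNS add-on (SkyDNS at the time),
# e.g. from manifests rendered out of the templates in cluster/addons/dns/.
kubectl create -f skydns-rc.yaml
kubectl create -f skydns-svc.yaml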

Kubernetes on AWS 

Next, we would deploy Kubernetes to AWS, and wire it up with the other AWS components. We wanted to stand up the service in production quickly, and we only needed to support Amazon, so we decided to do so without Weave Net and to use a pre-existing provisioning solution. But we’ll definitely revisit this decision in the near future, leveraging Weave Net via Kubernetes plugins.

Ideally we would have used Terraform resources, and we found a couple: kraken (using Ansible), kubestack (coupled to GCE), kubernetes-coreos-terraform (outdated Kubernetes) and coreos-kubernetes. But they all build on CoreOS, which was an extra moving part we wanted to avoid in the beginning. (On our next iteration, we’ll probably audition CoreOS.) If you use Ansible, there are playbooks available in the main repo. There are also community-driven Chef cookbooks and Puppet modules. I’d expect the community to grow quickly here.

The only other viable option seemed to be kube-up, which is a collection of scripts that provision Kubernetes onto a variety of cloud providers. By default, kube-up on AWS puts the master and minion nodes into their own VPC, or Virtual Private Cloud. But our RDS instances were provisioned in the region-default VPC, which meant that communication from a Kubernetes minion to the DB would be possible only via VPC peering or by opening the RDS VPC’s firewall rules manually.

To get traffic to traverse a VPC peer link, your destination IP needs to be in the target VPC’s private address range. But it turns out that resolving the RDS instance’s hostname from anywhere outside the same VPC will yield the public IP. And performing the resolution is important, because RDS reserves the right to change the IP for maintenance. This wasn’t ever a concern in the previous infrastructure, because our Terraform scripts simply placed everything in the same VPC. So I thought I’d try the same with Kubernetes; the kube-up script ostensibly supports installing to an existing VPC by specifying a VPC_ID environment variable, so I tried installing Kubernetes to the RDS VPC. kube-up appeared to succeed, but service integration via ELBs broke and teardown via kube-down stopped working. After some time, we judged it best to let kube-up keep its defaults, and poked a hole in the RDS VPC.
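
To make the experiment concrete, the two approaches looked roughly like this; the IDs, port and CIDR below are placeholders, and kube-up’s knobs have shifted between releases.

# Attempt to install Kubernetes into the existing (RDS) VPC.
export KUBERNETES_PROVIDER=aws
export VPC_ID=vpc-0123abcd          # placeholder for the RDS VPC's ID
./cluster/kube-up.sh

# What we ended up doing instead: keep kube-up's default VPC and open the
# RDS security group to the minions' address range (group ID, port and
# CIDR are illustrative).
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123abcd --protocol tcp --port 5432 --cidr 172.20.0.0/16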

This was one hiccup among several that we encountered. Each one could be fixed in isolation, but the inherent fragility of using a shell script to provision remote state seemed to be the actual underlying cause. We fully expect the Terraform, Ansible, Chef, Puppet, etc. packages to continue to mature, and hope to switch soon.

Provisioning aside, there are great things about the Kubernetes/AWS integration. For example, Kubernetes services of the correct type automatically generate ELBs, and Kubernetes does a great job of lifecycle management there. Further, the Kubernetes domain model—services, pods, replication controllers, the labels and selector model, and so on—is coherent, and seems to give the user the right amount of expressivity, though the definition files do tend to stutter needlessly. The kubectl tool is good, albeit daunting at first glance. The rolling-update command in particular is brilliant: exactly the semantics and behavior I’d expect from a system like this. Indeed, once Kubernetes was up and running, it just worked, and exactly as I expected it to. That’s a huge thing.
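
To illustrate both points, here’s a hedged sketch with made-up names: a Service of type LoadBalancer, for which Kubernetes provisions and manages the ELB on AWS, followed by the rolling-update invocation for the replication controller behind it.

# A Service of type LoadBalancer; on AWS, Kubernetes creates and manages
# an ELB for it automatically (names and ports are illustrative).
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 8080
EOF

# Zero-downtime deploy of a new image for the replication controller
# backing that service.
kubectl rolling-update frontend --image=example/frontend:v2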

Conclusions 

After a couple weeks of fighting with the machines, we were able to resolve all of our integration issues, and have rolled out a reasonably robust Kubernetes-based system to production.

  • Provisioning Kubernetes is difficult, owing to a complex architecture and a young provisioning story. This shows all signs of improving. 
  • Kubernetes’ non-optional security model takes time to get right. 
  • The Kubernetes domain language is a great match to the problem domain. 
  • We have a lot more confidence in operating our application (it’s a lot faster, too). 
  • And we’re very happy to be part of a growing Kubernetes userbase, contributing issues and patches as we can and benefitting from the virtuous cycle of open-source development that powers the most exciting software being written today. 

 - Peter Bourgon, Software Engineer at Weaveworks

Weave Scope is an open source solution for visualization and monitoring of containerised apps and services. For a hosted Scope service, request an invite to the Early Access Program at scope.weave.works.

Creating a Raspberry Pi cluster running Kubernetes, the shopping list (Part 1)

November 25 2015

At Devoxx Belgium and Devoxx Morocco, Ray Tsang and I showed a Raspberry Pi cluster we built at Quintor running HypriotOS, Docker and Kubernetes. For those who did not see the talks, you can check out an abbreviated version of the demo or the full talk by Ray on developing and deploying Java-based microservices in Kubernetes. While we received many compliments on the talk, the most common question was how people could build a Pi cluster themselves! We’ll be doing just that, in two parts. This first post will cover the shopping list for the cluster, and the second will show you how to get it up and running . . .

Wait! Why the heck build a Raspberry Pi cluster running Kubernetes? 

We had two big reasons to build the Pi cluster at Quintor. First of all, we wanted to experiment with container technology at scale on real hardware. You can try out container technology using virtual machines, but Kubernetes runs great on bare metal too. To explore what that’d be like, we built a Raspberry Pi cluster just like we would build a cluster of machines in a production datacenter. This allowed us to understand and simulate how Kubernetes would work when we move it to our data centers.

Secondly, we did not want to blow the budget on this exploration. And what is cheaper than a Raspberry Pi? If you want to build a cluster comprising many nodes, each node should have a good cost-to-performance ratio. Our Pi cluster has 20 CPU cores, which is more than many servers, yet cost us less than $400. Additionally, the total power consumption is low and the form factor is small, which is great for these kinds of demo systems.

So, without further ado, let’s get to the hardware.

The Shopping List:

     
5 Raspberry Pi 2 Model B ~$200
5 16 GB micro SD card, class 10 ~$45
1 D-Link GO-SW-8E 8-Port Switch ~$15
1 Anker 60W 6-Port PowerPort USB Charger (white) ~$35
3 ModMyPi Multi-Pi Stackable Raspberry Pi Case ~$60
1 ModMyPi Multi-Pi Stackable Raspberry Pi Case - Bolt Pack ~$7
5 Micro USB cable (white), 1ft long ~$10
5 UTP Cat5 cable (white), 1ft long ~$10


For a total of approximately $380 you will have a building set to create a Raspberry Pi cluster like the one we built! [1]

Some of our considerations 

We used the Raspberry Pi 2 model B boards in our cluster rather than the Pi 1 boards because of the CPU power (quadcore @ 900MHz over a dualcore @ 700MHz) and available memory (1 GB over 512MB). These specs allowed us to run multiple containers on each Pi to properly experiment with Kubernetes.

We opted for a 16GB SD card in each Pi to be on the safe side with filesystem storage. In hindsight, 8GB would have been enough.

Note that the GeauxRobot Stackable Case looks like an alternative to the ModMyPi Stackable Case, but it’s smaller, which can cause problems fitting the Anker USB Adapter and placing the D-Link Network Switch. So, we stuck with the ModMyPi case.

Putting it together 

Building the Raspberry Pi cluster is pretty straightforward. Most of the work is putting the stackable casing together and mounting the Pi boards on the plexiglass panes. We mounted the network switch and USB Adapter using double-sided foam tape, which feels strong enough for most situations. Finally, we connected the USB and UTP cables. Next, we installed HypriotOS on every Pi. HypriotOS is a Raspbian-based Linux OS for the Raspberry Pi, extended with Docker support. The Hypriot team has an excellent tutorial on Getting started with Docker on your Raspberry Pi. Follow this tutorial to get Linux and Docker running on all the Pis.
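
If you want a flavour of what that involves, here’s a hedged sketch of writing the image to an SD card on Linux; the image file and device names are illustrative, and the Hypriot tutorial covers the details (including a macOS variant).

# Unzip the HypriotOS image and write it to the SD card.
# Double-check the device name first: dd will happily overwrite any disk!
unzip hypriotos-rpi.img.zip
sudo dd if=hypriotos-rpi.img of=/dev/mmcblk0 bs=4M
sync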

With that, you’re all set! Next up will be running Kubernetes on the Raspberry Pi cluster. We’ll cover that in the next post, so stay tuned!

Arjen Wassink, Java Architect and Team Lead, Quintor

[1] You can save ~$90 by making a stack of four Pis instead of five. This also means you can use a 5-Port Anker USB Charger instead of the 6-Port one.

Monitoring Kubernetes with Sysdig

November 19 2015

Today we’re sharing a guest post by Chris Crane from Sysdig about their monitoring integration into Kubernetes. 

Kubernetes offers a full environment to write scalable and service-based applications. It takes care of things like container grouping, discovery, load balancing and healing so you don’t have to worry about them. The design is elegant, scalable and the APIs are a pleasure to use.

And like any new infrastructure platform, if you want to run Kubernetes in production, you’re going to want to be able to monitor and troubleshoot it. We’re big fans of Kubernetes here at Sysdig, and, well: we’re here to help.

Sysdig offers native visibility into Kubernetes across the full Sysdig product line. That includes sysdig, our open source, CLI system exploration tool, and Sysdig Cloud, the first and only monitoring platform designed from the ground up to support containers and microservices.

At a high level, Sysdig products are aware of the entire Kubernetes cluster hierarchy, including namespaces, services, replication controllers and labels. So all of the rich system and application data gathered is now available in the context of your Kubernetes infrastructure. What does this mean for you? In a nutshell, we believe Sysdig can be your go-to tool for making Kubernetes environments significantly easier to monitor and troubleshoot!

In this post I will quickly preview the Kubernetes visibility in both open source sysdig and Sysdig Cloud, and show off a couple interesting use cases. Let’s start with the open source solution.

Exploring a Kubernetes Cluster with csysdig 

The easiest way to take advantage of sysdig’s Kubernetes support is by launching csysdig, the sysdig ncurses UI:

 > csysdig -k http://127.0.0.1:8080
Note: specify the address of your Kubernetes API server with the -k option, and sysdig will poll all the relevant information, leveraging both the standard and the watch API.

Now that csysdig is running, hit F2 to bring up the views panel, and you’ll notice the presence of a bunch of new views. The k8s Namespaces view can be used to see the list of namespaces and observe the amount of CPU, memory, network and disk resources each of them is using on this machine:

Similarly, you can select k8s Services to see the same information broken up by service:

or k8s Controllers to see the replication controllers:

or k8s Pods to see the list of pods running on this machine and the resources they use:

Drill Down-Based Navigation 

A cool feature in csysdig is the ability to drill down: just select an element, press enter and – boom – now you’re looking inside it. Drill down is also aware of the Kubernetes hierarchy, which means I can start from a service, get the list of its pods, see which containers run inside one of the pods, and go inside one of the containers to explore files, network connections, processes or even threads. Check out the video below.

Actions! 

One more thing about csysdig. As recently announced, csysdig also offers “control panel” functionality, making it possible to use hotkeys to execute command lines based on the element currently selected. So we made sure to enrich the Kubernetes views with a bunch of useful hotkeys. For example, you can delete a namespace or a service by pressing “x,” or you can describe them by pressing “d.”

My favorite hotkeys, however, are “f,” to follow the logs that a pod is generating, and “b,” which leverages kubectl exec to give you a shell inside a pod. Being brought into a bash prompt for the pod you’re observing is really useful and, frankly, a bit magic. :-)
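
Under the hood, the “b” hotkey amounts to roughly the following (pod name illustrative):

# Roughly what "b" does for the selected pod:
kubectl exec -ti redis-izl09 -- bash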

So that’s a quick preview of Kubernetes in sysdig. Note though, that all of this functionality is only for a single machine. What happens if you want to monitor a distributed Kubernetes cluster? Enter Sysdig Cloud.

Monitoring Kubernetes with Sysdig Cloud 

Let’s start with a quick review of Kubernetes’ architecture. From the physical/infrastructure point of view, a Kubernetes cluster is made up of a set of minion machines overseen by a master machine. The master’s tasks include orchestrating containers across minions, keeping track of state and exposing cluster control through a REST API and a UI.

On the other hand, from the logical/application point of view, Kubernetes clusters are arranged in the hierarchical fashion shown in this picture:

  • All containers run inside pods. A pod can host a single container, or multiple cooperating containers; in the latter case, the containers in the pod are guaranteed to be co-located on the same machine and can share resources. 
  • Pods typically sit behind services, which take care of balancing the traffic, and also expose the set of pods as a single discoverable IP address/port. 
  • Services are scaled horizontally by replication controllers (“RCs”) which create/destroy pods for each service as needed. 
  • Namespaces are virtual clusters that can include one or more services. 

So just to be clear, multiple services and even multiple namespaces can be scattered across the same physical infrastructure.

After talking to hundreds of Kubernetes users, it seems that the typical cluster administrator is often interested in looking at things from the physical point of view, while service/application developers tend to be more interested in seeing things from the logical point of view. 

With both these use cases in mind, Sysdig Cloud’s support for Kubernetes works like this: 

  • By automatically connecting to a Kubernetes’ cluster API Server and querying the API (both the regular and the watch API), Sysdig Cloud is able to infer both the physical and the logical structure of your microservice application. 
  • In addition, we transparently extract important metadata such as labels. 
  • This information is combined with our patent-pending ContainerVision technology, which makes it possible to inspect applications running inside containers without requiring any instrumentation of the container or application. 

Based on this, Sysdig Cloud can provide rich visibility and context from both an infrastructure-centric and an application-centric point of view. Best of both worlds! Let’s check out what this actually looks like.

One of the core features of Sysdig Cloud is groups, which allow you to define the hierarchy of metadata for your applications and infrastructure. By applying the proper groups, you can explore your containers based on their physical hierarchy (for example, physical cluster > minion machine > pod > container) or based on their logical microservice hierarchy (for example, namespace > replication controller > pod > container – as you can see in this example). 

If you’re interested in the utilization of your underlying physical resource – e.g., identifying noisy neighbors – then the physical hierarchy is great. But if you’re looking to explore the performance of your applications and microservices, then the logical hierarchy is often the best place to start. 

For example: here you can see the overall performance of our WordPress service: 

Keep in mind that the pods implementing this service are scattered across multiple machines, but we can still see request counts, response times and URL statistics aggregated together for this service. And don’t forget: this doesn’t require any configuration or instrumentation of wordpress, apache, or the underlying containers! 

And from this view, I can now easily create alerts for these service-level metrics, and I can dig down into any individual container for deep inspection – down to the process level – whenever I want, including back in time! 

Visualizing Your Kubernetes Services 

We’ve also included Kubernetes awareness in Sysdig Cloud’s famous topology view, at both the physical and logical level. 

The two pictures below show the exact same infrastructure and services. But the first one depicts the physical hierarchy, with a master node and three minion nodes; while the second one groups containers into namespaces, services and pods, while abstracting the physical location of the containers. 

Hopefully it’s self-evident how much more natural and intuitive the second (services-oriented) view is. The structure of the application and the various dependencies are immediately clear. The interactions between various microservices become obvious, despite the fact that these microservices are intermingled across our machine cluster! 

Conclusion 

I’m pretty confident that what we’re delivering here represents a huge leap in visibility into Kubernetes environments and it won’t disappoint you. I also hope it can be a useful tool enabling you to use Kubernetes in production with a little more peace of mind. Thanks, and happy digging! 

Chris Crane, VP Product, Sysdig 

You can find open source sysdig on github and at sysdig.org, and you can sign up for a free trial of Sysdig Cloud at sysdig.com.

To see a live demo and meet some of the folks behind the project join us this Thursday for a Kubernetes and Sysdig Meetup in San Francisco.

One million requests per second: Dependable and dynamic distributed systems at scale

November 11 2015

Recently, I’ve gotten in the habit of telling people that building a reliable service isn’t that hard. If you give me two Compute Engine virtual machines, a Cloud Load balancer, supervisord and nginx, I can create you a static web service that will serve a static web page, effectively forever.

The real challenge is building agile AND reliable services. In the new world of software development it’s trivial to spin up enormous numbers of machines and push software to them. Developing a successful product must also include the ability to respond to changes in a predictable way, to handle upgrades elegantly and to minimize downtime for users. Missing any one of these elements results in an unsuccessful product that’s flaky and unreliable. I remember a time, not that long ago, when it was common for websites to be unavailable for an hour around midnight each day as a safety window for software upgrades. My bank still does this. It’s really not cool.

Fortunately, for developers, our infrastructure is evolving along with the requirements that we’re placing on it. Kubernetes has been designed from the ground up to make it easy to design, develop and deploy dependable, dynamic services that meet the demanding requirements of the cloud native world.

To demonstrate exactly what we mean by this, I’ve developed a simple demo of a Container Engine cluster serving 1 million HTTP requests per second. In all honesty, serving 1 million requests per second isn’t really that exciting. In fact, it’s really so very 2013.

What is exciting is that while successfully handling 1 million HTTP requests per second with uninterrupted availability, we have Kubernetes perform a zero-downtime rolling upgrade of the service to a new version of the software while we’re still serving 1 million requests per second.

This is only possible due to a large number of performance tweaks and enhancements that have gone into the Kubernetes 1.1 release. I’m incredibly proud of all of the features that our community has built into this release. Indeed in addition to making it possible to serve 1 million requests per second, we’ve also added an auto-scaler, so that you won’t even have to wake up in the middle of the night to scale your service in response to load or memory pressures.

If you want to try this out on your own cluster (or use the load test framework to test your own service) the code for the demo is available on github. And the full video is available.

I hope I’ve shown you how Kubernetes can enable developers of distributed systems to achieve both reliability and agility at scale, and as always, if you’re interested in learning more, head over to kubernetes.io or github and connect with the community on our Slack channel. 

Video: https://www.youtube.com/embed/7TOWLerX0Ps

  • Brendan Burns, Senior Staff Software Engineer, Google, Inc.

Kubernetes 1.1 Performance upgrades, improved tooling and a growing community

November 09 2015

Since the Kubernetes 1.0 release in July, we’ve seen tremendous adoption by companies building distributed systems to manage their container clusters. We’ve also been humbled by the rapid growth of the community who help make Kubernetes better every day. We have seen commercial offerings such as Tectonic by CoreOS and Red Hat Atomic Host emerge to deliver deployment and support of Kubernetes. And a growing ecosystem has added Kubernetes support, including tool vendors such as Sysdig and Project Calico.

With the help of hundreds of contributors, we’re proud to announce the availability of Kubernetes 1.1, which offers major performance upgrades, improved tooling, and new features that make applications even easier to build and deploy.

Some of the work we’d like to highlight includes:

  • Substantial performance improvements: We have architected Kubernetes from day one to handle Google-scale workloads, and our customers have put it through its paces. In Kubernetes 1.1, we have made further investments to ensure that you can run in extremely high-scale environments; later this week, we will be sharing examples of running thousand-node clusters, and running over a million QPS against a single cluster. 

  • Significant improvement in network throughput: Running Google-scale workloads also requires Google-scale networking. In Kubernetes 1.1, we have included an option to use native iptables, offering an 80% reduction in tail latency, an almost complete elimination of CPU overhead, and improvements in reliability and system architecture, ensuring Kubernetes can handle high-scale throughput well into the future. 

  • Horizontal pod autoscaling (Beta): Many workloads can go through spiky periods of utilization, resulting in uneven experiences for your users. Kubernetes now has support for horizontal pod autoscaling, meaning your pods can scale up and down based on CPU usage. Read more about Horizontal pod autoscaling.

  • HTTP load balancer (Beta): Kubernetes now has the built-in ability to route HTTP traffic based on packet introspection. This means you can have ‘http://foo.com/bar’ go to one service, and ‘http://foo.com/meep’ go to a completely independent service (a hedged manifest sketch follows this list). Read more about the Ingress object.

  • Job objects (Beta): We’ve also had frequent requests for integrated batch jobs, such as processing a batch of images to create thumbnails, or a particularly large data file that has been broken down into many chunks. The Job object introduces a new API object that runs a workload, restarts it if it fails, and keeps trying until it’s successfully completed. Read more about the Job object.

  • New features to shorten the test cycle for developers: We continue to work on making developing applications for Kubernetes quick and easy. Two new features that speed up developers’ workflows are the ability to run containers interactively, and improved schema validation to let you know if there are any issues with your configuration files before you deploy them. 

  • Rolling update improvements: Core to the DevOps movement is being able to release new updates without any effect on a running service. Rolling updates now ensure that updated pods are healthy before continuing the update. 

  • And many more. For a complete list of updates, see the 1.1 release notes on GitHub. 
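
As promised above, here’s a hedged sketch of the HTTP routing example using the beta Ingress API from this release; the service names are illustrative.

# Route foo.com/bar and foo.com/meep to two independent services.
cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo
spec:
  rules:
  - host: foo.com
    http:
      paths:
      - path: /bar
        backend:
          serviceName: bar
          servicePort: 80
      - path: /meep
        backend:
          serviceName: meep
          servicePort: 80
EOF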

Today, we’re also proud to mark the inaugural Kubernetes conference, KubeCon, where some 400 community members along with dozens of vendors are in attendance supporting the Kubernetes project.

We’d love to highlight just a few of the many partners making Kubernetes better:

“We are betting our major product, Tectonic – which enables any company to deploy, manage and secure its containers anywhere – on Kubernetes because we believe it is the future of the data center. The release of Kubernetes 1.1 is another major milestone that will create more widespread adoption of distributed systems and containers, and puts us on a path that will inevitably lead to a whole new generation of products and services.” – Alex Polvi, CEO, CoreOS.

“Univa’s customers are looking for scalable, enterprise-caliber solutions to simplify managing container and non-container workloads in the enterprise. We selected Kubernetes as a foundational element of our new Navops suite which will help IT and DevOps rapidly integrate containerized workloads into their production systems and extend these workloads into cloud services.” – Gary Tyreman, CEO, Univa.

“The tremendous customer demand we’re seeing to run containers at scale with Kubernetes is a critical element driving growth in our professional services business at Redapt. As a trusted advisor, it’s great to have a tool like Kubernetes in our tool belt to help our customers achieve their objectives.” – Paul Welch, SR VP Cloud Solutions, Redapt

As we mentioned above, we would love your help:

  • Get involved with the Kubernetes project on GitHub 
  • Connect with the community on Slack
  • Follow us on Twitter @Kubernetesio for the latest updates 
  • Post questions (or answer questions) on Stack Overflow 
  • Get started running, deploying, and using Kubernetes with our guides 

But, most of all, just let us know how you are transforming your business using Kubernetes, and how we can help you do it even faster. Thank you for your support!

 - David Aronchick, Senior Product Manager for Kubernetes and Google Container Engine

Kubernetes as Foundation for Cloud Native PaaS

November 03 2015

With Kubernetes continuing to gain momentum as a critical tool for building and scaling container based applications, we’ve been thrilled to see a growing number of platform as a service (PaaS) offerings adopt it as a foundation. PaaS developers have been drawn to Kubernetes by its rapid rate of maturation, the soundness of its core architectural concepts, and the strength of its contributor community. The Kubernetes ecosystem continues to grow, and these PaaS projects are great additions to it.

"Deis is the leading Docker PaaS with over a million downloads, actively used by companies like Mozilla, The RealReal, ShopKeep and Coinbase. Deis provides software teams with a turn-key platform for running containers in production, featuring the ability to build and store Docker images, production-grade load balancing, a streamlined developer interface and an ops-ready suite of logging and monitoring infrastructure backed by world-class 24x7x365 support. After a community-led evaluation of alternative orchestrators, it was clear that Kubernetes represents a decade of experience running containers at scale inside Google. The Deis project is proud to be rebasing onto Kubernetes and is thrilled to join its vibrant community.” - Gabriel Monroy, CTO of Engine Yard, Inc.

OpenShift by Red Hat helps organizations accelerate application delivery by enabling development and IT operations teams to be more agile, responsive and efficient. OpenShift Enterprise 3 is the first fully supported, enterprise-ready, web-scale container application platform that natively integrates the Docker container runtime and packaging format, Kubernetes container orchestration and management engine, on a foundation of Red Hat Enterprise Linux 7, all fully supported by Red Hat from the operating system to application runtimes.

“Kubernetes provides OpenShift users with a powerful model for application orchestration, leveraging concepts like pods and services, to deploy (micro)services that inherently span multiple containers and application topologies that will require wiring together multiple services. Pods can be optionally mapped to storage, which means you can run both stateful and stateless services in OpenShift. Kubernetes also provides a powerful declarative management model to manage the lifecycle of application containers. Customers can then use Kubernetes’ integrated scheduler to deploy and manage containers across multiple hosts. As a leading contributor to both the Docker and Kubernetes open source projects, Red Hat is not just adopting these technologies but actively building them upstream in the community.”  - Joe Fernandes, Director of Product Management for Red Hat OpenShift.

Huawei, a leading global ICT technology solution provider, will offer container as a service (CaaS) built on Kubernetes in the public cloud for customers with Docker based applications. Huawei CaaS services will manage multiple clusters across data centers, and deploy, monitor and scale containers with high availability and high resource utilization for their customers. For example, one of Huawei’s current software products for their telecom customers utilizes tens of thousands of modules and hundreds of instances in virtual machines. By moving to a container based PaaS platform powered by Kubernetes, Huawei is migrating this product into a micro-service based, cloud native architecture. By decoupling the modules, they’re creating a high performance, scalable solution that runs hundreds, even thousands of containers in the system. Decoupling existing heavy modules could have been a painful exercise. However, using several key concepts introduced by Kubernetes, such as pods, services, labels, and proxies, Huawei has been able to re-architect their software with great ease.

Huawei has made Kubernetes the core runtime engine for container based applications/services, and they’ve been building other PaaS components or capabilities around Kubernetes, such as user access management, composite API, Portal and multiple cluster management. Additionally, as part of the migration to the new platform, they’re enhancing their PaaS solution in the areas of advanced scheduling algorithm, multi tenant support and enhanced container network communication to support customer needs.

“Huawei chose Kubernetes as the foundation for our offering because we like the abstract concepts of services, pods and labels for modeling distributed applications. We developed an application model based on these concepts to model existing complex applications, which works well for moving legacy applications into the cloud. In addition, Huawei intends for our PaaS platform to support many scenarios, and Kubernetes’ flexible architecture with its plug-in capability is key to our platform architecture.” - Ying Xiong, Chief Architect of PaaS at Huawei.

Gondor is a PaaS with a focus on application hosting throughout the lifecycle, from development to testing to staging to production. It supports Python, Go, and Node.js applications as well as technologies such as Postgres, Redis and Elasticsearch. The Gondor team recently re-architected Gondor to incorporate Kubernetes, and discussed this in a blog post.

“There are two main reasons for our move to Kubernetes: One, by taking care of the lower layers in a truly scalable fashion, Kubernetes lets us focus on providing a great product at the application layer. Two, the portability of Kubernetes allows us to expand our PaaS offering to on-premises, private cloud and a multitude of alternative infrastructure providers.” - Brian Rosner, Chief Architect at Eldarion (the driving force behind Gondor)

  • Martin Buhr, Google Business Product Manager

Some things you didn’t know about kubectl

October 28 2015

kubectl is the command line tool for interacting with Kubernetes clusters. Many people use it every day to deploy their container workloads into production clusters. But there’s more to kubectl than just kubectl create -f or kubectl rolling-update. kubectl is a veritable multi-tool of container orchestration and management. Below we describe some of the features of kubectl that you may not have seen.

Important Note : Most of these features are part of the upcoming 1.1 release of Kubernetes. They are not present in the current stable 1.0.x release series.

Run interactive commands

kubectl run has been in kubectl since the 1.0 release, but recently we added the ability to run interactive containers in your cluster. That means that an interactive shell in your Kubernetes cluster is as close as:

$> kubectl run -i --tty busybox --image=busybox --restart=Never -- sh   
Waiting for pod default/busybox-tv9rm to be running, status is Pending, pod ready: false   
Waiting for pod default/busybox-tv9rm to be running, status is Running, pod ready: false   
$> # ls 
bin dev etc home proc root sys tmp usr var 
$> # exit  

The above kubectl command is equivalent to docker run -i -t busybox sh. Sadly we mistakenly used -t for template in kubectl 1.0, so we need to retain backwards compatibility with existing CLI users. But the existing use of -t is deprecated and we’ll eventually shorten --tty to -t.

In this example, -i indicates that you want an allocated stdin for your container and that you want an interactive session; --restart=Never indicates that the container shouldn’t be restarted after you exit the terminal; and --tty requests that a TTY be allocated for the session.

View your Pod’s logs

Sometimes you just want to watch what’s going on in your server. For this, kubectl logs is the subcommand to use. Adding the -f flag lets you live stream new logs to your terminal, just like tail -f.
$> kubectl logs -f redis-izl09

Attach to existing containers

In addition to interactive execution of commands, you can now also attach to any running process. Like kubectl logs, you’ll get stderr and stdout data, but with attach, you’ll also be able to send stdin from your terminal to the program. Awesome for interactive debugging, or even just sending ctrl-c to a misbehaving application.

      $> kubectl attach redis -i

1:C 12 Oct 23:05:11.848 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf

                _._
           _.-``__ ''-._
      _.-``    `.  `_.  ''-._           Redis 3.0.3 (00000000/0) 64 bit
  .-`` .-```.  ```\/    _.,_ ''-._
 (    '      ,       .-`  | `,    )     Running in standalone mode
 |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
 |    `-._   `._    /     _.-'    |     PID: 1
  `-._    `-._  `-./  _.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |           http://redis.io
  `-._    `-._`-.__.-'_.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |
  `-._    `-._`-.__.-'_.-'    _.-'
      `-._    `-.__.-'    _.-'
          `-._        _.-'
              `-.__.-'

1:M 12 Oct 23:05:11.849 # Server started, Redis version 3.0.3

Forward ports from Pods to your local machine

Often times you want to be able to temporarily communicate with applications in your cluster without exposing them to the public internet for security reasons. To achieve this, the port-forward command allows you to securely forward a port on your local machine through the kubernetes API server to a Pod running in your cluster. For example:

$> kubectl port-forward redis-izl09 6379

This opens port 6379 on your local machine and forwards communication on that port to the Pod or Service in your cluster. For example, you can use the ‘telnet’ command to poke at a Redis service in your cluster:

$> telnet localhost 6379   
INCR foo   
:1   
INCR foo 
:2  

Execute commands inside an existing container

In addition to being able to attach to existing processes inside a container, the “exec” command allows you to spawn new processes inside existing containers. This can be useful for debugging, or examining your pods to see what’s going on inside without interrupting a running service. kubectl exec is different from kubectl run, because it runs a command inside of an existing container, rather than spawning a new container for execution.

$> kubectl exec redis-izl09 -- ls /
bin
boot
data
dev
entrypoint.sh
etc
home

Add or remove Labels

Sometimes you want to dynamically add or remove labels from a Pod, Service or Replication controller. Maybe you want to add an existing Pod to a Service, or you want to remove a Pod from a Service. No matter what you want, you can easily and dynamically add or remove labels using the kubectl label subcommand:

$> kubectl label pods redis-izl09 mylabel=awesome 
pod "redis-izl09" labeled

Add annotations to your objects

Just like labels, you can add or remove annotations from API objects using the kubectl annotate subcommand. Unlike labels, annotations are there to help describe your object, but aren’t used to identify pods via label queries (more details on annotations). For example, you might add an annotation of an icon for a GUI to use for displaying your pods.

$> kubectl annotate pods redis-izl09 icon-url=http://goo.gl/XXBTWq 
pod "redis-izl09" annotated

Output custom format

Sometimes, you want to customize the fields displayed when kubectl summarizes an object from your cluster. To do this, you can use the custom-columns-file format. custom-columns-file takes in a template file for rendering the output. Again, JSONPath expressions are used in the template to specify fields in the API object. For example, the following template first shows the number of restarts, and then the name of the object:

$> cat cols.tmpl   
RESTARTS                                   NAME   
.status.containerStatuses[0].restartCount .metadata.name  

If you pass this template to the kubectl get pods command you get a list of pods with the specified fields displayed.

 $> kubectl get pods redis-izl09 -o=custom-columns-file --template=cols.tmpl
 RESTARTS           NAME   
 0                  redis-izl09   
 1                  redis-abl42  

Easily manage multiple Kubernetes clusters

If you’re running multiple Kubernetes clusters, you know it can be tricky to manage all of the credentials for the different clusters. Using the kubectl config subcommands, switching between different clusters is as easy as:

        $> kubectl config use-context

Not sure what clusters are available? You can view currently configured clusters with:

        $> kubectl config view

Phew, that outputs a lot of text. To restrict it down to only the things we’re interested in, we can use a JSONPath template:

        $> kubectl config view -o jsonpath="{.contexts[*].name}"

Ahh, that’s better.

Conclusion

So there you have it, nine new and exciting things you can do with your Kubernetes cluster and the kubectl command line. If you’re just getting started with Kubernetes, check out Google Container Engine or other ways to get started with Kubernetes.

  • Brendan Burns, Google Software Engineer