Kubernetes Blog

Kubernetes Community Meeting Notes - 20160225

March 01 2016

February 25th - Redspread demo, 1.2 update and planning 1.3, newbie introductions, SIG Networking, and a shout-out to a CoreOS blog post.

The Kubernetes contributing community meets most Thursdays at 10:00 PT to discuss the project’s status via videoconference. Here are the notes from the latest meeting.

Note taker: [Ilan Rabinovich]

  • Quick call out for sharing presentations/slides [JBeda]
  • Demo (10 min): Redspread [Mackenzie Burnett, Dan Gillespie]
  • 1.2 Release Watch [T.J. Goltermann]
    • currently about 80 issues in the queue that need to be addressed before branching.
      • currently looks like March 7th may slip to later in the week, but up in the air until flaky tests are resolved.
      • non-1.2 changes may be delayed in review/merging until 1.2 stabilization work completes.
    • 1.3 release planning
  • Newbie Introductions
  • SIG Reports
    • Networking [Tim Hockin]
    • Scale [Bob Wise]
      • meeting last Friday went very well; discussed the charter and a working deployment
      • moved meeting to Thursdays @ 1 (so in 3 hours!)
      • Rob is posting a Cluster Ops announcement on TheNewStack to recruit more members
  • GSoC participation – no application submitted. [Sarah Novotny]
  • Brian Grant has offered to review PRs that need attention for 1.2
  • Dynamic Provisioning
    • Currently overlaps a bit with the Ubernetes work
    • PR in progress.
    • Should work in 1.2, but is being targeted more at 1.3
  • Next meeting is March 3rd.
    • Demo from Weave on Kubernetes Anywhere
    • Another Kubernetes 1.2 update
    • Update from the CNCF
    • 1.3 commitments from Google
  • No meeting on March 10th.

To get involved in the Kubernetes community consider joining our Slack channel, taking a look at the Kubernetes project on GitHub, or joining the Kubernetes-dev Google group. If you’re really excited, you can do all of the above and join us for the next community conversation — March 3rd, 2016. Please add yourself or a topic you want to know about to the agenda and get a calendar invitation by joining this group.

The full recording is available on YouTube in the growing archive of Kubernetes Community Meetings. -- Kubernetes Community

KubeCon EU 2016: Kubernetes Community in London

February 24 2016

KubeCon EU 2016 is the inaugural European Kubernetes community conference, following on from the American launch in November 2015. KubeCon is fully dedicated to education and community engagement for Kubernetes enthusiasts, production users and the surrounding ecosystem.

Come join us in London and hang out with hundreds of people from the Kubernetes community, and experience a wide variety of deep technical expert talks and use cases.

Don’t miss the great speaker sessions at the conference.

Get your KubeCon EU tickets here.

Venue Location: CodeNode, 10 South Pl, London, United Kingdom
Accommodations: hotels
Website: kubecon.io
Twitter: @KubeConio #KubeCon

Google is a proud Diamond sponsor of KubeCon EU 2016. Come to London next month, March 10th & 11th, and visit booth #13 to learn all about Kubernetes, Google Container Engine (GKE) and Google Cloud Platform!

KubeCon is organized by KubeAcademy, LLC, a community-driven group of developers focused on the education of developers and the promotion of Kubernetes.

-- Sarah Novotny, Kubernetes Community Manager, Google

Kubernetes Community Meeting Notes - 20160218

February 23 2016

February 18th - kmachine demo, clusterops SIG formed, new k8s.io website preview, 1.2 update and planning 1.3

The Kubernetes contributing community meets most Thursdays at 10:00 PT to discuss the project’s status via videoconference. Here are the notes from the latest meeting.

  • Note taker: Rob Hirschfeld
  • Demo (10 min): kmachine [Sebastien Goasguen]
    • intro video started at :01
    • looking to create mirror of Docker tools for Kubernetes (similar to machine, compose, etc)
    • kmachine (forked from Docker Machine, so has the same endpoints)
  • Use Case (10 min): started at :15
  • SIG Report starter
    • Cluster Ops launch meeting Friday (doc). [Rob Hirschfeld]
  • Time Zone Discussion [:22]
    • This timezone does not work for Asia.
    • Considering rotation - once per month
    • Likely 5 or 6 PT
    • Rob suggested moving the regular meeting up a little
  • k8s.io website preview [John Mulhausen] [:27]
    • using GitHub for docs; you can fork and do a pull request against the site
    • will be its own repo in the Kubernetes organization, but not in the code repo
    • Google will offer a “doc bounty” where you can get GCP credits for working on docs
    • Uses Jekyll to generate the site (e.g. the ToC)
    • The principle will be 100% GitHub Pages: no script trickery or plugins, just fork/clone, edit, and push
    • Hope to launch at KubeCon EU
    • Home Page Only Preview: http://kub.unitedcreations.xyz
  • 1.2 Release Watch [T.J. Goltermann] [:38]
  • 1.3 Planning update [T.J. Goltermann]
  • GSoC participation – deadline 2/19 [Sarah Novotny]
  • March 10th meeting? [Sarah Novotny]

To get involved in the Kubernetes community consider joining our Slack channel, taking a look at the Kubernetes project on GitHub, or joining the Kubernetes-dev Google group. If you’re really excited, you can do all of the above and join us for the next community conversation — February 25th, 2016. Please add yourself or a topic you want to know about to the agenda and get a calendar invitation by joining this group.

The full recording is available on YouTube: https://youtu.be/L5BgX2VJhlY?list=PL69nYSiGNLP1pkHsbPjzAewvMgGUpkCnJ

-- Kubernetes Community

Kubernetes Community Meeting Notes - 20160211

February 16 2016

February 11th - Pangaea demo, AWS SIG formed, release automation and documentation team introductions, 1.2 update and planning 1.3.

The Kubernetes contributing community meets most Thursdays at 10:00 PT to discuss the project’s status via videoconference. Here are the notes from the latest meeting.

Note taker: Rob Hirschfeld

  • Demo: Pangaea [Shahidh K Muhammed, Tanmai Gopal, and Akshaya Acharya]

    • Microservices packages
    • Focused on Application developers
    • Demo at recording +4 minutes
    • Single-node Kubernetes cluster — runs locally using a Vagrant CoreOS image
    • Single user/system cluster allows use of DNS integration (unlike Compose)
    • Can run locally or in cloud
  • SIG Report:
    • Release Automation and an introduction to David McMahon
    • Docs and k8s website redesign proposal and an introduction to John Mulhausen
      • This will allow the system to build docs correctly from GitHub w/ minimal effort
      • Will be check-in triggered
      • Getting website style updates
      • Want to keep authoring really light
      • There will be some automated checks
      • Next week: preview of the new website during the community meeting
  • [@goltermann] 1.2 Release Watch (time +34 minutes)
    • code slush date: 2/9/2016
    • no major features or refactors accepted
    • discussion about release criteria: we will hold release date for bugs
  • Testing flake surge is over (a one-time event; the goal now is to maintain test stability)
  • 1.3 Planning (time +40 minutes)
    • working to clean up the GitHub milestones — they should be a source of truth. You can use GitHub for bug reporting
    • discussion pushed off while the 1.2 crunch is under way
    • Framework
      • dates
      • prioritization
      • feedback
    • Design Review meetings
    • General discussion about the PRD process — still in the beginning stages
    • Working on a contributor conference
    • Rob suggested tracking relationships between PRD/Mgmr authors
    • PLEASE DO REVIEWS — talked about the way people are authorized to +2 reviews.

To get involved in the Kubernetes community consider joining our Slack channel, taking a look at the Kubernetes project on GitHub, or joining the Kubernetes-dev Google group. If you’re really excited, you can do all of the above and join us for the next community conversation — February 18th, 2016. Please add yourself or a topic you want to know about to the agenda and get a calendar invitation by joining this group.

The full recording is available on YouTube in the growing archive of Kubernetes Community Meetings.

ShareThis: Kubernetes In Production

February 11 2016

Today’s guest blog post is by Juan Valencia, Technical Lead at ShareThis, a service that helps website publishers drive engagement and consumer sharing behavior across social networks.

ShareThis has grown tremendously since its first days as a tiny widget that allowed you to share to your favorite social services. It now serves over 4.5 million domains per month, helping publishers create a more authentic digital experience.

Fast growth came with a price. We leveraged technical debt to scale fast and to grow our products, particularly when it came to infrastructure. As our company expanded, the infrastructure costs mounted as well - both in terms of inefficient utilization and in terms of people costs. About 1 year ago, it became clear something needed to change.

TL;DR: Kubernetes has been a key component for us in reducing technical debt in our infrastructure by:

  • Fostering the Adoption of Docker
  • Simplifying Container Management
  • Onboarding Developers On Infrastructure
  • Unlocking Continuous Integration and Delivery

We accomplished this by radically adopting Kubernetes and switching our DevOps team to a Cloud Platform team that worked in terms of containers and microservices. This included creating some tools to get around our own legacy debt.

The Problem

Alas, the cloud was new and we were young. We started with a traditional data-center mindset. We managed all of our own services: MySQL, Cassandra, Aerospike, Memcache, you name it. We set up VMs just like you would traditional servers, installed our applications on them, and managed them in Nagios or Ganglia.

Unfortunately, this way of thinking was antithetical to a cloud-centric approach. Instead of thinking in terms of services, we were thinking in terms of servers. Instead of using modern cloud approaches such as autoscaling, microservices, or even managed VMs, we were thinking in terms of scripted setups, server deployments, and avoiding vendor lock-in.

These ways of thinking were not bad per se; they were simply inefficient. They weren’t taking advantage of the changes to the cloud that were happening very quickly. It also meant that when changes needed to take place, we were treating them as big, slow changes to a datacenter rather than small, fast changes to the cloud.

The Solution

Kubernetes As A Tool To Foster Docker Adoption

As Docker became more of a force in our industry, engineers at ShareThis also started experimenting with it to good effect. It soon became obvious that we needed to have a working container for every app in our company just so we could simplify testing in our development environment.

Some apps moved quickly into Docker because they were simple and had few dependencies. For those with small dependencies, we were able to manage them using Fig (Fig was the original name of Docker Compose). Still, many of our data pipelines or interdependent apps were too gnarly to be directly dockerized. We still wanted to do it, but Docker was not enough.

In late 2015, we were frustrated enough with our legacy infrastructure that we finally bit the bullet. We evaluated Docker’s tools, ECS, Kubernetes, and Mesosphere. It was quickly obvious that Kubernetes was in a more stable and user-friendly state than its competitors for our infrastructure. As a company, we could solidify our infrastructure on Docker by simply setting the goal of having all of our infrastructure on Kubernetes.

Engineers were skeptical at first. However, once they saw applications scale effortlessly into hundreds of instances per application, they were hooked. Now, not only were there pain points driving us forward into Docker and, by extension, Kubernetes, but there was genuine excitement for the technology pulling us in. This has allowed us to make an incredibly difficult migration fairly quickly. We now run Kubernetes in multiple regions on about 65 large VMs, increasing to over 100 in the next couple of months. Our Kubernetes cluster currently processes 800 million requests per day, with the plan to process over 2 billion requests per day in the coming months.

Kubernetes As A Tool To Manage Containers

Our earliest use of Docker was promising for development, but not so much for production. The biggest friction point was the inability to manage Docker components at scale. Knowing which containers were running where, what version of a deployment was running, what state an app was in, how to manage subnets and VPCs, etc., plagued any chance of it going to production. The tooling required would have been substantial.

When we looked at Kubernetes, several key features were immediately attractive:

  • It is easy to install on AWS (where all our apps were running)
  • There is a direct path from a Dockerfile to a replication controller through a YAML/JSON file (see the sketch after this list)
  • Pods are able to scale in number easily
  • We can easily scale the number of VMs running on AWS in a Kubernetes cluster
  • Rolling deployments and rollback are built into the tooling
  • Each pod gets monitored through health checks
  • Service endpoints are managed by the tool
  • There is an active and vibrant community
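
To make the “direct path from a Dockerfile to a replication controller” concrete, here is a minimal sketch of the kind of YAML manifest involved. The names, image, port, and health-check path below are hypothetical placeholders for illustration, not our actual configuration:

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: example-app              # hypothetical app name
    spec:
      replicas: 3                    # desired pod count; scaling is just changing this number
      selector:
        app: example-app
      template:
        metadata:
          labels:
            app: example-app
        spec:
          containers:
          - name: example-app
            image: example/app:1.0   # the image built from the app's Dockerfile
            ports:
            - containerPort: 8080
            livenessProbe:           # the health check Kubernetes uses to monitor each pod
              httpGet:
                path: /healthz
                port: 8080
              initialDelaySeconds: 15

With a manifest like this, scaling pods is a one-liner (kubectl scale rc example-app --replicas=10), and kubectl rolling-update provides the built-in rolling deployments and rollback mentioned above.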

Unfortunately, one of the biggest pain points was that the tooling didn’t solve our existing legacy infrastructure problems; it just provided an infrastructure to move onto. There were still a variety of network quirks that prevented us from directly moving our applications onto a new VPC. In addition, reworking so many applications required developers to jump onto problems that had classically been solved by sysadmins and operations teams.

Kubernetes As A Tool For Onboarding Developers On Infrastructure

When we decided to make the switch from what was essentially a Chef-run setup to Kubernetes, I do not think we understood all of the pain points that we would hit. We ran our servers in a variety of different ways and in a variety of different network configurations that were considerably different from the clean setup you find on a fresh Kubernetes VPC.

In production we ran in both AWS VPCs and AWS classic across multiple regions. This means that we managed several subnets with different access controls across different applications. Our most recent applications were also very secure, having no public endpoints. This meant that we had a combination of VPC peering, network address translation (NAT), and proxies running in varied configurations.

In the Kubernetes world, there’s only the VPC. All the pods can theoretically talk to each other, and service endpoints are explicitly defined. It’s easy for the developer to gloss over some of the details, and it removes the need for operations (mostly).
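
As a sketch of what “explicitly defined” means in practice: a service endpoint is just another small manifest that routes traffic by label. Again, the names and ports here are hypothetical placeholders:

    apiVersion: v1
    kind: Service
    metadata:
      name: example-app
    spec:
      selector:
        app: example-app    # traffic is routed to any pod carrying this label
      ports:
      - port: 80            # port the service exposes inside the cluster
        targetPort: 8080    # port the container actually listens on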

We made the decision to convert all of our infrastructure / DevOps developers into application developers (really!). We had already started hiring them on the basis of their development skills rather than their operational skills anyway, so perhaps that is not as wild as it sounds.

We then made the decision to onboard our entire engineering organization onto Operations. Developers are flexible, they enjoy challenges, and they enjoy learning. It was remarkable.  After 1 month, our organization went from having a few DevOps folks, to having every engineer capable of modifying our architecture.

The training ground for onboarding on networking, productionization, problem solving, root cause analysis, etc., was getting Kubernetes into prod at scale. After the first month, I was biting my nails and worrying about our choices. After 2 months, it looked like it might someday be viable. After 3 months, we were deploying 10 times per week. After 4 months, 40 apps per week. Only 30% of our apps have been migrated, yet the gains are not only remarkable, they are astounding. Kubernetes allowed us to go from an infrastructure-is-slowing-us-down-ugh! organization to an infrastructure-is-speeding-us-up-yay! organization.

Kubernetes As A Means To Unlock Continuous Integration And Delivery

How did we get to 40+ deployments per week? Put simply, continuous integration and deployment (CI/CD) came as a byproduct of our migration. Our first application in Kubernetes was Jenkins, and every app that went in was also added to Jenkins. As we moved forward, we made Jenkins more automatic, until pods were being added to and removed from Kubernetes faster than we could keep track.

Interestingly, our scaling problems are now about wanting to push out too many changes at once and people having to wait their turn. Our goal is to get 100 deployments per week through the new infrastructure. This is achievable if we can continue to execute on our migration and on our commitment to a CI/CD process built on Kubernetes and Jenkins.

Next Steps

We need to finish our migration. At this point the problems are mostly solved; the biggest difficulties are in the tedium of the task at hand. Moving things out of our legacy infrastructure means changing network configurations to allow access to and from the Kubernetes VPC and across regions. This is still a very real pain, and one we continue to address.

Some services do not play well in Kubernetes – think stateful distributed databases. Luckily, we can usually migrate those to a third party who will manage them for us. At the end of this migration, we will only be running pods on Kubernetes, and our infrastructure will become much simpler.

All these changes do not come for free; committing our entire infrastructure to Kubernetes means that we need to have Kubernetes experts.  Our team has been unblocked in terms of infrastructure and they are busy adding business value through application development (as they should). However, we do not (yet) have committed engineers to stay up to date with changes to Kubernetes and cloud computing.  

As such, we have transferred one engineer to a new “cloud platform team” and will hire a couple of others (have I mentioned we’re hiring?). They will be responsible for developing tools that we can use to interface well with Kubernetes and manage all of our cloud resources. In addition, they will work in the Kubernetes source code, be part of Kubernetes SIGs, and, ideally, push code into the open source project.

Summary

All in all, while the move to Kubernetes initially seemed daunting, it was far less complicated and disruptive than we thought. And the reward at the other end was a company that could respond as fast as our customers wanted.

Editor’s note: at a recent Kubernetes meetup, the team at ShareThis gave a talk about their production use of Kubernetes. Video is embedded below.

Kubernetes Community Meeting Notes - 20160204

February 09 2016

February 4th - rkt demo (congratulations on the 1.0, CoreOS!), eBay puts k8s on OpenStack and considers OpenStack on k8s, SIGs, and the flaky test surge makes progress.

The Kubernetes contributing community meets most Thursdays at 10:00 PT to discuss the project’s status via a videoconference. Here are the notes from the latest meeting.

  • Note taker: Rob Hirschfeld
  • Demo (20 min): CoreOS rkt + Kubernetes [Shaya Potter]
    • expect to see integrations w/ rkt & k8s in the coming months (“rkt-netes”); not integrated into the v1.2 release.
    • Shaya gave a demo (8 minutes into meeting for video reference)
      • CLI of rkt shown spinning up containers
      • [note: audio is garbled at points]
      • Discussion about integration w/ k8s & rkt
      • rkt community sync next week: https://groups.google.com/forum/#!topic/rkt-dev/FlwZVIEJGbY

      • Dawn Chen:
        • The remaining issues of integrating rkt with Kubernetes: 1) cAdvisor 2) DNS 3) bugs related to logging
        • But need more work on e2e test suites
  • Use Case (10 min): eBay k8s on OpenStack and OpenStack on k8s [Ashwin Raveendran]
    • eBay is currently running Kubernetes on OpenStack
    • Goal for eBay is to manage the OpenStack control plane w/ k8s; the aim would be to achieve upgrades
    • OpenStack Kolla creates containers for the control plane. Uses Ansible+Docker for management of the containers.
    • Working on k8s control plane management - SaltStack is proving to be a management challenge at the scale they want to operate. Looking for automated management of the k8s control plane.
  • SIG Report
  • Testing update [Jeff, Joe, and Erick]
    • Working to make the workflow for contributing to K8s easier to understand
      • pull/19714 has a flow chart of the bot flow to help users understand
    • Need a consistent way to run tests w/ hacking config scripts (you have to fake a Jenkins process right now)
    • Want to create necessary infrastructure to make test setup less flaky
    • want to decouple test start (single or full) from Jenkins
    • goal is to get to point where you have 1 script to run that can be pointed to any cluster
    • demo included Google internal views - working to try to get that external.
    • want to be able to collect test run results
    • Bob Wise calls for testing infrastructure to be a blocker on v1.3
    • Long discussion about testing practices…
      • consensus that we want to have tests work over multiple platforms.
      • would be helpful to have a comprehensive state dump for test reports
      • “phone-home” to collect stack traces - should be available
  • 1.2 Release Watch
  • CoC [Sarah]
  • GSoC [Sarah]

To get involved in the Kubernetes community consider joining our Slack channel, taking a look at the Kubernetes project on GitHub, or joining the Kubernetes-dev Google group. If you’re really excited, you can do all of the above and join us for the next community conversation — February 11th, 2016. Please add yourself or a topic you want to know about to the agenda and get a calendar invitation by joining this group.

The full recording is available on YouTube: https://youtu.be/IScpP8Cj0hw?list=PL69nYSiGNLP1pkHsbPjzAewvMgGUpkCnJ

Kubernetes Community Meeting Notes - 20160128

February 02 2016

January 28 - 1.2 release update, Deis demo, flaky test surge and SIGs

The Kubernetes contributing community meets once a week to discuss the project’s status via a videoconference. Here are the notes from the latest meeting.

Note taker: Erin Boyd

  • Discuss process around code freeze/code slush (TJ Goltermann)
    • Code wind down was happening during the holidays (for 1.1)
    • Releasing ~ every 3 months
    • Build stability is still missing
    • Issue on Transparency (Bob Wise)
      • Email from Sarah for call to contribute (Monday, January 25)
        • Concern over publishing dates / understanding the release schedule / etc.
    • Release targeted for early March
      • Where does one find information on the release schedule with the committed features?
        • For 1.2 - Send email / Slack to TJ
        • For 1.3 - Working on better process to communicate to the community
          • Twitter
          • Wiki
          • GitHub Milestones
    • How to better communicate issues discovered in the SIG
      • AI: People need to email the kubernetes-dev@ mailing list with a summary of findings
      • AI: Each SIG needs a note taker
  • Release planning vs Release testing
    • Testing SIG lead Ike McCreery
      • Also part of the testing infrastructure team at Google
      • Community being able to integrate into the testing framework
        • Federated testing
    • Release Manager = David McMahon
      • Request to introduce him to the community meeting
  • Demo: Deis [Jason Hansen]
  • Testing
    • Called for community interaction
    • Need to understand friction points from community
      • Better documentation
      • Better communication on how things “should work”
    • Internally, Google is having daily calls to resolve test flakes
    • Started up SIG testing meetings (Tuesday at 10:30 am PT)
    • Everyone wants it, but no one wants to pony up the time to make it happen
      • Google is dedicating headcount to it (3-4 people, possibly more)
    • https://groups.google.com/forum/?hl=en#!forum/kubernetes-sig-testing
  • Best practices for labeling
    • Are there tools built on top of these to leverage?
    • AI: Generate artifact for labels and what they do (Create doc)
      • Help Wanted Label - good for new community members
      • Classify labels for team and area
        • User experience, test infrastructure, etc.
  • SIG Config (not about deployment)
    • Any interest in Ansible, etc., type tooling?
  • SIG Scale meeting (Bob Wise & Tim StClair)
    • Tests related to performance SLAs get relaxed in order to get the tests to pass
      • exposed process issues
      • AI: outline of a proposal for a notice policy if things are being changed that are critical to the system (Bob Wise/Samsung)
        • Create a best-practices set of constants in a well-documented place

To get involved in the Kubernetes community consider joining our Slack channel, taking a look at the Kubernetes project on GitHub, or joining the Kubernetes-dev Google group. If you’re really excited, you can do all of the above and join us for the next community conversation — February 4th, 2016. Please add yourself or a topic you want to know about to the agenda and get a calendar invitation by joining this group.

The full recording is available on YouTube in the growing archive of Kubernetes Community Meetings.
