Documentation Contents

kuber Overview

The Python Kubernetes client exists to provide low-level access to the Kubernetes API. However, low-level access can be clunky to use and requires additional effort to achieve parity with the common workflows provided by configuration-driven tooling.

kuber is a higher-level abstraction aimed at the usage level of someone comfortable working with Kubernetes configuration files and managing them with tools like kubectl and/or helm.

Configuring Individual Resources

kuber allows Kubernetes resources to be defined entirely in Python code, or defined in configuration files and loaded and modified by code. Examples of the two approaches are shown below:

The Pure Python Approach

Here’s an example of how a Deployment can be created with kuber:

from kuber.latest import apps_v1

# Create a deployment using the most recent stable Kubernetes version
# from the apps/v1 API version.
d = apps_v1.Deployment()

with d.metadata as md:
    md.name = "my-deployment"
    md.namespace = "my-app"
    md.labels.update(app="foo", component="application")

d.spec.selector.match_labels.update(app="foo")
d.spec.template.metadata.labels.update(app="foo")

d.append_container(
    name="app",
    image="my-app:1.0",
    ports=[apps_v1.ContainerPort(container_port=8080, host_port=80)],
    tty=True,
    image_pull_policy="Always",
    resources=apps_v1.ResourceRequirements(
        limits={"cpu": "1.5", "memory": "1Gi"},
        requests={"cpu": "1.5", "memory": "800Mi"},
    )
)

# Render the results to YAML.
print(d.to_yaml())

The printed output of executing this would be:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: foo
    component: application
  name: my-deployment
  namespace: my-app
spec:
  template:
    spec:
      containers:
      - image: my-app:1.0
        imagePullPolicy: Always
        name: app
        ports:
        - containerPort: 8080
          hostPort: 80
        resources:
          limits:
            cpu: "1"
            memory: 1Gi
          requests:
            cpu: "1"
            memory: 800Mi
        tty: true

The Hybrid Approach

In many cases it is convenient to use standard Kubernetes configuration as a base template. The common approach in these cases, used by projects like Helm, is to introduce a templating language into the configuration files that gets rendered prior to using the configuration. However, a templated approach has a number of drawbacks, a primary one being that if the template doesn't support a necessary piece of custom configuration, it means forking that template and maintaining it yourself. Instead, kuber facilitates flexible modification and augmentation of resource configurations that have been loaded from configuration files.

Following from the example above, let’s say we have a YAML resource configuration file my-deployment.yaml with part of the contents from the example above:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    component: application
  name: my-deployment
  namespace: my-app
spec:
  template:
    spec:
      containers:
      - image: my-app:1.0
        imagePullPolicy: Always
        name: app
        tty: true

We want to load that configuration file and modify the loaded definition to match the results from the Pure Python Approach example in the previous section. That would look like this:

import kuber
from kuber.latest import apps_v1

# Load YAML configuration file into a Deployment object.
d: apps_v1.Deployment = kuber.from_yaml_file("./my-deployment.yaml")

d.metadata.labels.update(app="foo")

with d.get_container("app") as c:
    c.resources.limits.update(cpu="1.5", memory="1Gi")
    c.resources.requests.update(cpu="1.5", memory="800Mi")
    c.ports.append(apps_v1.ContainerPort(container_port=8080, host_port=80))

# Render the results to YAML.
print(d.to_yaml())

The printed configuration matches the configuration printed in the previous example.

Managing Multiple Resources

Often, multiple resources are needed to support a single application within a Kubernetes cluster. This is where explicit configuration can become increasingly complex, which has led to a number of tools, like Helm, that try to simplify the process. kuber also supports high-level constructs that make it easier to manage multiple resources without relying on templating.

import kuber
from kuber.latest import apps_v1
from kuber.latest import core_v1

# Load all YAML and/or JSON configuration files in the specified directory
# and return a kuber ResourceBundle object that contains those loaded
# resources.
bundle = kuber.from_directory("../my-application")

# Add environment label to all loaded resources.
for r in bundle.resources:
    r.metadata.labels.update(environment="production")

# Change the number of replicas in the deployment named "my-app" that has
# the label `component=web`.
d: apps_v1.Deployment = bundle.get(
    name="my-app",
    kind="Deployment",
    component="web"
)
d.spec.replicas = 20

# Change the service port to 443 for the service named "my-app" that has the
# label `component=web`.
s: core_v1.Service = bundle.get(
    name="my-app",
    kind="Service",
    component="web"
)
s.spec.ports = [core_v1.ServicePort(port=443, target_port=8080)]

# Render to consolidated YAML configuration file
print(bundle.render_yaml_bundle())

The flexibility of this approach comes in part from the ability to define a working base configuration in standard configuration files, but then load and modify that configuration before deployment.

Creating Resources

kuber offers a number of different ways to create Kubernetes resources depending upon the desired usage pattern. We'll start by looking at how to create resources individually, as that will be the most familiar, but grouping resources into bundles is a useful alternative pattern shown further below.

Individual Resources

A high-level new_resource function is available in the top-level package for conveniently creating resources:

import kuber
from kuber.latest import apps_v1

d: apps_v1.Deployment = kuber.new_resource(
    api_version="apps/v1",
    kind="Deployment",
    name="my-deployment"
)

However, resources can also be created directly from an import in much the same way:

from kuber.latest import batch_v1

j = batch_v1.Job()
j.metadata.name = "my-job"

Both approaches end up producing the same result, an instance of the desired Kubernetes resource on which to operate.

However, kuber also has a number of ways to load resources from configuration data in JSON or YAML format.

For example, loading from a YAML file:

import kuber
from kuber.latest import batch_v1

job: batch_v1.Job = kuber.from_yaml_file("my-job.yaml")
job.metadata.labels.update(component="app")

Loading from a YAML string:

import kuber
from kuber.latest import core_v1

pod: core_v1.Pod = kuber.from_yaml(
    """
    apiVersion: core/v1
    kind: Pod
    metadata:
      name: my-pod
    spec:
      containers:
        - image: python:3.8
    """
)
pod.spec.containers[0].name = "python"

Loading a Service from a YAML file:

import kuber
from kuber.latest import core_v1

service: core_v1.Service = kuber.from_yaml_file("my-service.yaml")
service.spec.selector.update(environment="production")

Or loading directly from a Python dictionary:

import kuber
from kuber.latest import core_v1

pod: core_v1.Pod = kuber.from_yaml({
    "apiVersion": "core/v1",
    "kind": "Pod",
    "metadata": {"name": "my-pod"},
    "spec": {
        "containers": [{"image": "python:3.8"}]
    }
})
pod.spec.containers[0].name = "python"

Multiple Resources

Creating and managing multiple resources collectively in kuber is done through ResourceBundle objects, which contain a list of resource objects and have convenience functions for managing that list collectively. There are a few top-level convenience functions available for initializing bundles from existing configuration files, such as the kuber.from_directory function shown in the earlier examples.

Empty ResourceBundle objects can also be created and then populated after creation using the equivalent methods on the bundle object, as shown below.
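Here is a minimal sketch of that pattern using functions that appear elsewhere in this documentation (create_bundle, add_directory, and add_file); the paths are placeholders:

import kuber

# Create an empty bundle targeting the latest stable Kubernetes version.
bundle = kuber.create_bundle("latest")

# Populate the bundle after creation from configuration files on disk.
bundle.add_directory("./my-application")
bundle.add_file("./extra-resource.yaml")

print(bundle.render_yaml_bundle())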

Accessing Resources

When using resource bundles, the Kubernetes resources are stored within the resources property of the ResourceBundle. This resources property behaves like a normal Python list, but it has additional functionality for conveniently accessing resources by filtering on namespace, kind, and name.

Consider a case where we have the following resource definition file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: web-configs
  namespace: alpha

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: settings
  namespace: alpha

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: settings
  namespace: bravo

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: settings
  namespace: charlie

Such that the defined resources are:

alpha namespace:

  • ConfigMap/web-configs
  • ConfigMap/settings

bravo namespace:

  • ConfigMap/settings

charlie namespace:

  • ConfigMap/settings

Using the ResourceBundle.get() method shown elsewhere in the documentation we could retrieve the web-configs resource as:

config_map = bundle.get(name="web-configs", kind="ConfigMap")

but another way to access this resource would be to use the dynamic accessors on the resources object:

config_map = bundle.resources.config_map.web_configs

Here the resources object can be filtered dynamically by kind, which returns a filtered resources object that can be filtered by name.

In the case above where we want to get the settings resource in the charlie namespace, we can add a .within("charlie") filter to the resources object:

settings = bundle.resources.within("charlie").config_map.settings

If the .within("charlie") namespace filter is omitted, the resources object will recognize that there are multiple resources matching kind = ConfigMap and name = settings and will return a tuple of all of those resources instead of a single one:

for settings in bundle.resources.config_map.settings:
    print(settings.metadata.namespace)

alpha_settings = bundle.resources.config_map.settings[0]

Case Conventions

From the examples above you can see that the dynamic accessors use snake_case, which makes them match conventional casing in Python. Internally, kind values are converted to PascalCase and name values to kebab-case, matching the Kubernetes conventions.

Kubernetes names also allow for the . character, which cannot be represented in a Python variable name. In those cases dictionary-style accessors can be used instead:

job = bundle.resources.job["my.job-name"]

The dictionary-style accessors will also accept PascalCase for kind values. Therefore, the web-configs from the earlier example can be accessed in any of the following ways:

web_configs = bundle.resources.config_map.web_configs
web_configs = bundle.resources["ConfigMap"].web_configs
web_configs = bundle.resources["ConfigMap"]["web-configs"]

# This one works because order is preserved when loading resources and
# the web-configs resource was the first one defined. However, it is usually
# preferred to reference by name instead of relying on order preservation.
web_configs = bundle.resources["ConfigMap"][0]

Advanced Filtering

Ultimately, dynamic access to resources is meant for simple cases. The .get() and .get_many() methods of resource bundles can be used to do the same thing while also allowing filtering based on metadata labels:
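For example, here is a sketch assuming a bundle loaded as in the earlier examples; the get_many arguments shown here are assumed to mirror those of get():

import typing

from kuber.latest import apps_v1

# Fetch a single resource by name and kind, narrowed by a metadata
# label filter (component=web), as in the earlier bundle example.
deployment = typing.cast(
    apps_v1.Deployment,
    bundle.get(name="my-app", kind="Deployment", component="web"),
)

# Fetch every matching resource that carries the `component=web` label.
web_resources = bundle.get_many(kind="Deployment", component="web")
for resource in web_resources:
    print(resource.metadata.name)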

Custom Objects

Custom objects, i.e. resources whose kinds are not part of the standard Kubernetes API, can be utilized and managed in kuber with the custom_v1.CustomObject resource. Any unknown resource definition encountered by kuber will be assumed to be a custom object and loaded as a custom_v1.CustomObject.

For example, given the custom object definition below:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: steps-
spec:
  entrypoint: hello
  templates:
  - name: hello
    steps:
    - - name: hello
        template: whalesay
        arguments:
          parameters: [{name: message, value: "hello1"}]
  - name: whalesay
    inputs:
      parameters:
      - name: message
    container:
      image: docker/whalesay
      command: [cowsay]
      args: ["{{inputs.parameters.message}}"]

this can be loaded into a CustomObject resource directly:

import pathlib
import typing

import kuber
from kuber.latest import custom_v1

# Directory containing the workflow configuration shown above.
directory = pathlib.Path(".")

workflow = typing.cast(
    custom_v1.CustomObject,
    kuber.from_yaml_file(directory.joinpath("workflow.yaml"))
)

Kubernetes Versions

Unlike the lower-level Kubernetes Python client, a single installation of the kuber library contains multiple Kubernetes API version targets. When loading or creating resources or resource bundles, the desired version of Kubernetes can be specified.

Specifying Versions

Kubernetes versions in kuber can be specified in two interchangeable ways:

  • Version Labels: versions specified as strings, e.g. "1.20", or
  • KubernetesVersion: an object that contains the version label string along with other version-related information.

In most cases either of these can be used. If a version label is given, it will be converted to a KubernetesVersion object internally.
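Both forms appear in later sections of this documentation; the short sketch below simply shows them side by side (the directory path is a placeholder):

import kuber

# Using a version label string.
bundle = kuber.from_directory("./foo/", kubernetes_version="1.20")

# Using a KubernetesVersion object, here fetched from the currently
# configured cluster (see "Cluster-based Versioning" below).
kuber.load_access_config()
version = kuber.get_version_from_cluster("1.20")
bundle = kuber.from_directory("./foo/", kubernetes_version=version)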

Explicit Versions

Explicit Kubernetes versions in kuber are referenced using Major_Minor syntax prefixed with a v, e.g. v1_20 for version 1.20.x. The latest available patch level for each Kubernetes version is always used when generating the configuration subpackages in kuber. Patch-level distinctions are ignored by kuber because the configuration API is consistent across patch versions.

For example, using Kubernetes version 1.20 in kuber would look like this:

from kuber.v1_20 import core_v1

service = core_v1.Service()
config_map = core_v1.ConfigMap()

Or when dealing with resource bundles, the version is specified when creating the bundle and that version will be used by all resources loaded by that bundle:

import kuber

bundle = kuber.from_directory(
    "./foo/",
    kubernetes_version="1.20"
)

Floating Versions

To keep pace with the ongoing development of Kubernetes, there are also two special versions available, latest and pre, that float from version to version over time. The latest version always points to the most recent stable version of Kubernetes available at the time the library was published. Similarly, the pre version always points to the latest pre-release (alpha or beta) version of Kubernetes available at the time of publishing.

These special versions can be used in exactly the same way as the explicit versions within kuber:

import kuber
from kuber.latest import core_v1

service = core_v1.Service()
config_map = core_v1.ConfigMap()

bundle = kuber.create_bundle("latest")
bundle.add(service, config_map)

or for the pre-release version:

import kuber
from kuber.pre import core_v1

service = core_v1.Service()
config_map = core_v1.ConfigMap()

bundle = kuber.create_bundle("pre")
bundle.add(service, config_map)

To find the specific version information for any version, explicit or floating, import that version's subpackage and print the version constant defined in that module. For example, to find out the specific version of the pre subpackage:

from kuber import pre

print(pre.KUBERNETES_VERSION.version)

Cluster-based Versioning

Often it is most useful to write configurations against the version of the cluster in which they will be deployed instead of hard-coding a version, even a floating one. For that, kuber has a convenience function, get_version_from_cluster, that connects to the currently configured cluster and returns the KubernetesVersion object that best matches the cluster version, which can then be used in place of a hard-coded version value.

In the following example, the cluster version is fetched and used when loading a resource bundle:

import kuber

# Establish a connection to the cluster currently
# configured in the local `.kubeconfig` file.
kuber.load_access_config()

# Get the version of the connected cluster or
# default to version 1.20 if unable to fetch
# version data from the cluster.
cluster_version = kuber.get_version_from_cluster("1.20")

# Use the returned `KubernetesVersion` object
# as the version for the cluster.
bundle = kuber.from_directory(
    "./foo/",
    kubernetes_version=cluster_version
)

CRUD Operations

kuber supports the basic CRUD behaviors by wrapping the available actions from the Kubernetes Python client. For more advanced and custom operations, resource configurations can always be serialized to YAML or JSON and used in custom-defined commands, or simply saved to disk for later application. Resource configurations also have a to_dict() function that serializes down to a Python dictionary compatible with the Kubernetes Python client functions (passed in as the body parameter).
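As a rough sketch of that interoperability (not part of kuber itself, and assuming the kubernetes package is installed and a local kubeconfig is configured), the dictionary produced by to_dict() can be passed directly as the body of a lower-level client call:

from kubernetes import client, config

from kuber.latest import apps_v1

# Build a minimal Deployment with kuber (see the earlier examples for
# a more complete spec).
d = apps_v1.Deployment()
d.metadata.name = "my-deployment"
d.metadata.namespace = "default"
d.spec.selector.match_labels.update(app="foo")
d.spec.template.metadata.labels.update(app="foo")
d.append_container(name="app", image="my-app:1.0")

# Hand the serialized resource to the lower-level Kubernetes client.
config.load_kube_config()
client.AppsV1Api().create_namespaced_deployment(
    namespace="default",
    body=d.to_dict(),
)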

Initialization

Before operating on the cluster, kuber needs to be configured with access to it. This is done with the load_access_config function.
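In its no-argument form, as used throughout the examples in this section, it loads access credentials from the local kubeconfig:

import kuber

# Configure kuber with cluster access from the local kubeconfig file
# before issuing any CRUD operations.
kuber.load_access_config()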

Single Resource Operations

import kuber
from kuber.latest import apps_v1

# Initializes kuber with local kubeconfig for access.
kuber.load_access_config()

d = apps_v1.Deployment()

with d.metadata as md:
    md.name = "my-deployment"
    md.namespace = "default"
    md.labels.update(app="foo", component="application")

d.spec.selector.match_labels.update(app="foo")
d.spec.template.metadata.labels.update(app="foo")
d.append_container(name="app", image="my-app:1.0")
d.spec.replicas = 2

# Create the Deployment resource in the cluster.
status = d.create_resource()
print(status.to_dict())

# Read status of the Deployment resource in the cluster.
status = d.get_resource_status()
print(status.to_dict())

# Update (patch) the Deployment resource in the cluster.
d.spec.replicas = 0
status = d.patch_resource()
print(status.to_dict())

# Update (replace) the Deployment resource in the cluster.
status = d.replace_resource()
print(status.to_dict())

# Delete the Deployment resource from the cluster.
d.delete_resource()

Bundled Resources CRUD

When working with bundles, the ResourceBundle objects have CRUD methods that operate on all resources within the bundle collectively.

import kuber

kuber.load_access_config()

bundle = kuber.from_directory("./some-directory")

# Create resources within the currently configured cluster.
bundle.create(echo=True)

# Display current statuses of the resources in the cluster.
bundle.statuses(echo=True)

# Delete resources from the cluster.
bundle.delete(echo=True)

The CRUD methods available on ResourceBundle objects include create, statuses, and delete, as shown above.

CRUD on the Command Line

In addition to calling CRUD operations directly within code, it's easy to turn a ResourceBundle object into a command line interface that exposes those CRUD operations as arguments to the executed Python script. The example above could be rewritten for command line invocation as:

import kuber

if __name__ == "__main__":
    kuber.load_access_config()
    bundle = kuber.from_directory("./some-directory")
    bundle.cli()

The bundle.cli() command here will parse arguments from the command line and execute the CRUD operation based on those commands. If we saved the above code to a file as resources.py, we could then carry out the same CRUD operations as in the previous example from the command line:

$ python3 resources.py create

to create the resources in the cluster,

$ python3 resources.py status

to get the statuses of the resources in the cluster, and

$ python3 resources.py delete

to remove the resources from the cluster.

Beyond CRUD

For more advanced operations beyond these basic cases, there are two approaches:

  1. Serialize the Resource object to a dictionary, which is compatible with the lower-level Kubernetes Python client library, and carry out the operation that way (as sketched in the CRUD Operations introduction above), or
  2. Serialize the Resource object to a YAML or JSON configuration string or file and use it with other configuration-based tooling like kubectl or helm (see the sketch below).
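A minimal sketch of the second approach, writing a bundle's consolidated YAML to disk so it can be applied later with, for example, kubectl apply -f rendered.yaml (the paths here are placeholders):

import pathlib

import kuber

bundle = kuber.from_directory("./some-directory")

# Write the consolidated YAML configuration to disk for use with
# external configuration-based tooling such as kubectl.
pathlib.Path("rendered.yaml").write_text(bundle.render_yaml_bundle())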

Command Line Interface

While kuber can be used in many different ways to manage resources, the most common path is to generate a resource bundle and then manage that bundle on the cluster with basic CRUD operations. To facilitate that workflow, ResourceBundle objects have a .cli() method that exposes the CRUD operations on that bundle to the command line. A basic example looks like this:

import kuber

if __name__ == "__main__":
    # Load the current cluster configuration from `kubeconfig`
    # into kuber for access to operate on the cluster.
    kuber.load_access_config()

    # Load bundle resources from the configuration files
    # stored in the local *./some/directory* directory.
    bundle = kuber.from_directory("./some/directory")

    # Add environment labels to all of the loaded resources.
    for resource in bundle.resources:
        resource.metadata.labels.update(
            environment="production"
        )

    # Expose the bundle CRUD operations as a command
    # line interface.
    bundle.cli()

The bundle.cli() command here will parse arguments from the command line and execute the CRUD operation based on those commands. If we saved the above code to a file as resources.py, we could then carry out CRUD operations from the command line:

$ python3 resources.py create

to create the resources in the cluster,

$ python3 resources.py status

to get the statuses of the resources in the cluster, and

$ python3 resources.py delete

to remove the resources from the cluster.

Advanced Command Line Interface

In more complex scenarios, exposing additional command line arguments can provide more flexibility in how the resource bundle is managed. In these cases, a callback can be used to perform additional configuration of the bundle before the command line action is carried out.

Consider the previous example where environment="production" was essentially hard-coded into the bundle. If we wanted to make defining the environment value part of the CLI, we could refactor the above example like this:

import argparse

import kuber


def configure(action: kuber.CommandAction):
    """
    Configure the bundle based on additional command line flags.
    """
    parser = argparse.ArgumentParser()
    parser.add_argument("--environment", default="development")

    args = parser.parse_args(action.custom_args)

    bundle = action.bundle
    for resource in bundle.resources:
        resource.metadata.labels.update(
            environment=args.environment
        )


if __name__ == "__main__":
    # Load the current cluster configuration from `kubeconfig`
    # into kuber for access to operate on the cluster.
    kuber.load_access_config()

    # Load bundle resources from the configuration files
    # stored in the local *./some/directory* directory.
    bundle = kuber.from_directory("./some/directory")

    # Expose the bundle CRUD operations as a command
    # line interface, but invoke the CLI with the
    # specified callback before executing the action
    # to allow for additional configuration based on
    # the custom command line arguments supplied.
    bundle.cli.invoke(configure)

The same result as the previous example can then be achieved with the commands:

$ python3 resources.py create --environment=production

to create the resources in the cluster,

$ python3 resources.py status

to get the statuses of the resources in the cluster, and

$ python3 resources.py delete

to remove the resources from the cluster.

Invocation-Only Command Line Interface

The advanced example above can be refactored again so that all of the configuration is carried out within the callback. For this case there is a convenience function, kuber.cli(), that handles bundle creation and CLI execution along with the pre-execution callback. The refactored example looks like this:

import argparse

import kuber


def configure(action: kuber.CommandAction):
    """
    Configure the bundle entirely within this callback function.
    An empty bundle was created already and passed into this
    function as a member of the `action` object. Whatever changes
    are made to the bundle within this function will be reflected
    when the command line interface action is carried out after
    this function execution is complete.
    """
    bundle = action.bundle

    parser = argparse.ArgumentParser()
    parser.add_argument("--environment", default="development")
    args = parser.parse_args(action.custom_args)

    # Load the current cluster configuration from `kubeconfig`
    # into kuber for access to operate on the cluster.
    kuber.load_access_config()

    # Load bundle resources from the configuration files
    # stored in the local *./some/directory* directory.
    bundle.add_directory("./some/directory")

    for resource in bundle.resources:
        resource.metadata.labels.update(
            environment=args.environment
        )


if __name__ == "__main__":
    # Expose the bundle CRUD operations as a command
    # line interface, but invoke the CLI with the
    # specified callback before executing the action
    # to allow for additional configuration based on
    # the custom command line arguments supplied.
    kuber.cli(configure)

Examples

Bundle with CLI

This example shows how to expose a CRUD command line that calls back to a function to populate a bundle.

import typing

import kuber
from kuber.latest import apps_v1
from kuber.latest import core_v1


def populate(action: kuber.CommandAction):
    """
    Populate the empty bundle that was created by the
    cli function call prior to calling this function.
    The action argument contains the bundle along with
    information about the command line execution.
    """
    bundle = action.bundle
    bundle.namespace = "prometheus"
    bundle.add_file("./resources.yaml")

    # Get the server container from the server
    # deployment for modification.
    deployment = typing.cast(
        apps_v1.Deployment, bundle.get(name="prometheus-server", kind="Deployment")
    )
    server = typing.cast(
        core_v1.Container, deployment.get_container("prometheus-server")
    )

    # Override default retention time to be 7 days.
    server.args.append("--storage.tsdb.retention.time=7d")


if __name__ == "__main__":
    kuber.load_access_config()
    version = kuber.get_version_from_cluster("latest")
    kuber.cli(callback=populate, kubernetes_version=version, bundle_name="prometheus")

Complete code for this example is available at: kuber/examples/bundle-with-cli/

ConfigMap with Files

This example shows how a ConfigMap can be populated from files on disk using Python to do the heavy lifting instead of having to store the file data inside a ConfigMap resource configuration file.

from kuber.latest import core_v1

config_map = core_v1.ConfigMap()

# Populate the metadata on the ConfigMap.
with config_map.metadata as md:
    md.name = "glossary"
    md.namespace = "reference"
    md.labels.update(topic="kubernetes", version="1.0")

# Load file from disk and add it to the ConfigMap's data
# object with the key `data.json`.
with open("./data.json") as f:
    config_map.data["data.json"] = f.read()

# Display results.
print(config_map.to_yaml())

Complete code for this example is available at: kuber/examples/config-map/

From Helm Chart (Experimental)

This example shows the currently experimental functionality of generating a bundle from a Helm chart. It requires a helm 3 executable to be available for external command execution: the helm executable is used to render the chart, and the rendered output is loaded into a kuber bundle for additional processing.
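The kuber helper used in the complete example is not reproduced here; the following is only a rough illustration of the mechanism described above, assuming helm 3 is on the PATH and using placeholder release and chart names:

import subprocess

import yaml

import kuber

# Render the chart to plain YAML with the external helm 3 executable.
rendered = subprocess.run(
    ["helm", "template", "my-release", "./charts/my-chart"],
    capture_output=True,
    check=True,
    text=True,
).stdout

# Load each rendered document into a kuber bundle for further processing.
bundle = kuber.create_bundle("latest")
for document in yaml.safe_load_all(rendered):
    if document:
        bundle.add(kuber.from_yaml(document))

print(bundle.render_yaml_bundle())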

Complete code for this example is available at: kuber/examples/from-helm-chart/

Simple Hybrid Configuration

This example shows how to load a YAML configuration file containing a Deployment resource and modify its values in code before serializing the results back to YAML.

import typing

import kuber
from kuber.latest import apps_v1
from kuber.latest import core_v1

# Load YAML configuration file into a Deployment object
d = typing.cast(
    apps_v1.Deployment, kuber.from_yaml_file(file_path="./my-deployment.yaml")
)

# Add an `app` label.
d.metadata.labels.update(app="foo")

# Create a container port to map port 8080 in
# the container to port 80 on the host.
port = apps_v1.ContainerPort(container_port=8080, host_port=80)

# Modify the container named "app" with resource
# limits/requests and an additional port mapping.
with typing.cast(core_v1.Container, d.get_container("app")) as c:
    c.resources.limits.update(cpu="1.5", memory="1Gi")
    c.resources.requests.update(cpu="1.5", memory="800Mi")
    c.ports.append(port)

# Render the results to YAML
print(d.to_yaml())

Complete code for this example is available at: kuber/examples/config-map/

Package Contents

kuber

kuber package

Subpackages
kuber.definitions package
Module contents
kuber.interface package
Module contents
kuber.latest package
Submodules
kuber.latest.admissionregistration_v1 module
kuber.latest.admissionregistration_v1alpha1 module
kuber.latest.apiextensions_v1 module
kuber.latest.apimachinery_runtime module
kuber.latest.apimachinery_version module
kuber.latest.apiregistration_v1 module
kuber.latest.apiserverinternal_v1alpha1 module
kuber.latest.apps_v1 module
kuber.latest.authentication_v1 module
kuber.latest.authentication_v1alpha1 module
kuber.latest.authorization_v1 module
kuber.latest.autoscaling_v1 module
kuber.latest.autoscaling_v2 module
kuber.latest.batch_v1 module
kuber.latest.certificates_v1 module
kuber.latest.coordination_v1 module
kuber.latest.core_v1 module
kuber.latest.custom_v1 module
kuber.latest.discovery_v1 module
kuber.latest.events_v1 module
kuber.latest.flowcontrol_v1beta2 module
kuber.latest.flowcontrol_v1beta3 module
kuber.latest.meta_v1 module
kuber.latest.networking_v1 module
kuber.latest.networking_v1alpha1 module
kuber.latest.node_v1 module
kuber.latest.policy_v1 module
kuber.latest.rbac_v1 module
kuber.latest.resource_v1alpha1 module
kuber.latest.scheduling_v1 module
kuber.latest.storage_v1 module
kuber.latest.storage_v1beta1 module
Module contents
kuber.management package
Submodules
kuber.management.arrays module
kuber.management.configuration module
kuber.management.creation module
Module contents
kuber.pre package
Submodules
kuber.pre.admissionregistration_v1 module
kuber.pre.admissionregistration_v1alpha1 module
kuber.pre.apiextensions_v1 module
kuber.pre.apimachinery_runtime module
kuber.pre.apimachinery_version module
kuber.pre.apiregistration_v1 module
kuber.pre.apiserverinternal_v1alpha1 module
kuber.pre.apps_v1 module
kuber.pre.authentication_v1 module
kuber.pre.authentication_v1alpha1 module
kuber.pre.authentication_v1beta1 module
kuber.pre.authorization_v1 module
kuber.pre.autoscaling_v1 module
kuber.pre.autoscaling_v2 module
kuber.pre.batch_v1 module
kuber.pre.certificates_v1 module
kuber.pre.certificates_v1alpha1 module
kuber.pre.coordination_v1 module
kuber.pre.core_v1 module
kuber.pre.custom_v1 module
kuber.pre.discovery_v1 module
kuber.pre.events_v1 module
kuber.pre.flowcontrol_v1beta2 module
kuber.pre.flowcontrol_v1beta3 module
kuber.pre.meta_v1 module
kuber.pre.networking_v1 module
kuber.pre.networking_v1alpha1 module
kuber.pre.node_v1 module
kuber.pre.policy_v1 module
kuber.pre.rbac_v1 module
kuber.pre.resource_v1alpha2 module
kuber.pre.scheduling_v1 module
kuber.pre.storage_v1 module
kuber.pre.storage_v1beta1 module
Module contents
kuber.tests package
Subpackages
kuber.tests.arrays package
Submodules
kuber.tests.arrays.test_array_filtering module
Module contents
kuber.tests.interface package
Submodules
kuber.tests.interface.test_cli module
kuber.tests.interface.test_configuration module
Module contents
kuber.tests.kube_api package
Submodules
kuber.tests.kube_api.test_execute module
kuber.tests.kube_api.test_kube_api module
kuber.tests.kube_api.test_to_kuber_dict module
Module contents
kuber.tests.management package
Submodules
kuber.tests.management.test_adding module
kuber.tests.management.test_arrays module
kuber.tests.management.test_complex_ordering module
kuber.tests.management.test_from_helm module
kuber.tests.management.test_from_url module
kuber.tests.management.test_get module
kuber.tests.management.test_resource_bundle module
kuber.tests.management.test_settings module
Module contents
kuber.tests.scenarios package
Subpackages
kuber.tests.scenarios.cron_suspend package
Submodules
kuber.tests.scenarios.cron_suspend.test_cron_suspend module
Module contents
kuber.tests.scenarios.custom_object package
Submodules
kuber.tests.scenarios.custom_object.test_custom_object module
Module contents
kuber.tests.scenarios.custom_resource_definition package
Submodules
kuber.tests.scenarios.custom_resource_definition.test_custom_resource_definition module
Module contents
kuber.tests.scenarios.empty_values package
Submodules
kuber.tests.scenarios.empty_values.test_empty_values module
Module contents
kuber.tests.scenarios.get_containers_from_job package
Submodules
kuber.tests.scenarios.get_containers_from_job.test_get_containers_from_job module
Module contents
kuber.tests.scenarios.metadata_owner_reference package
Submodules
kuber.tests.scenarios.metadata_owner_reference.test_metadata_owner_reference module
Module contents
kuber.tests.scenarios.odd_api_versions package
Submodules
kuber.tests.scenarios.odd_api_versions.test_odd_api_versions module
Module contents
kuber.tests.scenarios.port_int_or_string package
Submodules
kuber.tests.scenarios.port_int_or_string.test_pod_int_or_string module
Module contents
kuber.tests.scenarios.same_names_different_resources package
Submodules
kuber.tests.scenarios.same_names_different_resources.test_same_names_different_resources module
Module contents
kuber.tests.scenarios.value_casting package
Submodules
kuber.tests.scenarios.value_casting.test_value_casting module
Module contents
kuber.tests.scenarios.zero_scale_deployment package
Submodules
kuber.tests.scenarios.zero_scale_deployment.test_zero_scale_deployment module
Module contents
Module contents
Submodules
kuber.tests.test_containers module
kuber.tests.test_definitions module
kuber.tests.test_execution module
kuber.tests.test_importable module
kuber.tests.test_kuber module
kuber.tests.utils module
Module contents
kuber.v1_23 package
Submodules
kuber.v1_23.admissionregistration_v1 module
kuber.v1_23.apiextensions_v1 module
kuber.v1_23.apimachinery_runtime module
kuber.v1_23.apimachinery_version module
kuber.v1_23.apiregistration_v1 module
kuber.v1_23.apiserverinternal_v1alpha1 module
kuber.v1_23.apps_v1 module
kuber.v1_23.authentication_v1 module
kuber.v1_23.authorization_v1 module
kuber.v1_23.autoscaling_v1 module
kuber.v1_23.autoscaling_v2 module
kuber.v1_23.autoscaling_v2beta1 module
kuber.v1_23.autoscaling_v2beta2 module
kuber.v1_23.batch_v1 module
kuber.v1_23.batch_v1beta1 module
kuber.v1_23.certificates_v1 module
kuber.v1_23.coordination_v1 module
kuber.v1_23.core_v1 module
kuber.v1_23.custom_v1 module
kuber.v1_23.discovery_v1 module
kuber.v1_23.discovery_v1beta1 module
kuber.v1_23.events_v1 module
kuber.v1_23.events_v1beta1 module
kuber.v1_23.flowcontrol_v1beta1 module
kuber.v1_23.flowcontrol_v1beta2 module
kuber.v1_23.meta_v1 module
kuber.v1_23.networking_v1 module
kuber.v1_23.node_v1 module
kuber.v1_23.node_v1alpha1 module
kuber.v1_23.node_v1beta1 module
kuber.v1_23.policy_v1 module
kuber.v1_23.policy_v1beta1 module
kuber.v1_23.rbac_v1 module
kuber.v1_23.scheduling_v1 module
kuber.v1_23.storage_v1 module
kuber.v1_23.storage_v1alpha1 module
kuber.v1_23.storage_v1beta1 module
Module contents
kuber.v1_24 package
Submodules
kuber.v1_24.admissionregistration_v1 module
kuber.v1_24.apiextensions_v1 module
kuber.v1_24.apimachinery_runtime module
kuber.v1_24.apimachinery_version module
kuber.v1_24.apiregistration_v1 module
kuber.v1_24.apiserverinternal_v1alpha1 module
kuber.v1_24.apps_v1 module
kuber.v1_24.authentication_v1 module
kuber.v1_24.authorization_v1 module
kuber.v1_24.autoscaling_v1 module
kuber.v1_24.autoscaling_v2 module
kuber.v1_24.autoscaling_v2beta1 module
kuber.v1_24.autoscaling_v2beta2 module
kuber.v1_24.batch_v1 module
kuber.v1_24.batch_v1beta1 module
kuber.v1_24.certificates_v1 module
kuber.v1_24.coordination_v1 module
kuber.v1_24.core_v1 module
kuber.v1_24.custom_v1 module
kuber.v1_24.discovery_v1 module
kuber.v1_24.discovery_v1beta1 module
kuber.v1_24.events_v1 module
kuber.v1_24.events_v1beta1 module
kuber.v1_24.flowcontrol_v1beta1 module
kuber.v1_24.flowcontrol_v1beta2 module
kuber.v1_24.meta_v1 module
kuber.v1_24.networking_v1 module
kuber.v1_24.node_v1 module
kuber.v1_24.node_v1beta1 module
kuber.v1_24.policy_v1 module
kuber.v1_24.policy_v1beta1 module
kuber.v1_24.rbac_v1 module
kuber.v1_24.scheduling_v1 module
kuber.v1_24.storage_v1 module
kuber.v1_24.storage_v1beta1 module
Module contents
kuber.v1_25 package
Submodules
kuber.v1_25.admissionregistration_v1 module
kuber.v1_25.apiextensions_v1 module
kuber.v1_25.apimachinery_runtime module
kuber.v1_25.apimachinery_version module
kuber.v1_25.apiregistration_v1 module
kuber.v1_25.apiserverinternal_v1alpha1 module
kuber.v1_25.apps_v1 module
kuber.v1_25.authentication_v1 module
kuber.v1_25.authorization_v1 module
kuber.v1_25.autoscaling_v1 module
kuber.v1_25.autoscaling_v2 module
kuber.v1_25.autoscaling_v2beta2 module
kuber.v1_25.batch_v1 module
kuber.v1_25.certificates_v1 module
kuber.v1_25.coordination_v1 module
kuber.v1_25.core_v1 module
kuber.v1_25.custom_v1 module
kuber.v1_25.discovery_v1 module
kuber.v1_25.events_v1 module
kuber.v1_25.flowcontrol_v1beta1 module
kuber.v1_25.flowcontrol_v1beta2 module
kuber.v1_25.meta_v1 module
kuber.v1_25.networking_v1 module
kuber.v1_25.networking_v1alpha1 module
kuber.v1_25.node_v1 module
kuber.v1_25.policy_v1 module
kuber.v1_25.rbac_v1 module
kuber.v1_25.scheduling_v1 module
kuber.v1_25.storage_v1 module
kuber.v1_25.storage_v1beta1 module
Module contents
kuber.v1_26 package
Submodules
kuber.v1_26.admissionregistration_v1 module
kuber.v1_26.admissionregistration_v1alpha1 module
kuber.v1_26.apiextensions_v1 module
kuber.v1_26.apimachinery_runtime module
kuber.v1_26.apimachinery_version module
kuber.v1_26.apiregistration_v1 module
kuber.v1_26.apiserverinternal_v1alpha1 module
kuber.v1_26.apps_v1 module
kuber.v1_26.authentication_v1 module
kuber.v1_26.authentication_v1alpha1 module
kuber.v1_26.authorization_v1 module
kuber.v1_26.autoscaling_v1 module
kuber.v1_26.autoscaling_v2 module
kuber.v1_26.batch_v1 module
kuber.v1_26.certificates_v1 module
kuber.v1_26.coordination_v1 module
kuber.v1_26.core_v1 module
kuber.v1_26.custom_v1 module
kuber.v1_26.discovery_v1 module
kuber.v1_26.events_v1 module
kuber.v1_26.flowcontrol_v1beta2 module
kuber.v1_26.flowcontrol_v1beta3 module
kuber.v1_26.meta_v1 module
kuber.v1_26.networking_v1 module
kuber.v1_26.networking_v1alpha1 module
kuber.v1_26.node_v1 module
kuber.v1_26.policy_v1 module
kuber.v1_26.rbac_v1 module
kuber.v1_26.resource_v1alpha1 module
kuber.v1_26.scheduling_v1 module
kuber.v1_26.storage_v1 module
kuber.v1_26.storage_v1beta1 module
Module contents
kuber.v1_27 package
Submodules
kuber.v1_27.admissionregistration_v1 module
kuber.v1_27.admissionregistration_v1alpha1 module
kuber.v1_27.apiextensions_v1 module
kuber.v1_27.apimachinery_runtime module
kuber.v1_27.apimachinery_version module
kuber.v1_27.apiregistration_v1 module
kuber.v1_27.apiserverinternal_v1alpha1 module
kuber.v1_27.apps_v1 module
kuber.v1_27.authentication_v1 module
kuber.v1_27.authentication_v1alpha1 module
kuber.v1_27.authentication_v1beta1 module
kuber.v1_27.authorization_v1 module
kuber.v1_27.autoscaling_v1 module
kuber.v1_27.autoscaling_v2 module
kuber.v1_27.batch_v1 module
kuber.v1_27.certificates_v1 module
kuber.v1_27.certificates_v1alpha1 module
kuber.v1_27.coordination_v1 module
kuber.v1_27.core_v1 module
kuber.v1_27.custom_v1 module
kuber.v1_27.discovery_v1 module
kuber.v1_27.events_v1 module
kuber.v1_27.flowcontrol_v1beta2 module
kuber.v1_27.flowcontrol_v1beta3 module
kuber.v1_27.meta_v1 module
kuber.v1_27.networking_v1 module
kuber.v1_27.networking_v1alpha1 module
kuber.v1_27.node_v1 module
kuber.v1_27.policy_v1 module
kuber.v1_27.rbac_v1 module
kuber.v1_27.resource_v1alpha2 module
kuber.v1_27.scheduling_v1 module
kuber.v1_27.storage_v1 module
kuber.v1_27.storage_v1beta1 module
Module contents
Submodules
kuber.execution module
kuber.kube_api module
kuber.versioning module
Module contents
