Distributed TensorFlow in Kubernetes

From ESS-WIKI
Revision as of 04:17, 16 November 2018 by Terry.lu (talk | contribs)

Introduction

Distributed TensorFlow (clustering) can speed up your training. Running distributed TensorFlow in Kubernetes makes it easy to:

  1. Add Kubernetes nodes to extend computing capability
  2. Simplify the work of setting up a distributed TensorFlow job

This topic describes how to set up distributed TensorFlow training.

TFJob

  • TFJob is a CRD (Custom Resource Definition) of Kubernetes that is created by Kubeflow.
  • TFJob lets you declare the replica types of a distributed training job (chief, worker, parameter server) and how many replicas of each to run; Kubeflow then creates the corresponding pods and injects the cluster configuration (the TF_CONFIG environment variable) into each of them.
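As a sketch of what such a TFJob looks like: the tf-dist-iris.yaml downloaded in the steps below is not reproduced on this page, but a minimal manifest might resemble the following. The job name tf-dist matches the chief pod name seen later (tf-dist-chief-0); the apiVersion, replica counts, and file name are assumptions that depend on your Kubeflow version and may differ from the actual file.

```shell
# Write a minimal TFJob manifest sketch (NOT the actual tf-dist-iris.yaml;
# apiVersion and replica counts depend on the installed Kubeflow version).
cat <<'EOF' > tf-dist-iris-sketch.yaml
apiVersion: kubeflow.org/v1
kind: TFJob
metadata:
  name: tf-dist
  namespace: kubeflow
spec:
  tfReplicaSpecs:
    Chief:
      replicas: 1
      template:
        spec:
          containers:
            - name: tensorflow
              image: ecgwc/tf-iris:dist
    Worker:
      replicas: 2
      template:
        spec:
          containers:
            - name: tensorflow
              image: ecgwc/tf-iris:dist
    PS:
      replicas: 1
      template:
        spec:
          containers:
            - name: tensorflow
              image: ecgwc/tf-iris:dist
EOF
```

Each replica type becomes a set of pods named <job>-<type>-<index>, which is why the chief's logs are read from tf-dist-chief-0 in step 7.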

Prerequisites

  1. You must know the basic concepts of distributed TensorFlow; see: Distributed TensorFlow
  2. You must know how to write a distributed TensorFlow training script, e.g. with train_and_evaluate

Steps

1. Create (download) the source and Dockerfile from File:Iris train and eval.zip and unzip them into the same folder.

2. Build the training image, where "ecgwc" is the Docker Hub username and "tf-iris:dist" is the image name and tag:

$ docker build -t ecgwc/tf-iris:dist .

3. Check that the training image runs correctly:

$ docker run --rm ecgwc/tf-iris:dist

[Screenshot: Dist tf k8s-1.png]

4. Push the image to Docker Hub:

$ docker push ecgwc/tf-iris:dist

5. Create (download) the YAML file for distributed TensorFlow: File:Tf-dist-iris.zip

6. Deploy the YAML to Kubernetes:

$ kubectl create -f tf-dist-iris.yaml

7. Check the training status

  • Check the pods

[Screenshot: Dist tf k8s-2.png]

  • Check the TFJob

[Screenshot: Dist tf k8s-3.png]
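For reference, the pod and TFJob status shown in the screenshots can also be queried from the command line. This assumes the job was deployed into the kubeflow namespace (as in the logs command below) and requires a running cluster with the TFJob CRD installed, so it is shown here only as a sketch:

```shell
# Requires a cluster with Kubeflow's TFJob CRD installed;
# assumes the job runs in the "kubeflow" namespace.
kubectl -n kubeflow get pods     # lists the chief/worker/ps training pods
kubectl -n kubeflow get tfjobs   # shows the TFJob resource and its state
```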

  • Check the training log of the chief pod
$ kubectl -n kubeflow logs tf-dist-chief-0


Reference

https://github.com/Azure/kubeflow-labs/tree/master/7-distributed-tensorflow