tfdbg + tflearn: replace deprecated classes and methods in example & docs
author     Shanqing Cai <cais@google.com>
           Thu, 3 May 2018 20:30:12 +0000 (13:30 -0700)
committer  TensorFlower Gardener <gardener@tensorflow.org>
           Thu, 3 May 2018 20:45:45 +0000 (13:45 -0700)
* `tf.contrib.learn.Experiment` is deprecated. Remove it from debug_tflearn_iris.py.
* Use `tf.estimator.DNNClassifier` instead of the older one from `tf.contrib.learn`.
* Use the Estimator's `train()` method instead of the deprecated `fit()`.
* `Estimator.predict()` supports hooks. Add example lines for that (see the sketch below).
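
For reference, a minimal sketch of the migrated usage. The input functions and
the feature column below are illustrative placeholders, not names introduced by
this change:

```python
import tensorflow as tf
from tensorflow.python import debug as tf_debug

hooks = [tf_debug.LocalCLIDebugHook()]

# Placeholder model; train_input_fn / eval_input_fn are assumed to be
# standard Estimator input functions defined elsewhere.
classifier = tf.estimator.DNNClassifier(
    feature_columns=[tf.feature_column.numeric_column("features", shape=(4,))],
    hidden_units=[10, 20, 10],
    n_classes=3)

# train() replaces the deprecated fit(); hooks= replaces monitors=.
classifier.train(train_input_fn, steps=1000, hooks=hooks)
accuracy = classifier.evaluate(eval_input_fn, hooks=hooks)["accuracy"]

# predict() accepts hooks too; it returns a lazy generator, so the hooked
# Session.run() fires when results are pulled from it.
predictions = classifier.predict(eval_input_fn, hooks=hooks)
print(next(predictions))
```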

PiperOrigin-RevId: 195301913

tensorflow/docs_src/programmers_guide/debugger.md
tensorflow/python/debug/BUILD
tensorflow/python/debug/examples/debug_tflearn_iris.py

diff --git a/tensorflow/docs_src/programmers_guide/debugger.md b/tensorflow/docs_src/programmers_guide/debugger.md
index f7817b0..6bd9418 100644
@@ -34,7 +34,7 @@ type of bug in TensorFlow model development.
 The following example is for users who use the low-level
 [`Session`](https://www.tensorflow.org/api_docs/python/tf/Session) API of
 TensorFlow. A later section of this document describes how to use **tfdbg**
-with a higher-level API, namely tf-learn `Estimator`s and `Experiment`s.
+with a higher-level API, namely `Estimator`s.
 To *observe* such an issue, run the following command without the debugger (the
 source code can be found
 [here](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/debug/examples/debug_mnist.py)):
@@ -418,21 +418,20 @@ run -f has_inf_or_nan`
 Confirm that no tensors are flagged as containing `nan` or `inf` values, and
 accuracy now continues to rise rather than getting stuck. Success!
 
-## Debugging tf-learn Estimators and Experiments
+## Debugging TensorFlow Estimators
 
 This section explains how to debug TensorFlow programs that use the `Estimator`
-and `Experiment` APIs. Part of the convenience provided by these APIs is that
+APIs. Part of the convenience provided by these APIs is that
 they manage `Session`s internally. This makes the `LocalCLIDebugWrapperSession`
 described in the preceding sections inapplicable. Fortunately, you can still
 debug them by using special `hook`s provided by `tfdbg`.
 
-### Debugging tf.contrib.learn Estimators
-
-Currently, `tfdbg` can debug the
-@{tf.contrib.learn.BaseEstimator.fit$`fit()`}
-@{tf.contrib.learn.BaseEstimator.evaluate$`evaluate()`}
-methods of tf-learn `Estimator`s. To debug `Estimator.fit()`,
-create a `LocalCLIDebugHook` and supply it in the `monitors` argument. For example:
+`tfdbg` can debug the
+@{tf.estimator.Estimator.train$`train()`},
+@{tf.estimator.Estimator.evaluate$`evaluate()`} and
+@{tf.estimator.Estimator.predict$`predict()`}
+methods of `tf.estimator.Estimator`s. To debug `Estimator.train()`,
+create a `LocalCLIDebugHook` and supply it in the `hooks` argument. For example:
 
 ```python
 # First, let your BUILD target depend on "//tensorflow/python/debug:debug_py"
@@ -443,67 +442,33 @@ from tensorflow.python import debug as tf_debug
-# Create a LocalCLIDebugHook and use it as a monitor when calling fit().
+# Create a LocalCLIDebugHook and use it as a hook when calling train().
 hooks = [tf_debug.LocalCLIDebugHook()]
 
-classifier.fit(x=training_set.data,
-               y=training_set.target,
-               steps=1000,
-               monitors=hooks)
+# To debug `train`:
+classifier.train(input_fn,
+                 steps=1000,
+                 hooks=hooks)
 ```
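+
+Here `input_fn` stands for any standard `Estimator` input function. A minimal
+sketch, assuming a single numeric feature named "x":
+
+```python
+def input_fn():
+  # Two toy examples with their labels; repeat and batch for training.
+  dataset = tf.data.Dataset.from_tensor_slices(
+      ({"x": [[1.0], [2.0]]}, [0, 1]))
+  return dataset.repeat().batch(2)
+```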
 
-To debug `Estimator.evaluate()`, assign hooks to the `hooks` parameter, as in
-the following example:
+Similarly, to debug `Estimator.evaluate()` and `Estimator.predict()`, assign
+hooks to the `hooks` parameter, as in the following example:
 
 ```python
-accuracy_score = classifier.evaluate(x=test_set.data,
-                                     y=test_set.target,
+# To debug `evaluate`:
+accuracy_score = classifier.evaluate(eval_input_fn,
                                      hooks=hooks)["accuracy"]
-```
 
+# To debug `predict`:
+predict_results = classifier.predict(predict_input_fn, hooks=hooks)
+```
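+
+Note that `Estimator.predict()` returns a lazy generator: no `Session.run()`
+(and hence no debugger session) happens until results are pulled from it. For
+example:
+
+```python
+# Each result drawn from the generator triggers a hooked Session.run().
+for prediction in predict_results:
+  print(prediction)
+```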
 
 [debug_tflearn_iris.py](https://www.tensorflow.org/code/tensorflow/python/debug/examples/debug_tflearn_iris.py),
-based on [tf-learn's iris tutorial](https://www.tensorflow.org/versions/r1.2/get_started/tflearn), contains a full example of how to
-use the tfdbg with `Estimator`s. To run this example, do:
+based on [tf-learn's iris tutorial](https://www.tensorflow.org/versions/r1.8/get_started/tflearn),
+contains a full example of how to use tfdbg with `Estimator`s.
+To run this example, do:
 
 ```none
 python -m tensorflow.python.debug.examples.debug_tflearn_iris --debug
 ```
 
-### Debugging tf.contrib.learn Experiments
-
-`Experiment` is a construct in `tf.contrib.learn` at a higher level than
-`Estimator`.
-It provides a single interface for training and evaluating a model. To debug
-the `train()` and `evaluate()` calls to an `Experiment` object, you can
-use the keyword arguments `train_monitors` and `eval_hooks`, respectively, when
-calling its constructor. For example:
-
-```python
-# First, let your BUILD target depend on "//tensorflow/python/debug:debug_py"
-# (You don't need to worry about the BUILD dependency if you are using a pip
-#  install of open-source TensorFlow.)
-from tensorflow.python import debug as tf_debug
-
-hooks = [tf_debug.LocalCLIDebugHook()]
-
-ex = experiment.Experiment(classifier,
-                           train_input_fn=iris_input_fn,
-                           eval_input_fn=iris_input_fn,
-                           train_steps=FLAGS.train_steps,
-                           eval_delay_secs=0,
-                           eval_steps=1,
-                           train_monitors=hooks,
-                           eval_hooks=hooks)
-
-ex.train()
-accuracy_score = ex.evaluate()["accuracy"]
-```
-
-To build and run the `debug_tflearn_iris` example in the `Experiment` mode, do:
-
-```none
-python -m tensorflow.python.debug.examples.debug_tflearn_iris \
-    --use_experiment --debug
-```
-
 The `LocalCLIDebugHook` also allows you to configure a `watch_fn` that can be
 used to flexibly specify what `Tensor`s to watch on different `Session.run()`
 calls, as a function of the `fetches` and `feed_dict` and other states. See
@@ -573,7 +538,7 @@ Often, your model is running on a remote machine or a process that you don't
 have terminal access to. To perform model debugging in such cases, you can use
 the `offline_analyzer` binary of `tfdbg` (described below). It operates on
 dumped data directories. This can be done to both the lower-level `Session` API
-and the higher-level `Estimator` and `Experiment` APIs.
+and the higher-level `Estimator` API.
 
 ### Debugging Remote tf.Sessions
 
@@ -636,7 +601,7 @@ can be inspected offline. See
 [the proto definition](https://www.tensorflow.org/code/tensorflow/core/protobuf/debug.proto)
 for more details.
 
-### Debugging Remotely-Running tf-learn Estimators and Experiments
+### Debugging Remotely-Running Estimators
 
 If your remote TensorFlow server runs `Estimator`s,
 you can use the non-interactive `DumpingDebugHook`. For example:
@@ -652,8 +617,8 @@ hooks = [tf_debug.DumpingDebugHook("/shared/storage/location/tfdbg_dumps_1")]
 
 Then this `hook` can be used in the same way as the `LocalCLIDebugHook` examples
 described earlier in this document.
-As the training and/or evalution of `Estimator` or `Experiment`
-happens, tfdbg creates directories having the following name pattern:
+As the training, evaluation or prediction happens with the `Estimator`,
+tfdbg creates directories having the following name pattern:
 `/shared/storage/location/tfdbg_dumps_1/run_<epoch_timestamp_microsec>_<uuid>`.
 Each directory corresponds to a `Session.run()` call that underlies
-the `fit()` or `evaluate()` call. You can load these directories and inspect
+the `train()`, `evaluate()` or `predict()` call. You can load these directories and inspect
diff --git a/tensorflow/python/debug/BUILD b/tensorflow/python/debug/BUILD
index b5760df..183994d 100644
@@ -449,7 +449,6 @@ py_binary(
     deps = [
         ":debug_py",
         "//tensorflow:tensorflow_py",
-        "//third_party/py/numpy",
         "@six_archive//:six",
     ],
 )
diff --git a/tensorflow/python/debug/examples/debug_tflearn_iris.py b/tensorflow/python/debug/examples/debug_tflearn_iris.py
index 4f4666e..00090b2 100644
@@ -22,11 +22,9 @@ import os
 import sys
 import tempfile
 
-import numpy as np
 from six.moves import urllib
 import tensorflow as tf
 
-from tensorflow.contrib.learn.python.learn import experiment
 from tensorflow.contrib.learn.python.learn.datasets import base
 from tensorflow.python import debug as tf_debug
 
@@ -82,28 +80,34 @@ def iris_input_fn():
 def main(_):
   # Load datasets.
   if FLAGS.fake_data:
-    training_set = tf.contrib.learn.datasets.base.Dataset(
-        np.random.random([120, 4]),
-        np.random.random_integers(3, size=[120]) - 1)
-    test_set = tf.contrib.learn.datasets.base.Dataset(
-        np.random.random([30, 4]),
-        np.random.random_integers(3, size=[30]) - 1)
+    def training_input_fn():
+      return ({"features": tf.random_normal([128, 4])},
+              tf.random_uniform([128], minval=0, maxval=3, dtype=tf.int32))
+    def test_input_fn():
+      return ({"features": tf.random_normal([32, 4])},
+              tf.random_uniform([32], minval=0, maxval=3, dtype=tf.int32))
+    feature_columns = [
+        tf.feature_column.numeric_column("features", shape=(4,))]
   else:
     training_data_path, test_data_path = maybe_download_data(FLAGS.data_dir)
-    training_set = tf.contrib.learn.datasets.base.load_csv_with_header(
-        filename=training_data_path,
-        target_dtype=np.int,
-        features_dtype=np.float32)
-    test_set = tf.contrib.learn.datasets.base.load_csv_with_header(
-        filename=test_data_path, target_dtype=np.int, features_dtype=np.float32)
-
-  # Specify that all features have real-value data
-  feature_columns = [tf.contrib.layers.real_valued_column("", dimension=4)]
+    column_names = [
+        "sepal_length", "sepal_width", "petal_length", "petal_width", "label"]
+    batch_size = 32
+    def training_input_fn():
+      return tf.contrib.data.make_csv_dataset(
+          [training_data_path], batch_size,
+          column_names=column_names, label_name="label")
+    def test_input_fn():
+      return tf.contrib.data.make_csv_dataset(
+          [test_data_path], batch_size,
+          column_names=column_names, label_name="label")
+    feature_columns = [tf.feature_column.numeric_column(feature)
+                       for feature in column_names[:-1]]
 
   # Build 3 layer DNN with 10, 20, 10 units respectively.
   model_dir = FLAGS.model_dir or tempfile.mkdtemp(prefix="debug_tflearn_iris_")
 
-  classifier = tf.contrib.learn.DNNClassifier(
+  classifier = tf.estimator.DNNClassifier(
       feature_columns=feature_columns,
       hidden_units=[10, 20, 10],
       n_classes=3,
@@ -121,32 +125,23 @@ def main(_):
     debug_hook = tf_debug.TensorBoardDebugHook(FLAGS.tensorboard_debug_address)
   hooks = [debug_hook]
 
-  if not FLAGS.use_experiment:
-    # Fit model.
-    classifier.fit(x=training_set.data,
-                   y=training_set.target,
+  # Train model, using tfdbg hook.
+  classifier.train(training_input_fn,
                    steps=FLAGS.train_steps,
-                   monitors=hooks)
+                   hooks=hooks)
 
-    # Evaluate accuracy.
-    accuracy_score = classifier.evaluate(x=test_set.data,
-                                         y=test_set.target,
-                                         hooks=hooks)["accuracy"]
-  else:
-    ex = experiment.Experiment(classifier,
-                               train_input_fn=iris_input_fn,
-                               eval_input_fn=iris_input_fn,
-                               train_steps=FLAGS.train_steps,
-                               eval_delay_secs=0,
-                               eval_steps=1,
-                               train_monitors=hooks,
-                               eval_hooks=hooks)
-    ex.train()
-    accuracy_score = ex.evaluate()["accuracy"]
+  # Evaluate accuracy, using tfdbg hook.
+  accuracy_score = classifier.evaluate(test_input_fn,
+                                       steps=FLAGS.eval_steps,
+                                       hooks=hooks)["accuracy"]
 
   print("After training %d steps, Accuracy = %f" %
         (FLAGS.train_steps, accuracy_score))
 
+  # Make predictions, using tfdbg hook.
+  predict_results = classifier.predict(test_input_fn, hooks=hooks)
+  print("A prediction result: %s" % predict_results.next())
+
 
 if __name__ == "__main__":
   parser = argparse.ArgumentParser()
@@ -165,14 +160,12 @@ if __name__ == "__main__":
       "--train_steps",
       type=int,
       default=10,
-      help="Number of steps to run trainer.")
+      help="Number of steps to run training for.")
   parser.add_argument(
-      "--use_experiment",
-      type="bool",
-      nargs="?",
-      const=True,
-      default=False,
-      help="Use tf.contrib.learn Experiment to run training and evaluation")
+      "--eval_steps",
+      type=int,
+      default=1,
+      help="Number of steps to run evaluation foir.")
   parser.add_argument(
       "--ui_type",
       type=str,