Update docs_src in 1.9 to match master
author    Billy Lamberta <blamb@google.com>
          Tue, 3 Jul 2018 23:40:14 +0000 (16:40 -0700)
committer Billy Lamberta <blamb@google.com>
          Tue, 3 Jul 2018 23:40:14 +0000 (16:40 -0700)
58 files changed:
tensorflow/docs_src/community/leftnav_files
tensorflow/docs_src/community/swift.md [deleted file]
tensorflow/docs_src/get_started/eager.md [deleted file]
tensorflow/docs_src/get_started/leftnav_files [deleted file]
tensorflow/docs_src/guide/custom_estimators.md
tensorflow/docs_src/guide/datasets_for_estimators.md
tensorflow/docs_src/guide/debugger.md
tensorflow/docs_src/guide/eager.md
tensorflow/docs_src/guide/graphs.md
tensorflow/docs_src/guide/keras.md
tensorflow/docs_src/guide/saved_model.md
tensorflow/docs_src/guide/tensorboard_histograms.md
tensorflow/docs_src/install/install_c.md
tensorflow/docs_src/install/install_go.md
tensorflow/docs_src/install/install_java.md
tensorflow/docs_src/install/install_linux.md
tensorflow/docs_src/install/install_mac.md
tensorflow/docs_src/install/install_raspbian.md
tensorflow/docs_src/install/install_sources.md
tensorflow/docs_src/install/install_windows.md
tensorflow/docs_src/mobile/leftnav_files
tensorflow/docs_src/mobile/linking_libs.md
tensorflow/docs_src/mobile/mobile_intro.md
tensorflow/docs_src/mobile/prepare_models.md
tensorflow/docs_src/mobile/tflite/demo_android.md
tensorflow/docs_src/mobile/tflite/devguide.md
tensorflow/docs_src/mobile/tflite/index.md
tensorflow/docs_src/mobile/tflite/performance.md [new file with mode: 0644]
tensorflow/docs_src/performance/quantization.md
tensorflow/docs_src/performance/xla/operation_semantics.md
tensorflow/docs_src/tutorials/_index.yaml [moved from tensorflow/docs_src/get_started/_index.yaml with 77% similarity]
tensorflow/docs_src/tutorials/_toc.yaml [new file with mode: 0644]
tensorflow/docs_src/tutorials/eager/custom_training_walkthrough.md [new file with mode: 0644]
tensorflow/docs_src/tutorials/eager/index.md [new file with mode: 0644]
tensorflow/docs_src/tutorials/image_retraining.md [deleted file]
tensorflow/docs_src/tutorials/images/deep_cnn.md [moved from tensorflow/docs_src/tutorials/deep_cnn.md with 98% similarity]
tensorflow/docs_src/tutorials/images/image_recognition.md [moved from tensorflow/docs_src/tutorials/image_recognition.md with 99% similarity]
tensorflow/docs_src/tutorials/images/layers.md [moved from tensorflow/docs_src/tutorials/layers.md with 94% similarity]
tensorflow/docs_src/tutorials/index.md [deleted file]
tensorflow/docs_src/tutorials/keras/basic_classification.md [moved from tensorflow/docs_src/get_started/basic_classification.md with 100% similarity]
tensorflow/docs_src/tutorials/keras/basic_regression.md [moved from tensorflow/docs_src/get_started/basic_regression.md with 100% similarity]
tensorflow/docs_src/tutorials/keras/basic_text_classification.md [moved from tensorflow/docs_src/get_started/basic_text_classification.md with 100% similarity]
tensorflow/docs_src/tutorials/keras/index.md [new file with mode: 0644]
tensorflow/docs_src/tutorials/keras/overfit_and_underfit.md [moved from tensorflow/docs_src/get_started/overfit_and_underfit.md with 100% similarity]
tensorflow/docs_src/tutorials/keras/save_and_restore_models.md [moved from tensorflow/docs_src/get_started/save_and_restore_models.md with 100% similarity]
tensorflow/docs_src/tutorials/leftnav_files [deleted file]
tensorflow/docs_src/tutorials/next_steps.md [moved from tensorflow/docs_src/get_started/next_steps.md with 100% similarity]
tensorflow/docs_src/tutorials/non-ml/mandelbrot.md [moved from tensorflow/docs_src/tutorials/mandelbrot.md with 100% similarity, mode: 0644]
tensorflow/docs_src/tutorials/non-ml/pdes.md [moved from tensorflow/docs_src/tutorials/pdes.md with 98% similarity, mode: 0644]
tensorflow/docs_src/tutorials/representation/kernel_methods.md [moved from tensorflow/docs_src/tutorials/kernel_methods.md with 99% similarity]
tensorflow/docs_src/tutorials/representation/linear.md [moved from tensorflow/docs_src/tutorials/linear.md with 100% similarity]
tensorflow/docs_src/tutorials/representation/wide.md [moved from tensorflow/docs_src/tutorials/wide.md with 100% similarity]
tensorflow/docs_src/tutorials/representation/wide_and_deep.md [moved from tensorflow/docs_src/tutorials/wide_and_deep.md with 100% similarity]
tensorflow/docs_src/tutorials/representation/word2vec.md [moved from tensorflow/docs_src/tutorials/word2vec.md with 100% similarity]
tensorflow/docs_src/tutorials/seq2seq.md [deleted file]
tensorflow/docs_src/tutorials/sequences/audio_recognition.md [moved from tensorflow/docs_src/tutorials/audio_recognition.md with 100% similarity]
tensorflow/docs_src/tutorials/sequences/recurrent.md [moved from tensorflow/docs_src/tutorials/recurrent.md with 98% similarity]
tensorflow/docs_src/tutorials/sequences/recurrent_quickdraw.md [moved from tensorflow/docs_src/tutorials/recurrent_quickdraw.md with 98% similarity]

index 2bae60d..0bd1f14 100644 (file)
@@ -6,4 +6,3 @@ groups.md
 documentation.md
 style_guide.md
 benchmarks.md
-swift.md
diff --git a/tensorflow/docs_src/community/swift.md b/tensorflow/docs_src/community/swift.md
deleted file mode 100644 (file)
index d1625d3..0000000
+++ /dev/null
@@ -1,60 +0,0 @@
-<p align="center">
-  <img src="../images/swift_tensorflow_logo.png">
-</p>
-
-# Swift for TensorFlow
-
-Welcome to the Swift for TensorFlow development community!
-
-Swift for TensorFlow is a new way to develop machine learning models. It
-gives you the power of
-[TensorFlow](https://www.tensorflow.org) directly
-integrated into the [Swift programming language](https://swift.org/about).
-With Swift, you can write the following imperative code, and Swift
-automatically turns it into **a single TensorFlow Graph** and runs it
-with the full performance of TensorFlow Sessions on CPU, GPU and
-[TPU](https://cloud.google.com/tpu/docs/tpus).
-
-```swift
-import TensorFlow
-
-var x = Tensor<Float>([[1, 2], [3, 4]])
-
-for i in 1...5 {
-  x += x ⊗ x
-}
-
-print(x)
-```
-
-Swift combines the flexibility of
-[Eager Execution](https://www.tensorflow.org/programmers_guide/eager) with the
-high performance of [Graphs and Sessions](https://www.tensorflow.org/programmers_guide/graphs).
-Behind the scenes, Swift analyzes your Tensor code and automatically builds
-graphs for you. Swift also catches type errors and shape mismatches before
-running your code, and has [Automatic Differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation)
-built right in. We believe that machine learning tools are so important that
-they deserve **a first-class language and a compiler**.
-
-Note: Swift for TensorFlow is an early stage research project. It has been
-released to enable open source development and is not yet ready for general use
-by machine learning developers.
-
-## Open Source
-
-We have released Swift for TensorFlow as an open-source project on GitHub!
-
-Our [documentation repository](https://github.com/tensorflow/swift) contains a
-[project overview](https://github.com/tensorflow/swift/blob/master/docs/DesignOverview.md)
-and [technical papers](https://github.com/tensorflow/swift/tree/master/docs)
-explaining specific areas in depth. There are also instructions for [installing
-pre-built packages](https://github.com/tensorflow/swift/blob/master/Installation.md)
-(for macOS and Ubuntu) as well as a simple
-[usage tutorial](https://github.com/tensorflow/swift/blob/master/Usage.md).
-
-Moving forward, we will use an open design model and all discussions will be
-public.
-
-[Sign up here to join the community Google
-group](https://groups.google.com/a/tensorflow.org/d/forum/swift), which we will
-use for announcements and general discussion.
diff --git a/tensorflow/docs_src/get_started/eager.md b/tensorflow/docs_src/get_started/eager.md
deleted file mode 100644 (file)
index ddf2394..0000000
+++ /dev/null
@@ -1,3 +0,0 @@
-# Custom Training Walkthrough
-
-[Colab notebook](https://colab.research.google.com/github/tensorflow/models/blob/r1.9.0/samples/core/get_started/eager.ipynb)
diff --git a/tensorflow/docs_src/get_started/leftnav_files b/tensorflow/docs_src/get_started/leftnav_files
deleted file mode 100644 (file)
index 99d2b2c..0000000
+++ /dev/null
@@ -1,10 +0,0 @@
-### Learn and use ML
-basic_classification.md: Basic classification
-basic_text_classification.md: Text classification
-basic_regression.md: Regression
-overfit_and_underfit.md
-save_and_restore_models.md
-next_steps.md
-
-### Research and experimentation
-eager.md
index fb20b35..a63e2ba 100644 (file)
@@ -362,10 +362,10 @@ model's loss. This is the
 that will be optimized.
 
 We can calculate the loss by calling @{tf.losses.sparse_softmax_cross_entropy}.
-The value returned by this function will be lowest, approximately 0,
-probability of the correct class (at index `label`) is near 1.0. The loss value
-returned is progressively larger as the probability of the correct class
-decreases.
+The value returned by this function will be at its lowest, approximately 0,
+when the probability of the correct class (at index `label`) is near 1.0.
+The loss value returned is progressively larger as the probability of the
+correct class decreases.
 
 This function returns the average over the whole batch.
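(As a quick check of the behavior described above, a minimal TF 1.x sketch; the logit values are invented for illustration:)

```python
# The loss is near 0 when the logits put ~all probability on the correct
# class, and grows as that probability shrinks.
import tensorflow as tf

labels = tf.constant([0, 1])                          # correct class indices
confident = tf.constant([[10.0, 0.0], [0.0, 10.0]])   # correct class dominates
uncertain = tf.constant([[0.0, 10.0], [10.0, 0.0]])   # wrong class dominates

low = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=confident)
high = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=uncertain)

with tf.Session() as sess:
    print(sess.run([low, high]))  # approx [4.5e-05, 10.0]
```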
 
index b04af78..b55a573 100644 (file)
@@ -76,9 +76,9 @@ Let's walk through the `train_input_fn()`.
 The function starts by using the @{tf.data.Dataset.from_tensor_slices} function
 to create a @{tf.data.Dataset} representing slices of the array. The array is
 sliced across the first dimension. For example, an array containing the
-@{$tutorials/layers$mnist training data} has a shape of `(60000, 28, 28)`.
-Passing this to `from_tensor_slices` returns a `Dataset` object containing
-60000 slices, each one a 28x28 image.
+MNIST training data has a shape of `(60000, 28, 28)`. Passing this to
+`from_tensor_slices` returns a `Dataset` object containing 60000 slices, each one
+a 28x28 image.
 
 The code that returns this `Dataset` is as follows:
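(The tutorial's own snippet falls outside this hunk; purely as an illustration, a `from_tensor_slices` call on MNIST-shaped data might look like the sketch below, with a zero array standing in for real data.)

```python
import numpy as np
import tensorflow as tf

mnist_x = np.zeros((60000, 28, 28), dtype=np.float32)  # stand-in for real images
dataset = tf.data.Dataset.from_tensor_slices(mnist_x)
print(dataset.output_shapes)  # (28, 28) -- one slice per image
```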
 
index 6bd9418..8d78fe6 100644 (file)
@@ -17,7 +17,7 @@ how to use the graphical user interface (GUI) of tfdbg, i.e., the
 Note: The TensorFlow debugger uses a
 [curses](https://en.wikipedia.org/wiki/Curses_\(programming_library\))-based text
 user interface. On Mac OS X, the `ncurses` library is required and can be
-installed with `brew install homebrew/dupes/ncurses`. On Windows, curses isn't as
+installed with `brew install ncurses`. On Windows, curses isn't as
 well supported, so a [readline](https://en.wikipedia.org/wiki/GNU_Readline)-based
 interface can be used with tfdbg by installing `pyreadline` with `pip`. If you
 use Anaconda3, you can install it with a command such as
@@ -33,8 +33,9 @@ and [`inf`s](https://en.wikipedia.org/wiki/Infinity), a frequently-encountered
 type of bug in TensorFlow model development.
 The following example is for users who use the low-level
 [`Session`](https://www.tensorflow.org/api_docs/python/tf/Session) API of
-TensorFlow. A later section of this document describes how to use **tfdbg**
-with a higher-level API, namely `Estimator`s.
+TensorFlow. Later sections of this document describe how to use **tfdbg**
+with higher-level APIs of TensorFlow, including `tf.estimator`,
+`tf.keras` / `keras` and `tf.contrib.slim`.
 To *observe* such an issue, run the following command without the debugger (the
 source code can be found
 [here](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/debug/examples/debug_mnist.py)):
@@ -209,6 +210,7 @@ Try the following commands at the `tfdbg>` prompt (referencing the code at
 | **`config`** | | **Set or show persistent TFDBG UI configuration.** | |
 | | `set` | Set the value of a config item: {`graph_recursion_depth`, `mouse_mode`}. | `config set graph_recursion_depth 3` |
 | | `show` | Show current persistent UI configuration. | `config show` |
+| **`version`** | | **Print the version of TensorFlow and its key dependencies.** | `version` |
 | **`help`** | | **Print general help information** | `help` |
 | | `help <command>` | Print help for given command. | `help lt` |
 
@@ -461,7 +463,6 @@ predict_results = classifier.predict(predict_input_fn, hooks=hooks)
 ```
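(For context, the `hooks` referenced above are built with tfdbg's `LocalCLIDebugHook`; the defining line sits outside this hunk, so this is a sketch rather than the file's exact code.)

```python
from tensorflow.python import debug as tf_debug

# Attach the tfdbg CLI to Estimator calls via a session-run hook.
hooks = [tf_debug.LocalCLIDebugHook()]
```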
 
 [debug_tflearn_iris.py](https://www.tensorflow.org/code/tensorflow/python/debug/examples/debug_tflearn_iris.py),
-based on [tf-learn's iris tutorial](https://www.tensorflow.org/versions/r1.8/get_started/tflearn),
 contains a full example of how to use the tfdbg with `Estimator`s.
 To run this example, do:
 
@@ -477,20 +478,31 @@ for more details.
 
 ## Debugging Keras Models with TFDBG
 
-To use TFDBG with [Keras](https://keras.io/), let the Keras backend use
-a TFDBG-wrapped Session object. For example, to use the CLI wrapper:
+To use TFDBG with
+[tf.keras](https://www.tensorflow.org/api_docs/python/tf/keras),
+let the Keras backend use a TFDBG-wrapped Session object. For example, to use
+the CLI wrapper:
 
 ``` python
 import tensorflow as tf
-from keras import backend as keras_backend
 from tensorflow.python import debug as tf_debug
 
-keras_backend.set_session(tf_debug.LocalCLIDebugWrapperSession(tf.Session()))
+tf.keras.backend.set_session(tf_debug.LocalCLIDebugWrapperSession(tf.Session()))
 
 # Define your keras model, called "model".
-model.fit(...)  # This will break into the TFDBG CLI.
+
+# Calls to `fit()`, `evaluate()` and `predict()` methods will break into the
+# TFDBG CLI.
+model.fit(...)
+model.evaluate(...)
+model.predict(...)
 ```
 
+With minor modification, the preceding code example also works for the
+[non-TensorFlow version of Keras](https://keras.io/) running against a
+TensorFlow backend. You just need to replace `tf.keras.backend` with
+`keras.backend`.
+
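(A sketch of that substitution, mirroring the removed lines above; it assumes the standalone `keras` package with the TensorFlow backend.)

```python
import tensorflow as tf
from keras import backend as keras_backend
from tensorflow.python import debug as tf_debug

# Same wrapping as with tf.keras, but through standalone Keras' backend module.
keras_backend.set_session(tf_debug.LocalCLIDebugWrapperSession(tf.Session()))
```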
 ## Debugging tf-slim with TFDBG
 
 TFDBG supports debugging of training and evaluation with
index 00d02b4..003ca26 100644 (file)
@@ -149,16 +149,17 @@ it to implement your own layer:
 ```py
 class MySimpleLayer(tf.keras.layers.Layer):
   def __init__(self, output_units):
+    super(MySimpleLayer, self).__init__()
     self.output_units = output_units
 
-  def build(self, input):
+  def build(self, input_shape):
     # The build method gets called the first time your layer is used.
     # Creating variables on build() allows you to make their shape depend
-    # on the input shape and hence remove the need for the user to specify
+    # on the input shape and hence removes the need for the user to specify
     # full shapes. It is possible to create variables during __init__() if
     # you already know their full shapes.
     self.kernel = self.add_variable(
-      "kernel", [input.shape[-1], self.output_units])
+      "kernel", [input_shape[-1], self.output_units])
 
   def call(self, input):
     # Override call() instead of __call__ so we can perform some bookkeeping.
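(The hunk cuts off mid-class; assuming `call()` applies `self.kernel` with a matmul, which the hunk does not show, using the layer might look like this sketch.)

```python
import tensorflow as tf
tf.enable_eager_execution()

layer = MySimpleLayer(output_units=10)  # class from the hunk above
y = layer(tf.ones([2, 5]))              # build() creates a [5, 10] kernel here
print(y.shape)                          # (2, 10), under the stated assumption
```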
@@ -315,9 +316,8 @@ for (batch, (images, labels)) in enumerate(dataset):
 
 
 The following example creates a multi-layer model that classifies the standard
-[MNIST handwritten digits](https://www.tensorflow.org/tutorials/layers). It
-demonstrates the optimizer and layer APIs to build trainable graphs in an eager
-execution environment.
+MNIST handwritten digits. It demonstrates the optimizer and layer APIs to build
+trainable graphs in an eager execution environment.
 
 ### Train a model
 
index e6246ef..492f97c 100644 (file)
@@ -486,7 +486,7 @@ subgraph inside.
 ![](../images/mnist_deep.png)
 
 For more information about visualizing your TensorFlow application with
-TensorBoard, see the [TensorBoard tutorial](../get_started/summaries_and_tensorboard.md).
+TensorBoard, see the [TensorBoard guide](./summaries_and_tensorboard.md).
 
 ## Programming with multiple graphs
 
index d584ebe..1d846df 100644 (file)
@@ -221,7 +221,7 @@ To *evaluate* the inference-mode loss and metrics for the data provided:
 ```python
 model.evaluate(x, y, batch_size=32)
 
-model.evaluate(dataset, steps=30
+model.evaluate(dataset, steps=30)
 ```
 
 And to *predict* the output of the last layer in inference for the data provided,
@@ -548,11 +548,9 @@ model.compile(optimizer=tf.train.RMSPropOptimizer(0.001),
 estimator = keras.estimator.model_to_estimator(model)
 ```
 
-Note: 
-* Enable [eager execution](./eager.md) for debugging
+Note: Enable [eager execution](./eager.md) for debugging
 [Estimator input functions](./premade_estimators.md#create_input_functions)
 and inspecting data.
-* Don't use batch normalization or try to finetune batch normalization models with estimators created from `tf.keras.estimator.model_to_estimator`. More details at [#17950](https://github.com/tensorflow/tensorflow/issues/17950)
 
 ### Multiple GPUs
 
@@ -583,15 +581,6 @@ model.compile(loss='binary_crossentropy', optimizer=optimizer)
 model.summary()
 ```
 
-Convert the Keras model to a `tf.estimator.Estimator` instance:
-
-```python
-keras_estimator = keras.estimator.model_to_estimator(
-  keras_model=model,
-  config=config,
-  model_dir='/tmp/model_dir')
-```
-
 Define an *input pipeline*. The `input_fn` returns a `tf.data.Dataset` object
 used to distribute the data across multiple devices—with each device processing
 a slice of the input batch.
@@ -617,6 +606,15 @@ strategy = tf.contrib.distribute.MirroredStrategy()
 config = tf.estimator.RunConfig(train_distribute=strategy)
 ```
 
+Convert the Keras model to a `tf.estimator.Estimator` instance:
+
+```python
+keras_estimator = keras.estimator.model_to_estimator(
+  keras_model=model,
+  config=config,
+  model_dir='/tmp/model_dir')
+```
+
 Finally, train the `Estimator` instance by providing the `input_fn` and `steps`
 arguments:
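(The call itself falls outside the hunk; a plausible sketch, with the step count invented:)

```python
keras_estimator.train(input_fn=input_fn, steps=10)
```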
 
index 27ef7bb..acc3d3c 100644 (file)
@@ -794,11 +794,12 @@ Here's the syntax:
 ```
 usage: saved_model_cli run [-h] --dir DIR --tag_set TAG_SET --signature_def
                            SIGNATURE_DEF_KEY [--inputs INPUTS]
-                           [--input_exprs INPUT_EXPRS] [--outdir OUTDIR]
+                           [--input_exprs INPUT_EXPRS]
+                           [--input_examples INPUT_EXAMPLES] [--outdir OUTDIR]
                            [--overwrite] [--tf_debug]
 ```
 
-The `run` command provides the following two ways to pass inputs to the model:
+The `run` command provides the following three ways to pass inputs to the model:
 
 * `--inputs` option enables you to pass numpy ndarray in files.
 * `--input_exprs` option enables you to pass Python expressions.
 * `--input_examples` option enables you to pass `tf.train.Example`.
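(As an illustration of combining these flags; the directory, tag set, signature, and input name are all hypothetical:)

```
$ saved_model_cli run --dir /tmp/saved_model --tag_set serve \
    --signature_def serving_default \
    --input_exprs 'x=np.ones((1,3))'
```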
@@ -847,7 +848,7 @@ dictionary is stored in the pickle file and the value corresponding to
 the *variable_name* will be used.
 
 
-#### `--inputs_exprs`
+#### `--input_exprs`
 
 To pass inputs through Python expressions, specify the `--input_exprs` option.
 This can be useful for when you don't have data
@@ -869,7 +870,7 @@ example:
 (Note that the `numpy` module is already available to you as `np`.)
 
 
-#### `--inputs_examples`
+#### `--input_examples`
 
 To pass `tf.train.Example` as inputs, specify the `--input_examples` option.
 For each input key, it takes a list of dictionaries, where each dictionary is an
index 918deda..af8f2ca 100644 (file)
@@ -13,8 +13,8 @@ TensorFlow has an op
 which is perfect for this purpose. As is usually the case with TensorBoard, we
 will ingest data using a summary op; in this case,
 ['tf.summary.histogram'](https://www.tensorflow.org/api_docs/python/tf/summary/histogram).
-For a primer on how summaries work, please see the general
-[TensorBoard tutorial](https://www.tensorflow.org/get_started/summaries_and_tensorboard).
+For a primer on how summaries work, please see the
+[TensorBoard guide](./summaries_and_tensorboard.md).
 
 Here is a code snippet that will generate some histogram summaries containing
 normally distributed data, where the mean of the distribution increases over
index 9aebf2b..2901848 100644 (file)
@@ -38,7 +38,7 @@ enable TensorFlow for C:
          OS="linux" # Change to "darwin" for macOS
          TARGET_DIRECTORY="/usr/local"
          curl -L \
-           "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-${OS}-x86_64-1.9.0-rc2.tar.gz" |
+           "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-${OS}-x86_64-1.9.0-rc0.tar.gz" |
            sudo tar -C $TARGET_DIRECTORY -xz
 
      The `tar` command extracts the TensorFlow C library into the `lib`
index 1907355..2c126df 100644 (file)
@@ -38,7 +38,7 @@ steps to install this library and enable TensorFlow for Go:
          TF_TYPE="cpu" # Change to "gpu" for GPU support
          TARGET_DIRECTORY='/usr/local'
          curl -L \
-           "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-$(go env GOOS)-x86_64-1.9.0-rc2.tar.gz" |
+           "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-${TF_TYPE}-$(go env GOOS)-x86_64-1.9.0-rc0.tar.gz" |
          sudo tar -C $TARGET_DIRECTORY -xz
 
      The `tar` command extracts the TensorFlow C library into the `lib`
index b9c9912..692dfc9 100644 (file)
@@ -36,7 +36,7 @@ following to the project's `pom.xml` to use the TensorFlow Java APIs:
 <dependency>
   <groupId>org.tensorflow</groupId>
   <artifactId>tensorflow</artifactId>
-  <version>1.9.0-rc2</version>
+  <version>1.9.0-rc0</version>
 </dependency>
 ```
 
@@ -65,7 +65,7 @@ As an example, these steps will create a Maven project that uses TensorFlow:
                <dependency>
                  <groupId>org.tensorflow</groupId>
                  <artifactId>tensorflow</artifactId>
-                 <version>1.9.0-rc2</version>
+                 <version>1.9.0-rc0</version>
                </dependency>
              </dependencies>
          </project>
@@ -124,12 +124,12 @@ instead:
 <dependency>
   <groupId>org.tensorflow</groupId>
   <artifactId>libtensorflow</artifactId>
-  <version>1.9.0-rc2</version>
+  <version>1.9.0-rc0</version>
 </dependency>
 <dependency>
   <groupId>org.tensorflow</groupId>
   <artifactId>libtensorflow_jni_gpu</artifactId>
-  <version>1.9.0-rc2</version>
+  <version>1.9.0-rc0</version>
 </dependency>
 ```
 
@@ -148,7 +148,7 @@ refer to the simpler instructions above instead.
 Take the following steps to install TensorFlow for Java on Linux or macOS:
 
   1. Download
-     [libtensorflow.jar](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-1.9.0-rc2.jar),
+     [libtensorflow.jar](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-1.9.0-rc0.jar),
      which is the TensorFlow Java Archive (JAR).
 
   2. Decide whether you will run TensorFlow for Java on CPU(s) only or with
@@ -167,7 +167,7 @@ Take the following steps to install TensorFlow for Java on Linux or macOS:
          OS=$(uname -s | tr '[:upper:]' '[:lower:]')
          mkdir -p ./jni
          curl -L \
-           "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow_jni-${TF_TYPE}-${OS}-x86_64-1.9.0-rc2.tar.gz" |
+           "https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow_jni-${TF_TYPE}-${OS}-x86_64-1.9.0-rc0.tar.gz" |
            tar -xz -C ./jni
 
 ### Install on Windows
@@ -175,13 +175,13 @@ Take the following steps to install TensorFlow for Java on Linux or macOS:
 Take the following steps to install TensorFlow for Java on Windows:
 
   1. Download
-     [libtensorflow.jar](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-1.9.0-rc2.jar),
+     [libtensorflow.jar](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-1.9.0-rc0.jar),
      which is the TensorFlow Java Archive (JAR).
   2. Download the following Java Native Interface (JNI) file appropriate for
-     [TensorFlow for Java on Windows](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow_jni-cpu-windows-x86_64-1.9.0-rc2.zip).
+     [TensorFlow for Java on Windows](https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow_jni-cpu-windows-x86_64-1.9.0-rc0.zip).
   3. Extract this .zip file.
 
-
+__Note__: The native library (`tensorflow_jni.dll`) requires `msvcp140.dll` at runtime, which is included in the [Visual C++ 2015 Redistributable](https://www.microsoft.com/en-us/download/details.aspx?id=48145) package. 
 
 ### Validate the installation
 
@@ -227,7 +227,7 @@ must be part of your `classpath`. For example, you can include the
 downloaded `.jar` in your `classpath` by using the `-cp` compilation flag
 as follows:
 
-<pre><b>javac -cp libtensorflow-1.9.0-rc2.jar HelloTF.java</b></pre>
+<pre><b>javac -cp libtensorflow-1.9.0-rc0.jar HelloTF.java</b></pre>
 
 
 ### Running
@@ -241,11 +241,11 @@ two files are available to the JVM:
 For example, the following command line executes the `HelloTF` program on Linux
 and macOS:
 
-<pre><b>java -cp libtensorflow-1.9.0-rc2.jar:. -Djava.library.path=./jni HelloTF</b></pre>
+<pre><b>java -cp libtensorflow-1.9.0-rc0.jar:. -Djava.library.path=./jni HelloTF</b></pre>
 
 And the following command line executes the `HelloTF` program on Windows:
 
-<pre><b>java -cp libtensorflow-1.9.0-rc2.jar;. -Djava.library.path=jni HelloTF</b></pre>
+<pre><b>java -cp libtensorflow-1.9.0-rc0.jar;. -Djava.library.path=jni HelloTF</b></pre>
 
 If the program prints <tt>Hello from <i>version</i></tt>, you've successfully
 installed TensorFlow for Java and are ready to use the API.  If the program
index ae3d50f..f21c073 100644 (file)
@@ -339,9 +339,7 @@ Docker will download the TensorFlow binary image the first time you launch it.
 
 #### GPU support
 
-Prior to installing TensorFlow with GPU support, ensure that your system meets all
-[NVIDIA software requirements](#NVIDIARequirements).  To launch a Docker container
-with NVidia GPU support, enter a command of the following format:
+To launch a Docker container with NVIDIA GPU support, enter a command of the following format (this [does not require any local CUDA installation](https://github.com/nvidia/nvidia-docker/wiki/CUDA#requirements)):
 
 <pre>
 $ <b>nvidia-docker run -it</b> <i>-p hostPort:containerPort TensorFlowGPUImage</i>
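(A concrete instance of that format; the port mapping and image tag are illustrative:)

<pre>
$ <b>nvidia-docker run -it -p 8888:8888 tensorflow/tensorflow:latest-gpu</b>
</pre>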
@@ -438,7 +436,7 @@ Take the following steps to install TensorFlow in an Anaconda environment:
 
      <pre>
      (tensorflow)$ <b>pip install --ignore-installed --upgrade \
-     https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.9.0rc2-cp34-cp34m-linux_x86_64.whl</b></pre>
+     https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.9.0rc0-cp34-cp34m-linux_x86_64.whl</b></pre>
 
 <a name="ValidateYourInstallation"></a>
 ## Validate your installation
@@ -491,7 +489,7 @@ TensorFlow programs:
 If the system outputs an error message instead of a greeting, see [Common
 installation problems](#common_installation_problems).
 
-To learn more, see [Get Started with TensorFlow](https://www.tensorflow.org/get_started).
+To learn more, see the [TensorFlow tutorials](../tutorials/).
 
 <a name="NVIDIARequirements"></a>
 ## TensorFlow GPU support
@@ -678,14 +676,14 @@ This section documents the relevant values for Linux installations.
 CPU only:
 
 <pre>
-https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.9.0rc2-cp27-none-linux_x86_64.whl
+https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.9.0rc0-cp27-none-linux_x86_64.whl
 </pre>
 
 
 GPU support:
 
 <pre>
-https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.9.0rc2-cp27-none-linux_x86_64.whl
+https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.9.0rc0-cp27-none-linux_x86_64.whl
 </pre>
 
 Note that GPU support requires the NVIDIA hardware and software described in
@@ -697,14 +695,14 @@ Note that GPU support requires the NVIDIA hardware and software described in
 CPU only:
 
 <pre>
-https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.9.0rc2-cp34-cp34m-linux_x86_64.whl
+https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.9.0rc0-cp34-cp34m-linux_x86_64.whl
 </pre>
 
 
 GPU support:
 
 <pre>
-https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.9.0rc2-cp34-cp34m-linux_x86_64.whl
+https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.9.0rc0-cp34-cp34m-linux_x86_64.whl
 </pre>
 
 Note that GPU support requires the NVIDIA hardware and software described in
@@ -716,14 +714,14 @@ Note that GPU support requires the NVIDIA hardware and software described in
 CPU only:
 
 <pre>
-https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.9.0rc2-cp35-cp35m-linux_x86_64.whl
+https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.9.0rc0-cp35-cp35m-linux_x86_64.whl
 </pre>
 
 
 GPU support:
 
 <pre>
-https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.9.0rc2-cp35-cp35m-linux_x86_64.whl
+https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.9.0rc0-cp35-cp35m-linux_x86_64.whl
 </pre>
 
 
@@ -735,14 +733,14 @@ Note that GPU support requires the NVIDIA hardware and software described in
 CPU only:
 
 <pre>
-https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.9.0rc2-cp36-cp36m-linux_x86_64.whl
+https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.9.0rc0-cp36-cp36m-linux_x86_64.whl
 </pre>
 
 
 GPU support:
 
 <pre>
-https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.9.0rc2-cp36-cp36m-linux_x86_64.whl
+https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.9.0rc0-cp36-cp36m-linux_x86_64.whl
 </pre>
 
 
index 3de6da1..c6f0c17 100644 (file)
@@ -119,7 +119,7 @@ Take the following steps to install TensorFlow with Virtualenv:
      TensorFlow in the active Virtualenv is as follows:
 
      <pre> $ <b>pip3 install --upgrade \
-     https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.9.0rc2-py3-none-any.whl</b></pre>
+     https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.9.0rc0-py3-none-any.whl</b></pre>
 
 If you encounter installation problems, see
 [Common Installation Problems](#common-installation-problems).
@@ -242,7 +242,7 @@ take the following steps:
      issue the following command:
 
      <pre> $ <b>sudo pip3 install --upgrade \
-     https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.9.0rc2-py3-none-any.whl</b> </pre>
+     https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.9.0rc0-py3-none-any.whl</b> </pre>
 
      If the preceding command fails, see
      [installation problems](#common-installation-problems).
@@ -350,7 +350,7 @@ Take the following steps to install TensorFlow in an Anaconda environment:
      TensorFlow for Python 2.7:
 
      <pre> (<i>targetDirectory</i>)$ <b>pip install --ignore-installed --upgrade \
-     https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.9.0rc2-py2-none-any.whl</b></pre>
+     https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.9.0rc0-py2-none-any.whl</b></pre>
 
 
 <a name="ValidateYourInstallation"></a>
@@ -403,8 +403,7 @@ writing TensorFlow programs:
 If the system outputs an error message instead of a greeting, see
 [Common installation problems](#common_installation_problems).
 
-To learn more, see [Get Started with TensorFlow](https://www.tensorflow.org/get_started).
-
+To learn more, see the [TensorFlow tutorials](../tutorials/).
 
 ## Common installation problems
 
@@ -518,7 +517,7 @@ The value you specify depends on your Python version.
 
 
 <pre>
-https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.9.0rc2-py2-none-any.whl
+https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.9.0rc0-py2-none-any.whl
 </pre>
 
 
@@ -526,5 +525,5 @@ https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.9.0rc2-py2-none-a
 
 
 <pre>
-https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.9.0rc2-py3-none-any.whl
+https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.9.0rc0-py3-none-any.whl
 </pre>
index 0caab6d..46c4944 100644 (file)
@@ -230,7 +230,7 @@ problems, despite the log message.
 If the system outputs an error message instead of a greeting, see [Common
 installation problems](#common_installation_problems).
 
-To learn more, see [Get Started with TensorFlow](https://www.tensorflow.org/get_started).
+To learn more, see the [TensorFlow tutorials](../tutorials/).
 
 ## Common installation problems
 
index 3520f97..fc1f6d0 100644 (file)
@@ -81,7 +81,7 @@ or
 [macOS](#PrepareMac)
 
 
-<a name="#PrepareLinux"></a>
+<a name="PrepareLinux"></a>
 ## Prepare environment for Linux
 
 Before building TensorFlow on Linux, install the following build
@@ -289,17 +289,27 @@ Note: If you're only interested in building the libraries for the TensorFlow C
 or Java APIs, see [Build the C or Java libraries](#BuildCorJava); you do not
 need to build the pip package in that case.
 
-To build a pip package for TensorFlow with CPU-only support,
-you would typically invoke the following command:
+### CPU-only support
+
+To build a pip package for TensorFlow with CPU-only support:
+
+<pre>
+$ bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
+</pre>
+
+To build a pip package for TensorFlow with CPU-only support for the Intel® MKL-DNN:
 
 <pre>
-$ <b>bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package</b>
+$ bazel build --config=mkl --config=opt //tensorflow/tools/pip_package:build_pip_package
 </pre>
 
-To build a pip package for TensorFlow with GPU support,
-invoke the following command:
+### GPU support
+
+To build a pip package for TensorFlow with GPU support:
 
-<pre>$ <b>bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package</b> </pre>
+<pre>
+$ bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
+</pre>
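(Either `bazel build` produces a `build_pip_package` script; the wheel consumed by the `pip install` step below is generated by running it, with the output directory matching that step:)

<pre>
$ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
</pre>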
 
 **NOTE on gcc 5 or later:** the binary pip packages available on the
 TensorFlow website are built with gcc 4, which uses the older ABI. To
@@ -328,10 +338,10 @@ Invoke `pip install` to install that pip package.
 The filename of the `.whl` file depends on your platform.
 For example, the following command will install the pip package
-for TensorFlow 1.9.0rc2 on Linux:
+for TensorFlow 1.9.0rc0 on Linux:
 
 <pre>
-$ <b>sudo pip install /tmp/tensorflow_pkg/tensorflow-1.9.0rc2-py2-none-any.whl</b>
+$ <b>sudo pip install /tmp/tensorflow_pkg/tensorflow-1.9.0rc0-py2-none-any.whl</b>
 </pre>
 
 ## Validate your installation
@@ -362,7 +372,7 @@ TensorFlow programs:
 
 <pre>Hello, TensorFlow!</pre>
 
-To learn more, see [Get Started with TensorFlow](https://www.tensorflow.org/get_started).
+To learn more, see the [TensorFlow tutorials](../tutorials/).
 
 If the system outputs an error message instead of a greeting, see [Common
 installation problems](#common_installation_problems).
@@ -373,9 +383,9 @@ The build and installation problems you encounter typically depend on the
 operating system.  See the "Common installation problems" section
 of one of the following guides:
 
-  * @{$install_linux#CommonInstallationProblems$Installing TensorFlow on Linux}
-  * @{$install_mac#CommonInstallationProblems$Installing TensorFlow on Mac OS}
-  * @{$install_windows#CommonInstallationProblems$Installing TensorFlow on Windows}
+  * @{$install_linux#common_installation_problems$Installing TensorFlow on Linux}
+  * @{$install_mac#common_installation_problems$Installing TensorFlow on Mac OS}
+  * @{$install_windows#common_installation_problems$Installing TensorFlow on Windows}
 
 Beyond the errors documented in those guides, the following table
 notes additional errors specific to building TensorFlow. Note that we
index 7fe94f0..7b7b17c 100644 (file)
@@ -157,7 +157,7 @@ TensorFlow programs:
 If the system outputs an error message instead of a greeting, see [Common
 installation problems](#common_installation_problems).
 
-To learn more, see [Get Started with TensorFlow](https://www.tensorflow.org/get_started).
+To learn more, see the [TensorFlow tutorials](../tutorials/).
 
 ## Common installation problems
 
index 585470d..97340ef 100644 (file)
@@ -4,6 +4,7 @@ tflite/index.md
 tflite/devguide.md
 tflite/demo_android.md
 tflite/demo_ios.md
+tflite/performance.md
 >>>
 ### TensorFlow Mobile
 mobile_intro.md
index cf0db59..efef5dd 100644 (file)
@@ -27,7 +27,7 @@ called `libandroid_tensorflow_inference_java.jar`. There are three ways to
 include this functionality in your program:
 
 1. Include the jcenter AAR which contains it, as in this
- [example app](https://github.com/googlecodelabs/tensorflow-for-poets-2/blob/master/android/build.gradle#L59-L65)
+ [example app](https://github.com/googlecodelabs/tensorflow-for-poets-2/blob/master/android/tfmobile/build.gradle#L59-L65)
 
 2. Download the nightly precompiled version from
 [ci.tensorflow.org](http://ci.tensorflow.org/view/Nightly/job/nightly-android/lastSuccessfulBuild/artifact/out/).
index 241f01d..baad443 100644 (file)
@@ -38,7 +38,8 @@ speech-driven interface, and many of these require on-device processing. Most of
 the time a user isn’t giving commands, and so streaming audio continuously to a
 remote server would be a waste of bandwidth, since it would mostly be silence or
 background noises. To solve this problem it’s common to have a small neural
-network running on-device @{$tutorials/audio_recognition$listening out for a particular keyword}.
+network running on-device
+[listening out for a particular keyword](../tutorials/sequences/audio_recognition).
 Once that keyword has been spotted, the rest of the
 conversation can be transmitted over to the server for further processing if
 more computing power is needed.
index 8b22c04..2b84dbb 100644 (file)
 inline constants so everything's in one file. To handle the conversion, you
 need the `freeze_graph.py` script, which is held in
 [`tensorflow/python/tools/freeze_graph.py`](https://www.tensorflow.org/code/tensorflow/python/tools/freeze_graph.py). You’ll run it like this:
 
-    bazel build tensorflow/tools:freeze_graph
-    bazel-bin/tensorflow/tools/freeze_graph \
+    bazel build tensorflow/python/tools:freeze_graph
+    bazel-bin/tensorflow/python/tools/freeze_graph \
     --input_graph=/tmp/model/my_graph.pb \
     --input_checkpoint=/tmp/model/model.ckpt-1000 \
     --output_graph=/tmp/frozen_graph.pb \
index 7f2f888..fdf0bcf 100644 (file)
@@ -1,7 +1,7 @@
 # Android Demo App
 
 An example Android application using TensorFLow Lite is available
-[on GitHub](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/java/demo/app).
+[on GitHub](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/java/demo).
 The demo is a sample camera app that classifies images continuously
 using either a quantized Mobilenet model or a floating point Inception-v3 model.
 To run the demo, a device running Android 5.0 (API 21) or higher is required.
@@ -44,20 +44,22 @@ app:
   Android Studio project.
 * Install all the Gradle extensions it requests.
 
-To get a model, either:
+Now you can build and run the demo app. 
 
-* Download the quantized [Mobilenet TensorFlow Lite model](https://storage.googleapis.com/download.tensorflow.org/models/tflite/mobilenet_v1_224_android_quant_2017_11_08.zip)
-  and unzip and copy `mobilenet_quant_v1_224.tflite` to the assets directory:
-  `tensorflow/contrib/lite/java/demo/app/src/main/assets/`.
-* Or, download the floating point [Inception-v3 model](https://storage.googleapis.com/download.tensorflow.org/models/tflite/inception_v3_slim_2016_android_2017_11_10.zip)
-  and unzip and copy `inceptionv3_non_slim_2015.tflite` to the assets
-  directory. Change the chosen classifier in
-  [Camera2BasicFragment.java](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/lite/java/demo/app/src/main/java/com/example/android/tflitecamerademo/Camera2BasicFragment.java)<br>
+The build process downloads the quantized [Mobilenet TensorFlow Lite model](https://storage.googleapis.com/download.tensorflow.org/models/tflite/mobilenet_v1_224_android_quant_2017_11_08.zip), and unzips it into the assets directory: `tensorflow/contrib/lite/java/demo/app/src/main/assets/`.
+
+Some additional details are available on the
+[TF Lite Android App page](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/java/demo/README.md).
+
+### Using other models
+
+To use a different model:
+
+* Download the floating point [Inception-v3 model](https://storage.googleapis.com/download.tensorflow.org/models/tflite/inception_v3_slim_2016_android_2017_11_10.zip).
+* Unzip and copy `inceptionv3_non_slim_2015.tflite` to the assets directory. 
+* Change the chosen classifier in [Camera2BasicFragment.java](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/lite/java/demo/app/src/main/java/com/example/android/tflitecamerademo/Camera2BasicFragment.java)<br>
   from: `classifier = new ImageClassifierQuantizedMobileNet(getActivity());`<br>
   to: `classifier = new ImageClassifierFloatInception(getActivity());`.
 
-Now you can build and run the demo app.
-
 
 ## Build TensorFlow Lite and the demo app from source
 
index 4133bc1..b168d6c 100644 (file)
@@ -54,10 +54,11 @@ both floating point and quantized inference.
 ### Train a custom model
 
 A developer may choose to train a custom model using Tensorflow (see the
-@{$tutorials} for examples of building and training models). If you have already
-written a model, the first step is to export this to a @{tf.GraphDef} file. This
-is required because some formats do not store the model structure outside the
-code, and we must communicate with other parts of the framework. See
+[TensorFlow tutorials](../../tutorials/) for examples of building and training
+models). If you have already written a model, the first step is to export this
+to a @{tf.GraphDef} file. This is required because some formats do not store the
+model structure outside the code, and we must communicate with other parts of the
+framework. See
 [Exporting the Inference Graph](https://github.com/tensorflow/models/blob/master/research/slim/README.md)
 to create .pb file for the custom model.
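(A minimal sketch of writing a `tf.GraphDef` to a .pb file; the paths are illustrative, chosen to match the freeze_graph example earlier in this commit:)

```python
import tensorflow as tf

with tf.Session() as sess:
    # ... build or load the model here ...
    tf.train.write_graph(sess.graph_def, '/tmp/model', 'my_graph.pb',
                         as_text=False)
```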
 
index 5622034..3d17330 100644 (file)
@@ -37,8 +37,9 @@ a custom (less-dynamic) memory allocator to ensure minimal load, initialization,
 and execution latency.
 
 TensorFlow Lite provides an interface to leverage hardware acceleration, if
-available on the device. It does so via the Android Neural Networks library,
-released as part of Android O-MR1.
+available on the device. It does so via the
+[Android Neural Networks API](https://developer.android.com/ndk/guides/neuralnetworks/index.html),
+available on Android 8.1 (API level 27) and higher.
 
 ## Why do we need a new mobile-specific library?
 
@@ -116,6 +117,10 @@ following:
       Wear](https://research.googleblog.com/2017/02/on-device-machine-intelligence.html)
       to all first-party and third-party apps.
 
+    Also see the complete list of
+    [TensorFlow Lite's supported models](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/lite/g3doc/models.md),
+    including the model sizes, performance numbers, and downloadable model files.
+
 - Quantized versions of the MobileNet model, which runs faster than the
   non-quantized (float) version on CPU.
 
@@ -131,10 +136,10 @@ compatibility with this release.
 ## Getting Started
 
 We recommend you try out TensorFlow Lite with the pre-tested models indicated
-above. If you have an existing mode, you will need to test whether your model is
-compatible with both the converter and the supported operator set.  To test your
-model, see the [documentation on
-GitHub](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite).
+above. If you have an existing model, you will need to test whether your model
+is compatible with both the converter and the supported operator set.  To test
+your model, see the
+[documentation on GitHub](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite).
 
 ### Retrain Inception-V3 or MobileNet for a custom data set
 
diff --git a/tensorflow/docs_src/mobile/tflite/performance.md b/tensorflow/docs_src/mobile/tflite/performance.md
new file mode 100644 (file)
index 0000000..79bacaa
--- /dev/null
@@ -0,0 +1,174 @@
+# Performance
+
+This document lists TensorFlow Lite performance benchmarks when running
+well-known models on some Android and iOS devices.
+
+These performance benchmark numbers were generated with the
+[Android TFLite benchmark binary](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/tools/benchmark)
+and the [iOS benchmark app](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/tools/benchmark/ios).
+
+# Android performance benchmarks
+
+For Android benchmarks, the CPU affinity is set to use big cores on the device to
+reduce variance (see [details](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/tools/benchmark#reducing-variance-between-runs-on-android)).
+
+It assumes that the models were downloaded and unzipped to the
+`/data/local/tmp/tflite_models` directory. The benchmark binary is built
+using [these instructions](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/tools/benchmark#on-android)
+and is assumed to be in the `/data/local/tmp` directory.
+
+To run the benchmark:
+
+```
+adb shell taskset ${CPU_MASK} /data/local/tmp/benchmark_model \
+  --num_threads=1 \
+  --graph=/data/local/tmp/tflite_models/${GRAPH} \
+  --warmup_runs=1 \
+  --num_runs=50 \
+  --use_nnapi=false
+```
+
+Here, `${GRAPH}` is the name of the model and `${CPU_MASK}` is the CPU affinity
+chosen according to the following table:
+
+Device   | CPU_MASK
+---------|---------
+Pixel 2  | f0
+Pixel xl | 0c
+
+
+<table>
+  <thead>
+    <tr>
+      <th>Model Name</th>
+      <th>Device </th>
+      <th>Mean inference time (std dev)</th>
+    </tr>
+  </thead>
+  <tr>
+    <td rowspan = 2>
+      <a href="http://download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_1.0_224.tgz">Mobilenet_1.0_224(float)</a>
+    </td>
+    <td>Pixel 2 </td>
+    <td>166.5 ms (2.6 ms)</td>
+  </tr>
+   <tr>
+     <td>Pixel xl </td>
+     <td>122.9 ms (1.8 ms)  </td>
+  </tr>
+  <tr>
+    <td rowspan = 2>
+      <a href="http://download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_1.0_224_quant.tgz">Mobilenet_1.0_224 (quant)</a>
+    </td>
+    <td>Pixel 2 </td>
+    <td>69.5 ms (0.9 ms)</td>
+  </tr>
+   <tr>
+     <td>Pixel xl </td>
+     <td>78.9 ms (2.2 ms)  </td>
+  </tr>
+  <tr>
+    <td rowspan = 2>
+      <a href="https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/nasnet_mobile_2018_04_27.tgz">NASNet mobile</a>
+    </td>
+    <td>Pixel 2 </td>
+    <td>273.8 ms (3.5 ms)</td>
+  </tr>
+   <tr>
+     <td>Pixel xl </td>
+     <td>210.8 ms (4.2 ms)</td>
+  </tr>
+  <tr>
+    <td rowspan = 2>
+      <a href="https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/squeezenet_2018_04_27.tgz">SqueezeNet</a>
+    </td>
+    <td>Pixel 2 </td>
+    <td>234.0 ms (2.1 ms)</td>
+  </tr>
+   <tr>
+     <td>Pixel xl </td>
+     <td>158.0 ms (2.1 ms)</td>
+  </tr>
+  <tr>
+    <td rowspan = 2>
+      <a href="https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/inception_resnet_v2_2018_04_27.tgz">Inception_ResNet_V2</a>
+    </td>
+    <td>Pixel 2 </td>
+    <td>2846.0 ms (15.0 ms)</td>
+  </tr>
+   <tr>
+     <td>Pixel xl </td>
+     <td>1973.0 ms (15.0 ms)  </td>
+  </tr>
+  <tr>
+    <td rowspan = 2>
+      <a href="https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/inception_v4_2018_04_27.tgz">Inception_V4</a>
+    </td>
+    <td>Pixel 2 </td>
+    <td>3180.0 ms (11.7 ms)</td>
+  </tr>
+   <tr>
+     <td>Pixel xl </td>
+     <td>2262.0 ms (21.0 ms)  </td>
+  </tr>
+
+ </table>
+
+# iOS benchmarks
+
+To run iOS benchmarks, the [benchmark
+app](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/tools/benchmark/ios)
+was modified to include the appropriate model, and `benchmark_params.json` was
+modified to set `num_threads` to 1.
+
+<table>
+  <thead>
+    <tr>
+      <th>Model Name</th>
+      <th>Device </th>
+      <th>Mean inference time (std dev)</th>
+    </tr>
+  </thead>
+  <tr>
+    <td>
+      <a href="http://download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_1.0_224.tgz">Mobilenet_1.0_224(float)</a>
+    </td>
+    <td>iPhone 8 </td>
+    <td>32.2 ms (0.8 ms)</td>
+  </tr>
+  <tr>
+    <td>
+      <a href="http://download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_1.0_224_quant.tgz)">Mobilenet_1.0_224 (quant)</a>
+    </td>
+    <td>iPhone 8 </td>
+    <td>24.4 ms (0.8 ms)</td>
+  </tr>
+  <tr>
+    <td>
+      <a href="https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/nasnet_mobile_2018_04_27.tgz">NASNet mobile</a>
+    </td>
+    <td>iPhone 8 </td>
+    <td>60.3 ms (0.6 ms)</td>
+  </tr>
+  <tr>
+    <td>
+      <a href="https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/squeezenet_2018_04_27.tgz">SqueezeNet</a>
+    </td>
+    <td>iPhone 8 </td>
+    <td>44.3 ms (0.7 ms)</td>
+  </tr>
+  <tr>
+    <td>
+      <a href="https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/inception_resnet_v2_2018_04_27.tgz">Inception_ResNet_V2</a>
+    </td>
+    <td>iPhone 8</td>
+    <td>562.4 ms (18.2 ms)</td>
+  </tr>
+  <tr>
+    <td>
+      <a href="https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/inception_v4_2018_04_27.tgz">Inception_V4</a>
+    </td>
+    <td>iPhone 8 </td>
+    <td>661.0 ms (29.2 ms)</td>
+  </tr>
+ </table>
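(A worked instance of the Android command above, using the Pixel 2 mask from the table; the .tflite filename inside the Mobilenet archive is assumed:)

```
adb shell taskset f0 /data/local/tmp/benchmark_model \
  --num_threads=1 \
  --graph=/data/local/tmp/tflite_models/mobilenet_v1_1.0_224.tflite \
  --warmup_runs=1 \
  --num_runs=50 \
  --use_nnapi=false
```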
index 2fea02d..c97f741 100644 (file)
@@ -227,8 +227,8 @@ of 30.0f, and an 8-bit array, the quantized values represent the following:
   <table>
     <tr><th>Quantized</th><th>Float</th></tr>
     <tr><td>0</td><td>-10.0</td></tr>
-    <tr><td>255</td><td>30.0</td></tr>
     <tr><td>128</td><td>10.0</td></tr>
+    <tr><td>255</td><td>30.0</td></tr>
   </table>
   <figcaption>
     <b>Table 2</b>: Example quantized value range
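(The mapping in Table 2 is linear; a small sketch of the implied de-quantization, with code 128 landing near, not exactly at, 10.0:)

```python
def dequantize(q, min_val=-10.0, max_val=30.0, levels=255):
    # Linear map from an 8-bit code to the float range [min_val, max_val].
    return min_val + q * (max_val - min_val) / levels

print(dequantize(0), dequantize(128), dequantize(255))  # -10.0 10.078... 30.0
```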
index 5887c3d..4c4f3f3 100644 (file)
@@ -581,12 +581,21 @@ Computes a sum across replicas.
 Arguments | Type    | Semantics
 --------- | ------- | -----------------------------
 `operand` | `XlaOp` | Array to sum across replicas.
+`replica_group_ids` | `int64` vector | Group ID for each replica.
 
 The output shape is the same as the input shape. For example, if there are two
 replicas and the operand has the value `(1.0, 2.5)` and `(3.0, 5.25)`
 respectively on the two replicas, then the output value from this op will be
 `(4.0, 7.75)` on both replicas.
 
+`replica_group_ids` identifies the group ID of each replica. The vector must
+either be empty (all replicas belong to a single group) or contain one element
+per replica. For example, with eight replicas and
+`replica_group_ids` = {0, 1, 2, 3, 0, 1, 2, 3}, there are four subgroups of
+replica IDs: {0, 4}, {1, 5}, {2, 6}, and {3, 7}. The size of each subgroup
+*must* be identical; for example, `replica_group_ids` = {0, 1, 2, 0} for four
+replicas is invalid.
+
 Computing the result of CrossReplicaSum requires having one input from each
 replica, so if one replica executes a CrossReplicaSum node more times than
 another, then the former replica will wait forever. Since the replicas are all
@@ -1299,12 +1308,10 @@ See also
 :                   :                        : parameters of type T and M of  :
 :                   :                        : arbitrary type                 :
 | `dimensions`      | `int64` array          | array of map dimensions        |
-| `static_operands` | sequence of M `XlaOp`s | M arrays of arbitrary type     |
 
 Applies a scalar function over the given `operands` arrays, producing an array
 of the same dimensions where each element is the result of the mapped function
-applied to the corresponding elements in the input arrays with `static_operands`
-given as additional input to `computation`.
+applied to the corresponding elements in the input arrays.
 
 The mapped function is an arbitrary computation with the restriction that it has
 N inputs of scalar type `T` and a single output with type `S`. The output has
@@ -2003,13 +2010,35 @@ Slice(b, {2, 1}, {4, 3}) produces:
 See also
 [`XlaBuilder::Sort`](https://www.tensorflow.org/code/tensorflow/compiler/xla/client/xla_client/xla_builder.h).
 
-Sorts the elements in the operand.
+There are two versions of the Sort instruction: a single-operand and a
+two-operand version.
 
 <b>`Sort(operand)`</b>
 
 Arguments | Type    | Semantics
+--------- | ------- | --------------------
+`operand` | `XlaOp` | The operand to sort.
+
+Sorts the elements in the operand in ascending order. The operand must be rank-1.
+If the operand's elements have floating point type, and the operand contains
+NaN elements, the order of elements in the output is implementation-defined.
+
+<b>`Sort(key, value)`</b>
+
+Sorts both the key and the value operands. The keys are sorted as in the
+single-operand version. The values are sorted according to the order of their
+corresponding keys. For example, if the inputs are `keys = [3, 1]` and
+`values = [42, 50]`, then the output of the sort is the tuple `{[1, 3], [50, 42]}`.
+The sort is not guaranteed to be stable, that is, if the keys array contains
+duplicates, the order of their corresponding values may not be preserved.
+
+Arguments | Type    | Semantics
 --------- | ------- | -------------------
-`operand` | `XlaOp` | The operand to sort
+`keys`    | `XlaOp` | The sort keys.
+`values`  | `XlaOp` | The values to sort.
+
+The `keys` and `values` operands must both be rank-1 and must have the same
+dimensions, but may have different element types.
 
 ## Transpose
 
similarity index 77%
rename from tensorflow/docs_src/get_started/_index.yaml
rename to tensorflow/docs_src/tutorials/_index.yaml
index 4060804..6fc8155 100644 (file)
@@ -75,11 +75,11 @@ landing_page:
               <a href="/guide/keras">TensorFlow Keras guide</a>.
             </p>
             <ol style="padding-left:20px;">
-              <li><a href="/get_started/basic_classification">Basic classification</a></li>
-              <li><a href="/get_started/basic_text_classification">Text classification</a></li>
-              <li><a href="/get_started/basic_regression">Regression</a></li>
-              <li><a href="/get_started/overfit_and_underfit">Overfitting and underfitting</a></li>
-              <li><a href="/get_started/save_and_restore_models">Save and load</a></li>
+              <li><a href="./keras/basic_classification">Basic classification</a></li>
+              <li><a href="./keras/basic_text_classification">Text classification</a></li>
+              <li><a href="./keras/basic_regression">Regression</a></li>
+              <li><a href="./keras/overfit_and_underfit">Overfitting and underfitting</a></li>
+              <li><a href="./keras/save_and_restore_models">Save and load</a></li>
             </ol>
           </div>
           <div class="devsite-landing-row-item-buttons" style="margin-top:0;">
@@ -109,7 +109,7 @@ landing_page:
         model.evaluate(x_test, y_test)
         </pre>
         {% dynamic if request.tld != 'cn' %}
-        <a class="colab-button" target="_blank" href="https://colab.sandbox.google.com/github/tensorflow/models/blob/master/samples/core/get_started/_index.ipynb">Run in a <span>Notebook</span></a>
+        <a class="colab-button" target="_blank" href="https://colab.research.google.com/github/tensorflow/models/blob/master/samples/core/get_started/_index.ipynb">Run in a <span>Notebook</span></a>
         {% dynamic endif %}
 
   - items:
@@ -124,38 +124,38 @@ landing_page:
             <ol style="padding-left:20px;">
               <li>
                 {% dynamic if request.tld == 'cn' %}
-                <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/1_basics.ipynb" class="external">Eager execution basics</a>
+                <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/eager_basics.ipynb" class="external">Eager execution basics</a>
                 {% dynamic else %}
-                <a href="https://colab.sandbox.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/1_basics.ipynb" class="external">Eager execution basics</a>
+                <a href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/eager_basics.ipynb" class="external">Eager execution basics</a>
                 {% dynamic endif %}
               </li>
               <li>
                 {% dynamic if request.tld == 'cn' %}
-                <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/2_gradients.ipynb" class="external">Automatic differentiation and gradient tapes</a>
+                <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/automatic_differentiation.ipynb" class="external">Automatic differentiation and gradient tape</a>
                 {% dynamic else %}
-                <a href="https://colab.sandbox.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/2_gradients.ipynb" class="external">Automatic differentiation and gradient tapes</a>
+                <a href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/automatic_differentiation.ipynb" class="external">Automatic differentiation and gradient tape</a>
                 {% dynamic endif %}
               </li>
               <li>
                 {% dynamic if request.tld == 'cn' %}
-                <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/3_training_models.ipynb" class="external">Variables, models, and training</a>
+                <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/custom_training.ipynb" class="external">Custom training: basics</a>
                 {% dynamic else %}
-                <a href="https://colab.sandbox.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/3_training_models.ipynb" class="external">Variables, models, and training</a>
+                <a href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/custom_training.ipynb" class="external">Custom training: basics</a>
                 {% dynamic endif %}
               </li>
               <li>
                 {% dynamic if request.tld == 'cn' %}
-                <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/4_high_level.ipynb" class="external">Custom layers</a>
+                <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/custom_layers.ipynb" class="external">Custom layers</a>
                 {% dynamic else %}
-                <a href="https://colab.sandbox.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/4_high_level.ipynb" class="external">Custom layers</a>
+                <a href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/custom_layers.ipynb" class="external">Custom layers</a>
                 {% dynamic endif %}
               </li>
-              <li><a href="/get_started/eager">Custom training walkthrough</a></li>
+              <li><a href="./eager/custom_training_walkthrough">Custom training: walkthrough</a></li>
               <li>
                 {% dynamic if request.tld == 'cn' %}
                 <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/nmt_with_attention/nmt_with_attention.ipynb" class="external">Example: Neural machine translation w/ attention</a>
                 {% dynamic else %}
-                <a href="https://colab.sandbox.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/nmt_with_attention/nmt_with_attention.ipynb" class="external">Example: Neural machine translation w/ attention</a>
+                <a href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/nmt_with_attention/nmt_with_attention.ipynb" class="external">Example: Neural machine translation w/ attention</a>
                 {% dynamic endif %}
               </li>
             </ol>
@@ -170,13 +170,15 @@ landing_page:
           <div class="devsite-landing-row-item-description-content">
             <p>
               Estimators can train large models on multiple machines in a
-              production environment. Try the examples below and read the
-              <a href="/guide/estimators">Estimators guide</a>.
+              production environment. Read the
+              <a href="/guide/estimators">Estimators guide</a> for details.
             </p>
             <ol style="padding-left: 20px;">
-              <li><a href="/tutorials/text_classification_with_tf_hub">How to build a simple text classifier with TF-Hub</a></li>
+              <li><a href="/tutorials/images/layers">Build a Convolutional Neural Network using Estimators</a></li>
+              <li><a href="/hub/tutorials/text_classification_with_tf_hub">How to build a simple text classifier with TF-Hub</a></li>
               <li><a href="https://github.com/tensorflow/models/tree/master/official/boosted_trees">Classifying Higgs boson processes</a></li>
-              <li><a href="/tutorials/wide_and_deep">Wide and deep learning using estimators</a></li>
+              <li><a href="/tutorials/representation/wide_and_deep">Wide and deep learning using Estimators</a></li>
+              <li><a href="/tutorials/representation/linear">Large-scale linear models</a></li>
             </ol>
           </div>
           <div class="devsite-landing-row-item-buttons">
@@ -187,7 +189,7 @@ landing_page:
   - description: >
       <h2 class="hide-from-toc">Google Colab&#58; An easy way to learn and use TensorFlow</h2>
       <p>
-        <a href="https://colab.sandbox.google.com/notebooks/welcome.ipynb" class="external">Colaboratory</a>
+        <a href="https://colab.research.google.com/notebooks/welcome.ipynb" class="external">Colaboratory</a>
         is a Google research project created to help disseminate machine learning
         education and research. It's a Jupyter notebook environment that requires
         no setup to use and runs entirely in the cloud.
diff --git a/tensorflow/docs_src/tutorials/_toc.yaml b/tensorflow/docs_src/tutorials/_toc.yaml
new file mode 100644 (file)
index 0000000..d46d570
--- /dev/null
@@ -0,0 +1,93 @@
+toc:
+- title: Get started with TensorFlow
+  path: /tutorials/
+
+- title: Learn and use ML
+  style: accordion
+  section:
+  - title: Overview
+    path: /tutorials/keras/
+  - title: Basic classification
+    path: /tutorials/keras/basic_classification
+  - title: Text classification
+    path: /tutorials/keras/basic_text_classification
+  - title: Regression
+    path: /tutorials/keras/basic_regression
+  - title: Overfitting and underfitting
+    path: /tutorials/keras/overfit_and_underfit
+  - title: Save and restore models
+    path: /tutorials/keras/save_and_restore_models
+
+- title: Research and experimentation
+  style: accordion
+  section:
+  - title: Overview
+    path: /tutorials/eager/
+  - title: Eager execution
+    path: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/eager_intro.ipynb
+    status: external
+  - title: Automatic differentiation
+    path: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/automatic_differentiation.ipynb
+    status: external
+  - title: "Custom training: basics"
+    path: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/custom_training.ipynb
+    status: external
+  - title: Custom layers
+    path: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/custom_layers.ipynb
+    status: external
+  - title: "Custom training: walkthrough"
+    path: /tutorials/eager/custom_training_walkthrough
+  - title: Neural machine translation
+    path: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/nmt_with_attention/nmt_with_attention.ipynb
+    status: external
+
+- title: Images
+  style: accordion
+  section:
+  - title: Build a CNN using Estimators
+    path: /tutorials/images/layers
+  - title: Image recognition
+    path: /tutorials/images/image_recognition
+  - title: Image retraining
+    path: /hub/tutorials/image_retraining
+  - title: Advanced CNN
+    path: /tutorials/images/deep_cnn
+
+- title: Sequences
+  style: accordion
+  section:
+  - title: Recurrent neural network
+    path: /tutorials/sequences/recurrent
+  - title: Drawing classification
+    path: /tutorials/sequences/recurrent_quickdraw
+  - title: Simple audio recognition
+    path: /tutorials/sequences/audio_recognition
+  - title: Neural machine translation
+    path: https://github.com/tensorflow/nmt
+    status: external
+
+- title: Data representation
+  style: accordion
+  section:
+  - title: Linear models
+    path: /tutorials/representation/wide
+  - title: Wide and deep learning
+    path: /tutorials/representation/wide_and_deep
+  - title: Vector representations of words
+    path: /tutorials/representation/word2vec
+  - title: Kernel methods
+    path: /tutorials/representation/kernel_methods
+  - title: Large-scale linear models
+    path: /tutorials/representation/linear
+
+- title: Non-ML
+  style: accordion
+  section:
+  - title: Mandelbrot set
+    path: /tutorials/non-ml/mandelbrot
+  - title: Partial differential equations
+    path: /tutorials/non-ml/pdes
+
+- break: true
+- title: Next steps
+  path: /tutorials/next_steps
diff --git a/tensorflow/docs_src/tutorials/eager/custom_training_walkthrough.md b/tensorflow/docs_src/tutorials/eager/custom_training_walkthrough.md
new file mode 100644 (file)
index 0000000..b45fbef
--- /dev/null
@@ -0,0 +1,3 @@
+# Custom training: walkthrough
+
+[Colab notebook](https://colab.research.google.com/github/tensorflow/models/blob/master/samples/core/get_started/eager.ipynb)
diff --git a/tensorflow/docs_src/tutorials/eager/index.md b/tensorflow/docs_src/tutorials/eager/index.md
new file mode 100644 (file)
index 0000000..5445e0c
--- /dev/null
@@ -0,0 +1,13 @@
+# Research and experimentation
+
+Eager execution provides an imperative, define-by-run interface for advanced
+operations. Write custom layers, forward passes, and training loops with
+auto&nbsp;differentiation. Start with these notebooks, then read the
+[eager execution guide](../../guide/eager).
+
+1. <span>[Eager execution](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/eager_intro.ipynb){:.external}</span>
+2. <span>[Automatic differentiation and gradient tape](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/automatic_differentiation.ipynb){:.external}</span>
+3. <span>[Custom training: basics](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/custom_training.ipynb){:.external}</span>
+4. <span>[Custom layers](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/notebooks/custom_layers.ipynb){:.external}</span>
+5. [Custom training: walkthrough](/tutorials/eager/custom_training_walkthrough)
+6. <span>[Advanced example: Neural machine translation with attention](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/nmt_with_attention/nmt_with_attention.ipynb){:.external}</span>
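+
+As a minimal sketch of the define-by-run style these notebooks teach
+(assuming TensorFlow 1.9, with eager execution enabled at program startup):
+
+```python
+import tensorflow as tf
+
+tf.enable_eager_execution()
+
+x = tf.constant(3.0)
+with tf.GradientTape() as tape:
+    tape.watch(x)   # constants must be watched explicitly
+    y = x * x       # runs immediately; no graph or session needed
+
+print(tape.gradient(y, x))  # tf.Tensor(6.0, shape=(), dtype=float32)
+```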
diff --git a/tensorflow/docs_src/tutorials/image_retraining.md b/tensorflow/docs_src/tutorials/image_retraining.md
deleted file mode 100644 (file)
index 27784ee..0000000
+++ /dev/null
@@ -1,4 +0,0 @@
-# How to Retrain Inception's Final Layer for New Categories
-
-**NOTE: This tutorial has moved to**
-https://github.com/tensorflow/hub/tree/master/docs/tutorials/image_retraining.md
similarity index 98%
rename from tensorflow/docs_src/tutorials/deep_cnn.md
rename to tensorflow/docs_src/tutorials/images/deep_cnn.md
index 44a32d9..1590f15 100644 (file)
@@ -1,7 +1,4 @@
-# Convolutional Neural Networks
-
-> **NOTE:** This tutorial is intended for *advanced* users of TensorFlow
-and assumes expertise and experience in machine learning.
+# Advanced Convolutional Neural Networks
 
 ## Overview
 
@@ -438,9 +435,6 @@ with a batch size of 64 and compare the training speed.
 
 ## Next Steps
 
-[Congratulations!](https://www.youtube.com/watch?v=9bZkp7q19f0) You have
-completed the CIFAR-10 tutorial.
-
 If you are now interested in developing and training your own image
 classification system, we recommend forking this tutorial and replacing
 components to address your image classification problem.
@@ -434,7 +434,6 @@ should be able to transfer some of that understanding to solving related
 problems.  One way to perform transfer learning is to remove the final
 classification layer of the network and extract
 the [next-to-last layer of the CNN](https://arxiv.org/abs/1310.1531), in this case a 2048 dimensional vector.
-There's a guide to doing this @{$image_retraining$in the how-to section}.
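+
+As an illustrative sketch (the tensor and file names below match the
+Inception GraphDef used in this tutorial and will differ for other models),
+the next-to-last activations can be pulled out of the loaded graph directly:
+
+```python
+import numpy as np
+import tensorflow as tf
+
+with tf.gfile.FastGFile('classify_image_graph_def.pb', 'rb') as f:
+    graph_def = tf.GraphDef()
+    graph_def.ParseFromString(f.read())
+    tf.import_graph_def(graph_def, name='')
+
+with tf.Session() as sess:
+    # 'pool_3:0' is the 2048-dimensional next-to-last layer.
+    features = sess.graph.get_tensor_by_name('pool_3:0')
+    image_data = tf.gfile.FastGFile('cropped_panda.jpg', 'rb').read()
+    feature_vector = np.squeeze(
+        sess.run(features, {'DecodeJpeg/contents:0': image_data}))
+```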
 
 
 ## Resources for Learning More
similarity index 94%
rename from tensorflow/docs_src/tutorials/layers.md
rename to tensorflow/docs_src/tutorials/images/layers.md
index 212e337..12a215b 100644 (file)
@@ -1,4 +1,4 @@
-# A Guide to TF Layers: Building a Convolutional Neural Network
+# Build a Convolutional Neural Network using Estimators
 
 The TensorFlow @{tf.layers$`layers` module} provides a high-level API that makes
 it easy to construct a neural network. It provides methods that facilitate the
@@ -470,51 +470,18 @@ as the loss metric. The following code calculates cross entropy when the model
 runs in either `TRAIN` or `EVAL` mode:
 
 ```python
-onehot_labels = tf.one_hot(indices=tf.cast(labels, tf.int32), depth=10)
-loss = tf.losses.softmax_cross_entropy(
-    onehot_labels=onehot_labels, logits=logits)
+loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
 ```
 
 Let's take a closer look at what's happening above.
 
-Our `labels` tensor contains a list of predictions for our examples, e.g. `[1,
-9, ...]`. In order to calculate cross-entropy, first we need to convert `labels`
-to the corresponding
-[one-hot encoding](https://www.quora.com/What-is-one-hot-encoding-and-when-is-it-used-in-data-science):
+Our `labels` tensor contains the class index for each example, e.g. `[1,
+9, ...]`, while `logits` contains the raw linear outputs of our last layer.
 
-```none
-[[0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
- ...]
-```
-
-We use the @{tf.one_hot} function
-to perform this conversion. `tf.one_hot()` has two required arguments:
-
-*   `indices`. The locations in the one-hot tensor that will have "on
-    values"—i.e., the locations of `1` values in the tensor shown above.
-*   `depth`. The depth of the one-hot tensor—i.e., the number of target classes.
-    Here, the depth is `10`.
+`tf.losses.sparse_softmax_cross_entropy` calculates the softmax cross entropy
+(also known as categorical cross entropy or negative log-likelihood) from
+these two inputs in an efficient, numerically stable way.
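+
+As a minimal sketch of the equivalence (assuming a 10-class problem like
+MNIST), the sparse version gives the same loss as one-hot encoding the
+labels first and calling `tf.losses.softmax_cross_entropy`:
+
+```python
+import tensorflow as tf
+
+labels = tf.constant([1, 9])        # class indices
+logits = tf.random_normal([2, 10])  # raw scores from the last layer
+
+sparse_loss = tf.losses.sparse_softmax_cross_entropy(
+    labels=labels, logits=logits)
+
+onehot_labels = tf.one_hot(labels, depth=10)
+dense_loss = tf.losses.softmax_cross_entropy(
+    onehot_labels=onehot_labels, logits=logits)
+
+with tf.Session() as sess:
+    a, b = sess.run([sparse_loss, dense_loss])
+    assert abs(a - b) < 1e-5
+```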
 
-The following code creates the one-hot tensor for our labels, `onehot_labels`:
-
-```python
-onehot_labels = tf.one_hot(indices=tf.cast(labels, tf.int32), depth=10)
-```
-
-Because `labels` contains a series of values from 0–9, `indices` is just our
-`labels` tensor, with values cast to integers. The `depth` is `10` because we
-have 10 possible target classes, one for each digit.
-
-Next, we compute cross-entropy of `onehot_labels` and the softmax of the
-predictions from our logits layer. `tf.losses.softmax_cross_entropy()` takes
-`onehot_labels` and `logits` as arguments, performs softmax activation on
-`logits`, calculates cross-entropy, and returns our `loss` as a scalar `Tensor`:
-
-```python
-loss = tf.losses.softmax_cross_entropy(
-    onehot_labels=onehot_labels, logits=logits)
-```
 
 ### Configure the Training Op
 
diff --git a/tensorflow/docs_src/tutorials/index.md b/tensorflow/docs_src/tutorials/index.md
deleted file mode 100644 (file)
index 6bd3a3a..0000000
+++ /dev/null
@@ -1,59 +0,0 @@
-# Tutorials
-
-
-This section contains tutorials demonstrating how to do specific tasks
-in TensorFlow.  If you are new to TensorFlow, we recommend reading
-[Get Started with TensorFlow](/get_started/).
-
-## Images
-
-These tutorials cover different aspects of image recognition:
-
-  * @{$layers$MNIST}, which introduces convolutional neural networks (CNNs) and
-    demonstrates how to build a CNN in TensorFlow.
-  * @{$image_recognition}, which introduces the field of image recognition and
-    uses a pre-trained model (Inception) for recognizing images.
-  * @{$image_retraining}, which has a wonderfully self-explanatory title.
-  * @{$deep_cnn}, which demonstrates how to build a small CNN for recognizing
-    images.  This tutorial is aimed at advanced TensorFlow users.
-
-
-## Sequences
-
-These tutorials focus on machine learning problems dealing with sequence data.
-
-  * @{$recurrent}, which demonstrates how to use a
-    recurrent neural network to predict the next word in a sentence.
-  * @{$seq2seq}, which demonstrates how to use a
-    sequence-to-sequence model to translate text from English to French.
-  * @{$recurrent_quickdraw}
-    builds a classification model for drawings, directly from the sequence of
-    pen strokes.
-  * @{$audio_recognition}, which shows how to
-    build a basic speech recognition network.
-
-## Data representation
-
-These tutorials demonstrate various data representations that can be used in
-TensorFlow.
-
-  * @{$wide}, uses
-    @{tf.feature_column$feature columns} to feed a variety of data types
-    to linear model, to solve a classification problem.
-  * @{$wide_and_deep}, builds on the
-    above linear model tutorial, adding a deep feed-forward neural network
-    component and a DNN-compatible data representation.
-  * @{$word2vec}, which demonstrates how to
-    create an embedding for words.
-  * @{$kernel_methods},
-    which shows how to improve the quality of a linear model by using explicit
-    kernel mappings.
-
-## Non Machine Learning
-
-Although TensorFlow specializes in machine learning, the core of TensorFlow is
-a powerful numeric computation system which you can also use to solve other
-kinds of math problems.  For example:
-
-  * @{$mandelbrot}
-  * @{$pdes}
diff --git a/tensorflow/docs_src/tutorials/keras/index.md b/tensorflow/docs_src/tutorials/keras/index.md
new file mode 100644 (file)
index 0000000..9d42281
--- /dev/null
@@ -0,0 +1,22 @@
+# Learn and use machine learning
+
+This notebook collection is inspired by the book
+*[Deep Learning with Python](https://books.google.com/books?id=Yo3CAQAACAAJ)*.
+These tutorials use `tf.keras`, TensorFlow's high-level Python API for building
+and training deep learning models. To learn more about using Keras with
+TensorFlow, see the [TensorFlow Keras Guide](../../guide/keras).
+
+Publisher's note: *Deep Learning with Python* introduces the field of deep
+learning using the Python language and the powerful Keras library. Written by
+Keras creator and Google AI researcher François Chollet, this book builds your
+understanding through intuitive explanations and practical examples.
+
+To learn about machine learning fundamentals and concepts, consider taking the
+[Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/).
+Additional TensorFlow and machine learning resources are listed in [next steps](../next_steps).
+
+1. [Basic classification](./basic_classification)
+2. [Text classification](./basic_text_classification)
+3. [Regression](./basic_regression)
+4. [Overfitting and underfitting](./overfit_and_underfit)
+5. [Save and restore models](./save_and_restore_models)
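+
+All five tutorials follow the same `tf.keras` workflow; a minimal sketch
+(assuming TensorFlow 1.9 and dummy data) looks like:
+
+```python
+import numpy as np
+import tensorflow as tf
+
+model = tf.keras.models.Sequential([
+    tf.keras.layers.Dense(64, activation='relu', input_shape=(32,)),
+    tf.keras.layers.Dense(10, activation='softmax'),
+])
+model.compile(optimizer='adam',
+              loss='sparse_categorical_crossentropy',
+              metrics=['accuracy'])
+
+data = np.random.random((100, 32)).astype(np.float32)
+labels = np.random.randint(10, size=(100,))
+model.fit(data, labels, epochs=1, batch_size=10)
+```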
diff --git a/tensorflow/docs_src/tutorials/leftnav_files b/tensorflow/docs_src/tutorials/leftnav_files
deleted file mode 100644 (file)
index 8880524..0000000
+++ /dev/null
@@ -1,23 +0,0 @@
-index.md
-
-### Images
-layers.md: MNIST
-image_recognition.md: Image Recognition
-image_retraining.md: Image Retraining
-deep_cnn.md
-
-### Sequences
-recurrent.md
-seq2seq.md: Neural Machine Translation
-recurrent_quickdraw.md: Drawing Classification
-audio_recognition.md
-
-### Data Representation
-wide.md: Linear Models
-wide_and_deep.md: Wide & Deep Learning
-word2vec.md
-kernel_methods.md: Kernel Methods
-
-### Non-ML
-mandelbrot.md
-pdes.md
old mode 100755 (executable)
new mode 100644 (file)
similarity index 98%
rename from tensorflow/docs_src/tutorials/pdes.md
rename to tensorflow/docs_src/tutorials/non-ml/pdes.md
index 425e8d7..b5a0fa8
@@ -135,7 +135,6 @@ for i in range(1000):
   DisplayArray(U.eval(), rng=[-0.1, 0.1])
 ```
 
-![jpeg](../images/pde_output_2.jpg)
+![jpeg](../../images/pde_output_2.jpg)
 
 Look! Ripples!
-
@@ -27,7 +27,7 @@ TensorFlow will provide support for sparse features at a later release.
 
 This tutorial uses [tf.contrib.learn](https://www.tensorflow.org/code/tensorflow/contrib/learn/python/learn)
 (TensorFlow's high-level Machine Learning API) Estimators for our ML models.
-If you are not familiar with this API, [tf.estimator Quickstart](https://www.tensorflow.org/get_started/estimator)
+If you are not familiar with this API, the [Estimators guide](../../guide/estimators.md)
 is a good place to start. We will use the MNIST dataset. The tutorial consists
 of the following steps:
 
diff --git a/tensorflow/docs_src/tutorials/seq2seq.md b/tensorflow/docs_src/tutorials/seq2seq.md
deleted file mode 100644 (file)
index 8928ba4..0000000
+++ /dev/null
@@ -1,5 +0,0 @@
-# Sequence-to-Sequence Models
-
-Please check out the
-[tensorflow neural machine translation tutorial](https://github.com/tensorflow/nmt)
-for building sequence-to-sequence models with the latest Tensorflow API.
@@ -2,8 +2,8 @@
 
 ## Introduction
 
-Take a look at [this great article](https://colah.github.io/posts/2015-08-Understanding-LSTMs/)
-for an introduction to recurrent neural networks and LSTMs in particular.
+See [Understanding LSTM Networks](https://colah.github.io/posts/2015-08-Understanding-LSTMs/){:.external}
+for an introduction to recurrent neural networks and LSTMs.
 
 ## Language Modeling
 
@@ -13,7 +13,7 @@ In this tutorial we'll show how to build an RNN-based recognizer for this
 problem. The model will use a combination of convolutional layers, LSTM layers,
 and a softmax output layer to classify the drawings:
 
-<center> ![RNN model structure](../images/quickdraw_model.png) </center>
+<center> ![RNN model structure](../../images/quickdraw_model.png) </center>
 
 The figure above shows the structure of the model that we will build in this
 tutorial. The input is a drawing that is encoded as a sequence of strokes of
@@ -208,7 +208,7 @@ This data is then reformatted into a tensor of shape `[num_training_samples,
 max_length, 3]`. Then we determine the bounding box of the original drawing in
 screen coordinates and normalize the size such that the drawing has unit height.
 
-<center> ![Size normalization](../images/quickdraw_sizenormalization.png) </center>
+<center> ![Size normalization](../../images/quickdraw_sizenormalization.png) </center>
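+
+An illustrative sketch of this size normalization, together with the delta
+computation described in the next paragraph (this is not the tutorial's
+actual data-conversion script):
+
+```python
+import numpy as np
+
+def preprocess_drawing(points):
+    # points: [num_points, 3] float array; columns are (x, y, pen_lifted).
+    coords = points[:, 0:2]
+    # Bounding box of the original drawing in screen coordinates.
+    lower = coords.min(axis=0)
+    height = coords[:, 1].max() - coords[:, 1].min()
+    # Normalize so the drawing has unit height.
+    coords = (coords - lower) / max(height, 1e-6)
+    # Differences between consecutive points; pen state is carried over.
+    deltas = coords[1:] - coords[:-1]
+    return np.concatenate([deltas, points[1:, 2:3]], axis=1)
+```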
 
 Finally, we compute the differences between consecutive points and store these
 as a `VarLenFeature` in a