* @{tf.contrib.framework.with_same_shape}
## Deprecation
+
* @{tf.contrib.framework.deprecated}
* @{tf.contrib.framework.deprecated_args}
* @{tf.contrib.framework.deprecated_arg_values}
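These decorators wrap a function so that calling it emits a deprecation notice. As a rough illustration of the pattern (a pure-Python sketch, not the `tf.contrib.framework` implementation; the message format and names are illustrative):

```python
import functools
import warnings

# Hypothetical sketch of a deprecation decorator in the spirit of
# tf.contrib.framework.deprecated: it warns with a removal date and the
# suggested replacement, then calls through to the wrapped function.
def deprecated(date, instructions):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn(
                "%s is deprecated and will be removed after %s. %s"
                % (func.__name__, date, instructions),
                DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@deprecated("2018-01-01", "Use new_op instead.")
def old_op(x):
    return x * 2

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = old_op(3)
print(result)       # 6
print(len(caught))  # 1
```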
## Arg_Scope
+
* @{tf.contrib.framework.arg_scope}
* @{tf.contrib.framework.add_arg_scope}
* @{tf.contrib.framework.has_arg_scope}
* @{tf.contrib.framework.arg_scoped_arguments}
## Variables
+
* @{tf.contrib.framework.add_model_variable}
* @{tf.train.assert_global_step}
* @{tf.contrib.framework.assert_or_get_global_step}
* @{tf.contrib.learn.LogisticRegressor}
## Distributed training utilities
+
* @{tf.contrib.learn.Experiment}
* @{tf.contrib.learn.ExportStrategy}
* @{tf.contrib.learn.TaskType}
### Attention Mechanisms
The two basic attention mechanisms are:
+
* @{tf.contrib.seq2seq.BahdanauAttention} (additive attention,
[ref.](https://arxiv.org/abs/1409.0473))
* @{tf.contrib.seq2seq.LuongAttention} (multiplicative attention,
  [ref.](https://arxiv.org/abs/1508.04025))
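In both mechanisms, a score is computed between a query and each key, and the scores are normalized with a softmax into attention weights. A plain-Python sketch of the two score functions (illustrative only; the real `BahdanauAttention` and `LuongAttention` layers operate on tensors and include learned weight matrices, which are omitted here):

```python
import math

# Illustrative sketch, not the tf.contrib.seq2seq implementation.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def luong_score(query, key):
    # Multiplicative (Luong) attention: score(q, k) = q . k
    return dot(query, key)

def bahdanau_score(query, key, v):
    # Additive (Bahdanau) attention, simplified: score(q, k) = v . tanh(q + k)
    return dot(v, [math.tanh(q + k) for q, k in zip(query, key)])

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

query = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0]]
v = [0.5, 0.5]

luong_weights = softmax([luong_score(query, k) for k in keys])
bahdanau_weights = softmax([bahdanau_score(query, k, v) for k in keys])
print(luong_weights)
print(bahdanau_weights)
```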
### Decoder base class and functions
+
* @{tf.contrib.seq2seq.Decoder}
* @{tf.contrib.seq2seq.dynamic_decode}
### Basic Decoder
+
* @{tf.contrib.seq2seq.BasicDecoderOutput}
* @{tf.contrib.seq2seq.BasicDecoder}
### Decoder Helpers
+
* @{tf.contrib.seq2seq.Helper}
* @{tf.contrib.seq2seq.CustomHelper}
* @{tf.contrib.seq2seq.GreedyEmbeddingHelper}
* [`GraphDef`](https://www.tensorflow.org/code/tensorflow/core/framework/graph.proto) for describing the graph.
* [`SaverDef`](https://www.tensorflow.org/code/tensorflow/core/protobuf/saver.proto) for the saver.
* [`CollectionDef`](https://www.tensorflow.org/code/tensorflow/core/protobuf/meta_graph.proto)
-map that further describes additional components of the model, such as
+map that further describes additional components of the model such as
@{$python/state_ops$`Variables`},
-@{tf.train.QueueRunner}, etc. In order for a Python object to be serialized
+@{tf.train.QueueRunner}, etc.
+
+In order for a Python object to be serialized
to and from `MetaGraphDef`, the Python class must implement `to_proto()` and
`from_proto()` methods, and register them with the system using
-`register_proto_function`.
-
- For example,
+`register_proto_function`. For example:
```Python
def to_proto(self, export_scope=None):
  ...
```
* @{tf.global_norm}
## Decaying the learning rate
+
* @{tf.train.exponential_decay}
* @{tf.train.inverse_time_decay}
* @{tf.train.natural_exp_decay}
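For instance, `tf.train.exponential_decay` is documented to compute `decayed_lr = learning_rate * decay_rate ^ (global_step / decay_steps)`, with `staircase=True` turning the exponent into an integer division. A plain-Python sketch of that schedule (the function below is a stand-in, not the TensorFlow op):

```python
# Illustrative sketch of the documented exponential-decay formula.
def exponential_decay(lr, global_step, decay_steps, decay_rate,
                      staircase=False):
    p = global_step / decay_steps
    if staircase:
        # Integer division makes the rate decay in discrete steps.
        p = global_step // decay_steps
    return lr * decay_rate ** p

print(exponential_decay(0.1, 0, 1000, 0.96))     # 0.1
print(exponential_decay(0.1, 1000, 1000, 0.96))  # ~0.096
```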
The `tf.data` API utilizes C++ multi-threading and has a much lower overhead
than the Python-based `queue_runner`, which is limited by Python's multi-threading
performance. A detailed performance guide for the `tf.data` API can be found
-[here](@{$datasets_performance}).
+@{$datasets_performance$here}.
While feeding data using a `feed_dict` offers a high level of flexibility, in
general `feed_dict` does not provide a scalable solution. If only a single GPU