platform/upstream/tensorflow.git
6 years agoDecreases number of gRPC polling threads from 8 to 2.
Noah Eisen [Wed, 14 Feb 2018 16:18:41 +0000 (08:18 -0800)]
Decreases number of gRPC polling threads from 8 to 2.

PiperOrigin-RevId: 185688704

6 years agoBuild file bug fix for iOS simulators.
A. Unique TensorFlower [Wed, 14 Feb 2018 16:18:17 +0000 (08:18 -0800)]
Build file bug fix for iOS simulators.

PiperOrigin-RevId: 185688662

6 years agoeffective_sample_size kwarg change (same default behavior).
Ian Langmore [Wed, 14 Feb 2018 12:02:15 +0000 (04:02 -0800)]
effective_sample_size kwarg change (same default behavior).

* rename max_lags --> filter_beyond_lag
* rename max_lags_threshold --> filter_threshold
* Users can use both filters, and they combine in an "OR" manner
* None ==> turn off a filter.

PiperOrigin-RevId: 185666926
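
The "OR" combination of the two renamed filters can be sketched in plain Python. This is a hypothetical helper (`filtered_lags` is not the TensorFlow function), showing only the described semantics: a lag is dropped if either filter rejects it, and passing `None` turns a filter off.

```python
def filtered_lags(auto_corrs, filter_beyond_lag=None, filter_threshold=None):
    """Keep auto-correlation terms surviving both filters ("OR" drop rule).

    Sketch of the described kwarg semantics, not the TF implementation:
    - filter_beyond_lag=k drops every lag greater than k.
    - filter_threshold=t drops all lags from the first one whose
      auto-correlation falls below t.
    - None disables a filter.
    """
    kept = []
    below = False  # once the threshold filter trips, all later lags drop
    for lag, rho in enumerate(auto_corrs):
        if filter_threshold is not None and rho < filter_threshold:
            below = True
        beyond = filter_beyond_lag is not None and lag > filter_beyond_lag
        if not (beyond or below):
            kept.append(rho)
    return kept
```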

6 years agoFix the comment for the return type _dnn_model_fn and _dnn_linear_combined_model_fn.
A. Unique TensorFlower [Wed, 14 Feb 2018 08:38:42 +0000 (00:38 -0800)]
Fix the comment for the return type _dnn_model_fn and _dnn_linear_combined_model_fn.

PiperOrigin-RevId: 185651142

6 years agotfe SPINN example: Add inference; fix serialization
Shanqing Cai [Wed, 14 Feb 2018 05:49:28 +0000 (21:49 -0800)]
tfe SPINN example: Add inference; fix serialization

* Also de-flake a test.

PiperOrigin-RevId: 185637742

6 years ago[TF:XLA] Tag reverse_sequence test as optonly, increase its timeout.
Peter Hawkins [Wed, 14 Feb 2018 05:45:08 +0000 (21:45 -0800)]
[TF:XLA] Tag reverse_sequence test as optonly, increase its timeout.

PiperOrigin-RevId: 185637431

6 years agoFix tf.keras progbar display.
Francois Chollet [Wed, 14 Feb 2018 04:39:33 +0000 (20:39 -0800)]
Fix tf.keras progbar display.

PiperOrigin-RevId: 185632155

6 years agoFor models running in Eager mode, do not update the weights of the BatchNorm layer...
Anjali Sridhar [Wed, 14 Feb 2018 03:16:08 +0000 (19:16 -0800)]
For models running in Eager mode, do not update the weights of the BatchNorm layer if the layer's trainable argument is False.

This change is required in Eager mode to freeze a layer's weights when we set the layer's trainable attribute to False.

This should not be confused with the "training" attribute which refers to a model's training or inference mode behavior.

PiperOrigin-RevId: 185625661

6 years agoAutomated g4 rollback of changelist 185598764
A. Unique TensorFlower [Wed, 14 Feb 2018 02:57:53 +0000 (18:57 -0800)]
Automated g4 rollback of changelist 185598764

PiperOrigin-RevId: 185623948

6 years agoInternal Change.
A. Unique TensorFlower [Wed, 14 Feb 2018 01:53:13 +0000 (17:53 -0800)]
Internal Change.

PiperOrigin-RevId: 185618034

6 years agoTensorBoard debugger plugin: SIGINT handler for easier termination of debugged runtime
Shanqing Cai [Wed, 14 Feb 2018 01:52:50 +0000 (17:52 -0800)]
TensorBoard debugger plugin: SIGINT handler for easier termination of debugged runtime

from TensorBoardDebugWrapperSession and TensorBoardDebugHook.

PiperOrigin-RevId: 185617989

6 years agoDon't release kInvalidHandle.
Rohan Jain [Wed, 14 Feb 2018 01:26:18 +0000 (17:26 -0800)]
Don't release kInvalidHandle.

Also added a little more debug information.

PiperOrigin-RevId: 185615169

6 years agoLazily reads from resource variables in eager mode.
Alexandre Passos [Wed, 14 Feb 2018 01:16:34 +0000 (17:16 -0800)]
Lazily reads from resource variables in eager mode.

PiperOrigin-RevId: 185614070

6 years ago[XLA:GPU] Don't crash when the root instruction of a computation is a multi-output...
Justin Lebar [Wed, 14 Feb 2018 01:16:29 +0000 (17:16 -0800)]
[XLA:GPU] Don't crash when the root instruction of a computation is a multi-output fusion node, and avoid some pointer chasing with tuples.

Previously, the kernels we generated would have one argument per
*top-level* buffer of the input/output.  This was fine for inputs.  But
it doesn't work for outputs: Imagine you're a node that returns a tuple
-- e.g. multi-output fusion -- if all you get is a pointer to the
top-level buffer of your output (which should contain pointers to the
lower-level buffers at some point, but at the moment is just empty), how
are you supposed to figure out where to write your output?

(This usually worked because most of the time your output would live
inside of the big XLA temp buffer, and kernels always get a pointer to
that.)

Now we pass all the buffers, top-level and otherwise, to our kernel.  In
addition, we're now willing to statically dereference tuples that live
entirely in XLA's temp buffer.  Pointers in input tuples must still be
dereferenced dynamically, because the caller has the option of giving us
these values or not when invoking XLA.

This change makes some parts of BufferAssignment/BufferAllocations more
truthful.  Previously, if you passed a tuple-shaped input to XLA, we'd
say in BufferAllocations that the pointer for some subshape of the param
was the *top-level tuple pointer*.  XLA then knew that this was a lie
and would dereference it accordingly.  Now we have an explicit notion of
a BufferAllocation pointing to a subshape of an input parameter.

PiperOrigin-RevId: 185614060

6 years agoImprove type safety around float constants
Sanjoy Das [Wed, 14 Feb 2018 00:55:54 +0000 (16:55 -0800)]
Improve type safety around float constants

Instead of passing floating point constants to the vector support library as
compiler-side floats, pass them as APFloats instead.  This reduces the duration
during which these constants are semantically represented as floats on the host
side and are subject to fast-math-like behavior.  This is especially important
in cases where the exact bit representation of the floating point constant is
significant, but also makes progress towards ensuring that e.g. building XLA with
-ffast-math does not change the IR we generate.

PiperOrigin-RevId: 185611301

6 years agoFix incorrect is_training parameter.
Suharsh Sivakumar [Wed, 14 Feb 2018 00:43:21 +0000 (16:43 -0800)]
Fix incorrect is_training parameter.

And remove many is_training default values to prevent these mistakes from happening again.

PiperOrigin-RevId: 185609589

6 years agoUpdate TPU Profiler to be able to take a TPU name
A. Unique TensorFlower [Wed, 14 Feb 2018 00:43:20 +0000 (16:43 -0800)]
Update TPU Profiler to be able to take a TPU name

PiperOrigin-RevId: 185609588

6 years agoCleanupFunc doesn't need to do cleanup if _py_funcs is already destroyed.
Akshay Modi [Wed, 14 Feb 2018 00:17:55 +0000 (16:17 -0800)]
CleanupFunc doesn't need to do cleanup if _py_funcs is already destroyed.

PiperOrigin-RevId: 185606203

6 years agoAllow an option to delete the temporary file created at compilation on exit. Defaulti...
A. Unique TensorFlower [Wed, 14 Feb 2018 00:09:00 +0000 (16:09 -0800)]
Add an option to delete the temporary file created at compilation on exit. It defaults to true, though we may want to keep it off during development.

PiperOrigin-RevId: 185604628

6 years agoWire in support for XLA kConditional instruction.
A. Unique TensorFlower [Tue, 13 Feb 2018 23:29:00 +0000 (15:29 -0800)]
Wire in support for XLA kConditional instruction.

PiperOrigin-RevId: 185598764

6 years agoChange linkage type of modules to external after dropping initializers
Sanjoy Das [Tue, 13 Feb 2018 22:55:24 +0000 (14:55 -0800)]
Change linkage type of modules to external after dropping initializers

It isn't legal to have private global variables without initializers.  In the
current state the -noconst.ll LLVM IR cannot be passed to opt.

PiperOrigin-RevId: 185593073

6 years agoCode cleanup: Made Executor related API take std::unique_ptr<const Graph> instead...
Mingsheng Hong [Tue, 13 Feb 2018 22:52:31 +0000 (14:52 -0800)]
Code cleanup: Made Executor related API take std::unique_ptr<const Graph> instead of const Graph* as input.

PiperOrigin-RevId: 185592574

6 years ago[TF-XLA] Disable TensorFlow's CSE in the XLA compiler
Yunxing Dai [Tue, 13 Feb 2018 22:51:25 +0000 (14:51 -0800)]
[TF-XLA] Disable TensorFlow's CSE in the XLA compiler

No need to do CSE in the TF-XLA bridge, as XLA already has its own CSE pass later in the compilation pipeline. This removes one source of nondeterminism.

RELNOTES: The CSE pass from TensorFlow is now disabled in XLA.
PiperOrigin-RevId: 185592383

6 years agoUse a more advanced py_func wrapper, one that allows non-tensor args to be passed...
A. Unique TensorFlower [Tue, 13 Feb 2018 21:56:24 +0000 (13:56 -0800)]
Use a more advanced py_func wrapper, one that allows non-tensor args to be passed directly to the wrapped function.

PiperOrigin-RevId: 185583023

6 years ago[TF:XLA] Bump open source llvm revision to r324990
Sanjoy Das [Tue, 13 Feb 2018 21:54:52 +0000 (13:54 -0800)]
[TF:XLA] Bump open source llvm revision to r324990

PiperOrigin-RevId: 185582753

6 years agoInternal Change
Zhixian Yan [Tue, 13 Feb 2018 21:42:11 +0000 (13:42 -0800)]
Internal Change

PiperOrigin-RevId: 185580787

6 years agoAdd `Kumaraswamy` to distributions __init__.py
A. Unique TensorFlower [Tue, 13 Feb 2018 21:30:20 +0000 (13:30 -0800)]
Add `Kumaraswamy` to distributions __init__.py

PiperOrigin-RevId: 185578929

6 years agoMake LLVM IR dumps more readable
Sanjoy Das [Tue, 13 Feb 2018 20:45:27 +0000 (12:45 -0800)]
Make LLVM IR dumps more readable

Before this, the IR we dumped out would contain constant tensors that were
sometimes huge, resulting in unwieldy IR files.  With this change we dump out a
"noconst" IR file that has the constant initializers stripped out.

PiperOrigin-RevId: 185572700

6 years agoExtract the filter and input shape for Conv2DBackpropFilter/Conv2DBackpropInput
Benoit Steiner [Tue, 13 Feb 2018 20:31:59 +0000 (12:31 -0800)]
Extract the filter and input shape for Conv2DBackpropFilter/Conv2DBackpropInput
from the corresponding op inputs whenever possible.

PiperOrigin-RevId: 185570750

6 years agoFix bug in populating the execution plan sent to the delegate.
Andrew Selle [Tue, 13 Feb 2018 20:24:02 +0000 (12:24 -0800)]
Fix bug in populating the execution plan sent to the delegate.

- memcpy was missing the array size.
- modified the unit test to verify that the execution plan is
trivial on first delegate invocation.

PiperOrigin-RevId: 185569606

6 years agoAdd cache for _zeros in backprop
Akshay Modi [Tue, 13 Feb 2018 20:10:40 +0000 (12:10 -0800)]
Add cache for _zeros in backprop

PiperOrigin-RevId: 185567508

6 years agoMinor eager-related performance improvements
Akshay Modi [Tue, 13 Feb 2018 19:56:59 +0000 (11:56 -0800)]
Minor eager-related performance improvements

- Add a cache for name_scope
- skip some overhead _MulGrad and _MatMulGrad

PiperOrigin-RevId: 185565363

6 years agoExplicitly place the swap-in node: this ensures that subsequent rounds of
Benoit Steiner [Tue, 13 Feb 2018 19:46:08 +0000 (11:46 -0800)]
Explicitly place the swap-in node: this ensures that subsequent rounds of
memory optimization have a more accurate picture of the placement.

PiperOrigin-RevId: 185563797

6 years agoError out or log a warning if user sets the TPUConfig.num_shards incorrectly.
Jianwei Xie [Tue, 13 Feb 2018 19:33:12 +0000 (11:33 -0800)]
Error out or log a warning if the user sets TPUConfig.num_shards incorrectly.
Also improve the TPU system metadata printout message.

PiperOrigin-RevId: 185561680

6 years agoAllow other types of variables to act as a resource variable.
Igor Saprykin [Tue, 13 Feb 2018 19:18:15 +0000 (11:18 -0800)]
Allow other types of variables to act as a resource variable.

Introduce resource_variable_ops.is_resource_variable() function that returns true
if an _should_act_as_resource_variable attribute is set.

PiperOrigin-RevId: 185559202

6 years agoadd missing blank line
Mark Daoust [Tue, 13 Feb 2018 18:54:09 +0000 (10:54 -0800)]
add missing blank line

PiperOrigin-RevId: 185554969

6 years agoAdd empty scaffolding for loop optimizers in Grappler.
A. Unique TensorFlower [Tue, 13 Feb 2018 18:48:58 +0000 (10:48 -0800)]
Add empty scaffolding for loop optimizers in Grappler.

PiperOrigin-RevId: 185554126

6 years agoClarify that the behavior of the iterator (advancing whenever any of the components...
A. Unique TensorFlower [Tue, 13 Feb 2018 18:48:43 +0000 (10:48 -0800)]
Clarify that the behavior of the iterator (advancing whenever any of the components is evaluated) is not magic, but a simple consequence of the dataflow graph structure.

PiperOrigin-RevId: 185554089

6 years agoFix documentation for the real shape of the output of crf_log_likelihood.
A. Unique TensorFlower [Tue, 13 Feb 2018 18:36:37 +0000 (10:36 -0800)]
Fix documentation for the real shape of the output of crf_log_likelihood.

PiperOrigin-RevId: 185552171

6 years agoUse _set_attr instead of directly modifying the nodedef
Akshay Modi [Tue, 13 Feb 2018 18:23:53 +0000 (10:23 -0800)]
Use _set_attr instead of directly modifying the nodedef

PiperOrigin-RevId: 185550223

6 years agoMove two common utility functions used by training and training_eager classes to...
Anjali Sridhar [Tue, 13 Feb 2018 18:16:03 +0000 (10:16 -0800)]
Move two common utility functions used by training and training_eager classes to a utility class.

PiperOrigin-RevId: 185548922

6 years agoTiny bugfix to eager TensorArray error message.
Eugene Brevdo [Tue, 13 Feb 2018 17:46:36 +0000 (09:46 -0800)]
Tiny bugfix to eager TensorArray error message.

PiperOrigin-RevId: 185543699

6 years agoTF to XLA compiler to support FakeQuantWithMinMaxVars/Args.
A. Unique TensorFlower [Tue, 13 Feb 2018 17:07:12 +0000 (09:07 -0800)]
TF to XLA compiler to support FakeQuantWithMinMaxVars/Args.

PiperOrigin-RevId: 185538228

6 years agoAdd gradient norm target arg to the Wasserstein gradient penalty function. This trick is used...
A. Unique TensorFlower [Tue, 13 Feb 2018 16:46:26 +0000 (08:46 -0800)]
Add gradient norm target arg to the Wasserstein gradient penalty function. This trick is used in the progressive GAN paper https://arxiv.org/abs/1710.10196

PiperOrigin-RevId: 185535584

6 years agoForce the use of print function in generated code.
A. Unique TensorFlower [Tue, 13 Feb 2018 16:12:09 +0000 (08:12 -0800)]
Force the use of print function in generated code.

PiperOrigin-RevId: 185531979

6 years agoMechanical variable renaming to improve consistency. No other changes.
A. Unique TensorFlower [Tue, 13 Feb 2018 09:08:22 +0000 (01:08 -0800)]
Mechanical variable renaming to improve consistency. No other changes.

Distinguishing between points and vectors: A point refers to a location in the tensor/filter, referred to by channel/row/col. A vector is the difference between two points (mostly 'one_past_the_end - begin'), referred to by depth/height/width.

PiperOrigin-RevId: 185496176

6 years agoInternal change
Gunhan Gulsoy [Tue, 13 Feb 2018 08:13:28 +0000 (00:13 -0800)]
Internal change

PiperOrigin-RevId: 185491705

6 years agoAdd test to ensure that covariance terms of FID is being incorporated meaningfully.
Surya Bhupatiraju [Tue, 13 Feb 2018 07:31:38 +0000 (23:31 -0800)]
Add test to ensure that covariance terms of FID is being incorporated meaningfully.

PiperOrigin-RevId: 185488757

6 years agoDisable interleave_dataset_op_test.py and remove a duplicate entry in test blacklist...
Gunhan Gulsoy [Tue, 13 Feb 2018 07:20:15 +0000 (23:20 -0800)]
Disable interleave_dataset_op_test.py and remove a duplicate entry in test blacklist in cmake build.

PiperOrigin-RevId: 185488210

6 years ago1. Add image_ops.is_jpeg op to decide whether an input string is a JPEG string.
A. Unique TensorFlower [Tue, 13 Feb 2018 06:54:48 +0000 (22:54 -0800)]
1. Add image_ops.is_jpeg op to decide whether an input string is a JPEG string.
2. Change tfexample_decoder in slim/object_detection to accept different JPEG decompression methods.
Defaults to ""/None, which maps to a system-specific default. Currently valid values are ["INTEGER_FAST", "INTEGER_ACCURATE"]. The hint may be ignored (e.g., if the internal jpeg library changes to a version that does not have that specific option).

PiperOrigin-RevId: 185486653
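
A minimal sketch of what such a check can look at: JPEG streams begin with the SOI marker bytes 0xFF 0xD8 0xFF. The function below is an illustrative heuristic, not the TensorFlow kernel:

```python
def is_jpeg(data: bytes) -> bool:
    """Heuristic JPEG sniff: real JPEG data starts with the SOI marker
    (0xFF 0xD8) followed by another 0xFF. Sketch only, not the TF op."""
    return data[:3] == b"\xff\xd8\xff"
```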

6 years ago[TF:XLA] Work around crash in Gather op on CPU backend by making loop bound a compile...
Peter Hawkins [Tue, 13 Feb 2018 06:45:49 +0000 (22:45 -0800)]
[TF:XLA] Work around crash in Gather op on CPU backend by making loop bound a compile-time constant.

PiperOrigin-RevId: 185486148

6 years ago[XLA:CPU] Implement vectorized Log in LLVM IR
Sanjoy Das [Tue, 13 Feb 2018 05:56:19 +0000 (21:56 -0800)]
[XLA:CPU] Implement vectorized Log in LLVM IR

This was the last vectorized intrinsic for which we had to call into
C++ so also remove the associated machinery.

PiperOrigin-RevId: 185482962

6 years ago[TF:XLA] Implement ScatterNd.
Peter Hawkins [Tue, 13 Feb 2018 02:34:36 +0000 (18:34 -0800)]
[TF:XLA] Implement ScatterNd.

Add a helper method for performing scatter operations. Share it between ScatterNd and UnsortedSegmentSum implementations. In passing, add support for negative indices to the  UnsortedSegmentSum implementation.

Added helper methods for creating XLA while loops. Use the new helper in both Gather and Scatter ops.

PiperOrigin-RevId: 185469229
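
The scatter semantics being implemented can be sketched for the 1-D case in plain Python. `scatter_nd_1d` is a hypothetical reference model, not the XLA implementation; the skip-out-of-range behavior is an assumption of this sketch:

```python
def scatter_nd_1d(indices, updates, size):
    """Reference model of 1-D scatter: start from zeros, add each update
    at its index. Duplicate indices accumulate; out-of-range indices
    (including negative ones) are skipped in this sketch."""
    out = [0] * size
    for i, u in zip(indices, updates):
        if 0 <= i < size:
            out[i] += u
    return out
```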

6 years agoFix TFLite examples/image_label
Yu-Cheng Ling [Tue, 13 Feb 2018 01:58:27 +0000 (17:58 -0800)]
Fix TFLite examples/image_label

PiperOrigin-RevId: 185465716

6 years agoEnable Model subclassing, both in eager-mode and symbolic-mode.
Francois Chollet [Tue, 13 Feb 2018 01:44:01 +0000 (17:44 -0800)]
Enable Model subclassing, both in eager-mode and symbolic-mode.

PiperOrigin-RevId: 185464334

6 years agoRollforward switch group identification with fixes.
Jacques Pienaar [Tue, 13 Feb 2018 01:15:51 +0000 (17:15 -0800)]
Rollforward switch group identification with fixes.

Fixed computing the switch depth: with the erroneous switch depth, incorrect
clusters could be formed. Changed the way the switch depth is determined (the
switch depth is now on the output side, so a switch always has a switch depth
one higher than all its inputs), and added further checking during execution.

PiperOrigin-RevId: 185461054

6 years agoFix a typo in the comments.
A. Unique TensorFlower [Tue, 13 Feb 2018 01:07:27 +0000 (17:07 -0800)]
Fix a typo in the comments.

PiperOrigin-RevId: 185459972

6 years ago[XLA:GPU] Extend the CustomCall for cudnn convolutions to represent
Bixia Zheng [Tue, 13 Feb 2018 00:56:28 +0000 (16:56 -0800)]
[XLA:GPU] Extend the CustomCall for cudnn convolutions to represent
tensor_ops_enabled.

The convolution algorithms returned from the stream executor have a flag
for whether tensor_ops is enabled. This flag is used when running each
algorithm during auto-tuning. However, this flag is not currently represented
in the CustomCall representing the auto-tune result. As a result, the algorithm
may be run differently after auto-tune.

This change adds a constant to the CustomCall for the cudnn convolution algorithm
selected by auto-tune, to represent whether tensor_ops is enabled during
auto-tune. This information is used by the convolution thunk to ensure that the
algorithm is run with the same flag after auto-tune.

PiperOrigin-RevId: 185458497

6 years agoSupport None trainable variables that don't produce a gradient in replicate_model_fn.
Igor Saprykin [Tue, 13 Feb 2018 00:24:45 +0000 (16:24 -0800)]
Support None trainable variables that don't produce a gradient in replicate_model_fn.

This fixes #16829.

PiperOrigin-RevId: 185453911

6 years agoAutomated g4 rollback of changelist 185420228
Suharsh Sivakumar [Tue, 13 Feb 2018 00:20:07 +0000 (16:20 -0800)]
Automated g4 rollback of changelist 185420228

PiperOrigin-RevId: 185453293

6 years agoAvoid setting `ConfigProto.cluster_def` when `run_config.cluster_def` is not set.
Jianwei Xie [Mon, 12 Feb 2018 23:12:31 +0000 (15:12 -0800)]
Avoid setting `ConfigProto.cluster_def` when `run_config.cluster_def` is not set.

PiperOrigin-RevId: 185443115

6 years agoAdd an option to tf_gen_op_wrapper_py to make it able to run the genrule
Guangda Lai [Mon, 12 Feb 2018 22:52:47 +0000 (14:52 -0800)]
Add an option to tf_gen_op_wrapper_py to make it able to run the genrule
locally.

PiperOrigin-RevId: 185439892

6 years agoAdd a caveat that pixel value range might not be preserved by ResizeArea.
A. Unique TensorFlower [Mon, 12 Feb 2018 22:38:37 +0000 (14:38 -0800)]
Add a caveat that pixel value range might not be preserved by ResizeArea.

PiperOrigin-RevId: 185437687

6 years agoRename op name in comments to reflect renamed op names. NFC.
Jacques Pienaar [Mon, 12 Feb 2018 22:37:41 +0000 (14:37 -0800)]
Rename op name in comments to reflect renamed op names. NFC.

PiperOrigin-RevId: 185437550

6 years agoAdd tests for visible api arguments in quantize_graph.
Suharsh Sivakumar [Mon, 12 Feb 2018 22:05:08 +0000 (14:05 -0800)]
Add tests for visible api arguments in quantize_graph.

PiperOrigin-RevId: 185432142

6 years agoReturn false instead of crashing in Tensor::SharesBufferWith if neither tensor has...
A. Unique TensorFlower [Mon, 12 Feb 2018 22:01:51 +0000 (14:01 -0800)]
Return false instead of crashing in Tensor::SharesBufferWith if neither tensor has a buffer assigned yet, since that is a valid state. Returning

  buf_ != nullptr && b.buf_ != nullptr && buf_->root_buffer() == b.buf_->root_buffer();

still satisfies the contract in the header, i.e. "True iff the two tensors use the same underlying refcounted storage."

PiperOrigin-RevId: 185431574

6 years agoFilter out the fake XLA devices to avoid double counting the actual hardware
Benoit Steiner [Mon, 12 Feb 2018 21:37:37 +0000 (13:37 -0800)]
Filter out the fake XLA devices to avoid double counting the actual hardware
resources available on the machine

PiperOrigin-RevId: 185427665

6 years agoallow @{} links to break across lines.
Mark Daoust [Mon, 12 Feb 2018 21:26:35 +0000 (13:26 -0800)]
allow @{} links to break across lines.

PiperOrigin-RevId: 185426070

6 years agoExtend the memory optimizations to also support accumulate_n ops
Benoit Steiner [Mon, 12 Feb 2018 21:25:57 +0000 (13:25 -0800)]
Extend the memory optimizations to also support accumulate_n ops

PiperOrigin-RevId: 185425999

6 years agoRespect the cluster spec prop during TPU system auto query.
Jianwei Xie [Mon, 12 Feb 2018 21:11:17 +0000 (13:11 -0800)]
Respect the cluster spec prop during TPU system auto query.

PiperOrigin-RevId: 185423314

6 years agoScope and decorator to automatically add control dependencies.
Alexandre Passos [Mon, 12 Feb 2018 21:00:57 +0000 (13:00 -0800)]
Scope and decorator to automatically add control dependencies.

Should mimic the desired behavior of eager code.

For now supports only straight-line code and conditionals.

PiperOrigin-RevId: 185421760
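
The idea for straight-line code can be sketched with a toy recorder: each stateful op gets an explicit edge to the previous one, reproducing eager execution order. `AutoControlDeps` below is purely illustrative, not the TensorFlow API:

```python
class AutoControlDeps:
    """Toy model of automatic control dependencies for straight-line code:
    every recorded stateful op is sequenced after the previously recorded
    one via an explicit dependency edge. Hypothetical sketch only."""
    def __init__(self):
        self.last = None    # most recently recorded op
        self.edges = []     # (before, after) control-dependency edges
    def record(self, op_name):
        if self.last is not None:
            self.edges.append((self.last, op_name))
        self.last = op_name
        return op_name
```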

6 years agoAlso add quantization step node to MODEL_VARIABLES collection.
Suharsh Sivakumar [Mon, 12 Feb 2018 20:50:07 +0000 (12:50 -0800)]
Also add quantization step node to MODEL_VARIABLES collection.

PiperOrigin-RevId: 185420228

6 years ago[TF:XLA] Bump open source llvm revision to r324889
Sanjoy Das [Mon, 12 Feb 2018 20:30:01 +0000 (12:30 -0800)]
[TF:XLA] Bump open source llvm revision to r324889

PiperOrigin-RevId: 185417275

6 years agoAdd missing feature in make_parse_example_spec documentation.
A. Unique TensorFlower [Mon, 12 Feb 2018 20:29:05 +0000 (12:29 -0800)]
Add missing feature in make_parse_example_spec documentation.

PiperOrigin-RevId: 185417163

6 years agoAdd yield_single_examples arg to Estimator.predict
A. Unique TensorFlower [Mon, 12 Feb 2018 20:23:16 +0000 (12:23 -0800)]
Add yield_single_examples arg to Estimator.predict

PiperOrigin-RevId: 185416396

6 years ago[Java]: Add link to samples in the tensorflow/models repository.
Asim Shankar [Mon, 12 Feb 2018 20:09:26 +0000 (12:09 -0800)]
[Java]: Add link to samples in the tensorflow/models repository.

PiperOrigin-RevId: 185414475

6 years agoEnable the use of scheduling heuristics to reduce peak memory usage by default
Benoit Steiner [Mon, 12 Feb 2018 20:05:49 +0000 (12:05 -0800)]
Enable the use of scheduling heuristics to reduce peak memory usage by default

PiperOrigin-RevId: 185413855

6 years agoSupport reduction with true keep_dims and squeeze along NHW dimensions.
Yao Zhang [Mon, 12 Feb 2018 19:54:07 +0000 (11:54 -0800)]
Support reduction with true keep_dims and squeeze along NHW dimensions.

PiperOrigin-RevId: 185411786

6 years agoAdding support for tf.reduce_sum with keep_dims=True.
A. Unique TensorFlower [Mon, 12 Feb 2018 19:49:54 +0000 (11:49 -0800)]
Adding support for tf.reduce_sum with keep_dims=True.

PiperOrigin-RevId: 185411141
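
What keep_dims=True means can be shown on a 2-D example in plain Python: the reduced axis is kept with length 1 instead of being squeezed out. `reduce_sum_keepdims` is an illustrative helper, not the TensorFlow op:

```python
def reduce_sum_keepdims(matrix, axis):
    """Model of tf.reduce_sum(..., keep_dims=True) on a 2-D nested list:
    the reduced axis survives with length 1."""
    if axis == 0:
        return [[sum(col) for col in zip(*matrix)]]  # shape (1, n)
    return [[sum(row)] for row in matrix]            # shape (m, 1)
```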

6 years ago[XLA] An HLO pass that folds BF16 F32 conversions: if an HLO already supports BF16...
Yuanzhong Xu [Mon, 12 Feb 2018 19:26:22 +0000 (11:26 -0800)]
[XLA] An HLO pass that folds BF16 F32 conversions: if an HLO already supports BF16 input/output, conversions before/after it will be removed and the HLO's input/output types will be converted to BF16.

Also updates HloVerifier to allow mixed precision if requested. If an HLO has both F32 and BF16 inputs, ShapeInference will use F32 as the output type.

PiperOrigin-RevId: 185407143

6 years agoMake variable_ops_test optonly
Sanjoy Das [Mon, 12 Feb 2018 19:12:04 +0000 (11:12 -0800)]
Make variable_ops_test optonly

variable_ops_test sometimes times out in fastbuild mode.  So mark it as optonly.

Running this test with `bazel test -c opt` passes all 1000 of 1000 reruns.
Running it with just `bazel test` fails 5 out of 300 reruns.

PiperOrigin-RevId: 185404726

6 years agoChange the column name in tutorials/wide.md from 'income' to 'income_bracket' to...
Neal Wu [Mon, 12 Feb 2018 18:47:26 +0000 (10:47 -0800)]
Change the column name in tutorials/wide.md from 'income' to 'income_bracket' to match the code

PiperOrigin-RevId: 185400490

6 years agoAdd support for scalars in `tf.contrib.all_reduce`.
A. Unique TensorFlower [Mon, 12 Feb 2018 18:34:20 +0000 (10:34 -0800)]
Add support for scalars in `tf.contrib.all_reduce`.

PiperOrigin-RevId: 185398372

6 years ago[TF:XLA] Add additional test case for tf.gather.
Peter Hawkins [Mon, 12 Feb 2018 18:34:18 +0000 (10:34 -0800)]
[TF:XLA] Add additional test case for tf.gather.

PiperOrigin-RevId: 185398368

6 years agoFix shape inference bug in tensorlist
Alexandre Passos [Mon, 12 Feb 2018 18:27:18 +0000 (10:27 -0800)]
Fix shape inference bug in tensorlist

PiperOrigin-RevId: 185397219

6 years agoUpdate `tf.contrib.data` API docstring.
Derek Murray [Mon, 12 Feb 2018 17:57:40 +0000 (09:57 -0800)]
Update `tf.contrib.data` API docstring.

PiperOrigin-RevId: 185392564

6 years agoParseNodeName fix.
Jacques Pienaar [Mon, 12 Feb 2018 17:28:47 +0000 (09:28 -0800)]
ParseNodeName fix.

ParseNodeName was skipping ops that started with an underscore, leading to warnings that the input of an op was undefined and preventing grappler optimizations from being run on the graph.

PiperOrigin-RevId: 185388749

6 years agoInternal Change
A. Unique TensorFlower [Mon, 12 Feb 2018 16:38:17 +0000 (08:38 -0800)]
Internal Change

PiperOrigin-RevId: 185382594

6 years agoFor debugging purposes, it can be useful to know which ops are considered non-pure...
Brian Patton [Mon, 12 Feb 2018 14:40:26 +0000 (06:40 -0800)]
For debugging purposes, it can be useful to know which ops are considered non-pure / non-constant.

PiperOrigin-RevId: 185371882

6 years ago[XLA] Support generating tuple shaped fake data in client testing
A. Unique TensorFlower [Mon, 12 Feb 2018 13:34:05 +0000 (05:34 -0800)]
[XLA] Support generating tuple shaped fake data in client testing

The previous implementation fell over in the case of a tuple-shaped input,
which broke the replay computation tool for the case where the input is a
tuple.

PiperOrigin-RevId: 185366228

6 years agoProvide more diagnostic shape information in output window error message.
Vijay Vasudevan [Mon, 12 Feb 2018 05:19:37 +0000 (21:19 -0800)]
Provide more diagnostic shape information in output window error message.

PiperOrigin-RevId: 185331713

6 years agoAutomated g4 rollback of changelist 185233116
Guangda Lai [Mon, 12 Feb 2018 02:09:11 +0000 (18:09 -0800)]
Automated g4 rollback of changelist 185233116

PiperOrigin-RevId: 185324160

6 years ago[TPUEstimator] Automatically detect the TPU system information, including topology...
Jianwei Xie [Sun, 11 Feb 2018 23:54:39 +0000 (15:54 -0800)]
[TPUEstimator] Automatically detect the TPU system information, including topology for model parallelism.

PiperOrigin-RevId: 185318852

6 years agoDisable flaky halton_sequence_test
A. Unique TensorFlower [Sun, 11 Feb 2018 11:44:24 +0000 (03:44 -0800)]
Disable flaky halton_sequence_test

PiperOrigin-RevId: 185294455

6 years agoAdd support for kConditional to the module group scheduler.
A. Unique TensorFlower [Sun, 11 Feb 2018 04:48:19 +0000 (20:48 -0800)]
Add support for kConditional to the module group scheduler.

PiperOrigin-RevId: 185279412

6 years agoGetting rid of unnecessary GPUDevice typedef.
A. Unique TensorFlower [Sat, 10 Feb 2018 20:45:12 +0000 (12:45 -0800)]
Getting rid of unnecessary GPUDevice typedef.
Passing DepthwiseArgs by reference in host code.

PiperOrigin-RevId: 185263307

6 years agoAdd python/util/is_in_graph_mode.py
A. Unique TensorFlower [Sat, 10 Feb 2018 19:22:55 +0000 (11:22 -0800)]
Add python/util/is_in_graph_mode.py

PiperOrigin-RevId: 185260675

6 years agoAutomated g4 rollback of changelist 185073515
A. Unique TensorFlower [Sat, 10 Feb 2018 11:47:15 +0000 (03:47 -0800)]
Automated g4 rollback of changelist 185073515

PiperOrigin-RevId: 185246348

6 years agoDo not convert layout for Select if condition input is of unknown shape.
Yao Zhang [Sat, 10 Feb 2018 09:45:11 +0000 (01:45 -0800)]
Do not convert layout for Select if condition input is of unknown shape.

PiperOrigin-RevId: 185242138

6 years agoFix grappler to use CudaGpuId instead of TfGpuId to query device states.
Guangda Lai [Sat, 10 Feb 2018 06:47:30 +0000 (22:47 -0800)]
Fix grappler to use CudaGpuId instead of TfGpuId to query device states.

PiperOrigin-RevId: 185233116