platform/upstream/tensorflow.git
6 years agoDon't initialize GPUs if none will be used.
A. Unique TensorFlower [Wed, 16 May 2018 16:26:51 +0000 (09:26 -0700)]
Don't initialize GPUs if none will be used.

PiperOrigin-RevId: 196838739

6 years agoMigrating BestModelExportStrategy to core library.
A. Unique TensorFlower [Wed, 16 May 2018 16:17:15 +0000 (09:17 -0700)]
Migrating BestModelExportStrategy to core library.

PiperOrigin-RevId: 196837506

6 years agoModify tf.contrib.distributions.BatchReshape to behave a bit more like
A. Unique TensorFlower [Wed, 16 May 2018 15:45:28 +0000 (08:45 -0700)]
Modify tf.contrib.distributions.BatchReshape to behave a bit more like
tf.reshape: accept a single unknown dimension and infer partial shape
information statically.
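
A hedged illustration of the intended tf.reshape-like behavior; the distribution, shapes, and the inferred [3, 2] batch shape below are assumptions, not taken from this change:

```python
# Hypothetical usage sketch (TF 1.x contrib API); the -1 entry in batch_shape
# is the "single unknown dimension" described above, assumed to be inferred
# statically the way tf.reshape infers a -1 dimension.
import tensorflow as tf

tfd = tf.contrib.distributions
base = tfd.Normal(loc=tf.zeros([6]), scale=tf.ones([6]))  # batch shape [6]
reshaped = tfd.BatchReshape(base, batch_shape=[-1, 2])    # inferred as [3, 2]
print(reshaped.batch_shape)  # expected: (3, 2)
```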

PiperOrigin-RevId: 196833267

6 years agoResolved inconsistency with shape inference for tf.reduce_join when passing non-Tenso...
A. Unique TensorFlower [Wed, 16 May 2018 15:35:29 +0000 (08:35 -0700)]
Resolved inconsistency with shape inference for tf.reduce_join when passing non-Tensor values.
Removed deprecated arguments in tf.reduce_join test.

PiperOrigin-RevId: 196832183

6 years ago[XLA] Expose MinimumMemoryForComputation in hlo_scheduling.h
Peter Hawkins [Wed, 16 May 2018 15:10:02 +0000 (08:10 -0700)]
[XLA] Expose MinimumMemoryForComputation in hlo_scheduling.h

PiperOrigin-RevId: 196829414

6 years agoAdd support libraries in core/platform.
A. Unique TensorFlower [Wed, 16 May 2018 14:47:44 +0000 (07:47 -0700)]
Add support libraries in core/platform.

PiperOrigin-RevId: 196826860

6 years agoMigrate HloExecutionProfileTest to textual HLO
A. Unique TensorFlower [Wed, 16 May 2018 14:01:04 +0000 (07:01 -0700)]
Migrate HloExecutionProfileTest to textual HLO

Also add lhs_contracting_dims and rhs_contracting_dims to make the test more realistic.
Before, the dot operation was created with CreateBinary instead of CreateCanonicalDot.

PiperOrigin-RevId: 196822255

6 years agoEmploy array flat sizes more directly in optimized_ops, some places in reference_ops.h.
A. Unique TensorFlower [Wed, 16 May 2018 13:30:19 +0000 (06:30 -0700)]
Employ array flat sizes more directly in optimized_ops and in some places in reference_ops.h.

PiperOrigin-RevId: 196819423

6 years agointernal change
A. Unique TensorFlower [Wed, 16 May 2018 12:20:47 +0000 (05:20 -0700)]
internal change

PiperOrigin-RevId: 196813574

6 years agoRefactor HloInstruction::Fuse and add a method for multi-output fusion.
A. Unique TensorFlower [Wed, 16 May 2018 12:11:33 +0000 (05:11 -0700)]
Refactor HloInstruction::Fuse and add a method for multi-output fusion.

PiperOrigin-RevId: 196813042

6 years agoImproving variable_scope documentation.
Alexandre Passos [Wed, 16 May 2018 10:56:17 +0000 (03:56 -0700)]
Improving variable_scope documentation.

PiperOrigin-RevId: 196807465

6 years agoImplementation of transpose_conv
A. Unique TensorFlower [Wed, 16 May 2018 10:43:10 +0000 (03:43 -0700)]
Implementation of transpose_conv

PiperOrigin-RevId: 196806646

6 years agoAdd performance notes for in-context gradient calls.
Tom Hennigan [Wed, 16 May 2018 06:54:39 +0000 (23:54 -0700)]
Add performance notes for in-context gradient calls.

Also:
* Add _{start,stop}_recording methods to GradientTape.
* Add performance notes when calling gradient in recording context for
  persistent tapes.
* s/tfe.GradientTape/tf.GradientTape/ in docstrings.
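
A minimal sketch of the in-context gradient call that these notes address, assuming the eager-mode tf.GradientTape API of this era (variable names are illustrative):

```python
import tensorflow as tf
tf.enable_eager_execution()

x = tf.constant(3.0)
with tf.GradientTape(persistent=True) as tape:
  tape.watch(x)
  y = x * x
  # Calling gradient() while still recording is supported for persistent
  # tapes, but per the new notes it is cheaper to call it after exiting.
  dy_dx_inside = tape.gradient(y, x)
dy_dx_outside = tape.gradient(y, x)  # preferred: compute gradients outside the context
```
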
PiperOrigin-RevId: 196786148

6 years ago[TF:XLA] Make softplus more accurate
David Majnemer [Wed, 16 May 2018 06:00:32 +0000 (23:00 -0700)]
[TF:XLA] Make softplus more accurate

The softplus function computes log(exp(x) + 1).

We computed it this way but with special cases to handle underflow and
overflow.
This was done by comparing the input against a quantity with the
magnitude 13.94238515. Note that this quantity is not representable as a single
precision float and is instead rounded to 13.9423847.

If softplus would overflow, it will be approximated as x.
If softplus would underflow, it will be approximated as exp(x).

Unfortunately, this can provide inaccurate results for negative floats close to
the threshold.

For example, consider x = -13.9274826049805. softplus(x) is ~8.94068849e-7;
rounded to the nearest single precision float, this is 8.940689e-7.

In this case, x is quite close to the underflow threshold but not close enough
to be approximated by exp(x) == 8.94069273e-7.
Rather, it gets calculated using the canonical definition of softplus and comes
to 8.34464686e-7.

This result comes out to be wrong by 1,048,568 ULPs.

Instead, we can compute it the way one would compute LogSumExp(x, 0):
  max(x, 0) + log(exp(x - max(x, 0)) + exp(0 - max(x, 0)))

When x is positive, this is:
  x + log(exp(0) + exp(-x))

When x is negative, this is:
  log(exp(x) + exp(0))

When x is 0, this is:
  log(exp(0) + exp(0))

exp(0) evaluates to 1 which gives us:
  if x is positive, x + log(1 + exp(-x))
  if x is negative, log(exp(x) + 1)
  if x is zero,     log(2)

These three cases can be combined like so:
  max(x, 0) + log(exp(-abs(x)) + 1)

Further, we can increase the fidelity of the log calculation by using log1p:
  max(x, 0) + log1p(exp(-abs(x)))

This computation naturally handles underflow and overflow while also providing
more numerically accurate results for a few small, positive, floating point
values.
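
A small NumPy sketch (not the XLA implementation) contrasting the naive formula with the rewrite above; the printed values are only indicative:

```python
import numpy as np

def softplus_naive(x):
  # log(exp(x) + 1) evaluated directly; exp(x) + 1 rounds poorly near 1.0.
  return np.log(np.exp(x) + np.float32(1.0))

def softplus_stable(x):
  # max(x, 0) + log1p(exp(-abs(x))), the formulation derived above.
  return np.maximum(x, np.float32(0.0)) + np.log1p(np.exp(-np.abs(x)))

x = np.float32(-13.9274826049805)
print(softplus_naive(x))   # noticeably off from the true value ~8.94e-7
print(softplus_stable(x))  # ~8.94e-7
```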

PiperOrigin-RevId: 196782814

6 years agoFix bug in `WorkerService::Logging()` handler.
Derek Murray [Wed, 16 May 2018 05:55:44 +0000 (22:55 -0700)]
Fix bug in `WorkerService::Logging()` handler.

Since transitioning to proto3, it was not possible to distinguish between the absence of
LoggingRequest::rpc_logging and it being set to false. This led to a bug that ignored
log-disabling messages in some implementations, which meant that logging was never
disabled. This fix adds explicit fields in LoggingRequest for enabling and disabling RPC
logging.
PiperOrigin-RevId: 196782547

6 years agoTrivial message cleanup.
Martin Wicke [Wed, 16 May 2018 05:38:20 +0000 (22:38 -0700)]
Trivial message cleanup.

PiperOrigin-RevId: 196781381

6 years agoEnable gpu tests for cross_tower_ops_test
Priya Gupta [Wed, 16 May 2018 05:06:10 +0000 (22:06 -0700)]
Enable gpu tests for cross_tower_ops_test

PiperOrigin-RevId: 196779286

6 years agoRemove check for axis == 3 since if the input dimension is not 4, the input axis...
A. Unique TensorFlower [Wed, 16 May 2018 04:35:27 +0000 (21:35 -0700)]
Remove the check for axis == 3, since if the input dimension is not 4, the input axis is not necessarily 3.
Also update the test accordingly.

PiperOrigin-RevId: 196777020

6 years agoHardcode two exceptions to the list of files allowed in a 'platform'
A. Unique TensorFlower [Wed, 16 May 2018 03:14:18 +0000 (20:14 -0700)]
Hardcode two exceptions to the list of files allowed in a 'platform'

PiperOrigin-RevId: 196771621

6 years agoFix TfLite convolution handling input_batch incorrectly for 1x1 kernels,
A. Unique TensorFlower [Wed, 16 May 2018 03:11:01 +0000 (20:11 -0700)]
Fix TfLite convolution handling input_batch incorrectly for 1x1 kernels,
and improve test coverage for conv ops.

PiperOrigin-RevId: 196771421

6 years agoFix transpose_conv typo in optimized_ops. input_depth should be output_depth.
A. Unique TensorFlower [Wed, 16 May 2018 02:19:00 +0000 (19:19 -0700)]
Fix transpose_conv typo in optimized_ops. input_depth should be output_depth.

PiperOrigin-RevId: 196767951

6 years ago[tf.data] Fixing a race in map_and_batch.
Jiri Simsa [Wed, 16 May 2018 01:32:01 +0000 (18:32 -0700)]
[tf.data] Fixing a race in map_and_batch.

PiperOrigin-RevId: 196764211

6 years agoOptimize batch normalization when possible
Benoit Steiner [Wed, 16 May 2018 01:15:32 +0000 (18:15 -0700)]
Optimize batch normalization when possible

PiperOrigin-RevId: 196762618

6 years ago[TF:XLA] Remove the need for memcpy from Tensor->Literal.
Kay Zhu [Wed, 16 May 2018 00:49:43 +0000 (17:49 -0700)]
[TF:XLA] Remove the need for memcpy from Tensor->Literal.

Introducing a new LiteralOwningSlice class that is similar to LiteralSlice, but owns the root piece.

PiperOrigin-RevId: 196759785

6 years agoFixes false-positive in tf.enable_eager_execution
Alexandre Passos [Tue, 15 May 2018 23:26:43 +0000 (16:26 -0700)]
Fixes false-positive in tf.enable_eager_execution

Simply using ops which have name scopes and other things will trigger
adding a graph to the stack.

PiperOrigin-RevId: 196748657

6 years agoRemove misleading declaration-as-default that results in a deleted constructor, and...
A. Unique TensorFlower [Tue, 15 May 2018 22:46:14 +0000 (15:46 -0700)]
Remove misleading declaration-as-default that results in a deleted constructor, and a misguided comment.

PiperOrigin-RevId: 196742616

6 years agoRemove unused BUILD dependencies
A. Unique TensorFlower [Tue, 15 May 2018 22:46:03 +0000 (15:46 -0700)]
Remove unused BUILD dependencies

PiperOrigin-RevId: 196742598

6 years agoCheckpointable: Restore-on-create for name-based checkpoints when executing eagerly
Allen Lavoie [Tue, 15 May 2018 21:26:21 +0000 (14:26 -0700)]
Checkpointable: Restore-on-create for name-based checkpoints when executing eagerly

Should make loading name-based checkpoints more natural with object-based APIs when executing eagerly. Before this CL they could be loaded, but users needed to use "run_restore_ops" after all variables were created (which is less useful and confusing).

PiperOrigin-RevId: 196729311

6 years agoChange the block size used in the triangular system solve step of the Cholesky decomp...
A. Unique TensorFlower [Tue, 15 May 2018 21:24:57 +0000 (14:24 -0700)]
Change the block size used in the triangular system solve step of the Cholesky decomposition algorithm to be the same block size as the Cholesky blocks. Dramatically speeds up compilation time with negligible effect on runtime.

PiperOrigin-RevId: 196729081

6 years agoRemove a tensor_slice_reader warning when using HDF5 in Model.load_weights.
Allen Lavoie [Tue, 15 May 2018 20:57:39 +0000 (13:57 -0700)]
Remove a tensor_slice_reader warning when using HDF5 in Model.load_weights.

It may make sense to remove the logged warning entirely, but this change just adds an extra check for the filename.

Fixes #19289.

PiperOrigin-RevId: 196724395

6 years agoDon't ever use cuDNN to perform depthwise convolutions on CPU.
A. Unique TensorFlower [Tue, 15 May 2018 20:38:34 +0000 (13:38 -0700)]
Don't ever use cuDNN to perform depthwise convolutions on CPU.

PiperOrigin-RevId: 196721302

6 years agoFixes potential race in ResourceHandleOp::Compute
Alexandre Passos [Tue, 15 May 2018 20:30:13 +0000 (13:30 -0700)]
Fixes potential race in ResourceHandleOp::Compute
Fixes #19299

PiperOrigin-RevId: 196720004

6 years agoHandle delayed variable initialization in MirroredStrategy. Test with RNN layer.
Priya Gupta [Tue, 15 May 2018 20:20:01 +0000 (13:20 -0700)]
Handle delayed variable initialization in MirroredStrategy. Test with RNN layer.
Bug reported and solution suggested in #19069

PiperOrigin-RevId: 196718454

6 years agoAllows scan to work in reverse as well as forward
Alexandre Passos [Tue, 15 May 2018 20:07:49 +0000 (13:07 -0700)]
Allows scan to work in reverse as well as forward
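
A hedged usage sketch; the `reverse=True` keyword below is assumed to be the new argument that enables the reverse scan:

```python
import tensorflow as tf

elems = tf.constant([1, 2, 3, 4])
forward = tf.scan(lambda acc, x: acc + x, elems)                 # [1, 3, 6, 10]
backward = tf.scan(lambda acc, x: acc + x, elems, reverse=True)  # [10, 9, 7, 4]
```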

PiperOrigin-RevId: 196716758

6 years agoImprove documentation for tf.contrib.eager.defun.
Akshay Agrawal [Tue, 15 May 2018 19:28:33 +0000 (12:28 -0700)]
Improve documentation for tf.contrib.eager.defun.

PiperOrigin-RevId: 196711029

6 years ago[XLA] Make HloCSE compare computations
Benjamin Kramer [Tue, 15 May 2018 19:12:44 +0000 (12:12 -0700)]
[XLA] Make HloCSE compare computations

This shows up when you have two otherwise identical instructions that call a
computation, like a fusion or a reduce. Even if the called computations were
identical but not the same object, the instructions wouldn't get CSE'd. I was a bit worried about the
compile time impact of comparing full computations, but this only happens if
everything else already compares equal. The impact on compile time of
benchmarks seems to be within the noise.

PiperOrigin-RevId: 196708782

6 years agoRefactoring: Remove mutable_op_resolver module
Yu-Cheng Ling [Tue, 15 May 2018 18:25:25 +0000 (11:25 -0700)]
Refactoring: Remove mutable_op_resolver module
PiperOrigin-RevId: 196700821

6 years agoUse the new compile op and print user-friendly error message without mixing with...
Jianwei Xie [Tue, 15 May 2018 18:08:25 +0000 (11:08 -0700)]
Use the new compile op, and print a user-friendly error message, without mixing it with infeed/outfeed, if compilation fails.

PiperOrigin-RevId: 196697690

6 years agoAutomated g4 rollback of changelist 196683444
Peter Hawkins [Tue, 15 May 2018 17:33:48 +0000 (10:33 -0700)]
Automated g4 rollback of changelist 196683444

PiperOrigin-RevId: 196691101

6 years ago[XLA] Cache computations when creating reduces in algebraic simplifier or batchnorm...
Benjamin Kramer [Tue, 15 May 2018 17:23:27 +0000 (10:23 -0700)]
[XLA] Cache computations when creating reduces in algebraic simplifier or batchnorm expander

Otherwise we create a lot of identical small computations. This shouldn't have
an effect except for cluttering the HLO, but turns out HloCSE doesn't look
inside of the computation of reduces, effectively never eliminating reduces
that were produced via this code path.

While there, clean up some YAGNI: this only worked for F32 anyway, so just
hardcode it.

PiperOrigin-RevId: 196689316

6 years ago[TF:XLA] Generalize existing support for keeping variables on an XLA device in reshap...
Peter Hawkins [Tue, 15 May 2018 16:47:50 +0000 (09:47 -0700)]
[TF:XLA] Generalize existing support for keeping variables on an XLA device in reshaped form, instead allowing XLA devices to keep all tensors in a reshaped form outside an XLA computation.

PiperOrigin-RevId: 196683444

6 years agoAdding --distinct_host_configuration=false in tools/bazel.rc
A. Unique TensorFlower [Tue, 15 May 2018 15:38:49 +0000 (08:38 -0700)]
Adding --distinct_host_configuration=false in tools/bazel.rc

When building TensorFlow, the host and target platforms are usually the same. So we don't have to distinguish them by default, which avoids building the same targets twice.

If we need to do cross compilation, add --config=cross-compile to distinguish them.

PiperOrigin-RevId: 196673728

6 years agoupdate doc
A. Unique TensorFlower [Tue, 15 May 2018 15:09:04 +0000 (08:09 -0700)]
update doc

PiperOrigin-RevId: 196670274

6 years agoSmall polishing changes in stream executor, no functional changes.
A. Unique TensorFlower [Tue, 15 May 2018 14:28:24 +0000 (07:28 -0700)]
Small polishing changes in stream executor, no functional changes.

PiperOrigin-RevId: 196665609

6 years agointernal change
A. Unique TensorFlower [Tue, 15 May 2018 09:48:07 +0000 (02:48 -0700)]
internal change

PiperOrigin-RevId: 196640024

6 years agoReland improve fusion logic of (a dot b) * alpha
A. Unique TensorFlower [Tue, 15 May 2018 08:22:13 +0000 (01:22 -0700)]
Reland improve fusion logic of (a dot b) * alpha

The previous fusion approach didn't work because a multiplication by a scalar value
is changed into an explicit broadcast.
Another issue fixed in this CL is retrieving the constant value from
the literal; this depends on the PrimitiveType, whereas before we always assumed it to be double.
Also, when checking ImplementedAsGemm() we should not call it recursively, but instead perform just the check related to kDot.
Finally add an execution test and adjust the fusion logic test.

The fix for the issue that caused the revert is that we check earlier that consumer->operand_count() is 2.
Also, we fix the call to Get() to pass {} instead of {0}.
And we handle an output fusion node in GemmThunk to extract the dimension numbers from the dot operation.

PiperOrigin-RevId: 196631031

6 years ago[TF:XLA] Scheduling test which demonstrates that we are ignoring the memory needed...
Dimitris Vardoulakis [Tue, 15 May 2018 05:28:06 +0000 (22:28 -0700)]
[TF:XLA] Scheduling test which demonstrates that we are ignoring the memory needed by subcomputations.

PiperOrigin-RevId: 196618347

6 years agoAdded type check to feature column keys. So that users will get meaningful error...
Mustafa Ispir [Tue, 15 May 2018 05:04:50 +0000 (22:04 -0700)]
Added a type check to feature column keys, so that users get meaningful error messages in situations like #19219.

PiperOrigin-RevId: 196616638

6 years agoPartial update of tf.keras to the Keras 2.1.6 API.
Pavithra Vijay [Tue, 15 May 2018 04:43:55 +0000 (21:43 -0700)]
Partial update of tf.keras to the Keras 2.1.6 API.

Changes included are:
- Fix `batch_dot` when `axes=None`
- Add axis=-1 as an argument to keras.backend.softmax (see the sketch below)
- Fix ctc_batch_cost() error when batch_size = 1
- Print previous best in ModelCheckpoint callback
- Fix ReduceLROnPlateau callback
- Extend RemoteMonitor to send data as application/json
- Fix default dilation rate value in 2D separable conv.
- Fix for MobileNet model with undefined shape
- Disable require_flatten in nasnet & Add an error message for undefined shape.
- Improve tests by designating dtype of sample data
- Multi_gpu_model supporting legacy/fullCPU/fullGPU
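
A hedged sketch of the new `axis` argument on keras.backend.softmax from the list above; the input values are illustrative:

```python
import tensorflow as tf

K = tf.keras.backend
x = tf.constant([[1.0, 2.0, 3.0],
                 [1.0, 2.0, 3.0]])
probs = K.softmax(x, axis=-1)  # softmax over the last dimension
```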

PiperOrigin-RevId: 196615376

6 years ago Function should inherit device information from the caller site.
Youlong Cheng [Tue, 15 May 2018 04:28:44 +0000 (21:28 -0700)]
Functions should inherit device information from the call site.

PiperOrigin-RevId: 196614376

6 years agoUpdate SCALED mode to use the full quantized range of -128..127 when possible.
A. Unique TensorFlower [Tue, 15 May 2018 02:38:37 +0000 (19:38 -0700)]
Update SCALED mode to use the full quantized range of -128..127 when possible.

PiperOrigin-RevId: 196606455

6 years ago[XLA] Move more comparison functions to non-test library.
Chris Leary [Tue, 15 May 2018 02:21:10 +0000 (19:21 -0700)]
[XLA] Move more comparison functions to non-test library.

PiperOrigin-RevId: 196605347

6 years agoMove model_to_estimator utility into Estimator from Keras.
Michael Case [Tue, 15 May 2018 02:15:51 +0000 (19:15 -0700)]
Move model_to_estimator utility into Estimator from Keras.

Working on untangling TF/Estimator deps. We would like to get to a state
where Estimator depends on Keras and not vice versa.

PiperOrigin-RevId: 196605024

6 years agoFix a bug in HloInstruction::ImplicitlyBroadcastsOperand where operands with the...
Yunxing Dai [Tue, 15 May 2018 01:55:20 +0000 (18:55 -0700)]
Fix a bug in HloInstruction::ImplicitlyBroadcastsOperand where operands with the same dimensions but different element types were not considered broadcasts.

PiperOrigin-RevId: 196603348

6 years agoAdds CsvDataset, which both reads and parses files.
Rachel Lim [Tue, 15 May 2018 01:30:49 +0000 (18:30 -0700)]
Adds CsvDataset, which both reads and parses files.
Example usage: dataset = tf.contrib.data.CsvDataset(filenames, record_defaults=record_defaults, **kwargs)
Motivation: Fusing reading and parsing is more performant and correct than the previous canonical CSV parsing flow (`dataset = tf.data.TextLineDataset(filenames).map(lambda l: tf.decode_csv(l, **kwargs))`)
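
The two flows from the motivation above, written out; the file name and column defaults are placeholders:

```python
import tensorflow as tf

filenames = ["data.csv"]                                  # placeholder input file
record_defaults = [tf.constant([0]), tf.constant([""])]   # placeholder column defaults

# Previous canonical flow: read lines, then parse each line.
parsed = tf.data.TextLineDataset(filenames).map(
    lambda l: tf.decode_csv(l, record_defaults=record_defaults))

# New fused reader/parser.
fused = tf.contrib.data.CsvDataset(filenames, record_defaults=record_defaults)
```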

Closes #19077.

PiperOrigin-RevId: 196601381

6 years agoDisable LinearOperatorKroneckerTest.test_solve_{with_broadcast} temporarily.
A. Unique TensorFlower [Tue, 15 May 2018 01:29:59 +0000 (18:29 -0700)]
Disable LinearOperatorKroneckerTest.test_solve_{with_broadcast} temporarily.

PiperOrigin-RevId: 196601310

6 years ago[tf.data] Add optional `args` argument to `Dataset.from_generator()`.
Derek Murray [Tue, 15 May 2018 01:04:31 +0000 (18:04 -0700)]
[tf.data] Add optional `args` argument to `Dataset.from_generator()`.

The new argument allows you to parameterize the generator with the value of a tf.Tensor,
enabling `Dataset.from_generator()` to be initialized from a placeholder or used in a
nested expression (such as `flat_map()` or `parallel_interleave()`). For example:

```python
def generator(n):
  for _ in range(n):
    yield n

# Define a generator based on a placeholder.
placeholder = tf.placeholder(tf.int64, shape=[])
dataset = tf.data.Dataset.from_generator(generator, tf.int64, args=(placeholder,))

# Define a generator based on the value of a nested dataset element.
dataset = tf.data.Dataset.range(10).flat_map(
    lambda i: tf.data.Dataset.from_generator(generator, tf.int64, args=(i,)))
```

Fixes #19269. Partially addresses issue #13101.

PiperOrigin-RevId: 196598650

6 years agoIntroduce LossScalingOptimizer for mixed precision training.
James Qin [Tue, 15 May 2018 00:51:11 +0000 (17:51 -0700)]
Introduce LossScalingOptimizer for mixed precision training.

PiperOrigin-RevId: 196597196

6 years agoAdd an option to execute eval on cpu, regardless of training runs on TPU.
A. Unique TensorFlower [Mon, 14 May 2018 23:39:11 +0000 (16:39 -0700)]
Add an option to execute eval on CPU, regardless of whether training runs on TPU.
This lets users benefit from TPU training while avoiding the need to port complex
eval metric functions to TPU.

PiperOrigin-RevId: 196587755

6 years agoRefactoring: Make OpResolver return const pointer.
Yu-Cheng Ling [Mon, 14 May 2018 23:35:22 +0000 (16:35 -0700)]
Refactoring: Make OpResolver return const pointer.
PiperOrigin-RevId: 196587227

6 years agoAutomated g4 rollback of changelist 196565296
Pavithra Vijay [Mon, 14 May 2018 23:30:49 +0000 (16:30 -0700)]
Automated g4 rollback of changelist 196565296

PiperOrigin-RevId: 196586601

6 years agoClangTidy - Readability cleanup:/code-findings-fixes.
A. Unique TensorFlower [Mon, 14 May 2018 23:26:33 +0000 (16:26 -0700)]
ClangTidy - Readability cleanup (code-findings fixes):

* unused using-declarations
* redundant string conversions
* C-style casts
* redundant get() call on smart pointer
* the 'empty' method should be used to check for emptiness instead of 'size'

PiperOrigin-RevId: 196585984

6 years agoMake sure that variables aren't created as partition variables since only non-scalar...
Suharsh Sivakumar [Mon, 14 May 2018 23:17:46 +0000 (16:17 -0700)]
Make sure that variables aren't created as partition variables since only non-scalar partition variables are supported.

PiperOrigin-RevId: 196584749

6 years agoFix bug where custom layers could crash.
Reed Wanderman-Milne [Mon, 14 May 2018 23:07:33 +0000 (16:07 -0700)]
Fix bug where custom layers could crash.

Layer.add_weight would crash when called without a dtype or initializer.

PiperOrigin-RevId: 196583182

6 years agoFix functional.While(), functional.For(rewrite_with_while)
James Qin [Mon, 14 May 2018 22:51:36 +0000 (15:51 -0700)]
Fix functional.While(), functional.For(rewrite_with_while)

When executing on GPU, synchronously copy cond result from device to host.

PiperOrigin-RevId: 196580820

6 years agoGo: Update generated wrapper functions for TensorFlow ops.
A. Unique TensorFlower [Mon, 14 May 2018 22:50:06 +0000 (15:50 -0700)]
Go: Update generated wrapper functions for TensorFlow ops.
PiperOrigin-RevId: 196580619

6 years agoAdd ExplicitShapes as a new shape inference function for Ops with
A. Unique TensorFlower [Mon, 14 May 2018 22:45:33 +0000 (15:45 -0700)]
Add ExplicitShapes as a new shape inference function for Ops with
multiple outputs, each of which is explicitly declared.

PiperOrigin-RevId: 196579920

6 years agoRemove CuDNNRNN timing test.
Pavithra Vijay [Mon, 14 May 2018 22:32:44 +0000 (15:32 -0700)]
Remove CuDNNRNN timing test.

PiperOrigin-RevId: 196578043

6 years agoFix copy functions of MutableOpResolver
Yu-Cheng Ling [Mon, 14 May 2018 22:27:51 +0000 (15:27 -0700)]
Fix copy functions of MutableOpResolver

PiperOrigin-RevId: 196577314

6 years agoUsed aligned allocation for vector cache.
Shashi Shekhar [Mon, 14 May 2018 22:22:04 +0000 (15:22 -0700)]
Used aligned allocation for vector cache.

PiperOrigin-RevId: 196576497

6 years agoUpdate ops-related pbtxt files.
A. Unique TensorFlower [Mon, 14 May 2018 22:20:07 +0000 (15:20 -0700)]
Update ops-related pbtxt files.

PiperOrigin-RevId: 196576189

6 years agoInternal change.
A. Unique TensorFlower [Mon, 14 May 2018 22:15:38 +0000 (15:15 -0700)]
Internal change.

PiperOrigin-RevId: 196575483

6 years agoRefactoring: Remove lite/tools:mutable_op_resolver dependency.
Yu-Cheng Ling [Mon, 14 May 2018 22:14:53 +0000 (15:14 -0700)]
Refactoring: Remove lite/tools:mutable_op_resolver dependency.
PiperOrigin-RevId: 196575387

6 years agoStricter analysis for functional conditional generation
A. Unique TensorFlower [Mon, 14 May 2018 22:06:17 +0000 (15:06 -0700)]
Stricter analysis for functional conditional generation

PiperOrigin-RevId: 196573938

6 years agoDo shape validation in ScatterNd kernel, not just the shape inference function.
Alexandre Passos [Mon, 14 May 2018 21:58:17 +0000 (14:58 -0700)]
Do shape validation in ScatterNd kernel, not just the shape inference function.

Fixes #18648
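
A hedged sketch of the kind of case this targets; the shapes, values, and exact failure mode are assumptions:

```python
import tensorflow as tf

indices = tf.placeholder(tf.int32, shape=[None, 1])
updates = tf.placeholder(tf.float32, shape=[None])
scattered = tf.scatter_nd(indices, updates, shape=[4])

with tf.Session() as sess:
  # The leading dimensions of indices and updates are only known at run time,
  # so shape inference alone cannot reject a mismatch; with this change the
  # kernel validates them, and the mismatched feed below fails cleanly.
  sess.run(scattered, feed_dict={indices: [[0], [1], [2]],
                                 updates: [1.0, 2.0]})
```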

PiperOrigin-RevId: 196572262

6 years agoFail gracefully with a helpful error message when provided with conflicting
Asim Shankar [Mon, 14 May 2018 21:56:36 +0000 (14:56 -0700)]
Fail gracefully with a helpful error message when provided with conflicting
visible_device_list.

See #19083
See #18861

More generally, this change avoids assertion failures (that will bring the
whole process down) on a few code paths that can be triggered by user input.

PiperOrigin-RevId: 196572013

6 years agoInternal change.
A. Unique TensorFlower [Mon, 14 May 2018 21:49:08 +0000 (14:49 -0700)]
Internal change.

PiperOrigin-RevId: 196570742

6 years agoMake CollectiveParamReducerLocal::InitInstanceSharedParams non-blocking.
A. Unique TensorFlower [Mon, 14 May 2018 21:44:31 +0000 (14:44 -0700)]
Make CollectiveParamReducerLocal::InitInstanceSharedParams non-blocking.

PiperOrigin-RevId: 196570011

6 years agoAutomated g4 rollback of changelist 196456687
Gunhan Gulsoy [Mon, 14 May 2018 21:32:15 +0000 (14:32 -0700)]
Automated g4 rollback of changelist 196456687

PiperOrigin-RevId: 196567964

6 years agoAdd score filtering to tf.image.non_max_suppression.
A. Unique TensorFlower [Mon, 14 May 2018 21:32:03 +0000 (14:32 -0700)]
Add score filtering to tf.image.non_max_suppression.
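
A hedged usage sketch; `score_threshold` is assumed here to be the new score-filtering argument (it is not named in this log entry):

```python
import tensorflow as tf

boxes = tf.constant([[0.0, 0.0, 1.0, 1.0],
                     [0.1, 0.1, 0.9, 0.9]])
scores = tf.constant([0.9, 0.4])
selected = tf.image.non_max_suppression(
    boxes, scores, max_output_size=2,
    iou_threshold=0.5,
    score_threshold=0.5)  # assumed: boxes scoring below 0.5 are dropped up front
```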

PiperOrigin-RevId: 196567928

6 years agoUpdate the eager programmer's guide to reflect the fact that "==" is not
Akshay Agrawal [Mon, 14 May 2018 21:25:55 +0000 (14:25 -0700)]
Update the eager programmer's guide to reflect the fact that "==" is not
implemented in the natural way for the Tensor class.

PiperOrigin-RevId: 196566940

6 years agoReverseDFS scheduler reverses the heuristics used in DFSScheduler.
Yunxing Dai [Mon, 14 May 2018 21:18:11 +0000 (14:18 -0700)]
ReverseDFS scheduler reverses the heuristics used in DFSScheduler.

Also fixes hlo_schedule_test to remove the expected order on unrelated operations.

PiperOrigin-RevId: 196565651

6 years agoDisable flaky cudnn_recurrent test
Gunhan Gulsoy [Mon, 14 May 2018 21:16:09 +0000 (14:16 -0700)]
Disable flaky cudnn_recurrent test

PiperOrigin-RevId: 196565296

6 years agoReenable virtual gpu test, and decrease the number of testing rounds.
Guangda Lai [Mon, 14 May 2018 21:15:14 +0000 (14:15 -0700)]
Reenable virtual gpu test, and decrease the number of testing rounds.

PiperOrigin-RevId: 196565153

6 years ago[XLA] Ergonomic improvements to --xla_hlo_profile.
Justin Lebar [Mon, 14 May 2018 21:09:01 +0000 (14:09 -0700)]
[XLA] Ergonomic improvements to --xla_hlo_profile.

- Don't display ops with 0 optimal seconds and 0 actual cycles.  These
  are ops that were expected to be free and were actually free.

- Fix HloCostAnalysis to mark parameters, constants, and
  get-tuple-element as expected-to-be-free per the definition above.

- Allow optimal-seconds < 0 to indicate "I don't know".  Use this for
  custom calls, and then hide such ops from the "seconds above the
  optimum" table.

- Don't display "<none>" and "<unknown>" -- instead, just display the
  empty string.  Less visual noise.

- Instead of showing ~5 ops per category in the categories tables, show
  everything.  This isn't so noisy now that we're hiding "free" ops, and
  it makes finding optimization opportunities much easier.

PiperOrigin-RevId: 196564177

6 years agoAdd If op rewriter.
Jacques Pienaar [Mon, 14 May 2018 21:04:05 +0000 (14:04 -0700)]
Add If op rewriter.

* Add attribute to If op to indicate if lowering to switch-merge form is
  needed;
* Add initial version of the If op rewriter that transforms an If op into
  switch/merge nodes (as would have been constructed via tf.cond) if the If op
  has the lowering attribute set.
  - The pass is not ready for general use and, for example, does not support
    reference data types.

PiperOrigin-RevId: 196563421

6 years agoReserves 'context' key in TPUEstimator params dict.
Jianwei Xie [Mon, 14 May 2018 20:53:00 +0000 (13:53 -0700)]
Reserves 'context' key in TPUEstimator params dict.

PiperOrigin-RevId: 196561620

6 years agoAdd CheckpointInputPipelineHook to the API docs.
Saurabh Saxena [Mon, 14 May 2018 20:44:52 +0000 (13:44 -0700)]
Add CheckpointInputPipelineHook to the API docs.

PiperOrigin-RevId: 196560221

6 years agoAdded support for strided slicing of symbolic shapes
Benoit Steiner [Mon, 14 May 2018 20:33:46 +0000 (13:33 -0700)]
Added support for strided slicing of symbolic shapes

PiperOrigin-RevId: 196558466

6 years agoResolve inlined function input/output types from GrapplerFunctionItem.
A. Unique TensorFlower [Mon, 14 May 2018 20:30:53 +0000 (13:30 -0700)]
Resolve inlined function input/output types from GrapplerFunctionItem.

Remove duplicated code to resolve type from attributes.

PiperOrigin-RevId: 196558061

6 years agoUpdated speech commands example to use new dataset
Pete Warden [Mon, 14 May 2018 20:24:58 +0000 (13:24 -0700)]
Updated speech commands example to use new dataset

PiperOrigin-RevId: 196557132

6 years agoVarious ClangTidy-inspired fixes.
A. Unique TensorFlower [Mon, 14 May 2018 20:22:09 +0000 (13:22 -0700)]
Various ClangTidy-inspired fixes.

PiperOrigin-RevId: 196556727

6 years ago add memory utilization estimate for HLO op profile.
A. Unique TensorFlower [Mon, 14 May 2018 20:00:26 +0000 (13:00 -0700)]
Add memory utilization estimate for HLO op profile.

PiperOrigin-RevId: 196553696

6 years agoDeletes an unused private method in head.py
A. Unique TensorFlower [Mon, 14 May 2018 19:03:50 +0000 (12:03 -0700)]
Deletes an unused private method in head.py

PiperOrigin-RevId: 196545696

6 years agoDon't check that bool arrays are quantized.
A. Unique TensorFlower [Mon, 14 May 2018 18:40:50 +0000 (11:40 -0700)]
Don't check that bool arrays are quantized.

PiperOrigin-RevId: 196541955

6 years agoUse utility methods to compute AttrValue hash code and check for equality.
A. Unique TensorFlower [Mon, 14 May 2018 17:43:08 +0000 (10:43 -0700)]
Use utility methods to compute AttrValue hash code and check for equality.

PiperOrigin-RevId: 196531355

6 years agoavoid having stream_executor depend on tensorflow/core
A. Unique TensorFlower [Mon, 14 May 2018 16:51:52 +0000 (09:51 -0700)]
avoid having stream_executor depend on tensorflow/core

PiperOrigin-RevId: 196521381

6 years agoExtracts the following optimizations into methods:
A. Unique TensorFlower [Mon, 14 May 2018 16:45:42 +0000 (09:45 -0700)]
Extracts the following optimizations into methods:

PartialConstPropThroughIdentityN
ConstantPushDown

PiperOrigin-RevId: 196520167

6 years agoPre-factoring: Fix overly specific test expectations to prepare for multi-output...
A. Unique TensorFlower [Mon, 14 May 2018 16:06:25 +0000 (09:06 -0700)]
Pre-factoring: Fix overly specific test expectations to prepare for multi-output fusion.
PiperOrigin-RevId: 196514026

6 years agoPrevent removal of constant inputs to passthrough ops.
A. Unique TensorFlower [Mon, 14 May 2018 14:53:04 +0000 (07:53 -0700)]
Prevent removal of constant inputs to passthrough ops.

PiperOrigin-RevId: 196505061