platform/upstream/tensorflow.git
6 years agoInternal change.
Anna R [Thu, 29 Mar 2018 11:34:29 +0000 (04:34 -0700)]
Internal change.

PiperOrigin-RevId: 190913047

6 years agoMove the swapping kernels to the all_kernels library to avoid registering them
Benoit Steiner [Thu, 29 Mar 2018 06:31:26 +0000 (23:31 -0700)]
Move the swapping kernels to the all_kernels library to avoid registering them
more than once from tensorflow/contrib.

PiperOrigin-RevId: 190887394

6 years agoAdd --announce_rc Bazel arg to several of our builds.
Michael Case [Thu, 29 Mar 2018 05:46:25 +0000 (22:46 -0700)]
Add --announce_rc Bazel arg to several of our builds.

This will help to...
 - Refactor the build scripts without accidentally adding functional changes.
 - Debug several issues where some options aren't being added correctly
   by the configure script.

PiperOrigin-RevId: 190884531

6 years agoDistributionStrategy-enable Estimator.
A. Unique TensorFlower [Thu, 29 Mar 2018 04:52:30 +0000 (21:52 -0700)]
DistributionStrategy-enable Estimator.

PiperOrigin-RevId: 190882152

6 years agoFix TensorList decoding bug. Thanks to Alexandre Passos for finding this.
A. Unique TensorFlower [Thu, 29 Mar 2018 04:11:16 +0000 (21:11 -0700)]
Fix TensorList decoding bug. Thanks to Alexandre Passos for finding this.

PiperOrigin-RevId: 190879840

6 years agoFixed the shape function of the SplitV op that incorrectly often assumed that
Benoit Steiner [Thu, 29 Mar 2018 04:07:02 +0000 (21:07 -0700)]
Fixed the shape function of the SplitV op, which often incorrectly assumed
that all of the outputs have the same shape.

PiperOrigin-RevId: 190879600

6 years agoSupport structured source in GradientTape.gradient
Igor Ganichev [Thu, 29 Mar 2018 03:51:01 +0000 (20:51 -0700)]
Support structured source in GradientTape.gradient

Before this change, it was easy to forget [] around the source tensor.
This mistake led to GradientTape.gradient() returning a list of Nones.
Nones normally tell the user that the source and the target are
not connected via differentiable operations, which is not the cause
of the error in this case.

Instead of adding a check that `sources` is a list of tensors, this CL
adds the ability to handle a structured source (which includes a lone tensor),
similarly to many existing TensorFlow APIs.

Also, with Alex's help, it fixes a bug where repeated tensors in
`sources` were not handled correctly.
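
A minimal sketch of the resulting behavior (assumes TF 1.x eager execution; `tf.enable_eager_execution` and the `persistent=True` tape option are used here only for illustration):

```python
import tensorflow as tf

tf.enable_eager_execution()

x = tf.constant(3.0)
y = tf.constant(2.0)

with tf.GradientTape(persistent=True) as tape:
  tape.watch(x)
  tape.watch(y)
  z = x * x * y

# A lone tensor source now works without wrapping it in [] ...
dz_dx = tape.gradient(z, x)
# ... and a structured source returns gradients with the same structure.
dz_dx2, dz_dy = tape.gradient(z, [x, y])
print(dz_dx.numpy(), dz_dx2.numpy(), dz_dy.numpy())  # 12.0 12.0 9.0
```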

PiperOrigin-RevId: 190878583

6 years agoRemove all_opensource_files. It's not needed any more.
Martin Wicke [Thu, 29 Mar 2018 03:46:14 +0000 (20:46 -0700)]
Remove all_opensource_files. It's not needed any more.

PiperOrigin-RevId: 190878279

6 years ago[tf.data] Maintain a reference on the FunctionBufferingResource while a get-next...
Derek Murray [Thu, 29 Mar 2018 03:44:51 +0000 (20:44 -0700)]
[tf.data] Maintain a reference on the FunctionBufferingResource while a get-next operation is active.

Previously, the reference count on a FunctionBufferingResource could drop to 0 and it could be deleted (e.g. by a DestroyResourceOp) while a get-next operation is active on it. This would lead to use-after-free errors.

PiperOrigin-RevId: 190878208

6 years agoRelax limitations on rerouting graph outputs.
Suharsh Sivakumar [Thu, 29 Mar 2018 02:21:08 +0000 (19:21 -0700)]
Relax limitations on rerouting graph outputs.

- Allow multiple outputs of output_tensors in fold_batch_norms.
- Allow duplicate consumers in quantize.
- Also includes a quick fix for an issue with matching final layers that have batch norm.

PiperOrigin-RevId: 190873003

6 years ago[XLA] Redesign: implement GetComputationStats.
A. Unique TensorFlower [Thu, 29 Mar 2018 01:54:09 +0000 (18:54 -0700)]
[XLA] Redesign: implement GetComputationStats.

PiperOrigin-RevId: 190871262

6 years agoRelax CuDNN version requirements because CuDNN is backwards compatible within
Smit Hinsu [Thu, 29 Mar 2018 01:26:46 +0000 (18:26 -0700)]
Relax CuDNN version requirements because CuDNN is backwards compatible within
a major release starting with CuDNN 7.0

PiperOrigin-RevId: 190869028

6 years agoImplement assert_same_structure in C++
Igor Ganichev [Thu, 29 Mar 2018 01:26:30 +0000 (18:26 -0700)]
Implement assert_same_structure in C++

Also implements the helper functions nest._is_namedtuple and
nest._same_namedtuple.

Also, fixes a bug in FlattenHelper where errors from recursive
calls were not propagated up immediately.

This change implements a good chunk of machinery that will
allow us to move map_structure to C++.

Before:
entry {
  name: "NestBenchmark.assert_same_structure_6_elem"
  iters: 30000
  wall_time: 4.79532718658e-05
}

entry {
  name: "NestBenchmark.assert_same_structure_60_elem"
  iters: 30000
  wall_time: 0.000403008667628
}

After:
entry {
  name: "NestBenchmark.assert_same_structure_6_elem"
  iters: 30000
  wall_time: 1.65301720301e-05
}

entry {
  name: "NestBenchmark.assert_same_structure_60_elem"
  iters: 30000
  wall_time: 0.000147621099154
}
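
For reference, a small sketch of the Python-level behavior these helpers back (using the internal `tensorflow.python.util.nest` module, where they live; the example values are illustrative):

```python
from tensorflow.python.util import nest

a = {'x': 1, 'y': (2, 3)}
b = {'x': 4, 'y': (5, 6)}
c = {'x': 7, 'y': [8, 9]}  # a list where `a` has a tuple

nest.assert_same_structure(a, b)    # passes: same keys, same nesting
try:
  nest.assert_same_structure(a, c)  # fails: tuple vs. list
except (ValueError, TypeError) as e:
  print('structures differ:', e)
```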

PiperOrigin-RevId: 190869007

6 years agoTPU: Implement 3rd gen input pipeline config.
Brennan Saeta [Thu, 29 Mar 2018 00:54:01 +0000 (17:54 -0700)]
TPU: Implement 3rd gen input pipeline config.

In this new configuration, we are able to drive a Cloud TPU at full device performance, and achieve over 3k images/sec on ResNet-50. The previous bottleneck was the un-pipeline-able split that occurred after the iterator.get_next() call. This split (when not splitting on the batch-major dimension) caused the training job to be single-threaded-CPU-bottlenecked, resulting in a performance of only ~2650 images/sec on ResNet-50.

This latest input pipeline configuration requires the use of datasets. By requiring datasets, we gain the ability to call get_next() num_replicas times per host, and avoid the expensive split op. (Note: this also opens up potential future avenues for further optimization.) Despite this, we retain a lot of nice usability properties that per_host_v1 (aka input pipeline config v2) gave us.

PiperOrigin-RevId: 190865741

6 years agoFurther speed up statistical_testing_test by breaking up DKWM test.
A. Unique TensorFlower [Thu, 29 Mar 2018 00:36:30 +0000 (17:36 -0700)]
Further speed up statistical_testing_test by breaking up DKWM test.

PiperOrigin-RevId: 190863893

6 years agoMissed ScopedUnref in ResourceGather
Alexandre Passos [Thu, 29 Mar 2018 00:16:10 +0000 (17:16 -0700)]
Missed ScopedUnref in ResourceGather

PiperOrigin-RevId: 190861558

6 years agoCollective Ops Part 1
A. Unique TensorFlower [Thu, 29 Mar 2018 00:06:44 +0000 (17:06 -0700)]
Collective Ops Part 1

The basic interface definitions, local-only versions of remote-access,
param-resolution, device-resolution and mgr.

A collective op is able to execute synchronously across devices
and across separate graphs. Collective ops to be introduced eventually
include broadcast and all-reduce.  This change is part of a series of
changes that will introduce the necessary infrastructure and then the
initial op implementations.

PiperOrigin-RevId: 190860248

6 years agoAutomated g4 rollback of changelist 190835392
Anna R [Wed, 28 Mar 2018 23:52:39 +0000 (16:52 -0700)]
Automated g4 rollback of changelist 190835392

PiperOrigin-RevId: 190858242

6 years agoTower-local variable support for DistributionStrategy. Each tower has
A. Unique TensorFlower [Wed, 28 Mar 2018 23:16:48 +0000 (16:16 -0700)]
Tower-local variable support for DistributionStrategy. Each tower has
its own variable, but fetch() and checkpointing apply a reduction to get
a single value.

PiperOrigin-RevId: 190853123

6 years agoAdd IsSquare bool to the grappler op_types.
A. Unique TensorFlower [Wed, 28 Mar 2018 23:12:51 +0000 (16:12 -0700)]
Add IsSquare bool to the grappler op_types.

PiperOrigin-RevId: 190852501

6 years ago[tf.data] Expose the symbol `tf.contrib.data.make_csv_dataset()`.
Derek Murray [Wed, 28 Mar 2018 22:54:31 +0000 (15:54 -0700)]
[tf.data] Expose the symbol `tf.contrib.data.make_csv_dataset()`.
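
A hedged usage sketch (the file pattern is a placeholder, and everything beyond the first two arguments is an assumption based on the documented behavior of later versions of this function):

```python
import tensorflow as tf

# "train*.csv" is a hypothetical file pattern; column names are taken
# from the CSV header by default.
dataset = tf.contrib.data.make_csv_dataset('train*.csv', batch_size=32)
features = dataset.make_one_shot_iterator().get_next()  # dict of column tensors
```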

PiperOrigin-RevId: 190849333

6 years agoRefresh Community pages to surface new resources, SIGs and mailing lists.
A. Unique TensorFlower [Wed, 28 Mar 2018 22:31:19 +0000 (15:31 -0700)]
Refresh Community pages to surface new resources, SIGs and mailing lists.

PiperOrigin-RevId: 190845545

6 years agoAutomated g4 rollback of changelist 190801044
A. Unique TensorFlower [Wed, 28 Mar 2018 21:59:53 +0000 (14:59 -0700)]
Automated g4 rollback of changelist 190801044

PiperOrigin-RevId: 190839672

6 years agoAdd DistributionStrategy support to Optimizer.
A. Unique TensorFlower [Wed, 28 Mar 2018 21:52:25 +0000 (14:52 -0700)]
Add DistributionStrategy support to Optimizer.

PiperOrigin-RevId: 190838314

6 years agoInternal change
A. Unique TensorFlower [Wed, 28 Mar 2018 21:48:50 +0000 (14:48 -0700)]
Internal change

PiperOrigin-RevId: 190837707

6 years agoUse high precision to compute softmax_cross_entropy_with_logits.
A. Unique TensorFlower [Wed, 28 Mar 2018 21:47:00 +0000 (14:47 -0700)]
Use high precision to compute softmax_cross_entropy_with_logits.
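
A small illustration of the kind of case this matters for (assuming the motivation is low-precision inputs; the op shown is the long-standing public API, and the dtypes are illustrative):

```python
import tensorflow as tf

logits = tf.random_normal([8, 10], dtype=tf.float16)
labels = tf.one_hot(tf.random_uniform([8], maxval=10, dtype=tf.int32),
                    depth=10, dtype=tf.float16)
# Half-precision inputs: per this change, the internal computation is done
# at higher precision before the result is cast back.
loss = tf.nn.softmax_cross_entropy_with_logits_v2(labels=labels, logits=logits)
```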

PiperOrigin-RevId: 190837379

6 years agoInternal change.
Anna R [Wed, 28 Mar 2018 21:42:57 +0000 (14:42 -0700)]
Internal change.

PiperOrigin-RevId: 190836675

6 years agoMerge changes from github.
Jianwei Xie [Wed, 28 Mar 2018 21:36:18 +0000 (14:36 -0700)]
Merge changes from github.

PiperOrigin-RevId: 190835392

6 years ago[tf.data] Fix reference leak in FunctionBufferingResource.
Derek Murray [Wed, 28 Mar 2018 21:30:39 +0000 (14:30 -0700)]
[tf.data] Fix reference leak in FunctionBufferingResource.

Previously, the FunctionBufferingResource's destructor would never be called, which led to use-after-free (of the underlying Device object) errors in the prefetching function.

PiperOrigin-RevId: 190834415

6 years ago[tf.data] Autotune prefetch buffer sizes
Brennan Saeta [Wed, 28 Mar 2018 21:04:01 +0000 (14:04 -0700)]
[tf.data] Autotune prefetch buffer sizes

In order to make it easier for tf.data users to achieve
high performance with their input pipelines, this change adds
the ability for the prefetch op to automatically tune its buffer
size.

To use the auto-tuning configuration of the `prefetch` transformation,
simply skip passing in a buffer size. Example:

```python
import tensorflow as tf

dataset = tf.data.Dataset.range(100)  # ...any input pipeline will do
dataset = dataset.prefetch()  # Look ma, no buffer value req'd!
```

PiperOrigin-RevId: 190829736

6 years agoMake sure tensor size match before inspecting their content.
A. Unique TensorFlower [Wed, 28 Mar 2018 20:27:07 +0000 (13:27 -0700)]
Make sure tensor sizes match before inspecting their contents.

PiperOrigin-RevId: 190823557

6 years ago[XLA] Redesign: add the rest of client-service interfaces.
A. Unique TensorFlower [Wed, 28 Mar 2018 20:21:05 +0000 (13:21 -0700)]
[XLA] Redesign: add the rest of client-service interfaces.

The basic idea is, on the client side, for each public method that has a Computation parameter, to add an overload that takes an XlaComputation. If such a method needs to call the service side, add the corresponding service interfaces.

Also make XlaComputation::GetProgramShape return StatusOr, to be consistent with the Computation class.

PiperOrigin-RevId: 190822601

6 years agoAdd comment that explicitly states that InitTableIterator is Thread-unsafe.
A. Unique TensorFlower [Wed, 28 Mar 2018 20:13:20 +0000 (13:13 -0700)]
Add comment that explicitly states that InitTableIterator is Thread-unsafe.

PiperOrigin-RevId: 190821427

6 years agoAdd op cost model for MaxPool, AvgPool, FusedBatchNorm, their grad ops, and
A. Unique TensorFlower [Wed, 28 Mar 2018 20:11:12 +0000 (13:11 -0700)]
Add op cost model for MaxPool, AvgPool, FusedBatchNorm, their grad ops, and
ReluGrad.

PiperOrigin-RevId: 190821116

6 years agoSupports quantized reduce_mean in TF Lite.
A. Unique TensorFlower [Wed, 28 Mar 2018 19:29:32 +0000 (12:29 -0700)]
Supports quantized reduce_mean in TF Lite.

PiperOrigin-RevId: 190813997

6 years agoUpgrade gRPC version used in OSS Tensorflow
Noah Eisen [Wed, 28 Mar 2018 19:28:32 +0000 (12:28 -0700)]
Upgrade gRPC version used in OSS Tensorflow

PiperOrigin-RevId: 190813848

6 years agoWhen importing meta graphs under name scopes,
A. Unique TensorFlower [Wed, 28 Mar 2018 19:19:27 +0000 (12:19 -0700)]
When importing meta graphs under name scopes,
the names of the created ops are prefixed with the scope.
Since the saver_def of the meta graph does not contain this information,
we need to pass it explicitly to Saver.
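
A hedged sketch of the scenario ("model.ckpt*" and the "imported" scope are hypothetical placeholders):

```python
import tensorflow as tf

with tf.Graph().as_default():
  # Importing under a name scope prefixes every restored op with "imported/".
  # The returned Saver has to be told about that prefix, because the
  # saver_def stored inside the meta graph does not record it.
  saver = tf.train.import_meta_graph('model.ckpt.meta',
                                     import_scope='imported')
  with tf.Session() as sess:
    saver.restore(sess, 'model.ckpt')
```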

PiperOrigin-RevId: 190812434

6 years ago[XLA] Redesign: add the rest of the XlaBuilder public methods.
A. Unique TensorFlower [Wed, 28 Mar 2018 19:18:24 +0000 (12:18 -0700)]
[XLA] Redesign: add the rest of the XlaBuilder public methods.

PiperOrigin-RevId: 190812260

6 years agoDon't access properties in case they're not present
Benoit Steiner [Wed, 28 Mar 2018 19:16:51 +0000 (12:16 -0700)]
Don't access properties in case they're not present

PiperOrigin-RevId: 190811935

6 years agoAdd some VLOGs to make it easier to see why things don't go through the fast path
Akshay Modi [Wed, 28 Mar 2018 19:06:33 +0000 (12:06 -0700)]
Add some VLOGs to make it easier to see why things don't go through the fast path

PiperOrigin-RevId: 190809906

6 years agoReorder element wise operators across reshape operators.
A. Unique TensorFlower [Wed, 28 Mar 2018 19:00:18 +0000 (12:00 -0700)]
Reorder element wise operators across reshape operators.

This allows batch-norm folding to work across reshape operators.

PiperOrigin-RevId: 190808678

6 years agoFixes to DepthwiseConv kernel
A. Unique TensorFlower [Wed, 28 Mar 2018 18:48:28 +0000 (11:48 -0700)]
Fixes to DepthwiseConv kernel

PiperOrigin-RevId: 190806668

6 years agoIn contrib/all_reduce raise a ValueError if the input tensors
A. Unique TensorFlower [Wed, 28 Mar 2018 18:34:32 +0000 (11:34 -0700)]
In contrib/all_reduce raise a ValueError if the input tensors
do not have fully-defined shapes.

PiperOrigin-RevId: 190804146

6 years agoUpdating tests in constant_folding_test.cc so that it evaluates the optimized and...
A. Unique TensorFlower [Wed, 28 Mar 2018 18:24:01 +0000 (11:24 -0700)]
Updating tests in constant_folding_test.cc so that they evaluate the optimized and original graphs and check whether the output tensors produced by them are the same.

PiperOrigin-RevId: 190802264

6 years agoFix TPUClusterResolver tpu parameter for profiler tool.
Chris Ying [Wed, 28 Mar 2018 18:22:28 +0000 (11:22 -0700)]
Fix TPUClusterResolver tpu parameter for profiler tool.

PiperOrigin-RevId: 190801968

6 years agoMake ArithmeticOptimizer robust to failures of shape inference and individual stages.
A. Unique TensorFlower [Wed, 28 Mar 2018 18:18:45 +0000 (11:18 -0700)]
Make ArithmeticOptimizer robust to failures of shape inference and individual stages.

Get rid of graph annotation and use GraphProperties directly.

PiperOrigin-RevId: 190801044

6 years agoProperly serialize ResourceVariable global_step into the metagraph.
Eugene Brevdo [Wed, 28 Mar 2018 17:25:27 +0000 (10:25 -0700)]
Properly serialize ResourceVariable global_step into the metagraph.

Prior to this, saving and restoring a graph with a resource variable global_step
would cause the global_step collection of the reimported graph to contain
a resource tensor (the object underlying the ResourceVariable); the actual
metadata associated with it would not be serialized.

PiperOrigin-RevId: 190791443

6 years agointernal change
A. Unique TensorFlower [Wed, 28 Mar 2018 17:15:46 +0000 (10:15 -0700)]
internal change

PiperOrigin-RevId: 190789794

6 years agoAvoid overwriting existing namespace items that might replace the converted functions.
A. Unique TensorFlower [Wed, 28 Mar 2018 17:15:42 +0000 (10:15 -0700)]
Avoid overwriting existing namespace items that might replace the converted functions.

PiperOrigin-RevId: 190789781

6 years agoEnable the Grappler arithmetic optimizer by default in Python tests.
A. Unique TensorFlower [Wed, 28 Mar 2018 17:03:37 +0000 (10:03 -0700)]
Enable the Grappler arithmetic optimizer by default in Python tests.

PiperOrigin-RevId: 190787954

6 years agoAllow positional arguments in tf.keras.Model subclasses
Allen Lavoie [Wed, 28 Mar 2018 17:03:06 +0000 (10:03 -0700)]
Allow positional arguments in tf.keras.Model subclasses

Makes the tf.keras.Layer.__call__ signature identical to tf.layers.Layer.__call__, but makes passing positional arguments other than "inputs" an error in most cases. The only case where it's allowed is for subclassed Models that do not have an "inputs" argument to their call() method.

This means subclassed Models no longer need to pass all but the first argument as a keyword argument (or do list packing/unpacking) when call() takes multiple Tensor arguments.

Includes errors for cases where it is ambiguous whether an argument indicates an input, but otherwise doesn't do much to support non-"inputs" call() signatures for shape inference or deferred Tensors. The definition of an input/non-input is pretty clear, so that cleanup will mostly be tracking down all of the users of "self.call" and getting them to pass inputs as positional arguments if necessary.
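
A minimal sketch of the usage this enables (the TwoInputModel class below is illustrative):

```python
import tensorflow as tf


class TwoInputModel(tf.keras.Model):
  """Subclassed Model whose call() takes two tensor arguments."""

  def __init__(self):
    super(TwoInputModel, self).__init__()
    self.dense = tf.keras.layers.Dense(1)

  def call(self, a, b):
    # Both tensors arrive as positional arguments.
    return self.dense(tf.concat([a, b], axis=-1))


model = TwoInputModel()
a = tf.ones([4, 3])
b = tf.ones([4, 2])
# Previously everything after the first argument had to be passed as a
# keyword argument (or packed into a list); now this works directly.
out = model(a, b)
```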

PiperOrigin-RevId: 190787899

6 years agoMove ExecuteNode and CopyToDevice_Internal
Alexandre Passos [Wed, 28 Mar 2018 15:25:14 +0000 (08:25 -0700)]
Move ExecuteNode and CopyToDevice_Internal

PiperOrigin-RevId: 190775681

6 years agoInternal change
A. Unique TensorFlower [Wed, 28 Mar 2018 07:57:37 +0000 (00:57 -0700)]
Internal change

PiperOrigin-RevId: 190735724

6 years agoHave TensorFlow Distributions share name scopes across method calls.
Dustin Tran [Wed, 28 Mar 2018 06:15:08 +0000 (23:15 -0700)]
Have TensorFlow Distributions share name scopes across method calls.
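
A short sketch of what "sharing name scopes across method calls" means in practice (op-name details are illustrative):

```python
import tensorflow as tf

dist = tf.distributions.Normal(loc=0., scale=1.)
samples = dist.sample(5)
log_probs = dist.log_prob(samples)
# With this change, the ops created by sample() and log_prob() live under
# the distribution's single shared name scope instead of each method call
# opening a fresh, uniquified scope.
```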

PiperOrigin-RevId: 190728742

6 years agoSpeed up statistical_testing_test by consolidating sess.run calls.
A. Unique TensorFlower [Wed, 28 Mar 2018 04:22:54 +0000 (21:22 -0700)]
Speed up statistical_testing_test by consolidating sess.run calls.

PiperOrigin-RevId: 190721153

6 years agoFix non-uniformity of orthogonal matrices.
A. Unique TensorFlower [Wed, 28 Mar 2018 04:03:41 +0000 (21:03 -0700)]
Fix non-uniformity of orthogonal matrices.
Add test code for this purpose.

PiperOrigin-RevId: 190719729

6 years agoFix _force_data_dependency for scalar inputs
A. Unique TensorFlower [Wed, 28 Mar 2018 02:58:53 +0000 (19:58 -0700)]
Fix _force_data_dependency for scalar inputs

PiperOrigin-RevId: 190715033

6 years agoImplement strip assert in DebugStripper.
A. Unique TensorFlower [Wed, 28 Mar 2018 02:42:55 +0000 (19:42 -0700)]
Implement strip assert in DebugStripper.

PiperOrigin-RevId: 190713919

6 years agoFixed the interaction between virtual cluster and measuring cost estimator.
Benoit Steiner [Wed, 28 Mar 2018 02:24:40 +0000 (19:24 -0700)]
Fixed the interaction between virtual cluster and measuring cost estimator.

PiperOrigin-RevId: 190712404

6 years agoFix problem with HandleElementwiseUnary/Binary in DfsHloVisitorWithDefault.
Mark Heffernan [Wed, 28 Mar 2018 01:31:55 +0000 (18:31 -0700)]
Fix problem with HandleElementwiseUnary/Binary in DfsHloVisitorWithDefault.

DfsHloVisitorWithDefault incorrectly included some overrides for handling
several elementwise binary and unary opcodes. These overrides explicitly
called DefaultAction which meant that these opcodes were not handled by
HandleElementwiseUnary/Binary. This CL removes these overrides and adds a
comment describing the potential problem. Unfortunately, I don't see a way
of automatically catching these issues when new opcodes are added, so the
comment will have to do.

PiperOrigin-RevId: 190708245

6 years agoPass options to TFE_ContextOptionsSetAsync
Akshay Modi [Wed, 28 Mar 2018 01:18:33 +0000 (18:18 -0700)]
Pass options to TFE_ContextOptionsSetAsync

PiperOrigin-RevId: 190707017

6 years ago[XLA] Remove CheckShape and CheckSameShape in ComputationBuilder, they are not/rarely...
A. Unique TensorFlower [Wed, 28 Mar 2018 01:09:37 +0000 (18:09 -0700)]
[XLA] Remove CheckShape and CheckSameShape in ComputationBuilder, they are not/rarely used.

PiperOrigin-RevId: 190706088

6 years agoDisable new Gather/Slice estimators for now to fix the crashes during some TF graphs...
Max Galkin [Wed, 28 Mar 2018 01:06:30 +0000 (18:06 -0700)]
Disable new Gather/Slice estimators for now to fix the crashes during some TF graph optimizations.

PiperOrigin-RevId: 190705686

6 years agoSupport GatherV2 (using Gather)
A. Unique TensorFlower [Wed, 28 Mar 2018 00:35:55 +0000 (17:35 -0700)]
Support GatherV2 (using Gather)

PiperOrigin-RevId: 190702442

6 years ago[XLA] Accurately measure FLOPs for base-dilated convolutions
David Majnemer [Wed, 28 Mar 2018 00:35:14 +0000 (17:35 -0700)]
[XLA] Accurately measure FLOPs for base-dilated convolutions

We incorrectly counted FLOPs when the output and kernel line up to access the
padding or the dilated area. These should not be counted as contributing to
the FLOP count.

PiperOrigin-RevId: 190702384

6 years ago[XLA] Assert that all buffers and sub-buffers passed to XLA have an explicit pointer.
Justin Lebar [Wed, 28 Mar 2018 00:16:31 +0000 (17:16 -0700)]
[XLA] Assert that all buffers and sub-buffers passed to XLA have an explicit pointer.

In the past, we allowed sub-buffers to be null if the top-level tuple
was non-null.

This doesn't actually work well on the GPU: For ops that are implemented
using cudnn or cublas, we have to have a pointer to the sub-buffer on
the host in order to make the call.  Retrieving it from the GPU in an
efficient manner is complicated, and the best we can come up with isn't
all that efficient (fundamentally having to pull data down from the GPU
blocks the ability of the CPU to "run ahead" of the GPU).

Since TF wasn't making use of our flexibility *anyway*, we add the
requirement that XLA be given non-null pointers to all sub-buffers.

Changes to the XLA:GPU backend to take advantage of this will come
separately.

PiperOrigin-RevId: 190700021

6 years agoMake slot_creator use DistributionStrategy for co-locating variables.
A. Unique TensorFlower [Wed, 28 Mar 2018 00:15:14 +0000 (17:15 -0700)]
Make slot_creator use DistributionStrategy for co-locating variables.
Make DistributionStrategy.colocate_vars_with() match the existing
behavior of ops.colocate_with() by default, for compatibility.

PiperOrigin-RevId: 190699882

6 years agoMake tf.keras.Sequential (properly) Checkpointable
Allen Lavoie [Wed, 28 Mar 2018 00:14:50 +0000 (17:14 -0700)]
Make tf.keras.Sequential (properly) Checkpointable

Just numbers Layers like "layer-N". It may also make sense to track them by
"ClassName-M", but that's a backwards-compatible change.

Special-cases all of the dependency collection, since Layers can be added and
removed from Sequential.

PiperOrigin-RevId: 190699818

6 years agoK-FAC: Bugfixes for TPU compatibility with covariance update ops.
A. Unique TensorFlower [Wed, 28 Mar 2018 00:13:22 +0000 (17:13 -0700)]
K-FAC: Bugfixes for TPU compatibility with covariance update ops.

PiperOrigin-RevId: 190699635

6 years ago[XLA] Redesign: implement Tuple and GetTupleElement.
A. Unique TensorFlower [Wed, 28 Mar 2018 00:02:59 +0000 (17:02 -0700)]
[XLA] Redesign: implement Tuple and GetTupleElement.

PiperOrigin-RevId: 190698245

6 years ago Change the host-op result per TPU step from a single value to a collection of values.
A. Unique TensorFlower [Tue, 27 Mar 2018 23:53:13 +0000 (16:53 -0700)]
  Change the host-op result per TPU step from a single value to a collection of values.

PiperOrigin-RevId: 190696953

6 years agoImprove support for DT_HALF and DT_BFLOAT16 in Grappler graph optimizations.
A. Unique TensorFlower [Tue, 27 Mar 2018 23:48:31 +0000 (16:48 -0700)]
Improve support for DT_HALF and DT_BFLOAT16 in Grappler graph optimizations.

Update GrapplerTest::EvaluateNodes to take feeds as an argument, to make it easier to write tests with placeholders.

PiperOrigin-RevId: 190696386

6 years agoFixed a bug in ConvKFCBasicMultiIndepFB introduced in the last CL
A. Unique TensorFlower [Tue, 27 Mar 2018 23:43:48 +0000 (16:43 -0700)]
Fixed a bug in ConvKFCBasicMultiIndepFB introduced in the last CL

PiperOrigin-RevId: 190695737

6 years agoTest all TFLite kernel implementations for fully connected.
A. Unique TensorFlower [Tue, 27 Mar 2018 23:26:12 +0000 (16:26 -0700)]
Test all TFLite kernel implementations for fully connected.

PiperOrigin-RevId: 190693455

6 years agoTFTS: Fix a bug in the SavedModel cold-start export
Allen Lavoie [Tue, 27 Mar 2018 22:55:04 +0000 (15:55 -0700)]
TFTS: Fix a bug in the SavedModel cold-start export

It now correctly broadcasts start state across whatever batch dimension it is
passed, rather than squishing it down to a batch dimension of 1.

PiperOrigin-RevId: 190688855

6 years agoAdd node types for DFS traversal to catch more issues with deduping inputs to in...
A. Unique TensorFlower [Tue, 27 Mar 2018 22:47:23 +0000 (15:47 -0700)]
Add node types for DFS traversal to catch more issues with deduping inputs to in-place ops.

PiperOrigin-RevId: 190687820

6 years ago[XLA] Fold reduce-window(convert(pad(X))) into reduce-window(convert(X))
David Majnemer [Tue, 27 Mar 2018 22:37:35 +0000 (15:37 -0700)]
[XLA] Fold reduce-window(convert(pad(X))) into reduce-window(convert(X))

ReduceWindow operations are done in higher precision to avoid accumulation
error. Convert operations can find their way between a ReduceWindow and a Pad
which can prevent a Pad from combining with a ReduceWindow.

Fix this by looking past the Convert while also checking that the Convert'd
Pad's init value is identical to the reduce-window value.

PiperOrigin-RevId: 190686175

6 years agoMoves Execute() from c_api.cc
Alexandre Passos [Tue, 27 Mar 2018 22:08:12 +0000 (15:08 -0700)]
Moves Execute() from c_api.cc

PiperOrigin-RevId: 190681610

6 years agoMake _USE_C_API = True and _USE_C_SHAPES = False work with handle data, take 2.
Skye Wanderman-Milne [Tue, 27 Mar 2018 22:07:05 +0000 (15:07 -0700)]
Make _USE_C_API = True and _USE_C_SHAPES = False work with handle data, take 2.

This change makes _set_shapes_for_outputs_c_api fetch and set
Tensor._handle_data. This is necessary for running the
Python shape inference code on resource tensors.

PiperOrigin-RevId: 190681459

6 years ago[tf.data] Raise error when window size is 0 in `tf.contrib.data.group_by_window()`.
Derek Murray [Tue, 27 Mar 2018 21:23:28 +0000 (14:23 -0700)]
[tf.data] Raise error when window size is 0 in `tf.contrib.data.group_by_window()`.
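
A hedged usage sketch of the transformation in question (the grouping logic here is illustrative):

```python
import tensorflow as tf

dataset = tf.data.Dataset.range(10)
grouped = dataset.apply(
    tf.contrib.data.group_by_window(
        key_func=lambda x: x % 2,               # group even vs. odd elements
        reduce_func=lambda key, window: window.batch(4),
        window_size=4))
# Passing window_size=0 above now raises an error instead of silently
# producing an ill-defined grouping.
```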

PiperOrigin-RevId: 190673466

6 years agoImprove error message when users forget to pass toco cmdline args for quantization...
Suharsh Sivakumar [Tue, 27 Mar 2018 21:17:16 +0000 (14:17 -0700)]
Improve error message when users forget to pass toco cmdline args for quantization, but have a model that has FAKE_QUANT operations.

PiperOrigin-RevId: 190672414

6 years agoAdd "serve" as a default value for savedmodel_tagset.
Nupur Garg [Tue, 27 Mar 2018 21:14:01 +0000 (14:14 -0700)]
Add "serve" as a default value for savedmodel_tagset.

PiperOrigin-RevId: 190671867

6 years agoFix documentation of Clamp; it does not take a computation at all.
Dimitris Vardoulakis [Tue, 27 Mar 2018 21:12:00 +0000 (14:12 -0700)]
Fix documentation of Clamp; it does not take a computation at all.

See:
https://github.com/tensorflow/tensorflow/blob/r1.6/tensorflow/compiler/xla/client/computation_builder.h#L668
PiperOrigin-RevId: 190671530

6 years agoFast path for calling pack when the list is full of eager tensors.
Akshay Modi [Tue, 27 Mar 2018 21:06:44 +0000 (14:06 -0700)]
Fast path for calling pack when the list is full of eager tensors.

FastPathExecute function also allows inputs to be sequences instead of just lists.

PiperOrigin-RevId: 190670587

6 years ago[TF:XLA] Force DebugOptions to be specified when calling HloModule::CreateModuleConfi...
Nick Desaulniers [Tue, 27 Mar 2018 19:55:56 +0000 (12:55 -0700)]
[TF:XLA] Force DebugOptions to be specified when calling HloModule::CreateModuleConfigFromProto

Otherwise it's easy to forget that you likely want the DebugOptions to be `legacy_flags::GetDebugOptionsFromFlags()`.

PiperOrigin-RevId: 190659046

6 years agoUpdating test so that it evaluates the optimized and original graph and checks whethe...
A. Unique TensorFlower [Tue, 27 Mar 2018 19:34:17 +0000 (12:34 -0700)]
Updating test so that it evaluates the optimized and original graphs and checks whether the output tensors produced by them are the same.

PiperOrigin-RevId: 190655831

6 years agoImproved shape inference for reshape
Benoit Steiner [Tue, 27 Mar 2018 19:09:59 +0000 (12:09 -0700)]
Improved shape inference for reshape

PiperOrigin-RevId: 190651873

6 years agoReplaced calls to deprecated tensorflow::StringPiece methods with their
A. Unique TensorFlower [Tue, 27 Mar 2018 19:00:44 +0000 (12:00 -0700)]
Replaced calls to deprecated tensorflow::StringPiece methods with their
tensorflow::str_util equivalents.

This will allow the deprecated methods to be removed.

PiperOrigin-RevId: 190650553

6 years agoExclude Python C extension from tensorflow/c:srcs target.
Skye Wanderman-Milne [Tue, 27 Mar 2018 18:54:26 +0000 (11:54 -0700)]
Exclude Python C extension from tensorflow/c:srcs target.

The Python extensions aren't part of the official C API.

PiperOrigin-RevId: 190649576

6 years agoFix: Clamp takes three arguments after computation, not arbitrarily many.
Dimitris Vardoulakis [Tue, 27 Mar 2018 18:27:11 +0000 (11:27 -0700)]
Fix: Clamp takes three arguments after computation, not arbitrarily many.

PiperOrigin-RevId: 190644837

6 years agoMatch behavior of py_func in graph and eager.
Alexandre Passos [Tue, 27 Mar 2018 18:09:50 +0000 (11:09 -0700)]
Match behavior of py_func in graph and eager.

PiperOrigin-RevId: 190641841

6 years agoInternal cleanup.
A. Unique TensorFlower [Tue, 27 Mar 2018 17:20:05 +0000 (10:20 -0700)]
Internal cleanup.

PiperOrigin-RevId: 190633067

6 years ago import tpu profiler analysis grpc python stub to tensorflow.
A. Unique TensorFlower [Tue, 27 Mar 2018 17:04:41 +0000 (10:04 -0700)]
  import tpu profiler analysis grpc python stub to tensorflow.

PiperOrigin-RevId: 190630641

6 years agoPrevent warning every time someone imports contrib.learn.datasets.base
James Keeling [Tue, 27 Mar 2018 16:36:52 +0000 (09:36 -0700)]
Prevent warning every time someone imports contrib.learn.datasets.base

Everything in contrib/learn/python/learn/datasets/base.py has been deprecated. One of the functions in there is a decorator, retry. Because another function in that file is decorated with retry, the decorator is invoked upon import, which prints a warning.

I have fixed this by adding a private function, _internal_retry, which is used internally, and redefining retry to simply call it. That way, using retry in user code will still print the deprecation warning, but it's not printed upon every import.

I also cleaned up the docstrings slightly.
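
A minimal sketch of the pattern described above (the decorator body is illustrative, not the actual implementation; only the names retry and _internal_retry come from this change):

```python
import functools
import time

from tensorflow.python.util import deprecation


def _internal_retry(initial_delay, max_delay, factor=2.0):
  """Retry decorator factory used inside the module; no deprecation warning."""
  def wrap(fn):
    @functools.wraps(fn)
    def wrapped_fn(*args, **kwargs):
      delay = initial_delay
      while True:
        try:
          return fn(*args, **kwargs)
        except Exception:  # real code would only retry retriable errors
          if delay > max_delay:
            raise
          time.sleep(delay)
          delay *= factor
    return wrapped_fn
  return wrap


@deprecation.deprecated(None, 'Please use another retry utility.')
def retry(initial_delay, max_delay, factor=2.0):
  """Public, deprecated alias: warns only when user code actually uses it."""
  return _internal_retry(initial_delay, max_delay, factor=factor)
```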

PiperOrigin-RevId: 190626717

6 years agoFlush the output of print (fixes out-of-order prints in public colab)
A. Unique TensorFlower [Tue, 27 Mar 2018 16:20:54 +0000 (09:20 -0700)]
Flush the output of print (fixes out-of-order prints in public colab)

PiperOrigin-RevId: 190624708

6 years agoAutomated g4 rollback of changelist 188385868
Peter Hawkins [Tue, 27 Mar 2018 15:33:22 +0000 (08:33 -0700)]
Automated g4 rollback of changelist 188385868

PiperOrigin-RevId: 190618988

6 years agoOptimized quantized fully-connected op for LSTMs.
A. Unique TensorFlower [Tue, 27 Mar 2018 15:16:59 +0000 (08:16 -0700)]
Optimized quantized fully-connected op for LSTMs.

PiperOrigin-RevId: 190617310

6 years ago- Added support a different strategy for cov computations in the multi-tower scenario...
A. Unique TensorFlower [Tue, 27 Mar 2018 15:01:25 +0000 (08:01 -0700)]
- Added support for a different strategy for cov computations in the multi-tower scenario.  In this strategy we do the cov computations locally on each tower and then sum the results, as opposed to concatenating everything onto a single device.  This other strategy can be enabled by setting the global variable TOWER_STRATEGY to "separate" (the default value is "concat", which implements the old strategy).  We might change this to use "separate" by default if it turns out to be the best default.
- The code and documentation no longer refer to the towers as computing different "mini-batches", since this was a confusing use of terminology.  The best way to think about things is that the combined data over all the towers forms the mini-batch.  Note, however, that when factors process multiple towers using the "separate" strategy, their batch_size variable will still refer to the amount of data in a single tower.
- Fixed a bug in how the "option 1" and "option 2" RNN Fisher approximations were computed in the multi-tower scenario.
- The recently added "time-folded-into-batch" feature has changed in terms of the format it uses.  Time is now the first dimension before the reshape, not the second, which is consistent with the convention used in other codebases.

PiperOrigin-RevId: 190615398

6 years agoDon't flush denormals when calling Eigen::SelfAdjointEigenSolver.
A. Unique TensorFlower [Tue, 27 Mar 2018 10:48:57 +0000 (03:48 -0700)]
Don't flush denormals when calling Eigen::SelfAdjointEigenSolver.

PiperOrigin-RevId: 190595222

6 years agoAvoid reading the input file twice for InitializableLookupTable in combination with...
A. Unique TensorFlower [Tue, 27 Mar 2018 10:23:58 +0000 (03:23 -0700)]
Avoid reading the input file twice for InitializableLookupTable in combination with HashTable.

Before this cl, TextFileLineIterator::total_size() was called for HashTable::DoPrepare, even though HashTable::DoPrepare ignores the size parameter.
In order to have a result ready for TextFileLineIterator::total_size(), Init() called GetNumLinesInTextFile(), which read the whole file. Just to throw away the result :-/

This cl:
- adds a DoLazyPrepare that gets a functor to compute the size, only if needed.
- adds HashTable::DoLazyPrepare, which does not call this functor.
- modifies TextFileLineIterator::Init() to not call GetNumLinesInTextFile() anymore when vocab_size was given as -1.
- modifies TextFileLineIterator::total_size() to call GetNumLinesInTextFile() lazily on the first call, if vocab_size_ was passed as -1.
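
A sketch of the Python-side setup that benefits ("vocab.txt" is a hypothetical file; with vocab_size left unset, the initializer no longer has to make an extra pass over the file just to count lines):

```python
import tensorflow as tf

table = tf.contrib.lookup.HashTable(
    tf.contrib.lookup.TextFileInitializer(
        'vocab.txt',
        tf.string, tf.contrib.lookup.TextFileIndex.WHOLE_LINE,
        tf.int64, tf.contrib.lookup.TextFileIndex.LINE_NUMBER),
    default_value=-1)

ids = table.lookup(tf.constant(['the', 'quick', 'fox']))
with tf.Session() as sess:
  sess.run(tf.tables_initializer())
  print(sess.run(ids))
```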

PiperOrigin-RevId: 190593744