A. Unique TensorFlower [Thu, 29 Mar 2018 19:02:50 +0000 (12:02 -0700)]
Leaves attributes on outside_compilation nodes so they can be replicated in a later pass.
PiperOrigin-RevId: 190965218
Ayush Dubey [Thu, 29 Mar 2018 18:54:55 +0000 (11:54 -0700)]
Initialize pointer to ScopedAllocatorMgr in BaseGPUDevice.
PiperOrigin-RevId: 190964008
A. Unique TensorFlower [Thu, 29 Mar 2018 18:24:44 +0000 (11:24 -0700)]
Update ops-related pbtxt files.
PiperOrigin-RevId: 190959179
A. Unique TensorFlower [Thu, 29 Mar 2018 18:02:56 +0000 (11:02 -0700)]
Automated g4 rollback of changelist 190808678
PiperOrigin-RevId: 190955400
Jianwei Xie [Thu, 29 Mar 2018 17:50:46 +0000 (10:50 -0700)]
Automated g4 rollback of changelist 190858242
PiperOrigin-RevId: 190953197
Patrick Nguyen [Thu, 29 Mar 2018 17:41:36 +0000 (10:41 -0700)]
Allow experimental string attrs for functions.
PiperOrigin-RevId: 190951605
Skye Wanderman-Milne [Thu, 29 Mar 2018 17:40:55 +0000 (10:40 -0700)]
Cache op inputs fetched from the C API.
PiperOrigin-RevId: 190951499
Justin Lebar [Thu, 29 Mar 2018 17:30:31 +0000 (10:30 -0700)]
[XLA:GPU] Assume that tuple sub-buffers are available at runtime.
Previously we assumed this was not the case, and allowed front-ends to
pass in a pointer to tuple without also passing in pointers to
sub-buffers.
This mostly worked: Whenever we wanted a tuple sub-buffer, we'd just
chase the tuple's pointers in our emitted kernel.
But this doesn't work if we ever need a pointer to that sub-buffer on
the host. Which we do if e.g. the sub-buffer is an input to a cudnn
call.
There are various ways to make this work, but by far the simplest and
most efficient is simply to specify away this problem, and say that the
front-end *must* give us all the pointers we want. This is what the
earlier change, "Assert that all buffers and sub-buffers passed to XLA
have an explicit pointer" did.
This change adds a testcase and lets us skip some pointer chasing when
we have a tuple whose sub-buffers are known statically.
PiperOrigin-RevId: 190949743
A. Unique TensorFlower [Thu, 29 Mar 2018 17:12:42 +0000 (10:12 -0700)]
LSTM support: Further quantized Fully-Connected op optimization.
PiperOrigin-RevId: 190946885
A. Unique TensorFlower [Thu, 29 Mar 2018 17:06:52 +0000 (10:06 -0700)]
Automated g4 rollback of changelist 190728742
PiperOrigin-RevId: 190946066
A. Unique TensorFlower [Thu, 29 Mar 2018 16:46:06 +0000 (09:46 -0700)]
Go: Update generated wrapper functions for TensorFlow ops.
PiperOrigin-RevId: 190942952
Younghee Kwon [Thu, 29 Mar 2018 16:43:19 +0000 (09:43 -0700)]
Added kernels and estimators for the Gradient Boosted Trees algorithm.
BoostedTreesClassifier and BoostedTreesRegressor are added to tf.estimator.
Some training utility functions are also added to tf.contrib.estimator.
PiperOrigin-RevId: 190942599
Jacques Pienaar [Thu, 29 Mar 2018 16:42:05 +0000 (09:42 -0700)]
Add bitcast for equal bitwidth casts.
Map bitcasts to the XLA bitcast HLO if the bitwidth of the element type is the same.
PiperOrigin-RevId: 190942398
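As a sketch of what an equal-bitwidth bitcast means (a standalone illustration, not TensorFlow's implementation): reinterpreting the 32 bits of a float32 as an int32 changes the element type without touching a single bit. The helper name below is hypothetical.

```python
import struct

def bitcast_f32_to_i32(x: float) -> int:
    """Reinterpret the 32 bits of a float32 as an int32 (no value conversion)."""
    (bits,) = struct.unpack("<i", struct.pack("<f", x))
    return bits

# 1.0 in IEEE-754 single precision is 0x3F800000.
print(hex(bitcast_f32_to_i32(1.0)))
```

Because the bit pattern is preserved exactly, such a cast is free at runtime, which is why it can be lowered to a dedicated bitcast HLO rather than a conversion.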
A. Unique TensorFlower [Thu, 29 Mar 2018 16:41:53 +0000 (09:41 -0700)]
Upgrade Eigen version.
PiperOrigin-RevId: 190942370
Rachel Lim [Thu, 29 Mar 2018 15:19:17 +0000 (08:19 -0700)]
[tf.data] Optimizations on make_csv_dataset internals.
PiperOrigin-RevId: 190933143
Joshua V. Dillon [Thu, 29 Mar 2018 14:56:15 +0000 (07:56 -0700)]
Add meta-distribution which reshapes batch dims.
PiperOrigin-RevId: 190930846
Anna R [Thu, 29 Mar 2018 11:34:29 +0000 (04:34 -0700)]
Internal change.
PiperOrigin-RevId: 190913047
Benoit Steiner [Thu, 29 Mar 2018 06:31:26 +0000 (23:31 -0700)]
Move the swapping kernels to the all_kernels library to avoid registering them
more than once from tensorflow/contrib.
PiperOrigin-RevId: 190887394
Michael Case [Thu, 29 Mar 2018 05:46:25 +0000 (22:46 -0700)]
Add --announce_rc Bazel arg to several of our builds.
This will help to...
- Refactor the build scripts without accidentally adding functional changes.
- Help debug several issues where some options aren't being added correctly
by the configure script.
PiperOrigin-RevId: 190884531
A. Unique TensorFlower [Thu, 29 Mar 2018 04:52:30 +0000 (21:52 -0700)]
DistributionStrategy-enable Estimator.
PiperOrigin-RevId: 190882152
A. Unique TensorFlower [Thu, 29 Mar 2018 04:11:16 +0000 (21:11 -0700)]
Fix TensorList decoding bug. Thanks to Alexandre Passos for finding this.
PiperOrigin-RevId: 190879840
Benoit Steiner [Thu, 29 Mar 2018 04:07:02 +0000 (21:07 -0700)]
Fixed the shape function of the SplitV op, which often incorrectly assumed that
the shape of all the outputs is the same.
PiperOrigin-RevId: 190879600
Igor Ganichev [Thu, 29 Mar 2018 03:51:01 +0000 (20:51 -0700)]
Support structured source in GradientTape.gradient
Before this change, it was easy to forget [] around the source tensor.
This mistake led to GradientTape.gradient() returning a list of Nones.
Nones normally tell the user that the source and the target are
not connected via differentiable operations, which is not the cause
of the error in this case.
Instead of adding a check that `sources` is a list of tensors, this CL
adds ability to handle structured source (which includes a lone tensor),
similarly to many existing TensorFlow APIs.
Also, with Alex's help, it fixes a bug where repeated tensors in
`sources` were not handled correctly.
PiperOrigin-RevId: 190878583
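The usual pattern for handling a structured source (lone item, list, or nesting) is to flatten it, operate on the flat list, and pack the results back into the original shape. A minimal pure-Python sketch of that pattern, with hypothetical helper names (not the actual nest implementation):

```python
def flatten(structure):
    """Flatten a possibly nested structure of lists/tuples into a flat list."""
    if isinstance(structure, (list, tuple)):
        out = []
        for item in structure:
            out.extend(flatten(item))
        return out
    return [structure]

def pack_as(template, flat):
    """Pack a flat list of results back into the shape of `template`."""
    def _pack(tmpl, it):
        if isinstance(tmpl, (list, tuple)):
            return type(tmpl)(_pack(t, it) for t in tmpl)
        return next(it)
    return _pack(template, iter(flat))

def gradient(per_source_grads, sources):
    """Toy 'gradient' that accepts a lone source or any nested structure:
    the result mirrors the structure of `sources`."""
    flat_sources = flatten(sources)
    flat_grads = [per_source_grads.get(s) for s in flat_sources]
    return pack_as(sources, flat_grads)
```

With this shape-mirroring behavior, passing a lone tensor returns a lone gradient rather than a one-element list, which is what removes the need for the [] wrapper.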
Martin Wicke [Thu, 29 Mar 2018 03:46:14 +0000 (20:46 -0700)]
Remove all_opensource_files. It's not needed any more.
PiperOrigin-RevId: 190878279
Derek Murray [Thu, 29 Mar 2018 03:44:51 +0000 (20:44 -0700)]
[tf.data] Maintain a reference on the FunctionBufferingResource while a get-next operation is active.
Previously, the reference count on a FunctionBufferingResource could drop to 0 and it could be deleted (e.g. by a DestroyResourceOp) while a get-next operation is active on it. This would lead to use-after-free errors.
PiperOrigin-RevId: 190878208
Suharsh Sivakumar [Thu, 29 Mar 2018 02:21:08 +0000 (19:21 -0700)]
Relax limitations on rerouting graph outputs.
- Allow multiple outputs of output_tensors in fold_batch_norms.
- Allow duplicate consumers in quantize.
- Also includes a quick fix for an issue matching final layers that have batch norm.
PiperOrigin-RevId: 190873003
A. Unique TensorFlower [Thu, 29 Mar 2018 01:54:09 +0000 (18:54 -0700)]
[XLA] Redesign: implement GetComputationStats.
PiperOrigin-RevId: 190871262
Smit Hinsu [Thu, 29 Mar 2018 01:26:46 +0000 (18:26 -0700)]
Relax CuDNN version requirements because CuDNN is backwards compatible within
a major release starting with CuDNN 7.0
PiperOrigin-RevId: 190869028
Igor Ganichev [Thu, 29 Mar 2018 01:26:30 +0000 (18:26 -0700)]
Implement assert_same_structure in C++
Also implements the helper functions nest._is_namedtuple and
nest._same_namedtuple.
Also fixes a bug in FlattenHelper where errors from recursive
calls were not propagated up immediately.
This change implements a good chunk of machinery that will
allow us to move map_structure to C++.
Before:
entry {
name: "NestBenchmark.assert_same_structure_6_elem"
iters: 30000
wall_time: 4.79532718658e-05
}
entry {
name: "NestBenchmark.assert_same_structure_60_elem"
iters: 30000
wall_time: 0.000403008667628
}
After:
entry {
name: "NestBenchmark.assert_same_structure_6_elem"
iters: 30000
wall_time: 1.65301720301e-05
}
entry {
name: "NestBenchmark.assert_same_structure_60_elem"
iters: 30000
wall_time: 0.000147621099154
}
PiperOrigin-RevId: 190869007
Brennan Saeta [Thu, 29 Mar 2018 00:54:01 +0000 (17:54 -0700)]
TPU: Implement 3rd gen input pipeline config.
In this new configuration, we are able to drive a Cloud TPU at full device performance, and achieve over 3k images/sec on ResNet-50. The previous bottleneck was the un-pipeline-able split that occurred after the iterator.get_next() call. This split (when not splitting on the batch-major dimension) caused the training job to be single-threaded-CPU-bottlenecked, resulting in a performance of only ~2650 images/sec on ResNet-50.
This latest input pipeline configuration requires the use of datasets. By requiring datasets, we gain the ability to call get_next() num_replicas times per host, and avoid the expensive split op. (Note: this also opens up potential future avenues for further optimization.) Despite this, we retain a lot of nice usability properties that per_host_v1 (aka input pipeline config v2) gave us.
PiperOrigin-RevId: 190865741
A. Unique TensorFlower [Thu, 29 Mar 2018 00:36:30 +0000 (17:36 -0700)]
Further speed up statistical_testing_test by breaking up DKWM test.
PiperOrigin-RevId: 190863893
Alexandre Passos [Thu, 29 Mar 2018 00:16:10 +0000 (17:16 -0700)]
Missed ScopedUnref in ResourceGather
PiperOrigin-RevId: 190861558
A. Unique TensorFlower [Thu, 29 Mar 2018 00:06:44 +0000 (17:06 -0700)]
Collective Ops Part 1
The basic interface definitions, local-only versions of remote-access,
param-resolution, device-resolution and mgr.
A collective op is able to execute synchronously across devices
and across separate graphs. Collective ops to be introduced eventually
include broadcast and all-reduce. This change is part of a series of
changes that will introduce the necessary infrastructure then the
initial op implementations.
PiperOrigin-RevId: 190860248
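The contract of the all-reduce collective mentioned above can be modeled in a few lines of plain Python (a semantic sketch only; the real implementation is distributed and asynchronous): every participant contributes a tensor and every participant receives the same elementwise reduction.

```python
def all_reduce_sum(device_tensors):
    """All-reduce with a sum reduction: each participant supplies one
    equal-length list, and each receives the elementwise sum of all of them."""
    n = len(device_tensors[0])
    assert all(len(t) == n for t in device_tensors), "shapes must match"
    reduced = [sum(t[i] for t in device_tensors) for i in range(n)]
    # Every device gets its own copy of the identical reduced result.
    return [list(reduced) for _ in device_tensors]
```

The "synchronous across devices and across separate graphs" property means all participants must reach the collective before any of them can complete, which this centralized sketch trivially satisfies.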
Anna R [Wed, 28 Mar 2018 23:52:39 +0000 (16:52 -0700)]
Automated g4 rollback of changelist
190835392
PiperOrigin-RevId: 190858242
A. Unique TensorFlower [Wed, 28 Mar 2018 23:16:48 +0000 (16:16 -0700)]
Tower-local variable support for DistributionStrategy. Each tower has
its own variable, but fetch() and checkpointing apply a reduction to get
a single value.
PiperOrigin-RevId: 190853123
A. Unique TensorFlower [Wed, 28 Mar 2018 23:12:51 +0000 (16:12 -0700)]
Add IsSquare bool to the grappler op_types.
PiperOrigin-RevId: 190852501
Derek Murray [Wed, 28 Mar 2018 22:54:31 +0000 (15:54 -0700)]
[tf.data] Expose the symbol `tf.contrib.data.make_csv_dataset()`.
PiperOrigin-RevId: 190849333
A. Unique TensorFlower [Wed, 28 Mar 2018 22:31:19 +0000 (15:31 -0700)]
Refresh Community pages to surface new resources, SIGs and mailing lists.
PiperOrigin-RevId: 190845545
A. Unique TensorFlower [Wed, 28 Mar 2018 21:59:53 +0000 (14:59 -0700)]
Automated g4 rollback of changelist 190801044
PiperOrigin-RevId: 190839672
A. Unique TensorFlower [Wed, 28 Mar 2018 21:52:25 +0000 (14:52 -0700)]
Add DistributionStrategy support to Optimizer.
PiperOrigin-RevId: 190838314
A. Unique TensorFlower [Wed, 28 Mar 2018 21:48:50 +0000 (14:48 -0700)]
Internal change
PiperOrigin-RevId: 190837707
A. Unique TensorFlower [Wed, 28 Mar 2018 21:47:00 +0000 (14:47 -0700)]
Use high precision to compute softmax_cross_entropy_with_logits.
PiperOrigin-RevId: 190837379
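Precision matters here because the naive formula exponentiates logits directly. A standard numerically careful formulation (a self-contained sketch, not TensorFlow's kernel) computes log-softmax via the log-sum-exp trick, subtracting the max logit before exponentiating:

```python
import math

def softmax_cross_entropy_with_logits(labels, logits):
    """Cross entropy -sum(labels * log_softmax(logits)), computed stably:
    subtracting max(logits) keeps every exp() argument <= 0, avoiding overflow."""
    m = max(logits)
    log_sum_exp = m + math.log(sum(math.exp(z - m) for z in logits))
    log_softmax = [z - log_sum_exp for z in logits]
    return -sum(l * ls for l, ls in zip(labels, log_softmax))
```

Without the max subtraction, a logit of 1000 would overflow `exp()`; with it, the loss comes out finite and essentially exact.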
Anna R [Wed, 28 Mar 2018 21:42:57 +0000 (14:42 -0700)]
Internal change.
PiperOrigin-RevId: 190836675
Jianwei Xie [Wed, 28 Mar 2018 21:36:18 +0000 (14:36 -0700)]
Merge changes from github.
PiperOrigin-RevId: 190835392
Derek Murray [Wed, 28 Mar 2018 21:30:39 +0000 (14:30 -0700)]
[tf.data] Fix reference leak in FunctionBufferingResource.
Previously, the FunctionBufferingResource's destructor would never be called, which led to use-after-free (of the underlying Device object) errors in the prefetching function.
PiperOrigin-RevId: 190834415
Brennan Saeta [Wed, 28 Mar 2018 21:04:01 +0000 (14:04 -0700)]
[tf.data] Autotune prefetch buffer sizes
In order to make it easier for tf.data users to achieve
high performance with their input pipelines, this change adds
the ability for the prefetch op to automatically tune its buffer
size.
To use the auto-tuning configuration of the `prefetch` transformation,
simply skip passing in a buffer size. Example:
```python
dataset = ...  # construct an input pipeline
dataset = dataset.prefetch()  # Look ma, no buffer value req'd!
```
PiperOrigin-RevId: 190829736
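The essence of a prefetch op, independent of the tuning policy, is a bounded buffer filled by a background thread so the producer can run ahead of the consumer. A minimal sketch in plain Python (illustrative only, not the tf.data implementation):

```python
import queue
import threading

def prefetch(generator, buffer_size=2):
    """Iterate `generator` on a background thread, keeping up to
    `buffer_size` elements ready ahead of the consumer."""
    q = queue.Queue(maxsize=buffer_size)
    DONE = object()  # Sentinel marking end of the stream.

    def producer():
        for item in generator:
            q.put(item)  # Blocks when the buffer is full.
        q.put(DONE)

    threading.Thread(target=producer, daemon=True).start()
    while True:
        item = q.get()
        if item is DONE:
            return
        yield item
```

Auto-tuning then amounts to choosing `buffer_size` at runtime instead of asking the user for it, which is what omitting the argument signals in the commit's example.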
A. Unique TensorFlower [Wed, 28 Mar 2018 20:27:07 +0000 (13:27 -0700)]
Make sure tensor sizes match before inspecting their content.
PiperOrigin-RevId: 190823557
A. Unique TensorFlower [Wed, 28 Mar 2018 20:21:05 +0000 (13:21 -0700)]
[XLA] Redesign: add the rest of client-service interfaces.
The basic idea is, on the client side, for each public method that has a Computation parameter, add an overload with XlaComputation. If such a method needs to call the service side, add corresponding service interfaces.
Also make XlaComputation::GetProgramShape return StatusOr, to be consistent with the Computation class.
PiperOrigin-RevId: 190822601
A. Unique TensorFlower [Wed, 28 Mar 2018 20:13:20 +0000 (13:13 -0700)]
Add comment that explicitly states that InitTableIterator is Thread-unsafe.
PiperOrigin-RevId: 190821427
A. Unique TensorFlower [Wed, 28 Mar 2018 20:11:12 +0000 (13:11 -0700)]
Add op cost model for MaxPool, AvgPool, FusedBatchNorm, their grad ops, and
ReluGrad.
PiperOrigin-RevId: 190821116
A. Unique TensorFlower [Wed, 28 Mar 2018 19:29:32 +0000 (12:29 -0700)]
Supports quantized reduce_mean in TF Lite.
PiperOrigin-RevId: 190813997
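Quantized ops like this work on affine-quantized integers, q = round(x / scale) + zero_point. A useful property for a mean is that the affine mapping commutes with averaging, so the mean can be computed on the integer codes and dequantized once. A hedged sketch of that arithmetic (illustrative, not the TFLite kernel):

```python
def quantize(x, scale, zero_point):
    """Affine-quantize a real value to uint8: q = round(x/scale) + zero_point."""
    q = round(x / scale) + zero_point
    return min(255, max(0, q))  # Clamp to the uint8 range.

def dequantize(q, scale, zero_point):
    """Recover the real value represented by integer code q."""
    return scale * (q - zero_point)

def quantized_mean(qs, scale, zero_point):
    """Mean over quantized values: averaging the codes and dequantizing once
    equals the mean of the dequantized values (the affine map is linear)."""
    mean_q = sum(qs) / len(qs)
    return dequantize(mean_q, scale, zero_point)
```

A real kernel would also requantize the result to the output tensor's own scale and zero point; this sketch stops at the real-valued mean.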
Noah Eisen [Wed, 28 Mar 2018 19:28:32 +0000 (12:28 -0700)]
Upgrade gRPC version used in OSS Tensorflow
PiperOrigin-RevId: 190813848
A. Unique TensorFlower [Wed, 28 Mar 2018 19:19:27 +0000 (12:19 -0700)]
When importing meta graphs under name scopes,
the names of the created ops are prepended with the scopes.
Since the saver_def of the meta graph does not contain this information,
we need to pass it explicitly to Saver.
PiperOrigin-RevId: 190812434
A. Unique TensorFlower [Wed, 28 Mar 2018 19:18:24 +0000 (12:18 -0700)]
[XLA] Redesign: add the rest of the XlaBuilder public methods.
PiperOrigin-RevId: 190812260
Benoit Steiner [Wed, 28 Mar 2018 19:16:51 +0000 (12:16 -0700)]
Don't access properties in case they're not present
PiperOrigin-RevId: 190811935
Akshay Modi [Wed, 28 Mar 2018 19:06:33 +0000 (12:06 -0700)]
Add some VLOGs to make it easier to see why things don't go through the fast path
PiperOrigin-RevId: 190809906
A. Unique TensorFlower [Wed, 28 Mar 2018 19:00:18 +0000 (12:00 -0700)]
Reorder element wise operators across reshape operators.
This allows batch-norm folding to work across reshape operators.
PiperOrigin-RevId: 190808678
A. Unique TensorFlower [Wed, 28 Mar 2018 18:48:28 +0000 (11:48 -0700)]
Fixes to DepthwiseConv kernel
PiperOrigin-RevId: 190806668
A. Unique TensorFlower [Wed, 28 Mar 2018 18:34:32 +0000 (11:34 -0700)]
In contrib/all_reduce raise a ValueError if the input tensors
do not have fully-defined shapes.
PiperOrigin-RevId: 190804146
A. Unique TensorFlower [Wed, 28 Mar 2018 18:24:01 +0000 (11:24 -0700)]
Update tests in constant_folding_test.cc so that they evaluate both the optimized and the original graph and check that the output tensors they produce are the same.
PiperOrigin-RevId: 190802264
Chris Ying [Wed, 28 Mar 2018 18:22:28 +0000 (11:22 -0700)]
Fix TPUClusterResolver tpu parameter for profiler tool.
PiperOrigin-RevId: 190801968
A. Unique TensorFlower [Wed, 28 Mar 2018 18:18:45 +0000 (11:18 -0700)]
Make ArithmeticOptimizer robust to failures of shape inference and individual stages.
Get rid of graph annotation and use GraphProperties directly.
PiperOrigin-RevId: 190801044
Eugene Brevdo [Wed, 28 Mar 2018 17:25:27 +0000 (10:25 -0700)]
Properly serialize ResourceVariable global_step into the metagraph.
Prior to this, saving and restoring a graph with a resource variable global_step
would cause the global_step collection of the reimported graph to contain
a resource tensor (the object underlying the ResourceVariable); none of the actual
metadata associated with it would be serialized.
PiperOrigin-RevId: 190791443
A. Unique TensorFlower [Wed, 28 Mar 2018 17:15:46 +0000 (10:15 -0700)]
internal change
PiperOrigin-RevId: 190789794
A. Unique TensorFlower [Wed, 28 Mar 2018 17:15:42 +0000 (10:15 -0700)]
Avoid overwriting existing namespace items that might replace the converted functions.
PiperOrigin-RevId: 190789781
A. Unique TensorFlower [Wed, 28 Mar 2018 17:03:37 +0000 (10:03 -0700)]
Enable the Grappler arithmetic optimizer by default in Python tests.
PiperOrigin-RevId: 190787954
Allen Lavoie [Wed, 28 Mar 2018 17:03:06 +0000 (10:03 -0700)]
Allow positional arguments in tf.keras.Model subclasses
Makes the tf.keras.Layer.__call__ signature identical to tf.layers.Layer.__call__, but makes passing positional arguments other than "inputs" an error in most cases. The only case where it's allowed is for subclassed Models which do not have an "inputs" argument to their call() method.
This means subclassed Models no longer need to pass all but the first argument as a keyword argument (or do list packing/unpacking) when call() takes multiple Tensor arguments.
Includes errors for cases where it is ambiguous whether an argument indicates an input, but otherwise doesn't do much to support non-"inputs" call() signatures for shape inference or deferred Tensors. The definition of an input/non-input is pretty clear, so the remaining cleanup will mostly be tracking down all of the users of "self.call" and getting them to pass inputs as positional arguments if necessary.
PiperOrigin-RevId: 190787899
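The rule above hinges on whether call() names an explicit "inputs" argument. One way such a check can be expressed is via signature inspection; the helper below is a hypothetical sketch of the idea, not the Keras implementation:

```python
import inspect

def call_accepts_inputs_arg(call_fn):
    """True if the call() signature names an explicit 'inputs' argument,
    the condition under which extra positional arguments would be rejected."""
    params = inspect.signature(call_fn).parameters
    return "inputs" in params

class ModelA:
    # Canonical signature: positional args beyond `inputs` are disallowed.
    def call(self, inputs, training=False):
        return inputs

class ModelB:
    # No `inputs` arg: multiple positional Tensor arguments are permitted.
    def call(self, first, second):
        return first, second
```

For bound methods, `inspect.signature` already excludes `self`, so the check looks only at user-visible parameters.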
Alexandre Passos [Wed, 28 Mar 2018 15:25:14 +0000 (08:25 -0700)]
Move ExecuteNode and CopyToDevice_Internal
PiperOrigin-RevId: 190775681
A. Unique TensorFlower [Wed, 28 Mar 2018 07:57:37 +0000 (00:57 -0700)]
Internal change
PiperOrigin-RevId: 190735724
Dustin Tran [Wed, 28 Mar 2018 06:15:08 +0000 (23:15 -0700)]
Have TensorFlow Distributions share name scopes across method calls.
PiperOrigin-RevId: 190728742
A. Unique TensorFlower [Wed, 28 Mar 2018 04:22:54 +0000 (21:22 -0700)]
Speed up statistical_testing_test by consolidating sess.run calls.
PiperOrigin-RevId: 190721153
A. Unique TensorFlower [Wed, 28 Mar 2018 04:03:41 +0000 (21:03 -0700)]
Fix non-uniformity of orthogonal matrices.
Add test code for this purpose.
PiperOrigin-RevId: 190719729
A. Unique TensorFlower [Wed, 28 Mar 2018 02:58:53 +0000 (19:58 -0700)]
Fix _force_data_dependency for scalar inputs
PiperOrigin-RevId: 190715033
A. Unique TensorFlower [Wed, 28 Mar 2018 02:42:55 +0000 (19:42 -0700)]
Implement strip assert in DebugStripper.
PiperOrigin-RevId: 190713919
Benoit Steiner [Wed, 28 Mar 2018 02:24:40 +0000 (19:24 -0700)]
Fixed the interaction between virtual cluster and measuring cost estimator.
PiperOrigin-RevId: 190712404
Mark Heffernan [Wed, 28 Mar 2018 01:31:55 +0000 (18:31 -0700)]
Fix problem with HandleElementwiseUnary/Binary in DfsHloVisitorWithDefault.
DfsHloVisitorWithDefault incorrectly included some overrides for handling
several elementwise binary and unary opcodes. These overrides explicitly
called DefaultAction which meant that these opcodes were not handled by
HandleElementwiseUnary/Binary. This CL removes these overrides and adds a
comment describing the potential problem. Unfortunately, I don't see a way
of automatically catching these issues when new opcodes are added, so the
comment will have to do.
PiperOrigin-RevId: 190708245
Akshay Modi [Wed, 28 Mar 2018 01:18:33 +0000 (18:18 -0700)]
Pass options to TFE_ContextOptionsSetAsync
PiperOrigin-RevId: 190707017
A. Unique TensorFlower [Wed, 28 Mar 2018 01:09:37 +0000 (18:09 -0700)]
[XLA] Remove CheckShape and CheckSameShape in ComputationBuilder, they are not/rarely used.
PiperOrigin-RevId: 190706088
Max Galkin [Wed, 28 Mar 2018 01:06:30 +0000 (18:06 -0700)]
Disable new Gather/Slice estimators for now to fix the crashes during some TF graphs optimizations.
PiperOrigin-RevId: 190705686
A. Unique TensorFlower [Wed, 28 Mar 2018 00:35:55 +0000 (17:35 -0700)]
Support GatherV2 (using Gather)
PiperOrigin-RevId: 190702442
David Majnemer [Wed, 28 Mar 2018 00:35:14 +0000 (17:35 -0700)]
[XLA] Accurately measure FLOPs for base-dilated convolutions
We incorrectly counted FLOPs when the output and kernel line up to access the
padding or the dilated area. These accesses should not be counted as contributing to
the FLOP count.
PiperOrigin-RevId: 190702384
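A 1D sketch of the counting rule (hypothetical helper, not the XLA cost model): base dilation inserts zeros between input elements, and only kernel taps that land on a real element should contribute a multiply-add.

```python
def dilated_conv1d_flops(input_len, kernel_len, base_dilation=1, pad=0):
    """Count multiply-adds in a stride-1 1D convolution whose input is
    base-dilated (zeros inserted between elements) and zero-padded on both
    sides. Taps landing on padding or inserted zeros contribute no FLOPs."""
    dilated_len = (input_len - 1) * base_dilation + 1 if input_len else 0
    # Positions in the padded, dilated input that hold a real input element.
    real = {pad + i * base_dilation for i in range(input_len)}
    total_len = dilated_len + 2 * pad
    flops = 0
    for out_pos in range(total_len - kernel_len + 1):
        for k in range(kernel_len):
            if out_pos + k in real:
                flops += 2  # one multiply + one add
    return flops
```

With no dilation and no padding every tap is real, so the count reduces to the familiar `outputs * kernel_len * 2`; with dilation it drops, matching the fix's intent.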
Justin Lebar [Wed, 28 Mar 2018 00:16:31 +0000 (17:16 -0700)]
[XLA] Assert that all buffers and sub-buffers passed to XLA have an explicit pointer.
In the past, we allowed sub-buffers to be null if the top-level tuple
was non-null.
This doesn't actually work well on the GPU: For ops that are implemented
using cudnn or cublas, we have to have a pointer to the sub-buffer on
the host in order to make the call. Retrieving it from the GPU in an
efficient manner is complicated, and the best we can come up with isn't
all that efficient (fundamentally having to pull data down from the GPU
blocks the ability of the CPU to "run ahead" of the GPU).
Since TF wasn't making use of our flexibility *anyway*, we add the
requirement that XLA be given non-null pointers to all sub-buffers.
Changes to the XLA:GPU backend to take advantage of this will come
separately.
PiperOrigin-RevId: 190700021
A. Unique TensorFlower [Wed, 28 Mar 2018 00:15:14 +0000 (17:15 -0700)]
Make slot_creator use DistributionStrategy for co-locating variables.
Make DistributionStrategy.colocate_vars_with() match the existing
behavior of ops.colocate_with() by default, for compatibility.
PiperOrigin-RevId: 190699882
Allen Lavoie [Wed, 28 Mar 2018 00:14:50 +0000 (17:14 -0700)]
Make tf.keras.Sequential (properly) Checkpointable
Just numbers Layers like "layer-N". It may also make sense to track them by
"ClassName-M", but that's a backwards-compatible change.
Special-cases all of the dependency collection, since Layers can be added and
removed from Sequential.
PiperOrigin-RevId: 190699818
A. Unique TensorFlower [Wed, 28 Mar 2018 00:13:22 +0000 (17:13 -0700)]
K-FAC: Bugfixes for TPU compatibility with covariance update ops.
PiperOrigin-RevId: 190699635
A. Unique TensorFlower [Wed, 28 Mar 2018 00:02:59 +0000 (17:02 -0700)]
[XLA] Redesign: implement Tuple and GetTupleElement.
PiperOrigin-RevId: 190698245
A. Unique TensorFlower [Tue, 27 Mar 2018 23:53:13 +0000 (16:53 -0700)]
Change the host-op result per TPU step from a single value to a collection of values.
PiperOrigin-RevId: 190696953
A. Unique TensorFlower [Tue, 27 Mar 2018 23:48:31 +0000 (16:48 -0700)]
Improve support for DT_HALF and DT_BFLOAT16 in Grappler graph optimizations.
Update GrapplerTest::EvaluateNodes to take feeds as an argument, to make it easier to write tests with placeholders.
PiperOrigin-RevId: 190696386
A. Unique TensorFlower [Tue, 27 Mar 2018 23:43:48 +0000 (16:43 -0700)]
Fixed a bug in ConvKFCBasicMultiIndepFB introduced in the last CL
PiperOrigin-RevId: 190695737
A. Unique TensorFlower [Tue, 27 Mar 2018 23:26:12 +0000 (16:26 -0700)]
Test all TFLite kernel implementations for fully connected.
PiperOrigin-RevId: 190693455
Allen Lavoie [Tue, 27 Mar 2018 22:55:04 +0000 (15:55 -0700)]
TFTS: Fix a bug in the SavedModel cold-start export
It now correctly broadcasts start state across whatever batch dimension it is
passed, rather than squishing it down to a batch dimension of 1.
PiperOrigin-RevId: 190688855
A. Unique TensorFlower [Tue, 27 Mar 2018 22:47:23 +0000 (15:47 -0700)]
Add node types for DFS traversal to catch more issues with deduping inputs to in-place ops.
PiperOrigin-RevId: 190687820
David Majnemer [Tue, 27 Mar 2018 22:37:35 +0000 (15:37 -0700)]
[XLA] Fold reduce-window(convert(pad(X))) into reduce-window(convert(X))
ReduceWindow operations are done in higher precision to avoid accumulation
error. Convert operations can find their way between a ReduceWindow and a Pad
which can prevent a Pad from combining with a ReduceWindow.
Fix this by looking past the Convert while also checking that the Convert'd
Pad's init value is identical to the reduce-window value.
PiperOrigin-RevId: 190686175
Alexandre Passos [Tue, 27 Mar 2018 22:08:12 +0000 (15:08 -0700)]
Moves Execute() from c_api.cc
PiperOrigin-RevId: 190681610
Skye Wanderman-Milne [Tue, 27 Mar 2018 22:07:05 +0000 (15:07 -0700)]
Make _USE_C_API = True and _USE_C_SHAPES = False work with handle data, take 2.
This change makes _set_shapes_for_outputs_c_api fetch and set
Tensor._handle_data. This is necessary for running the
Python shape inference code on resource tensors.
PiperOrigin-RevId: 190681459
Derek Murray [Tue, 27 Mar 2018 21:23:28 +0000 (14:23 -0700)]
[tf.data] Raise error when window size is 0 in `tf.contrib.data.group_by_window()`.
PiperOrigin-RevId: 190673466
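The semantics being guarded can be modeled in plain Python (a sketch of the grouping behavior, with hypothetical names; not the tf.data implementation): elements accumulate per key, a full window is emitted once `window_size` elements of a key arrive, and a zero window size is rejected up front since no window could ever fill.

```python
def group_by_window(elements, key_func, window_size):
    """Group elements by key_func and emit a window (batch) per key each
    time `window_size` elements of that key have accumulated."""
    if window_size == 0:
        raise ValueError("window_size must be greater than zero.")
    buckets, windows = {}, []
    for elem in elements:
        k = key_func(elem)
        buckets.setdefault(k, []).append(elem)
        if len(buckets[k]) == window_size:
            windows.append(buckets.pop(k))
    # Flush remaining partial windows at end of input.
    for k in list(buckets):
        windows.append(buckets.pop(k))
    return windows
```

Without the explicit check, a window size of 0 would silently accumulate elements forever, which is why raising early is the friendlier behavior.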
Suharsh Sivakumar [Tue, 27 Mar 2018 21:17:16 +0000 (14:17 -0700)]
Improve error message when users forget to pass toco cmdline args for quantization, but have a model that has FAKE_QUANT operations.
PiperOrigin-RevId: 190672414
Nupur Garg [Tue, 27 Mar 2018 21:14:01 +0000 (14:14 -0700)]
Add "serve" as a default value for savedmodel_tagset.
PiperOrigin-RevId: 190671867
Dimitris Vardoulakis [Tue, 27 Mar 2018 21:12:00 +0000 (14:12 -0700)]
Fix documentation of Clamp; it does not take a computation at all.
See:
https://github.com/tensorflow/tensorflow/blob/r1.6/tensorflow/compiler/xla/client/computation_builder.h#L668
PiperOrigin-RevId: 190671530
Akshay Modi [Tue, 27 Mar 2018 21:06:44 +0000 (14:06 -0700)]
Fast path for calling pack when the list is full of eager tensors.
FastPathExecute function also allows inputs to be sequences instead of just lists.
PiperOrigin-RevId: 190670587