Skye Wanderman-Milne [Mon, 7 May 2018 23:16:32 +0000 (16:16 -0700)]
ShapeRefiner fix: some variant-type tensors have handle data.
ShapeRefiner::AddNode() would only propagate handle data for
DT_RESOURCE tensors, but not DT_VARIANT. The Python shape inference
logic in common_shapes.py handled this correctly, which is why we didn't
notice this earlier. In particular, list ops use DT_VARIANT with
handle data.
PiperOrigin-RevId:
195739586
Akshay Agrawal [Mon, 7 May 2018 23:16:24 +0000 (16:16 -0700)]
Refactor TensorArray to avoid copies and memory allocations when executing eagerly.
With this change, writes to TensorArrays when eager execution is enabled take O(1) time instead of O(n). Additionally, whereas writing to a TensorArray when constructing a graph results in allocating a new Python TensorArray object, writing to a TensorArray with eager enabled no longer performs that allocation (graph construction uses these allocations to ensure correctness of control flow and gradients, but this isn't necessary when executing eagerly). Finally, this change also removes the artificial write-once semantics of TensorArrays when executing eagerly.
PiperOrigin-RevId:
195739572
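The complexity claim above can be sketched in plain Python. This is a toy illustration only; the class and method names below are hypothetical, not the real tf.TensorArray API. Graph construction returns a fresh object per write (O(n) copy), while eager execution can mutate one backing store in place (O(1) amortized).

```python
class GraphModeTensorArray:
    """Toy model: each write copies all elements into a new object, O(n)."""
    def __init__(self, elements=()):
        self._elements = list(elements)

    def write(self, index, value):
        new = list(self._elements)  # full copy on every write
        if index >= len(new):
            new.extend([None] * (index + 1 - len(new)))
        new[index] = value
        return GraphModeTensorArray(new)  # caller must use the new object


class EagerTensorArray:
    """Toy model: writes mutate the backing list in place, O(1) amortized."""
    def __init__(self):
        self._elements = []

    def write(self, index, value):
        if index >= len(self._elements):
            self._elements.extend([None] * (index + 1 - len(self._elements)))
        self._elements[index] = value
        return self  # same object; no new allocation

    def read(self, index):
        return self._elements[index]
```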
Blake Hechtman [Mon, 7 May 2018 22:58:29 +0000 (15:58 -0700)]
[XLA] Make post order a possible schedule as it sometimes uses less memory than
the DFS or list scheduler and it is very simple.
PiperOrigin-RevId:
195736916
Derek Murray [Mon, 7 May 2018 22:51:05 +0000 (15:51 -0700)]
[Remote functions] Only set the default runner *after* resolving the remote FLR.
Previously, if the `runner` was not specified for a function
execution, we would immediately set it to the default runner of the
*local* FLR, even if the function was to be executed remotely. This
change postpones the resolution of the default runner until after the
function invocation has been routed to the FLR that will actually
execute it.
As a result, we avoid the pathological case where a GPU device using a
private threadpool (TF_GPU_THREAD_MODE=gpu_private) ends up running
all of the ops for the CPU-side input pipeline on the private
threadpool.
PiperOrigin-RevId:
195735734
A. Unique TensorFlower [Mon, 7 May 2018 22:47:57 +0000 (15:47 -0700)]
Add EvaluateNodes to tests: RemoveIdentityTransposesMultipleOutputs, RemoveTransposesWithControlDependency, CombineBitcasts, CombineAndRemoveBitcasts, RemoveRedundantCast
PiperOrigin-RevId:
195735234
Michael Kuperstein [Mon, 7 May 2018 22:41:52 +0000 (15:41 -0700)]
[XLA] Fix a "we're we're" in the operation semantics.
PiperOrigin-RevId:
195734316
A. Unique TensorFlower [Mon, 7 May 2018 22:41:22 +0000 (15:41 -0700)]
Add support for select (via tf.where) to tflite.
PiperOrigin-RevId:
195734246
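The three-argument select semantics that tf.where provides can be sketched for flat Python lists. This is a hypothetical helper, not the tflite kernel, which also handles tensor shapes and broadcasting:

```python
def select(condition, x, y):
    """Elementwise select: pick x[i] where condition[i] is true, else y[i]."""
    if not (len(condition) == len(x) == len(y)):
        raise ValueError("select expects equal-length inputs")
    return [xi if c else yi for c, xi, yi in zip(condition, x, y)]
```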
Michael Case [Mon, 7 May 2018 22:24:02 +0000 (15:24 -0700)]
Internal Change.
PiperOrigin-RevId:
195731675
Michael Case [Mon, 7 May 2018 22:20:49 +0000 (15:20 -0700)]
Fix TypeError in update_version.py
PiperOrigin-RevId:
195731183
Pavithra Vijay [Mon, 7 May 2018 22:16:59 +0000 (15:16 -0700)]
Add support for tf.data.Dataset iterators in model training/eval methods in eager-mode
PiperOrigin-RevId:
195730534
Igor Ganichev [Mon, 7 May 2018 22:15:01 +0000 (15:15 -0700)]
Replace references to TensorInfo with XlaTensor
PiperOrigin-RevId:
195730139
A. Unique TensorFlower [Mon, 7 May 2018 21:55:26 +0000 (14:55 -0700)]
Disable automated testing of tensorflow/compiler/tests:extract_image_patches_op_test_cpu_ondemand
A recent change has made this test flaky.
PiperOrigin-RevId:
195726647
A. Unique TensorFlower [Mon, 7 May 2018 21:34:11 +0000 (14:34 -0700)]
Allow the output to have a different shape from the input in image.transform (#17011).
PiperOrigin-RevId:
195723288
Alexandre Passos [Mon, 7 May 2018 21:29:03 +0000 (14:29 -0700)]
Fix resource variable in cond gradient.
PiperOrigin-RevId:
195722449
Justin Lebar [Mon, 7 May 2018 21:23:19 +0000 (14:23 -0700)]
[XLA] Shard compilation of HloEvaluator.
PiperOrigin-RevId:
195721404
A. Unique TensorFlower [Mon, 7 May 2018 21:15:37 +0000 (14:15 -0700)]
Move PadV2Options to the end, in order to maintain schema compatibility.
PiperOrigin-RevId:
195720133
A. Unique TensorFlower [Mon, 7 May 2018 21:13:23 +0000 (14:13 -0700)]
Use 64-bit aggregation for gradients and hessians, since the 32-bit version is numerically unstable for large minibatches.
PiperOrigin-RevId:
195719795
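The instability of 32-bit accumulation can be demonstrated in plain Python by rounding each partial sum through float32 with the struct module (a self-contained illustration; 2**24 is where float32 stops representing consecutive integers, so small per-example contributions are lost entirely):

```python
import struct

def to_f32(x):
    """Round a Python float (f64) to the nearest float32 value."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

def sum_f32(values):
    """Accumulate in simulated 32-bit precision."""
    acc = to_f32(0.0)
    for v in values:
        acc = to_f32(acc + to_f32(v))
    return acc

def sum_f64(values):
    """Accumulate in 64-bit precision (Python floats are f64)."""
    acc = 0.0
    for v in values:
        acc += v
    return acc

# A large minibatch of small contributions on top of a big running total:
# in float32, every addition of 1.0 to 2**24 rounds away to nothing.
grads = [2.0 ** 24] + [1.0] * 1000
```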
Anna R [Mon, 7 May 2018 21:03:15 +0000 (14:03 -0700)]
Change deprecation_version field in api_def.proto to a string.
PiperOrigin-RevId:
195718061
Sanjoy Das [Mon, 7 May 2018 21:00:09 +0000 (14:00 -0700)]
[TF:XLA] Bump open source llvm revision to r331624
PiperOrigin-RevId:
195717497
A. Unique TensorFlower [Mon, 7 May 2018 20:24:12 +0000 (13:24 -0700)]
Fixes for accessing variables with a MirroredStrategy in a
cross-tower context:
* only provide read-only access to variables via get()
* don't fail if the variable isn't copied to the current device in
get()
* make _as_graph_element() return the aggregate value for tower-local
variables (instead of the incorrect previous behavior of returning
the primary)
PiperOrigin-RevId:
195711474
A. Unique TensorFlower [Mon, 7 May 2018 20:18:33 +0000 (13:18 -0700)]
Specialize functions only once per unique context.
PiperOrigin-RevId:
195710562
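One way to read "specialize only once per unique context" is memoization keyed by the (function, context) pair. A hypothetical Python sketch of the idea (the real change is in Grappler's C++ function optimizer, with a different API):

```python
_specialization_cache = {}

def specialize(func_name, context, build_fn):
    """Build a specialization at most once per (function, context) pair.

    Repeated requests with the same key reuse the cached result instead
    of re-running the (possibly expensive) build_fn.
    """
    key = (func_name, context)
    if key not in _specialization_cache:
        _specialization_cache[key] = build_fn()
    return _specialization_cache[key]
```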
Sanjoy Das [Mon, 7 May 2018 19:48:38 +0000 (12:48 -0700)]
Rename HloDotWithContractDimsMatcher to HloDotWithContractingDimsMatcher
This is a typo I introduced in cr/195514907.
PiperOrigin-RevId:
195706006
A. Unique TensorFlower [Mon, 7 May 2018 19:41:07 +0000 (12:41 -0700)]
Make test tensorflow/python/keras:resnet50_test be size "medium"
This test sometimes runs longer than 60s, and has been getting flaky timeouts
as a result. With a longer timeout, it succeeds reliably.
PiperOrigin-RevId:
195704998
A. Unique TensorFlower [Mon, 7 May 2018 19:37:36 +0000 (12:37 -0700)]
Adding Greater/GreaterEqual/LessEqual ops to complement Less.
PiperOrigin-RevId:
195704492
Ian Langmore [Mon, 7 May 2018 19:17:02 +0000 (12:17 -0700)]
Add 'optonly' directive to linear_operator_circulant tests.
PiperOrigin-RevId:
195701399
Bixia Zheng [Mon, 7 May 2018 19:15:52 +0000 (12:15 -0700)]
[TF:XLA:GPU] Allow the use of linear address when there are size one dimensions
in a tensor.
The current implementation of EmitArrayElementAddress incorrectly concludes
that having a size one dimension in a tensor indicates broadcasting is needed
and the linear address can't be used to access the tensor. We fix this by
leaving LinearValidOnShape to decide whether the linear address can be used to
access the tensor. This enables the vectorization of loads/stores in unrolled
elementwise op kernels when other criteria are met.
Add a test case.
PiperOrigin-RevId:
195701194
Justin Lebar [Mon, 7 May 2018 19:10:15 +0000 (12:10 -0700)]
[XLA] Add FusionKind matcher to pattern_matcher.h.
PiperOrigin-RevId:
195700319
Shivani Agrawal [Mon, 7 May 2018 19:03:20 +0000 (12:03 -0700)]
[tf.data] Patch to unref iterator_resource in DeserializeIteratorOp.
PiperOrigin-RevId:
195698980
Gunhan Gulsoy [Mon, 7 May 2018 18:52:46 +0000 (11:52 -0700)]
Disable autograph cfg_test in windows.
PiperOrigin-RevId:
195697446
Alexandre Passos [Mon, 7 May 2018 18:49:08 +0000 (11:49 -0700)]
Register bool scatter_update for resource variables
Fixes #17784
PiperOrigin-RevId:
195696915
Igor Saprykin [Mon, 7 May 2018 18:48:31 +0000 (11:48 -0700)]
Generalize the input to TPU distribution strategy. Add cross-shard-replica sum.
TPUStrategy passes tests in minimize_loss_test. This required adding the capability to have `iterations x cores` inputs of any structure. I also resolved a large number of small issues and uncovered more things to resolve, which are documented as TODOs.
PiperOrigin-RevId:
195696833
Yu-Cheng Ling [Mon, 7 May 2018 18:27:02 +0000 (11:27 -0700)]
Release notes for TensorFlow Lite.
PiperOrigin-RevId:
195693362
A. Unique TensorFlower [Mon, 7 May 2018 18:11:28 +0000 (11:11 -0700)]
Extracts PartialConcatConstFolding into a method.
PiperOrigin-RevId:
195690333
A. Unique TensorFlower [Mon, 7 May 2018 18:09:47 +0000 (11:09 -0700)]
Add tests for broadcasting KL divergence calculations.
PiperOrigin-RevId:
195690035
A. Unique TensorFlower [Mon, 7 May 2018 18:05:56 +0000 (11:05 -0700)]
Replaced calls to tensorflow::StringPiece::ToString with std::string conversions.
That is, instances of sp.ToString() are replaced with std::string(sp).
This will allow tensorflow::StringPiece::ToString to be removed, which is necessary before it can be replaced with absl::string_view.
PiperOrigin-RevId:
195689392
A. Unique TensorFlower [Mon, 7 May 2018 17:49:26 +0000 (10:49 -0700)]
Extend block sparsity support for TPUs
PiperOrigin-RevId:
195685740
A. Unique TensorFlower [Mon, 7 May 2018 17:47:35 +0000 (10:47 -0700)]
Add EvaluateNodes to HoistFactorDiv test.
PiperOrigin-RevId:
195685340
A. Unique TensorFlower [Mon, 7 May 2018 17:42:13 +0000 (10:42 -0700)]
Removing quotations to fix the broken link found on https://tensorflow.org/programmers_guide/embedding
The link in "Follow this link to see a fun example of thumbnail images in the Embedding Projector" should go to https://www.tensorflow.org/images/embedding-mnist.mp4 but instead goes to the TF index page.
PiperOrigin-RevId:
195684456
Abhijit Karmarkar [Mon, 7 May 2018 17:27:50 +0000 (10:27 -0700)]
Internal Change
PiperOrigin-RevId:
195681946
A. Unique TensorFlower [Mon, 7 May 2018 13:27:43 +0000 (06:27 -0700)]
Control flow graph with forward and backward analysis
PiperOrigin-RevId:
195654450
A. Unique TensorFlower [Mon, 7 May 2018 11:25:03 +0000 (04:25 -0700)]
Automated g4 rollback of changelist 195638795
PiperOrigin-RevId:
195645734
A. Unique TensorFlower [Mon, 7 May 2018 10:00:34 +0000 (03:00 -0700)]
Improve fusion logic of (a dot b) * alpha
The previous approach didn't work because a multiplication by a scalar value
will be changed into an explicit broadcast.
Another issue that is fixed in this CL is retrieving the constant value from
the literal. This depends on the PrimitiveType; before, we always assumed it to be double.
Also, when checking ImplementedAsGemm() we should not call it recursively, but instead perform just the check related to kDot.
Finally add an execution test and adjust the fusion logic test.
PiperOrigin-RevId:
195638795
A. Unique TensorFlower [Mon, 7 May 2018 08:57:29 +0000 (01:57 -0700)]
Remove unused threadpool from stream executor.
PiperOrigin-RevId:
195632175
Russell Power [Sun, 6 May 2018 01:55:28 +0000 (18:55 -0700)]
Gracefully handle workers without heartbeat support enabled.
PiperOrigin-RevId:
195560525
Justin Lebar [Sat, 5 May 2018 19:34:32 +0000 (12:34 -0700)]
[XLA:GPU] Zero out input buffers before running cudnn conv autotune.
We don't need a corresponding change in gemm_thunk.cc because for gemms,
we do our autotune at runtime, at which point we have some real data in
our input/output buffers.
PiperOrigin-RevId:
195548896
Jacques Pienaar [Sat, 5 May 2018 19:02:32 +0000 (12:02 -0700)]
[TPU] Add option to only compile a replicated graph.
Useful when wanting to compile a computation but not run it. Returns a serialized CompilationResult string with the error message.
PiperOrigin-RevId:
195547847
Shashi Shekhar [Sat, 5 May 2018 18:55:53 +0000 (11:55 -0700)]
Allow benchmark model graph to be specified in text proto format.
PiperOrigin-RevId:
195547670
Justin Lebar [Sat, 5 May 2018 18:24:06 +0000 (11:24 -0700)]
[XLA] Always be willing to duplicate widening kConvert instructions during fusion.
This has the effect of pushing widening kConvert HLOs into consumers.
This is what we want, because it means that the producer writes the
narrower type (e.g. f16) and the consumer reads it and internally
upcasts to the wider type (e.g. f32). This lets the producer and
consumer both run faster, because they have to touch less memory.
PiperOrigin-RevId:
195546910
Mingsheng Hong [Sat, 5 May 2018 14:10:58 +0000 (07:10 -0700)]
Part 2 of Swift<->TF sends/recvs: support receiving tensors in TF from
Swift via direct session.
The changes are:
1. Added a TF experimental C API for Swift host to enqueue a tensor for sending
to TF. Again, the C APIs can be removed once the Fifo-queue based design
proves stable later.
2. TFLowerGraph is extended to generate Fifo related nodes for TF to receive
tensors. This is similar to the extension for TF to send tensors.
3. TFPartition is extended to support host send (createHostSend()), which does
the tensor send via a new protocol method TensorSendableReceivable.sendToDevice().
The main complexity is in sending a scalar, where a new protocol method
TensorSendableReceivable.createScalarTensor() is called to first create a tensor
out of it, and then send it over to TF.
Also removed code for protocol conformance on AccelerableByTensorFlow. Instead
have compiler look up that conformance from the SILFunction on sending/receiving
tensors.
AccelerableByTensorFlow could be removed from the compiler-known protocol list
now, but we'll defer that till things have stabilized more (in the past this
protocol has been added to and removed from the list at different times).
PiperOrigin-RevId:
195539436
Sanjoy Das [Sat, 5 May 2018 05:04:20 +0000 (22:04 -0700)]
Remove uses of the kTransposeDot fusion
I didn't remove the enum itself, but after this change removing the enum should
be a simple NFC change (famous last words!).
This will make it easier to implement BatchDot on CPU.
The change removes usages of kTransposeDot by:
- Teaching TransposeFolding to "fuse" transposes into dots by flipping the
lhs_contracting_dims/rhs_contracting_dims fields.
- Replacing the notion of transpose_lhs/transpose_rhs in the IR emitters with
"has a non-canonical LHS contraction dimension"/"has a non-canonical RHS
contraction dimension" where the canonical LHS and RHS contraction dims [0]
are 1 and 0.
Some tests were getting away with creating Dot instructions with their
dimensions numbers unset. I've fixed these to create canonical dot operations
instead.
It is possible (but hard to tell without trying) that some of the IR emission
logic and Eigen runtime calls can now be simplified further. For instance,
instead of passing in a `transpose_lhs` and `transpose_rhs` to the Eigen GEMM
routines, we could instead pass in the LHS and RHS contraction dimensions
directly.
[0] See HloInstruction::CreateCanonicalDot.
PiperOrigin-RevId:
195514907
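The transpose-folding idea above, absorbing a transpose into the dot's contracting dimensions instead of keeping a kTransposeDot fusion, can be sketched for 2-D lists. This is toy code: dot_general and its parameters are illustrative, loosely modeled on the DotDimensionNumbers fields named in the change, not the real XLA API.

```python
def transpose(m):
    return [list(row) for row in zip(*m)]

def dot_general(lhs, rhs, lhs_contracting, rhs_contracting):
    """2-D dot with explicit contracting dimensions.

    Canonical dims are lhs_contracting=1, rhs_contracting=0 (plain matmul);
    a non-canonical dim means "this operand arrives transposed".
    """
    a = lhs if lhs_contracting == 1 else transpose(lhs)
    b = rhs if rhs_contracting == 0 else transpose(rhs)
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

# Folding: dot(transpose(A), B) with canonical dims gives the same result
# as dot(A, B) with lhs_contracting flipped from 1 to 0 -- no explicit
# transpose op needed.
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
fused = dot_general(A, B, lhs_contracting=0, rhs_contracting=0)
unfused = dot_general(transpose(A), B, lhs_contracting=1, rhs_contracting=0)
```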
Shashi Shekhar [Sat, 5 May 2018 02:18:44 +0000 (19:18 -0700)]
Fix landscape layout.
PiperOrigin-RevId:
195506194
A. Unique TensorFlower [Sat, 5 May 2018 01:49:08 +0000 (18:49 -0700)]
OVIC Benchmarker App (currently without the functionality to bind to a CPU).
PiperOrigin-RevId:
195503895
A. Unique TensorFlower [Sat, 5 May 2018 01:49:08 +0000 (18:49 -0700)]
add support for PadV2
PiperOrigin-RevId:
195503894
Allen Lavoie [Sat, 5 May 2018 01:25:18 +0000 (18:25 -0700)]
Checkpointable: A small utility for exempting objects from __setattr__ tracking
Exposes it as tf.contrib.checkpoint.NoDependency. Objects wrapped in a
NoDependency object get unwrapped in __setattr__ and not tracked.
Removes the _save_counter dependency from tf.train.Checkpoint (the save counter
is still tracked as "save_counter" and always has been, so this is a
backwards-compatible dependency removal).
PiperOrigin-RevId:
195502562
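The unwrap-in-__setattr__ pattern described above can be sketched with toy classes (hypothetical names and representation; the real implementation lives in the checkpointable machinery):

```python
class NoDependency:
    """Marker wrapper: attributes assigned through it are not tracked."""
    def __init__(self, value):
        self.value = value


class Trackable:
    """Toy sketch: __setattr__ records every attribute as a dependency,
    unless the value is wrapped in NoDependency, which is unwrapped
    before storing and left untracked."""
    def __init__(self):
        # Bypass __setattr__ so tracking state exists before any tracking.
        object.__setattr__(self, '_dependencies', {})

    def __setattr__(self, name, value):
        if isinstance(value, NoDependency):
            value = value.value               # unwrap, but don't track
        else:
            self._dependencies[name] = value  # tracked for checkpointing
        object.__setattr__(self, name, value)
```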
Max Galkin [Sat, 5 May 2018 01:17:45 +0000 (18:17 -0700)]
GuaranteeConst is a NoOp for the op_level_cost_estimator.
PiperOrigin-RevId:
195501990
A. Unique TensorFlower [Sat, 5 May 2018 01:07:41 +0000 (18:07 -0700)]
Internal Change
PiperOrigin-RevId:
195501342
A. Unique TensorFlower [Sat, 5 May 2018 00:46:13 +0000 (17:46 -0700)]
Go: Update generated wrapper functions for TensorFlow ops.
PiperOrigin-RevId:
195499636
A. Unique TensorFlower [Sat, 5 May 2018 00:18:35 +0000 (17:18 -0700)]
Update ops-related pbtxt files.
PiperOrigin-RevId:
195497084
Jiri Simsa [Sat, 5 May 2018 00:03:52 +0000 (17:03 -0700)]
[tf.data] Adding `num_parallel_calls` to `map_and_batch`.
PiperOrigin-RevId:
195495206
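The semantics of the new `num_parallel_calls` knob — apply the map function with bounded parallelism, then group results into batches — can be sketched with concurrent.futures. This is a Python sketch of the fused behavior only; the real op is a C++ tf.data kernel:

```python
from concurrent.futures import ThreadPoolExecutor

def map_and_batch(elements, map_func, batch_size, num_parallel_calls):
    """Map elements with up to num_parallel_calls in flight, then batch.

    Results stay in input order, matching the deterministic behavior of
    the tf.data transformation.
    """
    with ThreadPoolExecutor(max_workers=num_parallel_calls) as pool:
        mapped = list(pool.map(map_func, elements))
    return [mapped[i:i + batch_size]
            for i in range(0, len(mapped), batch_size)]
```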
Bjarke Hammersholt Roune [Fri, 4 May 2018 23:51:06 +0000 (16:51 -0700)]
Add infrastructure for a backend-specific configuration for each op. This is intentionally not exposed in ComputationBuilder and is not intended for use or to be set at all prior to the last backend-specific part of compilation.
PiperOrigin-RevId:
195493500
A. Unique TensorFlower [Fri, 4 May 2018 23:37:27 +0000 (16:37 -0700)]
Expose read-only versions of tensors in tflite.
PiperOrigin-RevId:
195491701
Karmel Allison [Fri, 4 May 2018 23:01:02 +0000 (16:01 -0700)]
Add the ability to export separate SavedModels for train and eval mode to Estimator with two new methods, available in tf.contrib: export_all_saved_models and export_saved_model_for_mode.
PiperOrigin-RevId:
195485922
Justin Lebar [Fri, 4 May 2018 22:40:07 +0000 (15:40 -0700)]
[XLA] Print allowed attributes when the user specifies an invalid attr.
PiperOrigin-RevId:
195482974
Benoit Steiner [Fri, 4 May 2018 22:14:00 +0000 (15:14 -0700)]
Identify and prune nodes that can never be executed
PiperOrigin-RevId:
195478951
James Qin [Fri, 4 May 2018 21:53:58 +0000 (14:53 -0700)]
Fix build failure for macos py3
PiperOrigin-RevId:
195475780
Justin Lebar [Fri, 4 May 2018 21:40:02 +0000 (14:40 -0700)]
[XLA:GPU] Mark floating-point division as an inexpensive op.
"Expensive" really means "so expensive you'd choose not to fuse in order
to avoid doing it twice". FP division definitely isn't that expensive.
PiperOrigin-RevId:
195473524
A. Unique TensorFlower [Fri, 4 May 2018 19:28:42 +0000 (12:28 -0700)]
Some fixes to support another TF graph:
1. Fix ResolveBatchNormalization to avoid deleting arrays that may still be
used.
2. Correctly count the number of ops using a given array, even when some ops
use the same array as more than one of their inputs.
3. In PropagateFixedSizes for Concatenation ops, when resolving a -1 wildcard
to a fixed value, we were doing so in a local 'axis' variable without actually
updating op->axis! The resulting -1 value still in op->axis tripped runtime code,
causing the concatenation to misbehave during inference.
PiperOrigin-RevId:
195454037
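Point 3 above — resolving the -1 axis wildcard on the op itself rather than only in a local variable — can be sketched as follows (toy class, not the real toco types):

```python
class ConcatenationOp:
    def __init__(self, axis, rank):
        self.axis = axis  # may be a negative wildcard like -1
        self.rank = rank  # rank of the inputs being concatenated


def resolve_axis(op):
    """Resolve a negative axis wildcard to a fixed dimension index.

    The bug described above: the resolved value was computed in a local
    variable and never written back, so the -1 leaked through to
    inference. The fix is to update op.axis itself.
    """
    axis = op.axis
    if axis < 0:
        axis += op.rank  # e.g. -1 on rank-4 inputs resolves to 3
    op.axis = axis       # the fix: write the resolved value back
    return op
```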
A. Unique TensorFlower [Fri, 4 May 2018 18:40:20 +0000 (11:40 -0700)]
[XLA] Redesign: migrate the SWIG wrapped xla client.
Added LocalOp that wraps XlaOp, so that it's fully visible to swig.
PiperOrigin-RevId:
195446939
A. Unique TensorFlower [Fri, 4 May 2018 18:40:01 +0000 (11:40 -0700)]
[XLA] Cleanup client_library_test_base: move definition of CreateParameterAndTransferLiteral to .cc file
PiperOrigin-RevId:
195446864
Suharsh Sivakumar [Fri, 4 May 2018 18:17:17 +0000 (11:17 -0700)]
Operations before Identity operations should be quantized.
Fixes #19014
PiperOrigin-RevId:
195443326
A. Unique TensorFlower [Fri, 4 May 2018 17:50:29 +0000 (10:50 -0700)]
Internal clean up: change scanf to use int64_t instead of int64
PiperOrigin-RevId:
195438212
A. Unique TensorFlower [Fri, 4 May 2018 17:47:38 +0000 (10:47 -0700)]
Improve broadcast add implementation.
PiperOrigin-RevId:
195437679
Allen Lavoie [Fri, 4 May 2018 17:37:42 +0000 (10:37 -0700)]
TFTS: Make it easier to swap in different autoregressive models.
Adds a very simple LSTM encoder/decoder option as an example.
ARModel's new constructor argument is a bit awkward, since Estimator's new graphs mean we need a Model factory rather than a Model (or to un-build the model?). It's still a much more pleasant way to write autoregressive models than fiddling with ARModel directly, since ARModel handles collecting all the features (and the prediction loop, etc.). Happy to hear other ideas for an API.
PiperOrigin-RevId:
195436186
Alan Chiao [Fri, 4 May 2018 17:31:01 +0000 (10:31 -0700)]
Implement neg op
PiperOrigin-RevId:
195435079
Peter Hawkins [Fri, 4 May 2018 17:18:46 +0000 (10:18 -0700)]
Change RecvTensor RPC implementation to use DeviceContext::CopyDeviceTensorToCPU rather than calling GPUUtil::CopyGPUTensorToCPU. The direct call into the GPU code is problematic for non-GPU devices.
PiperOrigin-RevId:
195433287
Benjamin Kramer [Fri, 4 May 2018 10:43:00 +0000 (03:43 -0700)]
[XLA] Remove template keyword on non-template methods.
This is an error with clang trunk.
PiperOrigin-RevId:
195394277
A. Unique TensorFlower [Fri, 4 May 2018 10:27:18 +0000 (03:27 -0700)]
Fix HloSharding::GetSubSharding to return correct array shardings
Previously it always returned a tuple sharding even if the specified
index was referencing a non-tuple element.
PiperOrigin-RevId:
195393313
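The intended GetSubSharding behavior can be sketched with toy values, where a tuple sharding is a Python tuple of sub-shardings and a leaf (array) sharding is any non-tuple value. This representation is hypothetical, not the real HloSharding API:

```python
def get_sub_sharding(sharding, index):
    """Return the sharding at a tuple index path.

    The bug described above: the result was re-wrapped as a tuple sharding
    even when the index pointed at a leaf. The fix: walk the path and
    return the element's own (possibly array) sharding unchanged.
    """
    sub = sharding
    for i in index:
        sub = sub[i]
    return sub
```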
A. Unique TensorFlower [Fri, 4 May 2018 09:22:14 +0000 (02:22 -0700)]
Do not crash on ROOT outfeed operations.
PiperOrigin-RevId:
195388075
A. Unique TensorFlower [Fri, 4 May 2018 09:04:33 +0000 (02:04 -0700)]
Fixing some linter errors in TF documentation (Github > GitHub, the the > the).
PiperOrigin-RevId:
195386172
Tom Hennigan [Fri, 4 May 2018 08:57:02 +0000 (01:57 -0700)]
Prefer non-nested GradientTape.gradient call when only one source is passed.
PiperOrigin-RevId:
195385406
A. Unique TensorFlower [Fri, 4 May 2018 08:47:12 +0000 (01:47 -0700)]
* Don't copy on-host and on-device shapes locally.
* Use ForEachMutableElement rather than the iterators, as it is much quicker.
There is still room for improvement; ForEachMutableElement is linear in the number of nodes in the shape tree but we want to be linear in the number of nodes in the sub shape tree. But I feel this is a good enough improvement.
PiperOrigin-RevId:
195384423
HyoukJoong Lee [Fri, 4 May 2018 07:51:58 +0000 (00:51 -0700)]
Automated g4 rollback of changelist 194829761
PiperOrigin-RevId:
195379693
A. Unique TensorFlower [Fri, 4 May 2018 06:36:02 +0000 (23:36 -0700)]
Internal change.
PiperOrigin-RevId:
195374319
Yuefeng Zhou [Fri, 4 May 2018 05:01:39 +0000 (22:01 -0700)]
Add the MultiWorkerMirroredStrategy
PiperOrigin-RevId:
195368876
A. Unique TensorFlower [Fri, 4 May 2018 02:45:59 +0000 (19:45 -0700)]
[XLA] Redesign: cleanup client_library_test_base.
PiperOrigin-RevId:
195357555
Chris Leary [Fri, 4 May 2018 01:08:34 +0000 (18:08 -0700)]
[XLA] Make LocalShapedBuffer::FromLiteral fallible by passing StatusOr wrapper.
PiperOrigin-RevId:
195345724
Ruoxin Sang [Fri, 4 May 2018 00:21:26 +0000 (17:21 -0700)]
Clear the stat cache of the target when renaming the file.
PiperOrigin-RevId:
195337886
A. Unique TensorFlower [Fri, 4 May 2018 00:03:03 +0000 (17:03 -0700)]
[XLA] Redesign: deprecate ComputationBuilder.
PiperOrigin-RevId:
195335330
A. Unique TensorFlower [Thu, 3 May 2018 23:34:11 +0000 (16:34 -0700)]
Fix flaky test time-outs for dnn_test and rnn_test.
PiperOrigin-RevId:
195331183
Russell Power [Thu, 3 May 2018 23:16:05 +0000 (16:16 -0700)]
Adjust worker shutdown hooks for TPUs
PiperOrigin-RevId:
195328247
Jeremy Lau [Thu, 3 May 2018 23:08:48 +0000 (16:08 -0700)]
Fix bugs in LogicalBuffer::ToString and BufferValue::ToProto: these functions
may be called before set_color(), but color() check fails when no color is set.
PiperOrigin-RevId:
195327063
Justin Lebar [Thu, 3 May 2018 22:58:43 +0000 (15:58 -0700)]
Fix oom_test so that it doesn't try to allocate a giant host buffer when
run without --config=cuda. Sadly the best way I could come up with is
pretty hacky.
PiperOrigin-RevId:
195325149
A. Unique TensorFlower [Thu, 3 May 2018 22:42:23 +0000 (15:42 -0700)]
Do not hoist nodes that modify frame info.
PiperOrigin-RevId:
195322927
Dan Moldovan [Thu, 3 May 2018 22:39:46 +0000 (15:39 -0700)]
Use tuple instead of list to reduce the chance of it being picked by the list conversions.
PiperOrigin-RevId:
195322522
A. Unique TensorFlower [Thu, 3 May 2018 22:20:05 +0000 (15:20 -0700)]
Fix bug that disabled the loop invariant node motion optimizer. Disable it in the options, since it is broken in the presence of gradient stacks.
Get rid of an unnecessary copy of the graph.
PiperOrigin-RevId:
195319766
Zhixian Yan [Thu, 3 May 2018 21:34:27 +0000 (14:34 -0700)]
Add tflite listed models with accuracy and performance numbers.
PiperOrigin-RevId:
195312636
A. Unique TensorFlower [Thu, 3 May 2018 21:18:07 +0000 (14:18 -0700)]
Add separate get_read and get_updated helpers that work on code excerpts. Handle corner case for AugAssign. Fix bug in _node_sets_self_attribute.
PiperOrigin-RevId:
195309809
Nick Desaulniers [Thu, 3 May 2018 21:16:27 +0000 (14:16 -0700)]
[TF:XLA] clean up interface to xla::VerifyHloModule
It seems that the first argument, platform, is unused.
PiperOrigin-RevId:
195309504
A. Unique TensorFlower [Thu, 3 May 2018 21:11:13 +0000 (14:11 -0700)]
Optimize idempotent ops, e.g., Snapshot(Snapshot(x)) => Snapshot(x)
PiperOrigin-RevId:
195308675
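The idempotent-op rewrite can be sketched over a toy node type (illustrative only; the real optimization is a Grappler C++ graph rewrite, and the set of ops treated as idempotent here is assumed):

```python
IDEMPOTENT_OPS = {"Snapshot", "CheckNumerics"}  # illustrative set

class Node:
    def __init__(self, op, inputs=()):
        self.op = op
        self.inputs = list(inputs)


def optimize_idempotent(node):
    """Rewrite f(f(x)) => f(x) for idempotent ops,
    e.g. Snapshot(Snapshot(x)) => Snapshot(x).

    If the node's first input is the same idempotent op, the outer
    application is redundant and the inner node can be used directly.
    """
    if (node.op in IDEMPOTENT_OPS
            and node.inputs
            and node.inputs[0].op == node.op):
        return node.inputs[0]  # skip the redundant outer op
    return node
```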
Sanjoy Das [Thu, 3 May 2018 21:00:56 +0000 (14:00 -0700)]
[XLA:CPU] Remove dead function + DCHECK, NFC
There isn't a lot of benefit to fixing the function to do what it says it does,
since I'm adding support for lowering batch matmul, which will break this
precondition anyway.
PiperOrigin-RevId:
195306803