A. Unique TensorFlower [Tue, 8 May 2018 22:33:37 +0000 (15:33 -0700)]
Increase size of tensorflow/contrib/distributions:batch_reshape_test to medium to avoid flaky timeouts
PiperOrigin-RevId:
195887374
A. Unique TensorFlower [Tue, 8 May 2018 22:26:44 +0000 (15:26 -0700)]
Increase shard count of tensorflow/python/keras:lstm_test to avoid flaky timeouts
PiperOrigin-RevId:
195886372
A. Unique TensorFlower [Tue, 8 May 2018 21:50:24 +0000 (14:50 -0700)]
Add missing ":haswell" match to list of platform selectors.
PiperOrigin-RevId:
195880275
A. Unique TensorFlower [Tue, 8 May 2018 21:45:01 +0000 (14:45 -0700)]
Increase shard_count of tensorflow/python/estimator:estimator_test to avoid flaky asan timeouts
PiperOrigin-RevId:
195879364
Alexandre Passos [Tue, 8 May 2018 21:42:35 +0000 (14:42 -0700)]
Do not differentiate integers in the eager API.
This is similar to the change made in:
https://github.com/tensorflow/tensorflow/commit/f63750645826df65b05cad505546a86f0e347674
for backpropagation during graph construction via tf.gradients()
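For illustration only, a minimal sketch of the intended behavior, assuming a TF 1.8-era eager setup (the toy computation and names are invented):

```python
import tensorflow as tf

tf.enable_eager_execution()

x = tf.constant(3.0)   # float tensor: participates in differentiation
n = tf.constant(4)     # integer tensor: no longer differentiated

with tf.GradientTape() as tape:
    tape.watch(x)
    tape.watch(n)
    y = x * x * tf.cast(n, tf.float32)

dx, dn = tape.gradient(y, [x, n])
print(dx)   # 24.0
print(dn)   # expected to be None, matching tf.gradients() during graph construction
```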
PiperOrigin-RevId:
195878952
A. Unique TensorFlower [Tue, 8 May 2018 21:41:48 +0000 (14:41 -0700)]
Increase shard count of tensorflow/contrib/distributions:mixture_test to avoid flaky timeouts in asan mode
PiperOrigin-RevId:
195878809
A. Unique TensorFlower [Tue, 8 May 2018 21:00:48 +0000 (14:00 -0700)]
Increase size of test tensorflow/contrib/layers:rev_block_lib_test to medium
to avoid flaky timeouts.
PiperOrigin-RevId:
195871947
Allen Lavoie [Tue, 8 May 2018 21:00:30 +0000 (14:00 -0700)]
Avoid string formatting in assert_same_float_dtype unless there's an error
Especially helpful when executing eagerly
PiperOrigin-RevId:
195871887
A. Unique TensorFlower [Tue, 8 May 2018 19:10:36 +0000 (12:10 -0700)]
Better wrapping of stream executor's cuDNN API calls: replace the pattern of locking a mutex and setting the cuDNN stream followed by calling wrap::cudnn... with an RAII CudnnHandle object that handles the first two operations.
Distinguish three different API types:
A) APIs that don't take a cudnnHandle_t: These are thread-safe APIs that don't enqueue any CUDA work on a stream. They can be called directly without any extra precautions.
B) APIs that take a cudnnHandle_t and perform CUDA work. The CUDA context needs to be acquired and the stream needs to be set beforehand, and calls need to be serialized. A CudnnHandle instance guarantees that this work has been performed before calling cuDNN.
C) APIs that do take a cudnnHandle_t but (presumably; the API makes no guarantees) still don't perform any CUDA work. This is limited to the APIs for setting up RNN descriptors. Calls need to be serialized, but most likely we wouldn't need to acquire the CUDA context or set the stream. We still do, though, using the legacy default stream, because there are no guarantees.
PiperOrigin-RevId:
195856300
A. Unique TensorFlower [Tue, 8 May 2018 19:04:38 +0000 (12:04 -0700)]
Remove outdated CUDA SDK string (the text is now consistent with other version choices, and the '9.0' format is already present in the default).
PiperOrigin-RevId:
195855416
Alina Sbirlea [Tue, 8 May 2018 18:54:03 +0000 (11:54 -0700)]
Re-land: Optimize dot(DynamicSlice(ConstA), ConstantB) by memoizing dot(ConstA, ConstB)
Apply the transformation when ConstA and ConstB are 2D and DynamicSlice slices a full row or column, respectively.
Handle:
dot(DynamicSlice(Index, ConstA), ConstB) => DynamicSlice(Index, dot*(ConstA, ConstB));
and
dot(ConstA, DynamicSlice(Index, ConstB)) => DynamicSlice(Index, dot*(ConstA, ConstB));
Reason to roll forward: the previous issue of out-of-memory errors when generating LLVM constants was resolved by CSE-ing constants before allocation.
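For illustration only, a small numpy sketch of why the rewrite is safe for a full-row slice (shapes and names are invented):

```python
import numpy as np

A = np.arange(12.0).reshape(3, 4)   # stands in for ConstA
B = np.arange(20.0).reshape(4, 5)   # stands in for ConstB
i = 1                               # dynamic slice index selecting a full row of A

# Original form: slice the row of A first, then do the small dot.
sliced_then_dot = A[i:i + 1, :] @ B

# Rewritten form: memoize dot(ConstA, ConstB) once, then dynamic-slice its row.
dot_then_sliced = (A @ B)[i:i + 1, :]

assert np.allclose(sliced_then_dot, dot_then_sliced)
```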
PiperOrigin-RevId:
195853680
A. Unique TensorFlower [Tue, 8 May 2018 18:45:53 +0000 (11:45 -0700)]
Fix docstring for flush() method
PiperOrigin-RevId:
195852402
Ilya Biryukov [Tue, 8 May 2018 18:25:50 +0000 (11:25 -0700)]
Update version of downloadable clang toolchain
PiperOrigin-RevId:
195849091
A. Unique TensorFlower [Tue, 8 May 2018 18:19:46 +0000 (11:19 -0700)]
Change visibility of hlo_proto.
PiperOrigin-RevId:
195848035
A. Unique TensorFlower [Tue, 8 May 2018 18:15:53 +0000 (11:15 -0700)]
Add affinity binding functionality and documentation to OVIC benchmarker.
PiperOrigin-RevId:
195847378
A. Unique TensorFlower [Tue, 8 May 2018 18:12:07 +0000 (11:12 -0700)]
Increase size of test //third_party/tensorflow/python:saver_large_variable_test
from "small" to "medium" to prevent flaky timeouts.
PiperOrigin-RevId:
195846802
Andrew Selle [Tue, 8 May 2018 18:10:23 +0000 (11:10 -0700)]
Fix Raspberry Pi build by making PNG not try to use Neon (by autodetect).
This involves patching to override the png neon option. In the future
it might be worth enabling PNG optimization.
PiperOrigin-RevId:
195846513
Akshay Agrawal [Tue, 8 May 2018 18:07:45 +0000 (11:07 -0700)]
When building functions, capture tensors in `internal_convert_to_tensor`.
This change is motivated by the fact that, when eager execution is disabled, library functions assume that tensors returned from `internal_convert_to_tensor` are in fact `Tensor`s and not `EagerTensor`s.
PiperOrigin-RevId:
195846039
A. Unique TensorFlower [Tue, 8 May 2018 17:25:30 +0000 (10:25 -0700)]
Add a cost model for DepthwiseConv2dNative. TensorFlow computes depthwise separable convolutions as DepthwiseConv2dNative followed by a 1x1 Conv2D.
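For illustration only, a sketch of the decomposition being costed, using TF 1.x ops (shapes are arbitrary):

```python
import tensorflow as tf

x = tf.random_normal([1, 32, 32, 8])         # NHWC input
depthwise = tf.random_normal([3, 3, 8, 1])   # per-channel 3x3 filter
pointwise = tf.random_normal([1, 1, 8, 16])  # 1x1 filter mixing channels

# Depthwise separable convolution = DepthwiseConv2dNative followed by a 1x1 Conv2D.
y = tf.nn.depthwise_conv2d(x, depthwise, strides=[1, 1, 1, 1], padding="SAME")
y = tf.nn.conv2d(y, pointwise, strides=[1, 1, 1, 1], padding="SAME")

# The composite API that expands to the two ops above.
z = tf.nn.separable_conv2d(x, depthwise, pointwise,
                           strides=[1, 1, 1, 1], padding="SAME")
```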
PiperOrigin-RevId:
195838887
A. Unique TensorFlower [Tue, 8 May 2018 16:46:45 +0000 (09:46 -0700)]
Free ANeuralNetworksCompilation object in NNAPIDelegate destructor
PiperOrigin-RevId:
195832807
Shanqing Cai [Tue, 8 May 2018 16:04:17 +0000 (09:04 -0700)]
Minor formatting tweaks to distribute.py and simple_tfkeras_example.py
PiperOrigin-RevId:
195827029
A. Unique TensorFlower [Tue, 8 May 2018 15:57:45 +0000 (08:57 -0700)]
Update comment clarifying continuous eval behavior.
PiperOrigin-RevId:
195826025
Peter Hawkins [Tue, 8 May 2018 15:07:08 +0000 (08:07 -0700)]
[TF:XLA] Fix NaN in StatelessRandomNormal if the underlying uniform distribution returned -1.
PiperOrigin-RevId:
195819645
A. Unique TensorFlower [Tue, 8 May 2018 15:04:07 +0000 (08:04 -0700)]
Automated g4 rollback of changelist 195723288
PiperOrigin-RevId:
195819297
A. Unique TensorFlower [Tue, 8 May 2018 14:57:12 +0000 (07:57 -0700)]
Add missing #include for OpResponse. This class currently happens to be forward
declared by xla.proto.h, but that proto doesn't actually need this type
anywhere and we are working on removing such unneeded forward declarations.
PiperOrigin-RevId:
195818397
Asim Shankar [Tue, 8 May 2018 14:28:43 +0000 (07:28 -0700)]
ProfileHandler: Remove unnecessary interface method.
PiperOrigin-RevId:
195815565
A. Unique TensorFlower [Tue, 8 May 2018 10:26:06 +0000 (03:26 -0700)]
Fix a test expectation.
PiperOrigin-RevId:
195796348
A. Unique TensorFlower [Tue, 8 May 2018 09:11:52 +0000 (02:11 -0700)]
Automated g4 rollback of changelist 195748721
PiperOrigin-RevId:
195790581
A. Unique TensorFlower [Tue, 8 May 2018 02:56:26 +0000 (19:56 -0700)]
Temporarily disable concat rewrite.
PiperOrigin-RevId:
195762860
Brennan Saeta [Tue, 8 May 2018 01:31:47 +0000 (18:31 -0700)]
[tf.data] Move tensorflow::dataset::MakeIteratorContext to core/framework
PiperOrigin-RevId:
195756342
Skye Wanderman-Milne [Tue, 8 May 2018 00:28:41 +0000 (17:28 -0700)]
Add logic for StridedSlice ops in ShapeRefiner::ConstantPartialShape().
This mimics the logic in tensor_util.constant_value_as_shape, allowing
the C++ shape inference code to infer more shapes than it could before.
This change also adds an optional stride argument to InferenceContext::Subshape().
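For illustration only, a Python-level sketch of the kind of shape computation that can now be folded, mirroring tensor_util.constant_value_as_shape (the placeholder shape is invented):

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 28, 28, 3])

# A strided slice of the shape tensor: tf.shape(x)[1:3] == [28, 28].
spatial = tf.shape(x)[1:3]
new_shape = tf.concat([[-1], spatial, [3]], axis=0)

y = tf.reshape(x, new_shape)
# With StridedSlice handled in ConstantPartialShape(), shape inference can
# report y as (?, 28, 28, 3) rather than fully unknown.
print(y.shape)
```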
PiperOrigin-RevId:
195749522
Skye Wanderman-Milne [Tue, 8 May 2018 00:24:28 +0000 (17:24 -0700)]
Make conv2d_transpose_test.py work with C API shapes enabled.
The C API provides more accurate shape information in many cases.
PiperOrigin-RevId:
195749030
Igor Ganichev [Tue, 8 May 2018 00:21:53 +0000 (17:21 -0700)]
Make eager functions runnable on TPU
PiperOrigin-RevId:
195748721
Skye Wanderman-Milne [Tue, 8 May 2018 00:21:39 +0000 (17:21 -0700)]
Raise an error if we try to take the gradient with respect to the initial value of a loop variable.
Fixes #14101
PiperOrigin-RevId:
195748688
Blake Hechtman [Tue, 8 May 2018 00:00:27 +0000 (17:00 -0700)]
Internal change
PiperOrigin-RevId:
195745819
Jacques Pienaar [Mon, 7 May 2018 23:59:41 +0000 (16:59 -0700)]
Add test with tf.cond.
PiperOrigin-RevId:
195745718
Sanjoy Das [Mon, 7 May 2018 23:55:10 +0000 (16:55 -0700)]
Delete kTransposeDot (it is no longer in use)
PiperOrigin-RevId:
195745124
Alexandre Passos [Mon, 7 May 2018 23:49:44 +0000 (16:49 -0700)]
Fast-path to VarHandleOp
PiperOrigin-RevId:
195744374
Billy Lamberta [Mon, 7 May 2018 23:38:02 +0000 (16:38 -0700)]
Add TFX section. Add Ecosystem page and dropdown menu.
PiperOrigin-RevId:
195742728
A. Unique TensorFlower [Mon, 7 May 2018 23:31:07 +0000 (16:31 -0700)]
Reorder the executor NodeItem variable-length data section so that all
multi-byte-aligned types precede all byte-aligned types; this way alignment
is satisfied without padding.
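For illustration only, the padding argument can be seen with Python's struct module on a typical 64-bit platform (an analogy, not the actual NodeItem layout):

```python
import struct

# Byte-aligned field first: 1 byte + 3 bytes of padding + a 4-byte int.
print(struct.calcsize("bi"))   # 8 with native alignment

# Multi-byte-aligned field first: a 4-byte int + 1 byte, no padding needed.
print(struct.calcsize("ib"))   # 5
```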
PiperOrigin-RevId:
195741712
Skye Wanderman-Milne [Mon, 7 May 2018 23:16:32 +0000 (16:16 -0700)]
ShapeRefiner fix: some variant-type tensors have handle data.
ShapeRefiner::AddNode() would only propagate handle data for
DT_RESOURCE tensors, but not DT_VARIANT. The Python shape inference
logic in common_shapes.py handled this correctly, which is why we didn't
notice this earlier. In particular, list ops use DT_VARIANT with
handle data.
PiperOrigin-RevId:
195739586
Akshay Agrawal [Mon, 7 May 2018 23:16:24 +0000 (16:16 -0700)]
Refactor TensorArray to avoid copies and memory allocations when executing eagerly.
With this change, writes to TensorArrays when eager execution is enabled take O(1) time instead of O(n). Additionally, whereas writing to a TensorArray when constructing a graph results in allocating a new Python TensorArray object, writing to a TensorArray with eager enabled no longer performs that allocation (graph construction uses these allocations to ensure correctness of control flow and gradients, but this isn't necessary when executing eagerly). Finally, this change also removes the artificial write-once semantics of TensorArrays when executing eagerly.
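For illustration only, a minimal sketch of the write pattern that now runs in O(1) per write, assuming a TF 1.8-era eager setup:

```python
import tensorflow as tf

tf.enable_eager_execution()

ta = tf.TensorArray(tf.float32, size=3)
for i in range(3):
    # When executing eagerly, each write now updates the underlying buffer
    # instead of allocating a fresh Python TensorArray object per write.
    ta = ta.write(i, float(i) * 2.0)

print(ta.stack())   # [0. 2. 4.]
```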
PiperOrigin-RevId:
195739572
Blake Hechtman [Mon, 7 May 2018 22:58:29 +0000 (15:58 -0700)]
[XLA] Make post order a possible schedule as it sometimes uses less memory than
the DFS or list scheduler and it is very simple.
PiperOrigin-RevId:
195736916
Derek Murray [Mon, 7 May 2018 22:51:05 +0000 (15:51 -0700)]
[Remote functions] Only set the default runner *after* resolving the remote FLR.
Previously, if the `runner` was not specified for a function
execution, we would immediately set it to the default runner of the
*local* FLR, even if the function was to be executed remotely. This
change postpones the resolution of the default runner until after the
function invocation has been routed to the FLR that will actually
execute it.
As a result, we avoid the pathological case where a GPU device using a
private threadpool (TF_GPU_THREAD_MODE=gpu_private) ends up running
all of the ops for the CPU-side input pipeline on the private
threadpool.
PiperOrigin-RevId:
195735734
A. Unique TensorFlower [Mon, 7 May 2018 22:47:57 +0000 (15:47 -0700)]
Add EvaluateNodes to tests: RemoveIdentityTransposesMultipleOutputs, RemoveTransposesWithControlDependency, CombineBitcasts, CombineAndRemoveBitcasts, RemoveRedundantCast
PiperOrigin-RevId:
195735234
Michael Kuperstein [Mon, 7 May 2018 22:41:52 +0000 (15:41 -0700)]
[XLA] Fix a "we're we're" in the operation semantics.
PiperOrigin-RevId:
195734316
A. Unique TensorFlower [Mon, 7 May 2018 22:41:22 +0000 (15:41 -0700)]
Add support for select (via tf.where) to tflite.
PiperOrigin-RevId:
195734246
Michael Case [Mon, 7 May 2018 22:24:02 +0000 (15:24 -0700)]
Internal Change.
PiperOrigin-RevId:
195731675
Michael Case [Mon, 7 May 2018 22:20:49 +0000 (15:20 -0700)]
Fix TypeError in update_version.py
PiperOrigin-RevId:
195731183
Pavithra Vijay [Mon, 7 May 2018 22:16:59 +0000 (15:16 -0700)]
Add support for tf.data.Dataset iterators in model training/eval methods in eager-mode
PiperOrigin-RevId:
195730534
Igor Ganichev [Mon, 7 May 2018 22:15:01 +0000 (15:15 -0700)]
Replace references to TensorInfo with XlaTensor
PiperOrigin-RevId:
195730139
A. Unique TensorFlower [Mon, 7 May 2018 21:55:26 +0000 (14:55 -0700)]
Disable automated testing of tensorflow/compiler/tests:extract_image_patches_op_test_cpu_ondemand
A recent change has made this test flaky.
PiperOrigin-RevId:
195726647
A. Unique TensorFlower [Mon, 7 May 2018 21:34:11 +0000 (14:34 -0700)]
Allow the output to have a different shape from the input in image.transform (#17011).
PiperOrigin-RevId:
195723288
Alexandre Passos [Mon, 7 May 2018 21:29:03 +0000 (14:29 -0700)]
Fix resource variable in cond gradient.
PiperOrigin-RevId:
195722449
Justin Lebar [Mon, 7 May 2018 21:23:19 +0000 (14:23 -0700)]
[XLA] Shard compilation of HloEvaluator.
PiperOrigin-RevId:
195721404
A. Unique TensorFlower [Mon, 7 May 2018 21:15:37 +0000 (14:15 -0700)]
Move PadV2Options to the end, in order to maintain schema compatibility.
PiperOrigin-RevId:
195720133
A. Unique TensorFlower [Mon, 7 May 2018 21:13:23 +0000 (14:13 -0700)]
Use 64-bit aggregation for gradients and Hessians, since the 32-bit version is numerically unstable for large minibatches.
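For illustration only, a quick numpy demonstration of why 32-bit accumulation breaks down for large sums (magnitudes are invented):

```python
import numpy as np

# Once a float32 running sum reaches 2**24, adding 1.0 no longer changes it.
acc32 = np.float32(2 ** 24)
print(acc32 + np.float32(1.0) == acc32)   # True: the contribution is dropped

acc64 = np.float64(2 ** 24)
print(acc64 + np.float64(1.0) == acc64)   # False: 64 bits keep the contribution
```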
PiperOrigin-RevId:
195719795
Anna R [Mon, 7 May 2018 21:03:15 +0000 (14:03 -0700)]
Change deprecation_version field in api_def.proto to a string.
PiperOrigin-RevId:
195718061
Sanjoy Das [Mon, 7 May 2018 21:00:09 +0000 (14:00 -0700)]
[TF:XLA] Bump open source llvm revision to r331624
PiperOrigin-RevId:
195717497
A. Unique TensorFlower [Mon, 7 May 2018 20:24:12 +0000 (13:24 -0700)]
Fixes for accessing variables with a MirroredStrategy in a
cross-tower context:
* only provide read-only access to variables via get()
* don't fail if the variable isn't copied to the current device in get()
* make _as_graph_element() return the aggregate value for tower-local
variables (instead of the incorrect previous behavior of returning
the primary)
PiperOrigin-RevId:
195711474
A. Unique TensorFlower [Mon, 7 May 2018 20:18:33 +0000 (13:18 -0700)]
Specialize functions only once per unique context.
PiperOrigin-RevId:
195710562
Sanjoy Das [Mon, 7 May 2018 19:48:38 +0000 (12:48 -0700)]
Rename HloDotWithContractDimsMatcher to HloDotWithContractingDimsMatcher
This is a typo I introduced in cr/195514907.
PiperOrigin-RevId:
195706006
A. Unique TensorFlower [Mon, 7 May 2018 19:41:07 +0000 (12:41 -0700)]
Make test tensorflow/python/keras:resnet50_test be size "medium"
This test sometimes runs longer than 60s, and has been getting flaky timeouts
as a result. With a longer timeout, it succeeds reliably.
PiperOrigin-RevId:
195704998
A. Unique TensorFlower [Mon, 7 May 2018 19:37:36 +0000 (12:37 -0700)]
Adding Greater/GreaterEqual/LessEqual ops to complement Less.
PiperOrigin-RevId:
195704492
Ian Langmore [Mon, 7 May 2018 19:17:02 +0000 (12:17 -0700)]
Add 'optonly' directive to linear_operator_circulant tests.
PiperOrigin-RevId:
195701399
Bixia Zheng [Mon, 7 May 2018 19:15:52 +0000 (12:15 -0700)]
[TF:XLA:GPU] Allow the use of a linear address when there are size-one dimensions
in a tensor.
The current implementation of EmitArrayElementAddress incorrectly concludes
that having a size-one dimension in a tensor indicates broadcasting is needed
and that the linear address can't be used to access the tensor. We fix this by
letting LinearValidOnShape decide whether the linear address can be used to
access the tensor. This enables the vectorization of loads/stores in unrolled
elementwise op kernels when other criteria are met.
Add a test case.
PiperOrigin-RevId:
195701194
Justin Lebar [Mon, 7 May 2018 19:10:15 +0000 (12:10 -0700)]
[XLA] Add FusionKind matcher to pattern_matcher.h.
PiperOrigin-RevId:
195700319
Shivani Agrawal [Mon, 7 May 2018 19:03:20 +0000 (12:03 -0700)]
[tf.data] Patch to unref iterator_resource in DeserializeIteratorOp.
PiperOrigin-RevId:
195698980
Gunhan Gulsoy [Mon, 7 May 2018 18:52:46 +0000 (11:52 -0700)]
Disable autograph cfg_test in windows.
PiperOrigin-RevId:
195697446
Alexandre Passos [Mon, 7 May 2018 18:49:08 +0000 (11:49 -0700)]
Register bool scatter_update for resource variables
Fixes #17784
PiperOrigin-RevId:
195696915
Igor Saprykin [Mon, 7 May 2018 18:48:31 +0000 (11:48 -0700)]
Generalize the input to TPU distribution strategy. Add cross-shard-replica sum.
TPUStrategy now passes the tests in minimize_loss_test. That required adding the capability to feed `iterations x cores` inputs of any structure. I also resolved a large number of small issues and uncovered more things to resolve, which are documented as TODOs.
PiperOrigin-RevId:
195696833
Yu-Cheng Ling [Mon, 7 May 2018 18:27:02 +0000 (11:27 -0700)]
Release notes for TensorFlow Lite.
PiperOrigin-RevId:
195693362
A. Unique TensorFlower [Mon, 7 May 2018 18:11:28 +0000 (11:11 -0700)]
Extracts PartialConcatConstFolding into a method.
PiperOrigin-RevId:
195690333
A. Unique TensorFlower [Mon, 7 May 2018 18:09:47 +0000 (11:09 -0700)]
Add tests for broadcasting KL divergence calculations.
PiperOrigin-RevId:
195690035
A. Unique TensorFlower [Mon, 7 May 2018 18:05:56 +0000 (11:05 -0700)]
Replaced calls to tensorflow::StringPiece::ToString with std::string conversions.
That is, instances of sp.ToString() are replaced with std::string(sp).
This will allow tensorflow::StringPiece::ToString to be removed, which is necessary before it can be replaced with absl::string_view.
PiperOrigin-RevId:
195689392
A. Unique TensorFlower [Mon, 7 May 2018 17:49:26 +0000 (10:49 -0700)]
Extend block sparsity support for TPUs
PiperOrigin-RevId:
195685740
A. Unique TensorFlower [Mon, 7 May 2018 17:47:35 +0000 (10:47 -0700)]
Add EvaluateNodes to HoistFactorDiv test.
PiperOrigin-RevId:
195685340
A. Unique TensorFlower [Mon, 7 May 2018 17:42:13 +0000 (10:42 -0700)]
Removing quotations to fix the broken link found on https://tensorflow.org/programmers_guide/embedding
The link at "Follow this link to see a fun example of thumbnail images in the Embedding Projector" should go to https://www.tensorflow.org/images/embedding-mnist.mp4 but instead goes to the TF index page.
PiperOrigin-RevId:
195684456
Abhijit Karmarkar [Mon, 7 May 2018 17:27:50 +0000 (10:27 -0700)]
Internal Change
PiperOrigin-RevId:
195681946
A. Unique TensorFlower [Mon, 7 May 2018 13:27:43 +0000 (06:27 -0700)]
Control flow graph with forward and backward analysis
PiperOrigin-RevId:
195654450
A. Unique TensorFlower [Mon, 7 May 2018 11:25:03 +0000 (04:25 -0700)]
Automated g4 rollback of changelist 195638795
PiperOrigin-RevId:
195645734
A. Unique TensorFlower [Mon, 7 May 2018 10:00:34 +0000 (03:00 -0700)]
Improve fusion logic of (a dot b) * alpha
The previous approach didn't work because a multiplication by a scalar value
is changed into an explicit broadcast.
Another issue fixed in this CL is retrieving the constant value from the
literal: this depends on the PrimitiveType, whereas before we always assumed it to be double.
Also, when checking ImplementedAsGemm() we should not call it recursively, but instead perform just the check related to kDot.
Finally add an execution test and adjust the fusion logic test.
PiperOrigin-RevId:
195638795
A. Unique TensorFlower [Mon, 7 May 2018 08:57:29 +0000 (01:57 -0700)]
Remove unused threadpool from stream executor.
PiperOrigin-RevId:
195632175
Russell Power [Sun, 6 May 2018 01:55:28 +0000 (18:55 -0700)]
Gracefully handle workers without heartbeat support enabled.
PiperOrigin-RevId:
195560525
Justin Lebar [Sat, 5 May 2018 19:34:32 +0000 (12:34 -0700)]
[XLA:GPU] Zero out input buffers before running cudnn conv autotune.
We don't need a corresponding change in gemm_thunk.cc because for gemms,
we do our autotune at runtime, at which point we have some real data in
our input/output buffers.
PiperOrigin-RevId:
195548896
Jacques Pienaar [Sat, 5 May 2018 19:02:32 +0000 (12:02 -0700)]
[TPU] Add option to only compile a replicated graph.
Useful when wanting to compile a computation but not run it. Returns a serialized CompilationResult string with the error message.
PiperOrigin-RevId:
195547847
Shashi Shekhar [Sat, 5 May 2018 18:55:53 +0000 (11:55 -0700)]
Allow benchmark model graph to be specified in text proto format.
PiperOrigin-RevId:
195547670
Justin Lebar [Sat, 5 May 2018 18:24:06 +0000 (11:24 -0700)]
[XLA] Always be willing to duplicate widening kConvert instructions during fusion.
This has the effect of pushing widening kConvert HLOs into consumers.
This is what we want, because it means that the producer writes the
narrower type (e.g. f16) and the consumer reads it and internally
upcasts to the wider type (e.g. f32). This lets the producer and
consumer both run faster, because they have to touch less memory.
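For illustration only, a rough numpy analogy of the memory-traffic argument (sizes are invented):

```python
import numpy as np

# The producer writes the narrow type: half the bytes of an f32 intermediate.
produced = np.random.rand(1024, 1024).astype(np.float16)
print(produced.nbytes)   # 2 MiB rather than 4 MiB

# Each consumer gets its own copy of the widening convert and upcasts internally.
def consumer(x):
    return x.astype(np.float32).sum()   # the duplicated convert, fused into the consumer

print(consumer(produced))
```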
PiperOrigin-RevId:
195546910
Mingsheng Hong [Sat, 5 May 2018 14:10:58 +0000 (07:10 -0700)]
Part 2 of Swift<->TF sends/recvs: support receiving tensors in TF from
Swift via direct session.
The changes are:
1. Added a TF experimental C API for Swift host to enqueue a tensor for sending
to TF. Again, the C APIs can be removed once the Fifo-queue based design
proves stable later.
2. TFLowerGraph is extended to generate Fifo related nodes for TF to receive
tensors. This is similar to the extension for TF to send tensors.
3. TFPartition is extended to support host send (createHostSend()), which does
the tensor send via a new protocol method TensorSendableReceivable.sendToDevice().
The main complexity is in sending a scalar, where a new protocol method
TensorSendableReceivable.createScalarTensor() is called to first create a tensor
out of it, and then send it over to TF.
Also removed code for protocol conformance on AccelerableByTensorFlow. Instead,
have the compiler look up that conformance from the SILFunction when sending/receiving
tensors.
AccelerableByTensorFlow could be removed from the compiler-known protocol list
now, but we'll defer that until things have stabilized more (in the past this
protocol has been added to and removed from the list at different times).
PiperOrigin-RevId:
195539436
Sanjoy Das [Sat, 5 May 2018 05:04:20 +0000 (22:04 -0700)]
Remove uses of the kTransposeDot fusion
I didn't remove the enum itself, but after this change removing the enum should
be a simple NFC change (famous last words!).
This will make it easier to implement BatchDot on CPU.
The change removes usages of kTransposeDot by:
- Teaching TransposeFolding to "fuse" transposes into dots by flipping the
lhs_contracting_dims/rhs_contracting_dims fields.
- Replacing the notion of transpose_lhs/transpose_rhs in the IR emitters with
"has a non-canonical LHS contraction dimension"/"has a non-canonical RHS
contraction dimension" where the canonical LHS and RHS contraction dims [0]
are 1 and 0.
Some tests were getting away with creating Dot instructions with their
dimensions numbers unset. I've fixed these to create canonical dot operations
instead.
It is possible (but hard to tell without trying) that some of the IR emission
logic and Eigen runtime calls can now be simplified further. For instance,
instead of passing in a `transpose_lhs` and `transpose_rhs` to the Eigen GEMM
routines, we could instead pass in the LHS and RHS contraction dimensions
directly.
[0] See HloInstruction::CreateCanonicalDot.
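For illustration only, a numpy sketch of folding a transpose into the dot by flipping the LHS contracting dimension (shapes are invented):

```python
import numpy as np

A = np.random.rand(4, 3)   # operand that used to be fed through a kTranspose
B = np.random.rand(4, 5)

# Old form: explicit transpose, then a canonical dot (contracting dims 1 and 0).
old = A.T @ B

# New form: no transpose; the dot's LHS contracting dimension is flipped from 1 to 0.
new = np.tensordot(A, B, axes=([0], [0]))

assert np.allclose(old, new)
```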
PiperOrigin-RevId:
195514907
Shashi Shekhar [Sat, 5 May 2018 02:18:44 +0000 (19:18 -0700)]
Fix landscape layout.
PiperOrigin-RevId:
195506194
A. Unique TensorFlower [Sat, 5 May 2018 01:49:08 +0000 (18:49 -0700)]
OVIC Benchmarker App (currently without the functionality to bind to a CPU).
PiperOrigin-RevId:
195503895
A. Unique TensorFlower [Sat, 5 May 2018 01:49:08 +0000 (18:49 -0700)]
Add support for PadV2
PiperOrigin-RevId:
195503894
Allen Lavoie [Sat, 5 May 2018 01:25:18 +0000 (18:25 -0700)]
Checkpointable: A small utility for exempting objects from __setattr__ tracking
Exposes it as tf.contrib.checkpoint.NoDependency. Objects wrapped in a
NoDependency object get unwrapped in __setattr__ and not tracked.
Removes the _save_counter dependency from tf.train.Checkpoint (the save counter
is still tracked as "save_counter" and always has been, so this is a
backwards-compatible dependency removal).
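For illustration only, a sketch of the intended usage (the class and attribute names are invented):

```python
import tensorflow as tf

class Net(tf.train.Checkpoint):
    def __init__(self):
        super(Net, self).__init__()
        self.weight = tf.Variable(1.0)   # tracked via __setattr__ as "weight"
        # Wrapped in NoDependency: unwrapped on assignment and not tracked,
        # so it never becomes part of the checkpointed object graph.
        self.scratch = tf.contrib.checkpoint.NoDependency({})

net = Net()
print(type(net.scratch))   # expected: a plain dict, not a dependency wrapper
```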
PiperOrigin-RevId:
195502562
Max Galkin [Sat, 5 May 2018 01:17:45 +0000 (18:17 -0700)]
GuaranteeConst is a NoOp for the op_level_cost_estimator.
PiperOrigin-RevId:
195501990
A. Unique TensorFlower [Sat, 5 May 2018 01:07:41 +0000 (18:07 -0700)]
Internal Change
PiperOrigin-RevId:
195501342
A. Unique TensorFlower [Sat, 5 May 2018 00:46:13 +0000 (17:46 -0700)]
Go: Update generated wrapper functions for TensorFlow ops.
PiperOrigin-RevId:
195499636
A. Unique TensorFlower [Sat, 5 May 2018 00:18:35 +0000 (17:18 -0700)]
Update ops-related pbtxt files.
PiperOrigin-RevId:
195497084
Jiri Simsa [Sat, 5 May 2018 00:03:52 +0000 (17:03 -0700)]
[tf.data] Adding `num_parallel_calls` to `map_and_batch`.
PiperOrigin-RevId:
195495206
Bjarke Hammersholt Roune [Fri, 4 May 2018 23:51:06 +0000 (16:51 -0700)]
Add infrastructure for a backend-specific configuration for each op. This is intentionally not exposed in ComputationBuilder and is not intended for use or to be set at all prior to the last backend-specific part of compilation.
PiperOrigin-RevId:
195493500