Sanjoy Das [Tue, 20 Feb 2018 19:55:57 +0000 (11:55 -0800)]
[TF:XLA] Bump open source llvm revision to r325553
PiperOrigin-RevId:
186339171
A. Unique TensorFlower [Tue, 20 Feb 2018 19:40:04 +0000 (11:40 -0800)]
Temporarily disable flaky test.
PiperOrigin-RevId:
186336341
Mark Daoust [Tue, 20 Feb 2018 19:13:55 +0000 (11:13 -0800)]
Add numpy compatibility note to transpose operations.
fixes #15994
PiperOrigin-RevId:
186331307
Mark Daoust [Tue, 20 Feb 2018 19:12:53 +0000 (11:12 -0800)]
Doc fixes for switching to 10.12.6 (Sierra) as min supported macOS
see: #15933
PiperOrigin-RevId:
186331121
A. Unique TensorFlower [Tue, 20 Feb 2018 19:11:35 +0000 (11:11 -0800)]
Introduce a TFLite diff test to verify differences between the TF and TF Lite models
PiperOrigin-RevId:
186330891
Yao Zhang [Tue, 20 Feb 2018 19:03:08 +0000 (11:03 -0800)]
Support multiple fetch nodes and add a flag for memory report.
PiperOrigin-RevId:
186329308
A. Unique TensorFlower [Tue, 20 Feb 2018 18:58:39 +0000 (10:58 -0800)]
Replace private method call _ref() with read_value()
PiperOrigin-RevId:
186328404
Yu-Cheng Ling [Tue, 20 Feb 2018 18:47:06 +0000 (10:47 -0800)]
TFLite: Check if builtin_code is in valid range by best effort.
PiperOrigin-RevId:
186326496
Igor Saprykin [Tue, 20 Feb 2018 17:34:09 +0000 (09:34 -0800)]
Add API to switch certain parts of Graph state to be thread-local.
For example, this can allow two threads to create ops under different `ops.device()` scopes.
PiperOrigin-RevId:
186314978
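A minimal sketch of the kind of usage such an API enables, assuming TF 1.x graph mode; the exact API added by this change is not shown in this log, and per-thread `tf.device` scoping below is illustrative only.

```python
import threading
import tensorflow as tf  # assumes a TF 1.x graph-mode build

g = tf.Graph()

def build(device_name, results):
    # Each thread enters its own device scope. With thread-local graph
    # state, one thread's scope does not leak into the other's ops.
    with g.as_default(), tf.device(device_name):
        safe_name = "c_on_" + device_name.replace("/", "_").replace(":", "_")
        op = tf.constant(1.0, name=safe_name)
        results.append((threading.current_thread().name, op.device))

results = []
threads = [
    threading.Thread(target=build, args=("/cpu:0", results)),
    threading.Thread(target=build, args=("/gpu:0", results)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)
```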
A. Unique TensorFlower [Tue, 20 Feb 2018 17:20:28 +0000 (09:20 -0800)]
Implementation of `len` that uses multiple dispatch. Replaces the current blanket `tf.shape()[0]` code.
PiperOrigin-RevId:
186313178
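The commit does not include its dispatch code; purely as an illustration, here is a hypothetical sketch of dispatching a `len`-like helper on the argument's runtime type instead of unconditionally emitting `tf.shape(x)[0]`.

```python
import tensorflow as tf

def dynamic_len(x):
    # Hypothetical dispatcher: choose an implementation based on the
    # runtime type rather than always calling tf.shape(x)[0].
    if isinstance(x, tf.Tensor):
        if x.shape.ndims == 0:
            raise ValueError("len() is not defined for scalar tensors")
        return tf.shape(x)[0]
    # Plain Python containers keep their native semantics.
    return len(x)

print(dynamic_len([1, 2, 3]))   # 3, via the builtin
t = tf.zeros([4, 2])
print(dynamic_len(t))           # a scalar tensor holding 4
```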
A. Unique TensorFlower [Tue, 20 Feb 2018 15:28:14 +0000 (07:28 -0800)]
Internal change.
PiperOrigin-RevId:
186300438
Dustin Tran [Tue, 20 Feb 2018 07:55:21 +0000 (23:55 -0800)]
Automated g4 rollback of changelist
186260342
PiperOrigin-RevId:
186266857
Dustin Tran [Tue, 20 Feb 2018 05:39:03 +0000 (21:39 -0800)]
Reduce tfp.layers boilerplate via programmable docstrings.
PiperOrigin-RevId:
186260342
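The actual tfp.layers mechanism is not shown in this log; below is a minimal sketch of the general "programmable docstring" technique, using a hypothetical decorator that renders a shared template per class.

```python
_DOC_TEMPLATE = """{name} layer.

This layer shares the common {family} documentation; only the
per-class details differ.
"""

def fill_doc(**kwargs):
    # Hypothetical helper: render a shared docstring template so each
    # class does not repeat the same boilerplate text.
    def decorator(cls):
        cls.__doc__ = _DOC_TEMPLATE.format(**kwargs)
        return cls
    return decorator

@fill_doc(name="DenseReparameterization", family="variational dense")
class DenseReparameterization(object):
    pass

print(DenseReparameterization.__doc__)
```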
Derek Murray [Tue, 20 Feb 2018 01:36:56 +0000 (17:36 -0800)]
[tf.data] Delete contrib version of dataset_ops.py, which was re-added by a merge from GitHub.
PiperOrigin-RevId:
186249376
A. Unique TensorFlower [Mon, 19 Feb 2018 14:14:10 +0000 (06:14 -0800)]
Share Variable objects among collections when importing metagraphs.
This mirrors the behavior of usual graph construction where a Variable object is added to multiple collections.
PiperOrigin-RevId:
186214551
A. Unique TensorFlower [Mon, 19 Feb 2018 13:54:27 +0000 (05:54 -0800)]
Remove experimental C API from srcs rule as it requires other sources
PiperOrigin-RevId:
186213207
Blake Hechtman [Mon, 19 Feb 2018 11:29:08 +0000 (03:29 -0800)]
[TF:XLA] Select the update value instead of the buffer to support negative
index scatter.
PiperOrigin-RevId:
186202761
A. Unique TensorFlower [Sat, 17 Feb 2018 16:42:40 +0000 (08:42 -0800)]
Tweak `tf.slice` documentation.
Add the input argument (`foo`) to `tf.slice` example so that it actually works if it were run.
Previously, the input argument was missing (perhaps implied), but the example is clearer with its inclusion.
PiperOrigin-RevId:
186105694
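For context, a runnable version of such a `tf.slice` example looks roughly like the sketch below (TF 1.x graph mode; the values of `foo` here are illustrative, not necessarily those in the docs).

```python
import tensorflow as tf

# Illustrative input so the slice example actually runs.
foo = tf.constant([[[1, 1, 1], [2, 2, 2]],
                   [[3, 3, 3], [4, 4, 4]],
                   [[5, 5, 5], [6, 6, 6]]])

# Take one 1x1x3 block starting at position [1, 0, 0].
sliced = tf.slice(foo, begin=[1, 0, 0], size=[1, 1, 3])  # -> [[[3, 3, 3]]]

with tf.Session() as sess:
    print(sess.run(sliced))
```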
Bixia Zheng [Sat, 17 Feb 2018 15:46:14 +0000 (07:46 -0800)]
[XLA:GPU] Fix a problem in DoGemmAutotune.
Replace DCHECK with CHECK so that DoGemmWithAlgorithm is also called in
non-debug mode to perform autotune.
PiperOrigin-RevId:
186103809
A. Unique TensorFlower [Sat, 17 Feb 2018 12:46:24 +0000 (04:46 -0800)]
Automated g4 rollback of changelist
186019263
PiperOrigin-RevId:
186098155
Mingsheng Hong [Sat, 17 Feb 2018 06:05:07 +0000 (22:05 -0800)]
Added an experimental C API TF_EnableXLACompilation() to enable XLA compilation.
Also ran "buildozer warn //third_party/tensorflow/c/BUILD" and removed an unused symbol.
PiperOrigin-RevId:
186081948
Yu-Cheng Ling [Sat, 17 Feb 2018 03:02:58 +0000 (19:02 -0800)]
Automated g4 rollback of changelist
186053793
PiperOrigin-RevId:
186075274
A. Unique TensorFlower [Sat, 17 Feb 2018 03:01:28 +0000 (19:01 -0800)]
Modify the reference quantized LSTM implementation so that it only needs one instantiation of fixed-point Tanh (for 3 integer bits), regardless of the value of StateIntegerBits.
PiperOrigin-RevId:
186075161
Ankur Taly [Sat, 17 Feb 2018 02:22:55 +0000 (18:22 -0800)]
Merge changes from github.
PiperOrigin-RevId:
186073337
A. Unique TensorFlower [Sat, 17 Feb 2018 02:18:35 +0000 (18:18 -0800)]
Adds a `shape` property to LabeledTensor.
#labeledtensor
PiperOrigin-RevId:
186073035
Michael Kuperstein [Sat, 17 Feb 2018 02:13:53 +0000 (18:13 -0800)]
[XLA] Pass the module to HloDataflowAnalysis by const reference.
PiperOrigin-RevId:
186072673
A. Unique TensorFlower [Sat, 17 Feb 2018 01:56:36 +0000 (17:56 -0800)]
Activates Eigen path for CPU implementation of atrous/dilated convolution (only forward path).
PiperOrigin-RevId:
186071285
A. Unique TensorFlower [Sat, 17 Feb 2018 01:55:07 +0000 (17:55 -0800)]
Changes keep_dims to keepdims to remove deprecation warning.
#labeledtensor
PiperOrigin-RevId:
186071210
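The deprecation is just a keyword rename; a minimal example of the non-deprecated spelling, assuming a TF version where `keepdims` is available:

```python
import tensorflow as tf

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])

# `keepdims=True` (formerly `keep_dims=True`) preserves the reduced
# axis with size 1, so the result has shape [2, 1] instead of [2].
row_sums = tf.reduce_sum(x, axis=1, keepdims=True)
```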
Akshay Modi [Sat, 17 Feb 2018 00:40:02 +0000 (16:40 -0800)]
Make tf.py_func and tf.smart_cond play better with eager mode.
PiperOrigin-RevId:
186063941
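A small sketch of the sort of usage this affects, assuming a TF 1.x build with eager execution available; it only illustrates calling `tf.py_func` on eager tensors, not the internals of the change.

```python
import numpy as np
import tensorflow as tf

tf.enable_eager_execution()  # assumes a TF 1.x build exposing this API

def square(x):
    # Runs as ordinary Python/NumPy code.
    return np.square(x)

x = tf.constant([1.0, 2.0, 3.0])
y = tf.py_func(square, [x], tf.float32)
print(y)  # eager tensor: [1. 4. 9.]
```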
Alexandre Passos [Sat, 17 Feb 2018 00:30:17 +0000 (16:30 -0800)]
Initializing the thread-local device to the right value.
PiperOrigin-RevId:
186062850
Sanjoy Das [Sat, 17 Feb 2018 00:15:21 +0000 (16:15 -0800)]
Reset the DAZ bit when entering the XLA CPU/GPU compiler
In an ideal world this won't make a difference since the compiler should be
disciplined about not leaking host-level optimization artifacts into generated
code. However, I think this provides some defense-in-depth in preventing
non-obvious denormal behavior on the host side from messing up floating point
constants etc. we want to embed into generated code.
PiperOrigin-RevId:
186061140
Allen Lavoie [Sat, 17 Feb 2018 00:01:54 +0000 (16:01 -0800)]
Checkpointable: Don't run ops automatically when graph building.
This is a prerequisite to moving toward a Saver-like model when graph building. We no longer mess with initializers (when graph building; eager needs it), and restore ops just get queued up and returned.
Since initializers are left alone when graph building, there is a new special case for slot variables which needs to be handled. This is the third(!) queue for deferred slot restorations ((1) variable -> slot, (2) optimizer -> slot, (3) (optimizer, variable) -> slot), and should be the last one I need (it's a hypergraph with 3-tuple edges).
The plan after this is to switch over to tf.train.Saver's existing restore op creation infrastructure, which will handle any SaveableObjects. There will also be a few CLs for making graph usage prettier, and eventually allowing eager/graph agnostic save/restore.
PiperOrigin-RevId:
186059387
Alexandre Passos [Fri, 16 Feb 2018 23:30:46 +0000 (15:30 -0800)]
Default eager tensor device name should match default device name.
PiperOrigin-RevId:
186055679
Sanjoy Das [Fri, 16 Feb 2018 23:29:35 +0000 (15:29 -0800)]
[XLA] Add some plumbing, documentation, verification and shape inference for Gather
Pretty much everything other than HLO verification and shape inference will fail
for Gather with Unimplemented.
Note that this CL is intentionally incomplete -- I figured it would be nicer to
get some of the boiler-platey stuff out of the way early. Let me know if you
want me to send in a larger but more complete CL instead.
PiperOrigin-RevId:
186055521
A. Unique TensorFlower [Fri, 16 Feb 2018 23:17:04 +0000 (15:17 -0800)]
Expose the main API to the generated code as well. This allows recursive runtime conversion, and is a prerequisite to supporting dynamic non-recursive functions.
PiperOrigin-RevId:
186053846
Yu-Cheng Ling [Fri, 16 Feb 2018 23:16:38 +0000 (15:16 -0800)]
TFLite Conv2D: Create temporary tensors in Prepare phase.
PiperOrigin-RevId:
186053793
A. Unique TensorFlower [Fri, 16 Feb 2018 23:10:48 +0000 (15:10 -0800)]
Add qint8 to list of types supported by the GPU ConstOp.
PiperOrigin-RevId:
186053061
Francois Chollet [Fri, 16 Feb 2018 23:02:22 +0000 (15:02 -0800)]
Add support for explicit `training` argument in subclassed models.
PiperOrigin-RevId:
186051752
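As an illustration of what an explicit `training` argument looks like in a subclassed model (a sketch, not the code from this change):

```python
import tensorflow as tf

class SmallModel(tf.keras.Model):
    def __init__(self):
        super(SmallModel, self).__init__()
        self.dense = tf.keras.layers.Dense(10)
        self.dropout = tf.keras.layers.Dropout(0.5)

    def call(self, inputs, training=False):
        # `training` lets the caller control train-vs-inference behavior
        # explicitly, e.g. model(x, training=True).
        x = self.dense(inputs)
        return self.dropout(x, training=training)
```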
Yuefeng Zhou [Fri, 16 Feb 2018 22:55:43 +0000 (14:55 -0800)]
Add a `hash_keys` argument to the sparse hash column to enable it to hash a single input to multiple hash ids. This column can then be used by one_hot_column to create a multi-hot column.
PiperOrigin-RevId:
186050928
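A conceptual sketch of the underlying idea in plain Python (not the TF feature-column API): hashing one input with several keys yields several bucket ids, which is what makes a multi-hot representation possible downstream.

```python
import hashlib

def bucket_ids(value, hash_keys, num_buckets):
    # One bucket id per hash key, all derived from the same input value.
    ids = []
    for key in hash_keys:
        digest = hashlib.sha256((key + ":" + value).encode("utf-8")).hexdigest()
        ids.append(int(digest, 16) % num_buckets)
    return ids

print(bucket_ids("tensorflow", hash_keys=["k1", "k2"], num_buckets=1000))
```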
Billy Lamberta [Fri, 16 Feb 2018 22:52:36 +0000 (14:52 -0800)]
Fix sentence in Getting Started for ML Beginners guide.
PiperOrigin-RevId:
186050529
Billy Lamberta [Fri, 16 Feb 2018 22:42:19 +0000 (14:42 -0800)]
Fix crop on images in datasets_performance guide.
PiperOrigin-RevId:
186049156
A. Unique TensorFlower [Fri, 16 Feb 2018 22:39:04 +0000 (14:39 -0800)]
Changed FTRL formula for scalars to match vector version better.
PiperOrigin-RevId:
186048665
A. Unique TensorFlower [Fri, 16 Feb 2018 22:34:16 +0000 (14:34 -0800)]
[TF:XLA] Adds HostCompute HLO - a pseudo-op to represent host-side computation.
PiperOrigin-RevId:
186047964
Reed Wanderman-Milne [Fri, 16 Feb 2018 22:20:36 +0000 (14:20 -0800)]
Automated g4 rollback of changelist
186018787
PiperOrigin-RevId:
186046129
Yuanzhong Xu [Fri, 16 Feb 2018 22:17:13 +0000 (14:17 -0800)]
[XLA] HLO scheduling: update entries in ready queue when priority changes.
PiperOrigin-RevId:
186045619
A. Unique TensorFlower [Fri, 16 Feb 2018 22:06:26 +0000 (14:06 -0800)]
Optimization of the quantized LSTM cell for the common case of batch size 1,
where it needs efficient matrix*vector ("GEMV") code. This is not exactly
the same as the case of stand-alone fully-connected layers, because here
the output activations are 16-bit quantized.
PiperOrigin-RevId:
186044068
Allen Lavoie [Fri, 16 Feb 2018 21:46:47 +0000 (13:46 -0800)]
Remove the __setattr__ override for Variables
Was slowing down the creation of _UnreadVariable objects. Adds CheckpointableBase without the __setattr__ override.
It's tempting to just override __setattr__ in variables to try making it faster, but it's already just doing an isinstance check. Removing the override entirely seems to be the cleanest option.
PiperOrigin-RevId:
186041147
Allen Lavoie [Fri, 16 Feb 2018 21:38:11 +0000 (13:38 -0800)]
TFTS: Support tf.Example input
PiperOrigin-RevId:
186039949
Sanjoy Das [Fri, 16 Feb 2018 21:33:55 +0000 (13:33 -0800)]
[XLA:CPU] Minor cleanup to simple_orc_jit
SimpleResolver became unused after an LLVM upstream merge, and we never needed
the name mangling logic in what is now FindCompiledSymbol.
PiperOrigin-RevId:
186039307
A. Unique TensorFlower [Fri, 16 Feb 2018 21:31:04 +0000 (13:31 -0800)]
Automated g4 rollback of changelist
185623948
PiperOrigin-RevId:
186038783
A. Unique TensorFlower [Fri, 16 Feb 2018 21:21:17 +0000 (13:21 -0800)]
Clarifying the docstring for how gradients are reduced across towers in replicate_model_fn
PiperOrigin-RevId:
186037416
A. Unique TensorFlower [Fri, 16 Feb 2018 21:20:13 +0000 (13:20 -0800)]
Fix potential issue with the number of blocks launched for depthwise kernels: the number of work_elements was too small, which could return a block_count that is too small to cover all elements.
We have also been ignoring the suggested thread_per_block, so we were potentially launching more blocks than necessary to fill the GPU (which is inefficient, but functionally correct).
Changing 'assert(false && ...' to LOG(FATAL) because it shouldn't be debug-only.
PiperOrigin-RevId:
186037306
Bjarke Hammersholt Roune [Fri, 16 Feb 2018 20:41:27 +0000 (12:41 -0800)]
Add TODOs.
PiperOrigin-RevId:
186032527
A. Unique TensorFlower [Fri, 16 Feb 2018 20:29:00 +0000 (12:29 -0800)]
Optimized quantized LSTM cell runtime NEON implementation.
Notice: unlike many NEON paths that we have in this optimized_ops.h file,
which are also enabled on x86 by means of arm_neon_sse.h (#ifdef USE_NEON),
this one is only enabled on real NEON (#ifdef GEMMLOWP_NEON). The reason
for that is that gemmlowp's FixedPoint class is templatized in the
underlying raw integer/register type, e.g. here int16x8_t, and on SSE
there is only a single __m128i type for all integer types (both int16x8_t
and int32x4_t), making it non-trivial to support this on SSE without
contriving this code on NEON.
PiperOrigin-RevId:
186031054
David Majnemer [Fri, 16 Feb 2018 19:37:49 +0000 (11:37 -0800)]
[XLA] Factor out the code which adds operands to a fusion node
This makes it easier for Hlo passes to do interesting rewrites with new,
additional parameters which were not operands to the original fusion node.
PiperOrigin-RevId:
186024182
Akshay Modi [Fri, 16 Feb 2018 19:21:08 +0000 (11:21 -0800)]
Cache a variable scope context manager in EagerTemplate as a minor optimization
PiperOrigin-RevId:
186021666
A. Unique TensorFlower [Fri, 16 Feb 2018 19:19:15 +0000 (11:19 -0800)]
Add getmodule to tf_inspect.
PiperOrigin-RevId:
186021386
A. Unique TensorFlower [Fri, 16 Feb 2018 19:05:02 +0000 (11:05 -0800)]
Internal change
PiperOrigin-RevId:
186019263
Reed Wanderman-Milne [Fri, 16 Feb 2018 19:02:33 +0000 (11:02 -0800)]
Automated g4 rollback of changelist
185927310
PiperOrigin-RevId:
186018787
A. Unique TensorFlower [Fri, 16 Feb 2018 18:06:14 +0000 (10:06 -0800)]
Made cost_analyzer_tool accept fetch nodes when running with the metagraph option. Also made it read the metagraph in either binary or text format.
PiperOrigin-RevId:
186010810
Mark Daoust [Fri, 16 Feb 2018 17:26:14 +0000 (09:26 -0800)]
Remove "make_oneshot_iterator" from "datasets_quickstart.md"
Also mention iterator initialization in the "datasets" section of "low_level_intro.md"
see: PR #3389
PiperOrigin-RevId:
186005742
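For context, the initializable-iterator pattern the docs now emphasize looks roughly like this in TF 1.x graph mode (a sketch, not the docs' exact example):

```python
import tensorflow as tf

dataset = tf.data.Dataset.range(5)

# An initializable iterator must be explicitly initialized before use,
# unlike the one-shot iterator the quickstart previously showed.
iterator = dataset.make_initializable_iterator()
next_element = iterator.get_next()

with tf.Session() as sess:
    sess.run(iterator.initializer)
    while True:
        try:
            print(sess.run(next_element))
        except tf.errors.OutOfRangeError:
            break
```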
A. Unique TensorFlower [Fri, 16 Feb 2018 17:20:46 +0000 (09:20 -0800)]
Avoid running //third_party/tensorflow/contrib/gan:train_test under tsan
PiperOrigin-RevId:
186005130
Benjamin Kramer [Fri, 16 Feb 2018 17:16:27 +0000 (09:16 -0800)]
[TF:XLA] Bump open source llvm revision to r325320
PiperOrigin-RevId:
186004694
A. Unique TensorFlower [Fri, 16 Feb 2018 15:54:23 +0000 (07:54 -0800)]
build fix
PiperOrigin-RevId:
185996203
A. Unique TensorFlower [Fri, 16 Feb 2018 15:51:43 +0000 (07:51 -0800)]
Unifying common CMake CUDA file copy between Windows and Linux.
PiperOrigin-RevId:
185995922
Benjamin Kramer [Fri, 16 Feb 2018 12:33:57 +0000 (04:33 -0800)]
Adapt to API changes in LLVM revisions r325155 and r325180.
PiperOrigin-RevId:
185979538
A. Unique TensorFlower [Fri, 16 Feb 2018 09:53:59 +0000 (01:53 -0800)]
Remove a possible ambiguity in the `py_func` documentation.
PiperOrigin-RevId:
185968663
Yu-Cheng Ling [Fri, 16 Feb 2018 07:44:47 +0000 (23:44 -0800)]
Code generator for builtin_ops.h, and a test to ensure its consistency
PiperOrigin-RevId:
185957720
Suharsh Sivakumar [Fri, 16 Feb 2018 06:21:02 +0000 (22:21 -0800)]
Make the default values for the experimental and non-experimental APIs match.
PiperOrigin-RevId:
185952648
Alina Sbirlea [Fri, 16 Feb 2018 04:18:11 +0000 (20:18 -0800)]
Automated g4 rollback of changelist
185891869
PiperOrigin-RevId:
185944719
A. Unique TensorFlower [Fri, 16 Feb 2018 03:52:03 +0000 (19:52 -0800)]
optimized quantized softmax
PiperOrigin-RevId:
185943132
A. Unique TensorFlower [Fri, 16 Feb 2018 03:47:47 +0000 (19:47 -0800)]
Fix handling of types in RNN state import. Sanitize TF node names.
PiperOrigin-RevId:
185942921
A. Unique TensorFlower [Fri, 16 Feb 2018 03:34:18 +0000 (19:34 -0800)]
Add tuple targets to the context handling mechanism in templates.
PiperOrigin-RevId:
185941851
Sanjoy Das [Fri, 16 Feb 2018 03:31:11 +0000 (19:31 -0800)]
Error out when building XLA's CPU and GPU backends with fast-math
In an ideal world this won't make a difference since the compiler should be
disciplined about not leaking host-level optimization artifacts into generated
code. However, I think this provides some defense-in-depth in preventing
fast-math optimization on the host side from messing up floating point constants
etc. we want to embed into generated code.
PiperOrigin-RevId:
185941549
Shanqing Cai [Fri, 16 Feb 2018 03:12:05 +0000 (19:12 -0800)]
TFE SPINN example: use tensor instead of numpy array
in inference output.
PiperOrigin-RevId:
185939805
Guangda Lai [Fri, 16 Feb 2018 02:55:22 +0000 (18:55 -0800)]
Add a new tag no_cuda_on_cpu_tap for excluding failing non-gpu cuda tests.
PiperOrigin-RevId:
185937687
Francois Chollet [Fri, 16 Feb 2018 02:22:21 +0000 (18:22 -0800)]
Bug fix and typo fixes.
PiperOrigin-RevId:
185935199
Francois Chollet [Fri, 16 Feb 2018 02:21:11 +0000 (18:21 -0800)]
Add stateful metrics support in tf.keras.
PiperOrigin-RevId:
185935092
A. Unique TensorFlower [Fri, 16 Feb 2018 01:44:59 +0000 (17:44 -0800)]
Address timeout of conv_ops_test.
PiperOrigin-RevId:
185931585
A. Unique TensorFlower [Fri, 16 Feb 2018 01:38:55 +0000 (17:38 -0800)]
Keep the results below 2^31 in the exp() test to avoid overflow.
PiperOrigin-RevId:
185931075
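For reference on the bound being relied on, a back-of-the-envelope check (not code from the test):

```python
import math

# exp(x) stays below 2**31 as long as x < 31 * ln(2) ~= 21.49, so test
# inputs capped around this value keep results inside int32 range.
print(31 * math.log(2))          # ~21.487
print(math.exp(21.4) < 2 ** 31)  # True
```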
Reed Wanderman-Milne [Fri, 16 Feb 2018 01:05:41 +0000 (17:05 -0800)]
Use np.frombuffer instead of np.fromstring to avoid DeprecationWarning.
Resolves #17020
PiperOrigin-RevId:
185927310
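The replacement is a drop-in for the binary-data case; a minimal illustration:

```python
import numpy as np

raw = np.arange(4, dtype=np.int32).tobytes()

# np.fromstring(raw, dtype=np.int32) emits a DeprecationWarning for
# binary input; np.frombuffer reads the same bytes without the warning.
arr = np.frombuffer(raw, dtype=np.int32)
print(arr)  # [0 1 2 3]
```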
Alexandre Passos [Fri, 16 Feb 2018 01:02:12 +0000 (17:02 -0800)]
Fixes broken test
PiperOrigin-RevId:
185926797
Rohan Jain [Fri, 16 Feb 2018 00:40:24 +0000 (16:40 -0800)]
Adding Shape inference functions to infeed ops.
PiperOrigin-RevId:
185923685
A. Unique TensorFlower [Fri, 16 Feb 2018 00:23:43 +0000 (16:23 -0800)]
K-FAC: Support for embedding layers, add FisherFactor.{multiply, multiply_inverse}.
PiperOrigin-RevId:
185920837
Akshay Agrawal [Thu, 15 Feb 2018 23:56:10 +0000 (15:56 -0800)]
Update eager's MNIST example to inherit from `tf.keras.Model`.
Also make estimator utils compatible with `tf_decorator`-wrapped functions.
PiperOrigin-RevId:
185916290
Yu-Cheng Ling [Thu, 15 Feb 2018 23:55:32 +0000 (15:55 -0800)]
Fix a typo in model.cc error message.
PiperOrigin-RevId:
185916196
A. Unique TensorFlower [Thu, 15 Feb 2018 23:54:39 +0000 (15:54 -0800)]
Don't spam the logs.
PiperOrigin-RevId:
185916071
A. Unique TensorFlower [Thu, 15 Feb 2018 23:32:17 +0000 (15:32 -0800)]
Add /learning/tfx/ to the visibility group of tensorflow/compiler/tf2xla/python.
PiperOrigin-RevId:
185912486
Anjali Sridhar [Thu, 15 Feb 2018 23:09:19 +0000 (15:09 -0800)]
Update tf.keras to Keras 2.1.4 API
PiperOrigin-RevId:
185908711
A. Unique TensorFlower [Thu, 15 Feb 2018 22:49:01 +0000 (14:49 -0800)]
Implement Split
PiperOrigin-RevId:
185904437
A. Unique TensorFlower [Thu, 15 Feb 2018 22:25:00 +0000 (14:25 -0800)]
Automated g4 rollback of changelist
185072479
PiperOrigin-RevId:
185900165
Derek Murray [Thu, 15 Feb 2018 22:24:40 +0000 (14:24 -0800)]
[tf.data] Return OK and set `*end_of_sequence = true` when repeating an empty dataset.
Returning an error status could lead to situations (like
`empty_ds.repeat(None).interleave(...)`) where the wrong exception was raised.
This change ensures that the proper `OutOfRangeError` is raised in the user
program.
PiperOrigin-RevId:
185900119
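A small sketch of the user-visible behavior this pins down, assuming TF 1.x graph mode: iterating a repeated empty dataset should surface `tf.errors.OutOfRangeError` rather than some other error.

```python
import tensorflow as tf

empty_ds = tf.data.Dataset.range(0)   # an empty dataset
repeated = empty_ds.repeat(None)      # repeat indefinitely

iterator = repeated.make_initializable_iterator()
next_element = iterator.get_next()

with tf.Session() as sess:
    sess.run(iterator.initializer)
    try:
        sess.run(next_element)
    except tf.errors.OutOfRangeError:
        print("end of sequence, as expected")
```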
Anjali Sridhar [Thu, 15 Feb 2018 22:08:54 +0000 (14:08 -0800)]
Update tf.keras to version 2.1.4.
PiperOrigin-RevId:
185897606
Allen Lavoie [Thu, 15 Feb 2018 21:43:47 +0000 (13:43 -0800)]
Object-based saving: Switch to "everything is Checkpointable"
The only sane way to use/test this is to have Variables be Checkpointable, so this CL includes a move of the base class to core. No public methods are exposed, and I've attempted to not throw any errors on __setattr__.
Allows dynamic dependencies (track after restore) and restoring variables on assignment to a Checkpointable object, and includes the protocol buffer modifications necessary for saving information with each object.
There are still some prominent TODOs:
- Stop modifying the graph after the first save/restore (likely cache ops in Checkpointable objects)
- Add some overridable methods for saving Python strings when restore() is called, fed when graph building rather than embedded as constants in the graph
- Work on the initialization story for graph building. Currently the unit tests rely on collections for this.
- Support for more objects, move the prototype modifications in checkpointable_test to core.
The diff is larger than I was hoping (mostly deletions and unit tests); that could be reduced a bit (or at least "lines added" converted to "lines deleted") by diffbasing on cl/
180950921, which was my first attempt at dynamic dependencies. This CL is more of a re-write than a modification, so sending that one out seems a bit silly. The unit tests are still good, though.
PiperOrigin-RevId:
185893387
Jacques Pienaar [Thu, 15 Feb 2018 21:40:10 +0000 (13:40 -0800)]
Wrap the XlaOpRegistry::DeviceKernels call so it can be called from Python.
PiperOrigin-RevId:
185892888
Alina Sbirlea [Thu, 15 Feb 2018 21:35:19 +0000 (13:35 -0800)]
Optimize dot(DynamicSlice(ConstA), ConstB) by memoizing dot(ConstA, ConstB)
Make the transformation when ConstA and ConstB are 2D and DynamicSlice is slicing a full row or column, respectively.
Handle:
dot(DynamicSlice(Index, ConstA), ConstB) => DynamicSlice(Index, dot*(ConstA, ConstB));
and
dot(ConstA, DynamicSlice(Index, ConstB)) => DynamicSlice(Index, dot*(ConstA, ConstB));
PiperOrigin-RevId:
185891869
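The algebraic identity behind the rewrite can be checked numerically; a NumPy sketch (the HLO pass itself operates on constants and DynamicSlice, which is not modeled here):

```python
import numpy as np

A = np.random.rand(6, 4)
B = np.random.rand(4, 3)
i = 2  # the dynamic index, fixed here just for the check

# Slicing a full row of A and then multiplying ...
lhs = np.dot(A[i:i + 1, :], B)
# ... equals multiplying first and slicing the same row of the product.
rhs = np.dot(A, B)[i:i + 1, :]
print(np.allclose(lhs, rhs))  # True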
A. Unique TensorFlower [Thu, 15 Feb 2018 20:50:03 +0000 (12:50 -0800)]
Add auc_with_confidence_intervals
This method computes the AUC and corresponding confidence intervals using an efficient algorithm.
PiperOrigin-RevId:
185884228
Alexandre Passos [Thu, 15 Feb 2018 20:39:52 +0000 (12:39 -0800)]
Register kernels for Assign and AssignVariableOp on GPU for integer types.
PiperOrigin-RevId:
185882834
Yuanzhong Xu [Thu, 15 Feb 2018 20:08:36 +0000 (12:08 -0800)]
[XLA] Fix priority queue in HLO scheduling.
The priority of an HLO can change during the scheduling. Use immutable values in
priority queue entries, and reinsert an entry if its priority goes up.
PiperOrigin-RevId:
185878562
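The general pattern described here, pushing a fresh immutable entry when a priority rises and skipping stale copies on pop, can be sketched in a few lines (an illustration of the technique, not XLA's scheduler code):

```python
import heapq

class ReinsertingQueue(object):
    """Max-priority queue where an item's priority may increase later."""

    def __init__(self):
        self._heap = []    # entries: (-priority, item), never mutated
        self._best = {}    # item -> best priority seen so far

    def push(self, item, priority):
        # Always push a fresh immutable entry; older entries become stale.
        if priority > self._best.get(item, float("-inf")):
            self._best[item] = priority
            heapq.heappush(self._heap, (-priority, item))

    def pop(self):
        while self._heap:
            neg_priority, item = heapq.heappop(self._heap)
            if self._best.get(item) == -neg_priority:
                del self._best[item]
                return item
            # Stale entry (priority was raised after it was pushed): skip.
        raise IndexError("pop from empty queue")

q = ReinsertingQueue()
q.push("a", 1)
q.push("b", 2)
q.push("a", 5)   # priority of "a" goes up; a new entry is inserted
print(q.pop())   # "a"
print(q.pop())   # "b"
```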
A. Unique TensorFlower [Thu, 15 Feb 2018 19:44:04 +0000 (11:44 -0800)]
Register "Snapshot" op inserted by Grappler arithmetic optimization. For (mostly) pure XLA graphs, this op is identical to "Identity".
PiperOrigin-RevId:
185873337