Ankur Taly [Sat, 17 Feb 2018 02:22:55 +0000 (18:22 -0800)]
Merge changes from github.
PiperOrigin-RevId: 186073337
A. Unique TensorFlower [Sat, 17 Feb 2018 02:18:35 +0000 (18:18 -0800)]
Adds a `shape` property to LabeledTensor.
#labeledtensor
PiperOrigin-RevId: 186073035
Michael Kuperstein [Sat, 17 Feb 2018 02:13:53 +0000 (18:13 -0800)]
[XLA] Pass the module to HloDataflowAnalysis by const reference.
PiperOrigin-RevId: 186072673
A. Unique TensorFlower [Sat, 17 Feb 2018 01:56:36 +0000 (17:56 -0800)]
Activates Eigen path for CPU implementation of atrous/dilated convolution (only forward path).
PiperOrigin-RevId: 186071285
A. Unique TensorFlower [Sat, 17 Feb 2018 01:55:07 +0000 (17:55 -0800)]
Changes keep_dims to keepdims to remove deprecation warning.
#labeledtensor
PiperOrigin-RevId: 186071210
Akshay Modi [Sat, 17 Feb 2018 00:40:02 +0000 (16:40 -0800)]
Make tf.py_func and tf.smart_cond play better with eager mode.
PiperOrigin-RevId: 186063941
Alexandre Passos [Sat, 17 Feb 2018 00:30:17 +0000 (16:30 -0800)]
Initializing the thread-local device to the right value.
PiperOrigin-RevId: 186062850
Sanjoy Das [Sat, 17 Feb 2018 00:15:21 +0000 (16:15 -0800)]
Reset the DAZ bit when entering the XLA CPU/GPU compiler
In an ideal world this won't make a difference since the compiler should be
disciplined about not leaking host-level optimization artifacts into generated
code. However, I think this provides some defense-in-depth in preventing
non-obvious denormal behavior on the host side from messing up floating point
constants etc. we want to embed into generated code.
PiperOrigin-RevId: 186061140
Allen Lavoie [Sat, 17 Feb 2018 00:01:54 +0000 (16:01 -0800)]
Checkpointable: Don't run ops automatically when graph building.
This is a prerequisite to moving toward a Saver-like model when graph building. We no longer mess with initializers (when graph building; eager needs it), and restore ops just get queued up and returned.
Since initializers are left alone when graph building, there is a new special case for slot variables which needs to be handled. This is the third(!) queue for deferred slot restorations ((1) variable -> slot, (2) optimizer -> slot, (3) (optimizer, variable) -> slot), and should be the last one I need (it's a hypergraph with 3-tuple edges).
The plan after this is to switch over to tf.train.Saver's existing restore op creation infrastructure, which will handle any SaveableObjects. There will also be a few CLs for making graph usage prettier, and eventually allowing eager/graph agnostic save/restore.
PiperOrigin-RevId: 186059387
Alexandre Passos [Fri, 16 Feb 2018 23:30:46 +0000 (15:30 -0800)]
Default eager tensor device name should match default device name.
PiperOrigin-RevId: 186055679
Sanjoy Das [Fri, 16 Feb 2018 23:29:35 +0000 (15:29 -0800)]
[XLA] Add some plumbing, documentation, verification and shape inference for Gather
Pretty much everything other than HLO verification and shape inference will fail
for Gather with Unimplemented.
Note that this CL is intentionally incomplete -- I figured it would be nicer to
get some of the boiler-platey stuff out of the way early. Let me know if you
want me to send in a larger but more complete CL instead.
PiperOrigin-RevId: 186055521
A. Unique TensorFlower [Fri, 16 Feb 2018 23:17:04 +0000 (15:17 -0800)]
Expose the main API to the generated code as well. This allows recursive runtime conversion, and is a prerequisite to supporting dynamic non-recursive functions.
PiperOrigin-RevId: 186053846
Yu-Cheng Ling [Fri, 16 Feb 2018 23:16:38 +0000 (15:16 -0800)]
TFLite Conv2D: Create temporary tensors in Prepare phase.
PiperOrigin-RevId: 186053793
A. Unique TensorFlower [Fri, 16 Feb 2018 23:10:48 +0000 (15:10 -0800)]
Add qint8 to list of types supported by the GPU ConstOp.
PiperOrigin-RevId: 186053061
Francois Chollet [Fri, 16 Feb 2018 23:02:22 +0000 (15:02 -0800)]
Add support for explicit `training` argument in subclassed models.
PiperOrigin-RevId: 186051752
Yuefeng Zhou [Fri, 16 Feb 2018 22:55:43 +0000 (14:55 -0800)]
Add a `hash_keys` argument to the sparse hash column to enable it to hash a single input to multiple hash ids. This column can then be used by one_hot_column to create a multi-hot column.
PiperOrigin-RevId: 186050928
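The idea of mapping one input to several hash ids can be sketched in a few lines; this is an illustrative stand-in, not the actual feature-column implementation, and the hashing scheme and names are made up:

```python
import hashlib

def multi_hash(value, hash_keys, num_buckets=16):
    """Map one input string to one hash id per key (illustrative sketch).

    Each key salts the hash differently, so a single input yields several
    ids that a multi-hot encoding can then set to 1."""
    ids = []
    for key in hash_keys:
        digest = hashlib.sha256((key + value).encode()).digest()
        ids.append(int.from_bytes(digest[:8], "big") % num_buckets)
    return ids

ids = multi_hash("cat", ["k1", "k2"])
print(len(ids))  # 2
```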
Billy Lamberta [Fri, 16 Feb 2018 22:52:36 +0000 (14:52 -0800)]
Fix sentence in Getting Started for ML Beginners guide.
PiperOrigin-RevId: 186050529
Billy Lamberta [Fri, 16 Feb 2018 22:42:19 +0000 (14:42 -0800)]
Fix crop on images in datasets_performance guide.
PiperOrigin-RevId: 186049156
A. Unique TensorFlower [Fri, 16 Feb 2018 22:39:04 +0000 (14:39 -0800)]
Changed FTRL formula for scalars to match vector version better.
PiperOrigin-RevId: 186048665
A. Unique TensorFlower [Fri, 16 Feb 2018 22:34:16 +0000 (14:34 -0800)]
[TF:XLA] Adds HostCompute HLO - a pseudo-op to represent host-side computation.
PiperOrigin-RevId: 186047964
Reed Wanderman-Milne [Fri, 16 Feb 2018 22:20:36 +0000 (14:20 -0800)]
Automated g4 rollback of changelist 186018787
PiperOrigin-RevId: 186046129
Yuanzhong Xu [Fri, 16 Feb 2018 22:17:13 +0000 (14:17 -0800)]
[XLA] HLO scheduling: update entries in ready queue when priority changes.
PiperOrigin-RevId: 186045619
A. Unique TensorFlower [Fri, 16 Feb 2018 22:06:26 +0000 (14:06 -0800)]
Optimization of the quantized LSTM cell for the common case of batch size 1,
where it needs efficient matrix*vector ("GEMV") code. This is not exactly
the same as the case of stand-alone fully-connected layers, since here the
output activations are 16-bit quantized.
PiperOrigin-RevId: 186044068
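The shape of such a quantized GEMV can be sketched with NumPy; this is only an illustration of the computation described above (zero points and the output multiplier are made up, and real kernels use fixed-point multipliers rather than a float):

```python
import numpy as np

def quantized_gemv(weights_u8, input_u8, w_zero, in_zero, multiplier):
    """Batch-1 matrix*vector product with a 16-bit quantized output.

    Accumulate in int32 after subtracting the zero points, then rescale
    and saturate into the int16 range (illustrative sketch)."""
    acc = (weights_u8.astype(np.int32) - w_zero) @ (input_u8.astype(np.int32) - in_zero)
    out = np.clip(acc * multiplier, -32768, 32767)  # requantize to int16 range
    return out.astype(np.int16)

W = np.array([[130, 120], [125, 135]], dtype=np.uint8)
x = np.array([140, 110], dtype=np.uint8)
print(quantized_gemv(W, x, 128, 128, 0.05).dtype)  # int16
```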
Allen Lavoie [Fri, 16 Feb 2018 21:46:47 +0000 (13:46 -0800)]
Remove the __setattr__ override for Variables
Was slowing down the creation of _UnreadVariable objects. Adds CheckpointableBase without the __setattr__ override.
It's tempting to just override __setattr__ in variables to try making it faster, but it's already just doing an isinstance check. Removing the override entirely seems to be the cleanest option.
PiperOrigin-RevId: 186041147
Allen Lavoie [Fri, 16 Feb 2018 21:38:11 +0000 (13:38 -0800)]
TFTS: Support tf.Example input
PiperOrigin-RevId: 186039949
Sanjoy Das [Fri, 16 Feb 2018 21:33:55 +0000 (13:33 -0800)]
[XLA:CPU] Minor cleanup to simple_orc_jit
SimpleResolver became unused after an LLVM upstream merge, and we never needed
the name mangling logic in what is now FindCompiledSymbol.
PiperOrigin-RevId: 186039307
A. Unique TensorFlower [Fri, 16 Feb 2018 21:31:04 +0000 (13:31 -0800)]
Automated g4 rollback of changelist 185623948
PiperOrigin-RevId: 186038783
A. Unique TensorFlower [Fri, 16 Feb 2018 21:21:17 +0000 (13:21 -0800)]
Clarifying the docstring for how gradients are reduced across towers in replicate_model_fn
PiperOrigin-RevId: 186037416
A. Unique TensorFlower [Fri, 16 Feb 2018 21:20:13 +0000 (13:20 -0800)]
Fix potential issue with the number of blocks launched for depthwise kernels: the number of work_elements was too small, which could return a block_count that is too small to cover all elements.
We also have been ignoring the suggested thread_per_block, so were potentially launching more blocks than necessary to fill the GPU (which is inefficient, but functionally correct).
Changing 'assert(false && ...' to LOG(FATAL) because it shouldn't be debug only.
PiperOrigin-RevId: 186037306
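The block-count arithmetic involved is a ceil-division; a small sketch (the thread and block limits are illustrative, not the actual kernel's values):

```python
def launch_config(work_elements, threads_per_block=256, max_blocks=65535):
    """Compute a grid size that covers every element (illustrative numbers).

    The bug described above amounts to underestimating work_elements, which
    makes this ceil-division return too few blocks to cover the input."""
    block_count = (work_elements + threads_per_block - 1) // threads_per_block
    return min(block_count, max_blocks), threads_per_block

print(launch_config(1000))  # (4, 256)
```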
Bjarke Hammersholt Roune [Fri, 16 Feb 2018 20:41:27 +0000 (12:41 -0800)]
Add TODOs.
PiperOrigin-RevId: 186032527
A. Unique TensorFlower [Fri, 16 Feb 2018 20:29:00 +0000 (12:29 -0800)]
Optimized quantized LSTM cell runtime NEON implementation.
Notice: unlike many NEON paths that we have in this optimized_ops.h file,
which are enabled also on x86 by means of arm_neon_sse.h (#ifdef USE_NEON),
this one is only enabled on real NEON (#ifdef GEMMLOWP_NEON). The reason
for that is that gemmlowp's FixedPoint class is templatized in the
underlying raw integer/register type, e.g. here int16x8_t, and on SSE
there is only a single __m128i type for all integer types (both int16x8_t
and int32x4_t), making it non-trivial to support this on SSE without
contriving this code on NEON.
PiperOrigin-RevId: 186031054
David Majnemer [Fri, 16 Feb 2018 19:37:49 +0000 (11:37 -0800)]
[XLA] Factor out the code which adds operands to a fusion node
This makes it easier for Hlo passes to do interesting rewrites with new,
additional parameters which were not operands to the original fusion node.
PiperOrigin-RevId: 186024182
Akshay Modi [Fri, 16 Feb 2018 19:21:08 +0000 (11:21 -0800)]
Cache a variable scope context manager in EagerTemplate as a minor optimization
PiperOrigin-RevId: 186021666
A. Unique TensorFlower [Fri, 16 Feb 2018 19:19:15 +0000 (11:19 -0800)]
Add getmodule to tf_inspect.
PiperOrigin-RevId: 186021386
A. Unique TensorFlower [Fri, 16 Feb 2018 19:05:02 +0000 (11:05 -0800)]
Internal change
PiperOrigin-RevId: 186019263
Reed Wanderman-Milne [Fri, 16 Feb 2018 19:02:33 +0000 (11:02 -0800)]
Automated g4 rollback of changelist 185927310
PiperOrigin-RevId: 186018787
A. Unique TensorFlower [Fri, 16 Feb 2018 18:06:14 +0000 (10:06 -0800)]
Made cost_analyzer_tool accept fetch nodes when running with metagraph option. Also made it read metagraph in either binary or text format.
PiperOrigin-RevId: 186010810
Mark Daoust [Fri, 16 Feb 2018 17:26:14 +0000 (09:26 -0800)]
Remove "make_oneshot_iterator" from "datasets_quickstart.md"
Also mention iterator initialization in the "datasets" section of "low_level_intro.md"
see: PR #3389
PiperOrigin-RevId: 186005742
A. Unique TensorFlower [Fri, 16 Feb 2018 17:20:46 +0000 (09:20 -0800)]
Avoid running //third_party/tensorflow/contrib/gan:train_test under tsan
PiperOrigin-RevId: 186005130
Benjamin Kramer [Fri, 16 Feb 2018 17:16:27 +0000 (09:16 -0800)]
[TF:XLA] Bump open source llvm revision to r325320
PiperOrigin-RevId: 186004694
A. Unique TensorFlower [Fri, 16 Feb 2018 15:54:23 +0000 (07:54 -0800)]
build fix
PiperOrigin-RevId: 185996203
A. Unique TensorFlower [Fri, 16 Feb 2018 15:51:43 +0000 (07:51 -0800)]
Unifying common CMake CUDA file copy between Windows and Linux.
PiperOrigin-RevId: 185995922
Benjamin Kramer [Fri, 16 Feb 2018 12:33:57 +0000 (04:33 -0800)]
Adapt to API changes in LLVM revisions r325155 and r325180.
PiperOrigin-RevId: 185979538
A. Unique TensorFlower [Fri, 16 Feb 2018 09:53:59 +0000 (01:53 -0800)]
Remove a possible ambiguity in the `py_func` documentation.
PiperOrigin-RevId: 185968663
Yu-Cheng Ling [Fri, 16 Feb 2018 07:44:47 +0000 (23:44 -0800)]
Code generator for builtin_ops.h, and a test to ensure its consistency
PiperOrigin-RevId: 185957720
Suharsh Sivakumar [Fri, 16 Feb 2018 06:21:02 +0000 (22:21 -0800)]
Make the default values for experimental and non experimental apis match.
PiperOrigin-RevId: 185952648
Alina Sbirlea [Fri, 16 Feb 2018 04:18:11 +0000 (20:18 -0800)]
Automated g4 rollback of changelist 185891869
PiperOrigin-RevId: 185944719
A. Unique TensorFlower [Fri, 16 Feb 2018 03:52:03 +0000 (19:52 -0800)]
optimized quantized softmax
PiperOrigin-RevId: 185943132
A. Unique TensorFlower [Fri, 16 Feb 2018 03:47:47 +0000 (19:47 -0800)]
Fix handling of types in RNN state import. Sanitize TF node names.
PiperOrigin-RevId: 185942921
A. Unique TensorFlower [Fri, 16 Feb 2018 03:34:18 +0000 (19:34 -0800)]
Add tuple targets to the context handling mechanism in templates.
PiperOrigin-RevId: 185941851
Sanjoy Das [Fri, 16 Feb 2018 03:31:11 +0000 (19:31 -0800)]
Error out when building XLA's CPU and GPU backends with fast-math
In an ideal world this won't make a difference since the compiler should be
disciplined about not leaking host-level optimization artifacts into generated
code. However, I think this provides some defense-in-depth in preventing
fast-math optimization on the host side from messing up floating point constants
etc. we want to embed into generated code.
PiperOrigin-RevId: 185941549
Shanqing Cai [Fri, 16 Feb 2018 03:12:05 +0000 (19:12 -0800)]
TFE SPINN example: use tensor instead of numpy array
in inference output.
PiperOrigin-RevId: 185939805
Guangda Lai [Fri, 16 Feb 2018 02:55:22 +0000 (18:55 -0800)]
Add a new tag no_cuda_on_cpu_tap for excluding failing non-gpu cuda tests.
PiperOrigin-RevId: 185937687
Francois Chollet [Fri, 16 Feb 2018 02:22:21 +0000 (18:22 -0800)]
Bug fix and typo fixes.
PiperOrigin-RevId: 185935199
Francois Chollet [Fri, 16 Feb 2018 02:21:11 +0000 (18:21 -0800)]
Add stateful metrics support in tf.keras.
PiperOrigin-RevId: 185935092
A. Unique TensorFlower [Fri, 16 Feb 2018 01:44:59 +0000 (17:44 -0800)]
Address timeout of conv_ops_test.
PiperOrigin-RevId: 185931585
A. Unique TensorFlower [Fri, 16 Feb 2018 01:38:55 +0000 (17:38 -0800)]
Keep the results below 2^31 in exp() test to avoid overflowing.
PiperOrigin-RevId: 185931075
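The bound comes from exp() growing past the int32 range very quickly; a sketch of the rationale (not the actual test):

```python
import math

# exp() grows fast: anything above ln(2**31) ≈ 21.49 no longer fits in a
# signed 32-bit integer, so test inputs are capped to keep expected values
# below 2**31.
limit = math.log(2**31)
print(round(limit, 2))  # 21.49
assert math.exp(21) < 2**31
assert math.exp(22) > 2**31
```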
Reed Wanderman-Milne [Fri, 16 Feb 2018 01:05:41 +0000 (17:05 -0800)]
Use np.frombuffer instead of np.fromstring to avoid DeprecationWarning.
Resolves #17020
PiperOrigin-RevId: 185927310
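The swap is mechanical; a minimal sketch (the buffer bytes and dtype are illustrative):

```python
import numpy as np

raw = b"\x01\x00\x02\x00\x03\x00"  # three little-endian uint16 values

# np.fromstring(raw, dtype="<u2") emits a DeprecationWarning on modern NumPy;
# np.frombuffer reads the same bytes without the warning (and without a copy).
arr = np.frombuffer(raw, dtype="<u2")
print(arr.tolist())  # [1, 2, 3]
```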
Alexandre Passos [Fri, 16 Feb 2018 01:02:12 +0000 (17:02 -0800)]
Fixes broken test
PiperOrigin-RevId: 185926797
Rohan Jain [Fri, 16 Feb 2018 00:40:24 +0000 (16:40 -0800)]
Adding Shape inference functions to infeed ops.
PiperOrigin-RevId: 185923685
A. Unique TensorFlower [Fri, 16 Feb 2018 00:23:43 +0000 (16:23 -0800)]
K-FAC: Support for embedding layers, add FisherFactor.{multiply, multiply_inverse}.
PiperOrigin-RevId: 185920837
Akshay Agrawal [Thu, 15 Feb 2018 23:56:10 +0000 (15:56 -0800)]
Update eager's MNIST example to inherit from `tf.keras.Model`.
Also make estimator utils compatible with `tf_decorator`-wrapped functions.
PiperOrigin-RevId: 185916290
Yu-Cheng Ling [Thu, 15 Feb 2018 23:55:32 +0000 (15:55 -0800)]
Fix a typo in model.cc error message.
PiperOrigin-RevId: 185916196
A. Unique TensorFlower [Thu, 15 Feb 2018 23:54:39 +0000 (15:54 -0800)]
Don't spam the logs.
PiperOrigin-RevId: 185916071
A. Unique TensorFlower [Thu, 15 Feb 2018 23:32:17 +0000 (15:32 -0800)]
Add /learning/tfx/ to the visibility group of tensorflow/compiler/tf2xla/python.
PiperOrigin-RevId: 185912486
Anjali Sridhar [Thu, 15 Feb 2018 23:09:19 +0000 (15:09 -0800)]
Update tf.keras to Keras 2.1.4 API
PiperOrigin-RevId: 185908711
A. Unique TensorFlower [Thu, 15 Feb 2018 22:49:01 +0000 (14:49 -0800)]
Implement Split
PiperOrigin-RevId: 185904437
A. Unique TensorFlower [Thu, 15 Feb 2018 22:25:00 +0000 (14:25 -0800)]
Automated g4 rollback of changelist 185072479
PiperOrigin-RevId: 185900165
Derek Murray [Thu, 15 Feb 2018 22:24:40 +0000 (14:24 -0800)]
[tf.data] Return OK and set `*end_of_sequence = true` when repeating an empty dataset.
Returning an error status could lead to situations (like
`empty_ds.repeat(None).interleave(...)`) where the wrong exception was raised.
This change ensures that the proper `OutOfRangeError` is raised in the user
program.
PiperOrigin-RevId: 185900119
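The contract described above can be illustrated with a plain-Python stand-in: exhaustion is signaled by an OK status plus end_of_sequence=True rather than an error. The names are illustrative, not the tf.data C++ API:

```python
def get_next(it):
    """Return (value, end_of_sequence) instead of raising on exhaustion."""
    try:
        return next(it), False
    except StopIteration:
        return None, True  # OK status, end of sequence

empty_repeated = iter(())  # stands in for iterating empty_ds.repeat(None)
value, end = get_next(empty_repeated)
print(end)  # True
```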
Anjali Sridhar [Thu, 15 Feb 2018 22:08:54 +0000 (14:08 -0800)]
Update tf.keras to version 2.1.4.
PiperOrigin-RevId: 185897606
Allen Lavoie [Thu, 15 Feb 2018 21:43:47 +0000 (13:43 -0800)]
Object-based saving: Switch to "everything is Checkpointable"
The only sane way to use/test this is to have Variables be Checkpointable, so this CL includes a move of the base class to core. No public methods are exposed, and I've attempted to not throw any errors on __setattr__.
Allows dynamic dependencies (track after restore) and restoring variables on assignment to a Checkpointable object, and includes the protocol buffer modifications necessary for saving information with each object.
There are still some prominent TODOs:
- Stop modifying the graph after the first save/restore (likely cache ops in Checkpointable objects)
- Add some overridable methods for saving Python strings when restore() is called, fed when graph building rather than embedded as constants in the graph
- Work on the initialization story for graph building. Currently the unit tests rely on collections for this.
- Support for more objects, move the prototype modifications in checkpointable_test to core.
The diff is larger than I was hoping (mostly deletions and unit tests); that could be reduced a bit (or at least "lines added" converted to "lines deleted") by diffbasing on cl/180950921, which was my first attempt at dynamic dependencies. This CL is more of a re-write than a modification, so sending that one out seems a bit silly. The unit tests are still good, though.
PiperOrigin-RevId: 185893387
Jacques Pienaar [Thu, 15 Feb 2018 21:40:10 +0000 (13:40 -0800)]
Wrap the XlaOpRegistry::DeviceKernels call so it can be called from Python.
PiperOrigin-RevId: 185892888
Alina Sbirlea [Thu, 15 Feb 2018 21:35:19 +0000 (13:35 -0800)]
Optimize dot(DynamicSlice(ConstA), ConstantB) by memoizing dot(ConstA, ConstB)
Make the transformation when ConstA and ConstB are 2D and DynamicSlice is slicing a full row or column, respectively.
Handle:
dot(DynamicSlice(Index, ConstA), ConstB) => DynamicSlice(Index, dot*(ConstA, ConstB));
and
dot(ConstA, DynamicSlice(Index, ConstB)) => DynamicSlice(Index, dot*(ConstA, ConstB));
PiperOrigin-RevId: 185891869
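The rewrite rests on a simple algebraic identity that can be checked numerically; a small NumPy sketch standing in for the HLO (the memoized product dot*(ConstA, ConstB) is computed once):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 5))  # stands in for ConstA
B = rng.standard_normal((5, 3))  # stands in for ConstB

AB = A @ B  # memoized dot*(ConstA, ConstB), computed once up front

i = 2
# dot(DynamicSlice(i, A), B) == DynamicSlice(i, dot(A, B)) for a full row...
assert np.allclose(A[i] @ B, AB[i])
# ...and symmetrically for a full column of B.
assert np.allclose(A @ B[:, i], AB[:, i])
```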
A. Unique TensorFlower [Thu, 15 Feb 2018 20:50:03 +0000 (12:50 -0800)]
Add auc_with_confidence_intervals
This method computes the AUC and corresponding confidence intervals using an efficient algorithm.
PiperOrigin-RevId: 185884228
Alexandre Passos [Thu, 15 Feb 2018 20:39:52 +0000 (12:39 -0800)]
Register kernels for Assign and AssignVariableOp on GPU for integer types.
PiperOrigin-RevId: 185882834
Yuanzhong Xu [Thu, 15 Feb 2018 20:08:36 +0000 (12:08 -0800)]
[XLA] Fix priority queue in HLO scheduling.
The priority of an HLO can change during the scheduling. Use immutable values in
priority queue entries, and reinsert an entry if its priority goes up.
PiperOrigin-RevId: 185878562
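The immutable-entry-plus-reinsert pattern described above is a standard workaround for heaps without a decrease/increase-key operation; a generic Python sketch (not the XLA scheduler code):

```python
import heapq

class MaxReadyQueue:
    """Priority queue with immutable entries; reinsert when a priority rises."""

    def __init__(self):
        self.heap = []     # entries are (-priority, item); never mutated
        self.current = {}  # item -> latest priority, for staleness checks

    def push(self, item, priority):
        # Also used to raise an item's priority: push a fresh entry and
        # leave the old one in place as a stale entry.
        self.current[item] = priority
        heapq.heappush(self.heap, (-priority, item))

    def pop(self):
        while self.heap:
            neg, item = heapq.heappop(self.heap)
            if self.current.get(item) == -neg:  # skip stale entries
                del self.current[item]
                return item
        raise IndexError("pop from empty queue")

q = MaxReadyQueue()
q.push("a", 1)
q.push("b", 2)
q.push("a", 5)   # priority of "a" went up; its old entry becomes stale
print(q.pop())   # a
print(q.pop())   # b
```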
A. Unique TensorFlower [Thu, 15 Feb 2018 19:44:04 +0000 (11:44 -0800)]
Register "Snapshot" op inserted by Grappler arithmetic optimization. For (mostly) pure XLA graphs, this op is identical to "Identity".
PiperOrigin-RevId: 185873337
A. Unique TensorFlower [Thu, 15 Feb 2018 19:38:03 +0000 (11:38 -0800)]
Implementation of tf.nn.top_k in TfLite
PiperOrigin-RevId: 185872292
A. Unique TensorFlower [Thu, 15 Feb 2018 19:34:10 +0000 (11:34 -0800)]
Make conversions from ShapedBuffer <-> ScopedShapedBuffer efficient by
moving memory ownership instead of copying.
PiperOrigin-RevId: 185871648
Jacques Pienaar [Thu, 15 Feb 2018 19:31:01 +0000 (11:31 -0800)]
[HLOEval] Logical right shift returns 0 if shift amount exceeds bitwidth.
PiperOrigin-RevId: 185871170
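The guarded semantics can be stated in a few lines of plain Python (a sketch of the rule, not the HLO evaluator; non-negative shift amounts are assumed):

```python
def logical_shift_right(value, amount, bits=32):
    """Logical right shift that returns 0 when the shift amount is >= the
    bitwidth, matching the behavior described above."""
    mask = (1 << bits) - 1
    value &= mask  # treat the input as an unsigned `bits`-wide integer
    if amount >= bits:
        return 0
    return value >> amount

print(logical_shift_right(0x80000000, 1))   # 1073741824 (0x40000000)
print(logical_shift_right(0x80000000, 32))  # 0
```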
Bixia Zheng [Thu, 15 Feb 2018 18:39:04 +0000 (10:39 -0800)]
Enable half precision convolution for the CPU and GPU backends.
Enhance the CPU IR emitter to support F16 dot operation and convolution
operation.
Add a CPU runtime implementation for F16 convolution.
Enhance the GPU backend to handle F16 convolution thunk.
Convert some F32 xla convolution tests to support both F32 and F16 and disable
the tests for the CPU backend due to b/72509305.
PiperOrigin-RevId: 185862438
A. Unique TensorFlower [Thu, 15 Feb 2018 16:26:14 +0000 (08:26 -0800)]
Fix a bug of overestimating AUC_PR. When TP and FP are both 0s, the precision should be 0 instead of 1.
PiperOrigin-RevId: 185842713
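The corrected convention fits in a few lines (a sketch of the rule, not the actual metrics code):

```python
def precision(tp, fp):
    """Precision with the corrected zero-count convention.

    When TP and FP are both zero, report 0 rather than 1 to avoid
    overestimating AUC_PR, as described above."""
    if tp + fp == 0:
        return 0.0
    return tp / (tp + fp)

print(precision(0, 0))  # 0.0
print(precision(3, 1))  # 0.75
```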
A. Unique TensorFlower [Thu, 15 Feb 2018 14:52:26 +0000 (06:52 -0800)]
Optimize away multiply by constant zero
PiperOrigin-RevId: 185831862
Mark Daoust [Thu, 15 Feb 2018 12:12:20 +0000 (04:12 -0800)]
Fix "cudnn64_7.dll" in install_windows.md
PiperOrigin-RevId: 185818395
Asim Shankar [Thu, 15 Feb 2018 08:26:10 +0000 (00:26 -0800)]
Java: Release 1.6.0-rc1
PiperOrigin-RevId: 185801747
Asim Shankar [Thu, 15 Feb 2018 05:41:56 +0000 (21:41 -0800)]
tf.image.resize_bilinear gradient support for float16
The required kernel changes were implemented almost 2 years ago in 80da0a63200cb7c9c449188620992c7a8d18c8b9, but we forgot to change the Python gradient registry.
PiperOrigin-RevId: 185790703
Yao Zhang [Thu, 15 Feb 2018 03:35:36 +0000 (19:35 -0800)]
Fix a bug to update reduction axes for all supported cases. Turn off layout
optimizer for all reference runs, as it is on by default now.
PiperOrigin-RevId: 185782777
Jianwei Xie [Thu, 15 Feb 2018 03:04:12 +0000 (19:04 -0800)]
Error out if the user-provided num_shards is incorrect for the non-model-parallelism case.
PiperOrigin-RevId: 185780685
David G. Andersen [Thu, 15 Feb 2018 02:56:47 +0000 (18:56 -0800)]
Fix integer overflow in BMP decoder by making the checks in DecodeBmp
more stringent. Add fuzzer to improve the robustness of the decoder
in the future.
PiperOrigin-RevId: 185780111
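The kind of stricter size validation described can be sketched as follows; this is only an illustration of the overflow-check idea (the bound, the names, and the details differ from the actual C++ DecodeBmp code):

```python
MAX_DIMENSION = 1 << 20  # illustrative bound, not the actual limit used

def safe_image_byte_count(width, height, channels):
    """Reject sizes whose product would overflow a signed 32-bit int.

    Python ints don't overflow, so the product is bound-checked explicitly;
    in C++ the equivalent check must happen before the multiplication can
    wrap around."""
    if not (0 < width <= MAX_DIMENSION and 0 < height <= MAX_DIMENSION):
        raise ValueError("dimension out of range")
    total = width * height * channels
    if total > 2**31 - 1:
        raise ValueError("image too large")
    return total

print(safe_image_byte_count(640, 480, 3))  # 921600
```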
Sanjoy Das [Thu, 15 Feb 2018 02:40:31 +0000 (18:40 -0800)]
Internal-only change.
PiperOrigin-RevId: 185778955
Guangda Lai [Thu, 15 Feb 2018 02:38:44 +0000 (18:38 -0800)]
Fix the build hdrs dependency for gpu_id. Reported by #16976.
PiperOrigin-RevId: 185778815
A. Unique TensorFlower [Thu, 15 Feb 2018 01:51:17 +0000 (17:51 -0800)]
Extract ReadOutput method to test runner
PiperOrigin-RevId: 185773920
Yangzihao Wang [Thu, 15 Feb 2018 01:28:18 +0000 (17:28 -0800)]
Add env-var to specify whether to use CUDNN_BATCHNORM_SPATIAL_PERSISTENT for cudnn batchnorm.
PiperOrigin-RevId: 185771595
Allen Lavoie [Thu, 15 Feb 2018 01:04:15 +0000 (17:04 -0800)]
Removes odd stack traces from trying to delete things that aren't
resources. Or at least provides a more informative error message.
A guess is that these came from constructing SummaryWriter(None).
PiperOrigin-RevId: 185768814
A. Unique TensorFlower [Thu, 15 Feb 2018 00:34:29 +0000 (16:34 -0800)]
Remove dynamic shape check from Bernoulli.
PiperOrigin-RevId: 185764927
A. Unique TensorFlower [Thu, 15 Feb 2018 00:16:20 +0000 (16:16 -0800)]
Test to show how TOCO fails with mismatched shapes in strided_slice.
PiperOrigin-RevId: 185762074
A. Unique TensorFlower [Thu, 15 Feb 2018 00:01:26 +0000 (16:01 -0800)]
Add dedicated code for the print function instead of wrapping it generically to py_func.
For now, we keep tf.Print disabled until we can find a way to test it. This might require launching the compiled code in a Python subprocess.
PiperOrigin-RevId: 185759599
Chris Leary [Wed, 14 Feb 2018 23:37:15 +0000 (15:37 -0800)]
[XLA:python] Add ability to set result layouts via Python API.
PiperOrigin-RevId: 185755948
Tatiana Shpeisman [Wed, 14 Feb 2018 23:19:11 +0000 (15:19 -0800)]
Build error fix: make allocator_ a VisitableAllocator* instead of an Allocator*. Allocator does not have the AddAllocVisitor() and AddFreeVisitor() methods.
PiperOrigin-RevId: 185753346
Peter Hawkins [Wed, 14 Feb 2018 22:49:23 +0000 (14:49 -0800)]
[TF:XLA] Add a hook to allow reshaping of TensorFlow variables when storing them in their XLA representation.
PiperOrigin-RevId: 185748660