Dan Moldovan [Wed, 2 May 2018 02:05:39 +0000 (19:05 -0700)]
Minor refactor: establish some operator naming conventions and apply them, so that the interface is a bit more consistent.
PiperOrigin-RevId: 195034691
Patrick Nguyen [Wed, 2 May 2018 02:02:10 +0000 (19:02 -0700)]
Make the CRF work when sequence_lengths are int32.
PiperOrigin-RevId: 195034218
Sanjoy Das [Wed, 2 May 2018 01:46:31 +0000 (18:46 -0700)]
[XLA:CPU] Re-use the same llvm::GlobalVariable for identical literals
This isn't necessary today, but it will be after an optimization change I'm
about to make.
LLVM has a constant merging pass too, but one of the motivations here is to
avoid the LLVM compile time overhead of having many large arrays in the IR.
PiperOrigin-RevId: 195032900
A. Unique TensorFlower [Wed, 2 May 2018 00:59:59 +0000 (17:59 -0700)]
Internal change
PiperOrigin-RevId: 195028221
A. Unique TensorFlower [Wed, 2 May 2018 00:57:02 +0000 (17:57 -0700)]
Internal change.
PiperOrigin-RevId: 195027918
Patrick Nguyen [Wed, 2 May 2018 00:48:36 +0000 (17:48 -0700)]
Re-apply CL 194140820, which reverts #18251 (convolution change).
PiperOrigin-RevId: 195027049
Mustafa Ispir [Wed, 2 May 2018 00:21:24 +0000 (17:21 -0700)]
Test fix.
PiperOrigin-RevId: 195023740
Allen Lavoie [Tue, 1 May 2018 23:53:51 +0000 (16:53 -0700)]
Sharding for tensorflow/contrib/timeseries/python/timeseries/state_space_models:structural_ensemble_test
PiperOrigin-RevId: 195019968
Rachel Lim [Tue, 1 May 2018 23:52:24 +0000 (16:52 -0700)]
Fix wrongly ordered lines
PiperOrigin-RevId: 195019769
Benoit Steiner [Tue, 1 May 2018 23:38:23 +0000 (16:38 -0700)]
Avoid making a copy of the graph needlessly
PiperOrigin-RevId: 195017837
A. Unique TensorFlower [Tue, 1 May 2018 23:38:19 +0000 (16:38 -0700)]
Adds logistic_regression_head.
PiperOrigin-RevId: 195017830
A. Unique TensorFlower [Tue, 1 May 2018 23:33:03 +0000 (16:33 -0700)]
Check for overflow in shape calculation.
PiperOrigin-RevId: 195017114
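As a sketch of the issue being guarded against, here is a plain-Python analogue of an overflow-checked element-count computation (a hypothetical helper; the actual check lives in the C++ shape-calculation code):

```python
def checked_num_elements(dims, limit=2**63 - 1):
    """Multiply shape dimensions, failing instead of silently overflowing int64."""
    n = 1
    for d in dims:
        if d < 0:
            raise ValueError("dimension must be non-negative, got %d" % d)
        if d != 0 and n > limit // d:
            # n * d would exceed the limit; report it rather than wrap around
            raise OverflowError("element count exceeds int64 range")
        n *= d
    return n
```

An unchecked product of two 2**40 dimensions would silently wrap in int64 arithmetic; the guard above raises instead.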
A. Unique TensorFlower [Tue, 1 May 2018 23:20:47 +0000 (16:20 -0700)]
Allow `warm_start_from` argument to be a SavedModel path.
PiperOrigin-RevId: 195015356
A. Unique TensorFlower [Tue, 1 May 2018 23:17:21 +0000 (16:17 -0700)]
[TF:XLA] Separate on-host and on-device shape and layout in HloModule.
Previously, only one layout was stored with an HLO module. This CL allows HLO passes to modify the on-device layouts without affecting the on-host layout (provided by the client).
PiperOrigin-RevId: 195014875
A. Unique TensorFlower [Tue, 1 May 2018 22:47:27 +0000 (15:47 -0700)]
Implementation of the fully-connected TFLite Op using the symmetric quantization.
PiperOrigin-RevId: 195010312
A. Unique TensorFlower [Tue, 1 May 2018 22:47:26 +0000 (15:47 -0700)]
Go: Update generated wrapper functions for TensorFlow ops.
PiperOrigin-RevId: 195010310
Benoit Steiner [Tue, 1 May 2018 22:19:42 +0000 (15:19 -0700)]
Fixed some outdated comments
PiperOrigin-RevId: 195006088
A. Unique TensorFlower [Tue, 1 May 2018 22:01:22 +0000 (15:01 -0700)]
Minor JNI performance improvement.
PiperOrigin-RevId: 195002949
A. Unique TensorFlower [Tue, 1 May 2018 22:00:20 +0000 (15:00 -0700)]
Making ids unique in nn.embedding_lookup_sparse. This helps to reduce RPC calls for looking up the embeddings when there are repeated ids in the batch.
PiperOrigin-RevId: 195002785
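The effect of uniquifying can be sketched in plain Python; `fetch` below is a hypothetical stand-in for the RPC-backed per-id embedding fetch:

```python
def lookup_with_unique_ids(ids, lookup):
    """Deduplicate ids, fetch each distinct id once, scatter results back."""
    unique = sorted(set(ids))
    index = {v: i for i, v in enumerate(unique)}
    fetched = [lookup(v) for v in unique]      # one remote fetch per distinct id
    return [fetched[index[v]] for v in ids]    # restore the original order

table = {1: [0.1], 2: [0.2], 7: [0.7]}
calls = []

def fetch(i):
    calls.append(i)                            # count simulated RPC calls
    return table[i]

out = lookup_with_unique_ids([2, 1, 2, 7, 1], fetch)
```

Five requested ids with repeats trigger only three fetches, which is the RPC saving the change is after.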
Mark Heffernan [Tue, 1 May 2018 21:52:40 +0000 (14:52 -0700)]
Fix bug in peak buffer accounting in buffer assignment.
Buffer assignment keeps track of the set of logical buffers which are live at
the point of peak memory usage for each allocation. Previously colocated
buffers were not properly accounted for. This CL addresses this problem.
PiperOrigin-RevId: 195001567
Patrick Nguyen [Tue, 1 May 2018 21:28:36 +0000 (14:28 -0700)]
Merge changes from github.
PiperOrigin-RevId: 194997009
A. Unique TensorFlower [Tue, 1 May 2018 21:27:33 +0000 (14:27 -0700)]
Make tower-local variables non-trainable even with the default
DistributionStrategy.
PiperOrigin-RevId: 194996819
A. Unique TensorFlower [Tue, 1 May 2018 21:04:59 +0000 (14:04 -0700)]
Enable checkpointless eval and predict for tf.estimator.
PiperOrigin-RevId: 194993191
Mark Daoust [Tue, 1 May 2018 20:54:34 +0000 (13:54 -0700)]
Update community/swift
PiperOrigin-RevId: 194991305
Shashi Shekhar [Tue, 1 May 2018 20:44:58 +0000 (13:44 -0700)]
Update schema.
PiperOrigin-RevId: 194989704
A. Unique TensorFlower [Tue, 1 May 2018 20:34:39 +0000 (13:34 -0700)]
Relax the stringent memory allocator constraints in AssignOp if a Grappler graph analysis determines it to be safe. This will allow Assign to reuse the input buffer to initialize the variable in many cases.
PiperOrigin-RevId: 194988134
A. Unique TensorFlower [Tue, 1 May 2018 20:15:53 +0000 (13:15 -0700)]
Collective Ops Part 5
Distributed-mode implementations of DeviceResolverInterface
and ParamResolverInterface. Extend Worker interface with
new methods in support of these interfaces.
This change is part of a series of changes introducing infrastructure
for collective ops and initial implementations of reduction and broadcast.
PiperOrigin-RevId: 194984585
Sanjoy Das [Tue, 1 May 2018 20:07:03 +0000 (13:07 -0700)]
Open source infeed test
PiperOrigin-RevId: 194983270
Priya Gupta [Tue, 1 May 2018 20:01:41 +0000 (13:01 -0700)]
Add utility to auto shard a dataset pipeline in the appropriate place by locating the file readers and sharding their input files.
PiperOrigin-RevId: 194982311
Peter Hawkins [Tue, 1 May 2018 19:56:29 +0000 (12:56 -0700)]
Add a pointer from Device to its owning DeviceMgr.
Allow remote function execution on TPU devices.
PiperOrigin-RevId: 194981511
A. Unique TensorFlower [Tue, 1 May 2018 19:39:52 +0000 (12:39 -0700)]
Implements matrix multiply-accumulate for linear no-offset (aka symmetric) quantizer.
PiperOrigin-RevId: 194978865
Yuefeng Zhou [Tue, 1 May 2018 19:24:38 +0000 (12:24 -0700)]
Add device_util.resolve method which merges with current device as well.
PiperOrigin-RevId: 194976633
Benoit Steiner [Tue, 1 May 2018 19:17:48 +0000 (12:17 -0700)]
Simplified shape inference.
PiperOrigin-RevId: 194975603
RJ Ryan [Tue, 1 May 2018 19:02:59 +0000 (12:02 -0700)]
Improve shape inference for tf.contrib.signal.frame.
PiperOrigin-RevId: 194972934
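For reference, the static shape that frame inference produces along the framed axis can be sketched in plain Python (assuming no padding, i.e. the pad_end=False case):

```python
def framed_shape(n, frame_length, frame_step):
    """Shape of framing an axis of length n: (num_frames, frame_length)."""
    if n < frame_length:
        return (0, frame_length)               # not even one complete frame
    return ((n - frame_length) // frame_step + 1, frame_length)
```

For example, a length-10 axis framed with frame_length=4 and frame_step=2 yields 4 frames of 4 samples each.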
A. Unique TensorFlower [Tue, 1 May 2018 18:52:43 +0000 (11:52 -0700)]
Boosted trees: support indicator column.
PiperOrigin-RevId: 194971229
Asim Shankar [Tue, 1 May 2018 18:52:04 +0000 (11:52 -0700)]
eager: Update sample notebooks with API changes in the last few releases.
Most notably:
- Avoid using tf.contrib.eager since equivalent functionality is available outside tf.contrib
- Datasets can be directly iterated on.
- Use tf.GradientTape instead of tf.contrib.eager.implicit_gradients
PiperOrigin-RevId: 194971115
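To illustrate the model behind tf.GradientTape, here is a toy reverse-mode tape in plain Python (not the TensorFlow API; a production tape also sorts the recorded ops topologically before replaying):

```python
class Var:
    """Toy autodiff value; records how it was produced on a 'tape'."""
    def __init__(self, value):
        self.value, self.grad, self.grad_fn = value, 0.0, None

    def __mul__(self, other):
        out = Var(self.value * other.value)
        out.grad_fn = lambda g: [(self, g * other.value), (other, g * self.value)]
        return out

    def __add__(self, other):
        out = Var(self.value + other.value)
        out.grad_fn = lambda g: [(self, g), (other, g)]
        return out

def backward(out):
    # Replay recorded ops in reverse, accumulating gradients into leaves.
    out.grad = 1.0
    stack = [out]
    while stack:
        node = stack.pop()
        if node.grad_fn is None:
            continue
        for parent, g in node.grad_fn(node.grad):
            parent.grad += g
            stack.append(parent)

x = Var(3.0)
y = x * x + x          # y = x**2 + x, so dy/dx = 2*x + 1 = 7 at x = 3
backward(y)
```

This is the "record forward ops, replay them backward" pattern that the notebooks switch to via tf.GradientTape.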
Gunhan Gulsoy [Tue, 1 May 2018 18:04:26 +0000 (11:04 -0700)]
Automated g4 rollback of changelist 194917415.
PiperOrigin-RevId: 194962702
A. Unique TensorFlower [Tue, 1 May 2018 16:07:57 +0000 (09:07 -0700)]
Fix crash in HloGraphDumper where it crashes on tuple shaped constants
The problem is that it tries to use special logic for 0-element constants, but the logic used to check the number of elements only supports array shapes.
PiperOrigin-RevId: 194945246
A. Unique TensorFlower [Tue, 1 May 2018 14:56:08 +0000 (07:56 -0700)]
Preventing RemoveTrivialBinary from removing broadcasts.
PiperOrigin-RevId: 194937001
A. Unique TensorFlower [Tue, 1 May 2018 10:25:26 +0000 (03:25 -0700)]
Protocol buffer classes now list their fields in dir(cls)
PiperOrigin-RevId: 194917415
Sanjoy Das [Tue, 1 May 2018 07:42:56 +0000 (00:42 -0700)]
[XLA:CPU] Open source some tests.
PiperOrigin-RevId: 194903752
A. Unique TensorFlower [Tue, 1 May 2018 02:18:40 +0000 (19:18 -0700)]
Update ops-related pbtxt files.
PiperOrigin-RevId: 194883351
A. Unique TensorFlower [Tue, 1 May 2018 01:51:48 +0000 (18:51 -0700)]
Move LinearOperatorKronecker and LinearOperatorBlockDiag to core.
PiperOrigin-RevId: 194881237
A. Unique TensorFlower [Tue, 1 May 2018 01:41:36 +0000 (18:41 -0700)]
[XLA] Redesign: dump HloSnapshot at the point where it used to dump the SessionModule.
PiperOrigin-RevId: 194880385
A. Unique TensorFlower [Tue, 1 May 2018 01:05:37 +0000 (18:05 -0700)]
Internal change.
PiperOrigin-RevId: 194877173
Yifei Feng [Tue, 1 May 2018 01:01:23 +0000 (18:01 -0700)]
Remove proto header import from core/framework/tracking_allocator.h
The goal is to make kernels mostly independent of proto headers, which will let us lock down our .so import.
PiperOrigin-RevId: 194876569
A. Unique TensorFlower [Tue, 1 May 2018 00:46:35 +0000 (17:46 -0700)]
Go: Update generated wrapper functions for TensorFlow ops.
PiperOrigin-RevId: 194874988
A. Unique TensorFlower [Tue, 1 May 2018 00:41:33 +0000 (17:41 -0700)]
[XLA] Redesign: migrate tensorflow/compiler/tf2xla, tensorflow/compiler/aot:
- xla::ComputationBuilder -> xla::XlaBuilder
- xla::ComputationDataHandle -> xla::XlaOp
- xla::Computation -> xla::XlaComputation
- xla::CompileOnlyClient::AotComputationInstance -> xla::CompileOnlyClient::AotXlaComputationInstance
- xla::SessionModule -> xla::HloSnapshot
PiperOrigin-RevId: 194874462
Jiri Simsa [Tue, 1 May 2018 00:38:38 +0000 (17:38 -0700)]
[tf.data] Adding an experimental `group_by_reducer` transformation which groups elements of an input pipeline by a key, applies a reduce function to elements of each group "on-the-fly", and outputs the results once all input elements have been processed.
PiperOrigin-RevId: 194874087
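An eager plain-Python analogue of the transformation (not the tf.data API itself) makes the on-the-fly grouping concrete:

```python
def group_by_reducer(elements, key_fn, init_fn, reduce_fn, finalize_fn):
    """Group a stream by key, reducing each group incrementally."""
    state = {}
    for e in elements:
        k = key_fn(e)
        if k not in state:
            state[k] = init_fn()               # initial state for a new group
        state[k] = reduce_fn(state[k], e)      # fold the element in immediately
    return {k: finalize_fn(s) for k, s in state.items()}

# Group integers by parity and sum each group on the fly.
out = group_by_reducer(range(10),
                       key_fn=lambda x: x % 2,
                       init_fn=lambda: 0,
                       reduce_fn=lambda s, e: s + e,
                       finalize_fn=lambda s: s)
```

Because each element is folded into its group's state as it arrives, only one reduction state per key is kept, never the full groups.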
A. Unique TensorFlower [Tue, 1 May 2018 00:14:50 +0000 (17:14 -0700)]
Internal cleanup.
PiperOrigin-RevId: 194871141
Alexandre Passos [Tue, 1 May 2018 00:11:40 +0000 (17:11 -0700)]
Faster reduce_logsoftmax (especially in eager) and bugfixes in broadcast_to
PiperOrigin-RevId: 194870645
A. Unique TensorFlower [Mon, 30 Apr 2018 23:43:14 +0000 (16:43 -0700)]
Small fix to prevent a crash if the delegate has not implemented FreeBufferHandle.
PiperOrigin-RevId: 194866595
Yuefeng Zhou [Mon, 30 Apr 2018 23:12:33 +0000 (16:12 -0700)]
Add MultiNodeDataset and MultiNodeIterator which are intended to work for multi-node distribution strategy.
PiperOrigin-RevId: 194862215
Bixia Zheng [Mon, 30 Apr 2018 23:11:38 +0000 (16:11 -0700)]
[XLA] Change the TF2XLA bridge to perform F16 reduction using F32 data type.
Add test cases checking that reduce sum for bfloat16 and float16 doesn't lose too much precision.
PiperOrigin-RevId: 194862078
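A plain-Python demonstration of why reducing F16 data with a wider accumulator matters, emulating IEEE binary16 with the struct module's 'e' format:

```python
import struct

def to_f16(x):
    """Round-trip a float through IEEE binary16."""
    return struct.unpack('e', struct.pack('e', x))[0]

vals = [to_f16(0.1)] * 4096              # 0.1 in f16 is 0.0999755859375

acc16 = 0.0
for v in vals:
    acc16 = to_f16(acc16 + v)            # f16 accumulator: stalls once the
                                         # addend drops below half an ulp
acc32 = sum(vals)                        # wider accumulator: ~409.5
```

The f16 running sum gets stuck around 256 while the wide-precision sum reaches the true total, which is exactly the precision loss the new test cases bound.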
Yifei Feng [Mon, 30 Apr 2018 22:56:26 +0000 (15:56 -0700)]
Improve error message for pip_smoke_test.
PiperOrigin-RevId: 194859591
A. Unique TensorFlower [Mon, 30 Apr 2018 22:41:28 +0000 (15:41 -0700)]
Do not allocate memory for literal as it will be allocated later.
PiperOrigin-RevId: 194857422
A. Unique TensorFlower [Mon, 30 Apr 2018 22:19:00 +0000 (15:19 -0700)]
Enhancements to GRAPHVIZ_DOT output:
- edge weights added to encourage straighter main data-flow
- line thickness proportional to log(data_size)
- set global parameter "nslimit" to prevent excessive layout time for difficult graphs
PiperOrigin-RevId: 194854051
A. Unique TensorFlower [Mon, 30 Apr 2018 21:56:13 +0000 (14:56 -0700)]
Implement unary chain hoisting optimization for Concat, Split, and SplitV.
For Concat, hoist prefix chains of unary ops before concatenation, e.g. rewrite
  Concat({Cos(Exp(a)), Cos(Exp(b)), Cos(Exp(c))})
into
  Cos(Exp(Concat({a, b, c}))).
For Split/SplitV, hoist unary postfix chains before the split, e.g. rewrite
  [Cos(Exp(y)) for y in Split(x)]
into
  [y for y in Split(Cos(Exp(x)))].
The new optimization is off by default.
PiperOrigin-RevId: 194850318
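The Concat rewrite is valid because elementwise unary chains commute with concatenation; a quick plain-Python check of that equivalence:

```python
import math

def unary_chain(xs):
    """Apply Cos(Exp(.)) elementwise, standing in for a chain of unary ops."""
    return [math.cos(math.exp(v)) for v in xs]

a, b, c = [0.1], [0.2, 0.3], [0.4]

before = unary_chain(a) + unary_chain(b) + unary_chain(c)  # three chain invocations
after = unary_chain(a + b + c)                             # one invocation after hoisting
```

Each element sees the identical sequence of operations either way, so the results match exactly while the hoisted form runs the chain once on a larger input.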
Eli Bendersky [Mon, 30 Apr 2018 21:39:25 +0000 (14:39 -0700)]
Add XLA logo to its documentation page
PiperOrigin-RevId: 194847599
Alexandre Passos [Mon, 30 Apr 2018 21:34:01 +0000 (14:34 -0700)]
Do not cast int64 to int32 in Keras embedding lookups.
When working on the GPU with TensorFlow, int64s are often more efficient, as int32s will be copied back and forth to the host quite a bit.
PiperOrigin-RevId: 194846629
Dimitris Vardoulakis [Mon, 30 Apr 2018 21:28:46 +0000 (14:28 -0700)]
[TF:XLA] Fix an unexpected memory leak in hlo_graph_dumper_test.
PiperOrigin-RevId: 194845792
Petros Mol [Mon, 30 Apr 2018 21:26:08 +0000 (14:26 -0700)]
Removing an obsolete TODO
PiperOrigin-RevId: 194845376
A. Unique TensorFlower [Mon, 30 Apr 2018 21:14:43 +0000 (14:14 -0700)]
Push down const inputs into the function of specialized functions.
PiperOrigin-RevId: 194843380
Jiri Simsa [Mon, 30 Apr 2018 21:08:29 +0000 (14:08 -0700)]
[tf.data] Adding support for `tf.SparseTensor` into `tf.contrib.data.scan()`
PiperOrigin-RevId: 194842266
A. Unique TensorFlower [Mon, 30 Apr 2018 20:51:34 +0000 (13:51 -0700)]
Extend SDCAOptimizer functionality to prune negative indices (the default value for OOV with tf.feature_column.FeatureColumn, sparse / categorical).
PiperOrigin-RevId: 194839178
Shashi Shekhar [Mon, 30 Apr 2018 20:50:17 +0000 (13:50 -0700)]
Fix a bug in profiler.
PiperOrigin-RevId: 194838948
Joshua V. Dillon [Mon, 30 Apr 2018 20:34:46 +0000 (13:34 -0700)]
Cleanup handling of non-Tensor valued event_ndims in Bijector.
PiperOrigin-RevId: 194836408
HyoukJoong Lee [Mon, 30 Apr 2018 19:49:33 +0000 (12:49 -0700)]
Fix device assignment in xla/service/service.cc to build the assignment based on
the provided device handles rather than using the default assignment.
PiperOrigin-RevId: 194829761
Tom Hennigan [Mon, 30 Apr 2018 19:47:35 +0000 (12:47 -0700)]
Fix typos in tf.GradientTape documentation.
PiperOrigin-RevId: 194829506
Igor Saprykin [Mon, 30 Apr 2018 19:41:12 +0000 (12:41 -0700)]
When a mirrored variable is fetched in cross-tower mode, fetch its primary variable.
This prevents errors like
ValueError: Fetch argument MirroredVariable({'/job:localhost/replica:0/task:0/device:GPU:0': <tf.Variable 'global_step:0' shape=() dtype=int64>, '/job:localhost/replica:0/task:0/device:GPU:1': <tf.Variable 'global_step/replica_1:0' shape=() dtype=int64>}) cannot be interpreted as a Tensor. (Device /job:localhost/replica:0/task:0/device:CPU:0 not found in ['/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1'] (current device ))
I ran distribute/examples/resnet with and without the change and it fixed the problem.
PiperOrigin-RevId: 194828672
Sanjoy Das [Mon, 30 Apr 2018 19:33:21 +0000 (12:33 -0700)]
[TF:XLA] Bump open source llvm revision to r331173
PiperOrigin-RevId: 194827639
Benoit Steiner [Mon, 30 Apr 2018 19:13:00 +0000 (12:13 -0700)]
Use the default rewriter config instead of a custom one
PiperOrigin-RevId: 194824761
A. Unique TensorFlower [Mon, 30 Apr 2018 19:01:35 +0000 (12:01 -0700)]
Fix bugs in AssignOp:
1. Releasing the unique_ptr would "leak" a TensorBuffer refcount.
2. The output shape is defined by rhs, not lhs.
PiperOrigin-RevId: 194822802
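Point 2 can be illustrated with a minimal plain-Python stand-in (a hypothetical Var class, not the TF kernel): the result of an assign must take its shape from the right-hand side, not from the variable's previous value.

```python
class Var:
    """Minimal stand-in for a resizable variable."""
    def __init__(self, value):
        self.value = list(value)

    def assign(self, rhs):
        self.value = list(rhs)   # output takes its shape (here: length) from rhs
        return self.value

v = Var([1, 2, 3])               # previous shape: 3 elements
out = v.assign([4, 5])           # result shape must follow rhs: 2 elements
```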
A. Unique TensorFlower [Mon, 30 Apr 2018 18:51:03 +0000 (11:51 -0700)]
-Miscellaneous code clean-up
PiperOrigin-RevId: 194821201
Yifei Feng [Mon, 30 Apr 2018 18:26:52 +0000 (11:26 -0700)]
Add --keep_going flag to bazel query in pip_smoke_test to work around bazel query's inability to handle select statements.
PiperOrigin-RevId: 194816816
Dustin Tran [Mon, 30 Apr 2018 18:14:51 +0000 (11:14 -0700)]
Add snippet illustrating discretized logistic mixture for WaveNet.
Currently, the example manually centers the bins in order to capture "rounding" intervals and not "ceiling" intervals. In the future, we may simplify the example by expanding QuantizedDistribution with a binning argument.
PiperOrigin-RevId: 194814662
Mark Daoust [Mon, 30 Apr 2018 18:01:54 +0000 (11:01 -0700)]
Switch install get_started link
PiperOrigin-RevId: 194811871
Ayush Dubey [Mon, 30 Apr 2018 17:36:00 +0000 (10:36 -0700)]
Prepare nodes that will be allocated using ScopedAllocator.
This includes changes to Executor that (1) set scope_id on nodes that are
decorated with _scoped_allocator attribute, (2) mark such nodes to never
forward input.
PiperOrigin-RevId: 194807086
A. Unique TensorFlower [Mon, 30 Apr 2018 16:46:59 +0000 (09:46 -0700)]
Remove manifest_merger that is being removed from Bazel 0.13.0.
PiperOrigin-RevId: 194798790
Tom Hennigan [Mon, 30 Apr 2018 16:31:54 +0000 (09:31 -0700)]
Clarify return type for defun as zero or more `tf.Tensor`s.
PiperOrigin-RevId: 194796621
Alexandre Passos [Mon, 30 Apr 2018 16:29:31 +0000 (09:29 -0700)]
Fixes to tape gradient for providing outputs and having multiple targets.
PiperOrigin-RevId: 194796304
A. Unique TensorFlower [Mon, 30 Apr 2018 13:59:23 +0000 (06:59 -0700)]
Adding a depthwise convolution kernel op (with label 'cudnn_grouped_convolution') which forwards to cuDNN grouped convolutions.
PiperOrigin-RevId: 194780352
A. Unique TensorFlower [Mon, 30 Apr 2018 11:21:09 +0000 (04:21 -0700)]
Cleaning up tracing code.
PiperOrigin-RevId: 194768567
Russell Power [Sun, 29 Apr 2018 22:37:12 +0000 (15:37 -0700)]
Keras: Supply `maximum_iterations` to the TF backend when possible.
PiperOrigin-RevId: 194723199
Russell Power [Sun, 29 Apr 2018 22:30:22 +0000 (15:30 -0700)]
Add support for a clean checkpoint and shutdown in response to a termination notice.
PiperOrigin-RevId: 194722985
Sherry Moore [Sun, 29 Apr 2018 16:56:16 +0000 (09:56 -0700)]
Added del_hparam(), the counterpart of add_hparam().
PiperOrigin-RevId: 194711291
Richard Wei [Sun, 29 Apr 2018 06:51:28 +0000 (23:51 -0700)]
Update the Swift for TensorFlow community page.
PiperOrigin-RevId: 194687897
Dimitris Vardoulakis [Sun, 29 Apr 2018 05:19:22 +0000 (22:19 -0700)]
[TF:XLA]
- Require a module config when creating an HloModule.
- All tests using HloTestBase create a module using CreateNewModule.
PiperOrigin-RevId: 194684585
A. Unique TensorFlower [Sun, 29 Apr 2018 02:47:42 +0000 (19:47 -0700)]
Internally rewrite RevBlock to use @custom_gradient
PiperOrigin-RevId: 194679657
Asim Shankar [Sat, 28 Apr 2018 18:31:12 +0000 (11:31 -0700)]
Java: Release 1.8.0
PiperOrigin-RevId: 194663800
Brennan Saeta [Sat, 28 Apr 2018 17:51:32 +0000 (10:51 -0700)]
[tf.data] Use core::ScopedUnref to avoid resource leakage.
If, for whatever reason, iterator_resource->set_iterator did not return Status::OK(), we would leak a reference on the iterator_resource. With this change, we won't leak the resource.
PiperOrigin-RevId: 194662412
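core::ScopedUnref is a C++ RAII guard; the same exit-safety idea can be sketched in Python with a context manager (an analogy, not TF code): the Unref happens on every exit path, including early error returns.

```python
import contextlib

class Resource:
    def __init__(self):
        self.refs = 1          # caller holds one reference

    def unref(self):
        self.refs -= 1

@contextlib.contextmanager
def scoped_unref(res):
    try:
        yield res
    finally:
        res.unref()            # runs on success and on error paths alike

r = Resource()
try:
    with scoped_unref(r):
        raise RuntimeError("set_iterator failed")   # simulated error path
except RuntimeError:
    pass
```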
A. Unique TensorFlower [Sat, 28 Apr 2018 17:40:49 +0000 (10:40 -0700)]
Allow not specifying eval_spec when evaluation is not necessarily run.
PiperOrigin-RevId: 194661814
Mingsheng Hong [Sat, 28 Apr 2018 15:55:08 +0000 (08:55 -0700)]
This is Part 1 of Swift<->TF sends/recvs: support sending tensors from TF to
Swift via direct session.
The changes are:
1. Added an experimental TF C API TF_DequeueNamedTensor() to consume the queued
tensors from a dequeue op. One use case is for the Swift host program to consume
tensors sent by TF, where the queue is a Fifo queue managed by TF.
Enqueuing tensors is done by running an enqueue op in a graph. The queued tensors are not persisted, and will be lost if the process/machine dies. The queue has a bounded capacity, to prevent the producer from getting unboundedly ahead of the consumer.
While the caller of TF_DequeueNamedTensor() could have run the Fifo dequeue op directly, the extra level of indirection provided by this API allows us to more easily switch the queuing impl to another mechanism. If and once we stabilize on the Fifo queue based impl, we can remove this API.
2. Added a new S4TF runtime API _TFCReceiveTensorHandle() that receives a tensor
via TF_DequeueNamedTensor().
3. To support tensor receives in the host program, taught PartitionCloner in TFPartition to insert SIL code to call _TFCReceiveTensorHandle().
4. To support tensor sends in the accelerator program, taught TFGraphLowering to generate QueueEnqueueV2 nodes in the TF graphs, with appropriate control dependence to make sure these nodes get executed.
a) The enqueue produces no output tensor, and is executed only for its side
effect. To ensure it is executed properly, control dependence is wired up. The
general design is: before a TF_Function (can be a top level function or the body
function of a while op) produces an output tensor OT, make OT control dependent
on the enqueue op, so that enqueue gets run before the function returns.
b) If tensor send occurs in a while loop body, the body logic currently gets
lowered in 3 places: the while op cond function, the while op body function, and
the ops at the same level as the while op itself (for running the last loop
iteration). In this case, the correct TFGraph lowering is to run the enqueue in
the last 2 out of the 3 places above.
After this CL, the dual versions of the above (dequeuing via an op, and
enqueuing via C API) will be added.
PiperOrigin-RevId: 194658511
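The bounded-capacity queueing described in point 1 is the classic producer/consumer pattern; a plain-Python analogy using queue.Queue (not the TF Fifo queue itself) shows how the bound keeps the producer from running unboundedly ahead:

```python
import queue
import threading

q = queue.Queue(maxsize=2)   # bounded: put() blocks when 2 items are queued

def producer():
    for i in range(5):
        q.put(i)             # blocks until the consumer makes room
    q.put(None)              # sentinel: no more items

received = []
t = threading.Thread(target=producer)
t.start()
while True:
    item = q.get()           # consumer dequeues in Fifo order
    if item is None:
        break
    received.append(item)
t.join()
```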
Anna R [Sat, 28 Apr 2018 07:13:09 +0000 (00:13 -0700)]
Removing hidden_ops.txt file.
PiperOrigin-RevId: 194637892
A. Unique TensorFlower [Sat, 28 Apr 2018 06:35:42 +0000 (23:35 -0700)]
Fix kernel creation bug caused by constant folding always using the CPU.
PiperOrigin-RevId: 194636076
A. Unique TensorFlower [Sat, 28 Apr 2018 05:57:36 +0000 (22:57 -0700)]
Add test case on compiling dense layer node with XLA.
PiperOrigin-RevId: 194634563
Patrick Nguyen [Sat, 28 Apr 2018 04:58:17 +0000 (21:58 -0700)]
Properly export recurrent in contrib.
The following symbols are available:
- tf.contrib.recurrent.bidirectional_functional_rnn
- tf.contrib.recurrent.functional_rnn
- tf.contrib.recurrent.Recurrent
PiperOrigin-RevId: 194632138
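What a functional RNN computes can be sketched in plain Python: fold a cell function over the time steps, threading state through (illustrative only; the tf.contrib.recurrent ops build this as a symbolic loop over Tensors):

```python
def functional_rnn(cell, inputs, init_state):
    """Fold `cell` over the time axis, carrying state step to step."""
    state, outputs = init_state, []
    for x in inputs:
        out, state = cell(x, state)
        outputs.append(out)
    return outputs, state

# Toy accumulator cell: output and new state are both the running sum.
cell = lambda x, s: (x + s, x + s)
outputs, final = functional_rnn(cell, [1, 2, 3], 0)
```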
Sanjoy Das [Sat, 28 Apr 2018 03:06:35 +0000 (20:06 -0700)]
HLO profiling for tfcompile.
This CL extends the --xla_hlo_profile knob to tfcompile. tf_library rules can
now set enable_xla_hlo_profiling to True to:
- Have the generated code update per-HLO profile counters as it executes.
- Have tfcompile generate and serialize an instance of HloProfilePrinterData with a compiled model that can be used to pretty-print the collected profile counters.
PiperOrigin-RevId: 194627272
A. Unique TensorFlower [Sat, 28 Apr 2018 02:23:16 +0000 (19:23 -0700)]
Add internal uint b stats to TfOpStats.
PiperOrigin-RevId: 194625155
Sanjoy Das [Sat, 28 Apr 2018 01:41:27 +0000 (18:41 -0700)]
Split up ElementalIrEmitter::MakeElementGenerator into smaller functions; NFC
PiperOrigin-RevId: 194622198