A. Unique TensorFlower [Mon, 19 Mar 2018 16:28:58 +0000 (09:28 -0700)]
Remove a few unused #includes
PiperOrigin-RevId:
189593522
A. Unique TensorFlower [Mon, 19 Mar 2018 14:29:45 +0000 (07:29 -0700)]
Adds missing protobuf dep to tf.contrib.data ops.
PiperOrigin-RevId:
189580464
A. Unique TensorFlower [Mon, 19 Mar 2018 13:27:00 +0000 (06:27 -0700)]
Adding non-linear image warping ops to tf.contrib.image
New ops are:
tf.contrib.image.sparse_image_warp, tf.contrib.image.dense_image_warp, and tf.contrib.image.interpolate_spline.
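A minimal usage sketch of the new ops (argument names, shapes, and the tuple return of sparse_image_warp are assumptions based on the op names, not taken from this log):
    # Illustrative only: argument names and shapes below are assumptions.
    import numpy as np
    import tensorflow as tf

    image = tf.constant(np.random.rand(1, 64, 64, 3), dtype=tf.float32)

    # dense_image_warp: warp by a per-pixel flow field of shape [batch, height, width, 2].
    flow = tf.constant(np.random.rand(1, 64, 64, 2), dtype=tf.float32)
    warped_dense = tf.contrib.image.dense_image_warp(image, flow)

    # sparse_image_warp: move a few control points and spline-interpolate the rest.
    src_points = tf.constant([[[10.0, 10.0], [30.0, 40.0]]])  # [batch, num_points, 2]
    dst_points = tf.constant([[[12.0, 12.0], [28.0, 38.0]]])
    warped_sparse, dense_flows = tf.contrib.image.sparse_image_warp(
        image, src_points, dst_points)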
PiperOrigin-RevId:
189574951
A. Unique TensorFlower [Mon, 19 Mar 2018 12:01:05 +0000 (05:01 -0700)]
Add new helpers to HLO sharding.
PiperOrigin-RevId:
189569053
A. Unique TensorFlower [Sun, 18 Mar 2018 22:18:06 +0000 (15:18 -0700)]
Add precision and recall metrics to _BinaryLogisticHeadWithSigmoidCrossEntropyLoss.
This change makes most of the binary classifiers in the canned estimators provide precision and recall metrics during evaluation. This matches the behavior of the canned estimators defined in the deprecated tf.contrib.learn.estimator.
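For example (a sketch; the exact metric key names are an assumption, not confirmed by this log), a canned binary classifier's evaluate() call should now report these metrics alongside accuracy and AUC:
    # Sketch: the metric keys 'precision' and 'recall' are assumed names.
    import tensorflow as tf

    feature_columns = [tf.feature_column.numeric_column('x')]
    classifier = tf.estimator.LinearClassifier(feature_columns=feature_columns)

    def input_fn():
        features = {'x': tf.constant([[1.0], [2.0], [3.0], [4.0]])}
        labels = tf.constant([[0], [0], [1], [1]])
        return features, labels

    classifier.train(input_fn=input_fn, steps=10)
    metrics = classifier.evaluate(input_fn=input_fn, steps=1)
    print(metrics.get('precision'), metrics.get('recall'))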
PiperOrigin-RevId:
189522420
A. Unique TensorFlower [Sun, 18 Mar 2018 14:48:30 +0000 (07:48 -0700)]
Fix build
PiperOrigin-RevId:
189506945
A. Unique TensorFlower [Sat, 17 Mar 2018 18:21:02 +0000 (11:21 -0700)]
Normally tf2xla (autoclustering, jit_scope and rewrite) relies on graph optimization
passes to outline subgraphs. The XLA device itself only sees Compute() calls for
_XlaLaunch ops. All other ops are registered with a dummy op factory that just
prints an error.
This patch adds an alternative, selected at registration time, that disables
default graph optimization and instead registers a non-dummy op implementation.
This op implementation compiles the op "on demand"; it generates a fake graph containing
_Arg and _Retval nodes and calls into the XlaCompiler code as usual.
This allows the device to be used as a "normal" TensorFlow device, as well as from
Eager mode, at the expense of performance.
Later additions will add the ability to create traces to amortize kernel launch overhead,
and the ability to combine op-by-op/tracing and autoclustering with jit_scope annotations.
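A rough sketch of the usage this enables (whether an XLA device such as "XLA_CPU" is registered depends on the build, so treat the device name as an assumption):
    # Sketch, assuming a TensorFlow build with XLA devices registered.
    import tensorflow as tf
    import tensorflow.contrib.eager as tfe

    tfe.enable_eager_execution()

    with tf.device('/device:XLA_CPU:0'):  # assumed device name
        a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
        b = tf.matmul(a, a)  # each op is compiled "on demand" rather than via clustering
    print(b)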
PiperOrigin-RevId:
189463593
Peter Hawkins [Sat, 17 Mar 2018 16:57:03 +0000 (09:57 -0700)]
[XLA] Fix points-to set calculation in HLO ListScheduler.
Previously the list scheduler considered that an instruction used only the buffers defined by its operands. This is inaccurate in the presence of aliasing: an instruction may potentially use anything in the points-to set of the operand, including buffers defined by an ancestor of an operand. Change to use the full points-to set instead.
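A conceptual sketch of the fix in plain Python (names here are illustrative and do not correspond to the actual C++ scheduler classes): the buffers an instruction may use are the union of its operands' full points-to sets, not just the buffers the operands define.
    # Conceptual illustration only; not the real XLA data structures.
    def buffers_used(instruction, points_to, defined_by):
        old_estimate = set()  # previous behavior: operands' own definitions only
        new_estimate = set()  # fixed behavior: everything an operand may alias
        for operand in instruction.operands:
            old_estimate |= defined_by[operand]
            new_estimate |= points_to[operand]  # may include ancestors' buffers
        return new_estimate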
PiperOrigin-RevId:
189460681
Raghuraman Krishnamoorthi [Sat, 17 Mar 2018 02:26:56 +0000 (19:26 -0700)]
Update docs for fake quant to reflect support for bitwidths from 2 to 16 inclusive.
PiperOrigin-RevId:
189426173
Derek Murray [Sat, 17 Mar 2018 00:10:16 +0000 (17:10 -0700)]
Automated g4 rollback of changelist 189228094
PiperOrigin-RevId:
189416074
Anna R [Sat, 17 Mar 2018 00:05:51 +0000 (17:05 -0700)]
Add user_ops.my_fact to the new TensorFlow API.
PiperOrigin-RevId:
189415577
Allen Lavoie [Sat, 17 Mar 2018 00:00:17 +0000 (17:00 -0700)]
TFTS: Allow cold-starting from SavedModels
This means the model starts from its default start state and is fed a series (filtering) to warm up its state. This warmed-up state can then be used to make predictions.
Includes some shape fiddling with the receiver_fn to make feeding state optional, and a new signature for cold-starting that uses the model's default start state.
Also includes some shape fiddling to make feeding strings to SavedModels work more smoothly in the cold-start part of the LSTM example. I was squeezing out the last dimension of "scalar" exogenous features; now I'm leaving them, which matches the placeholder generation logic.
PiperOrigin-RevId:
189414869
Skye Wanderman-Milne [Fri, 16 Mar 2018 23:27:02 +0000 (16:27 -0700)]
Relax "_output_shapes" error checking in C++ graph importer.
This is to make the behavior in line with what Python's
import_graph_def does. It'd probably be better to raise an error, but
some people are already depending on the import_graph_def behavior in
order to import modified ops.
PiperOrigin-RevId:
189411025
Alexandre Passos [Fri, 16 Mar 2018 23:10:46 +0000 (16:10 -0700)]
Always sets self._built in tfe.metrics when build() is called.
PiperOrigin-RevId:
189408745
Benoit Steiner [Fri, 16 Mar 2018 23:07:06 +0000 (16:07 -0700)]
Deleted dead code
PiperOrigin-RevId:
189408200
A. Unique TensorFlower [Fri, 16 Mar 2018 23:00:11 +0000 (16:00 -0700)]
Increase kMaxEagerTensorParentSize to 64.
PiperOrigin-RevId:
189407226
A. Unique TensorFlower [Fri, 16 Mar 2018 22:55:14 +0000 (15:55 -0700)]
Remove identity transpose nodes.
PiperOrigin-RevId:
189406518
Yuefeng Zhou [Fri, 16 Mar 2018 22:36:15 +0000 (15:36 -0700)]
Consolidate all moving_average updates in batchnorm into one implementation.
PiperOrigin-RevId:
189404070
Benoit Steiner [Fri, 16 Mar 2018 22:31:10 +0000 (15:31 -0700)]
Don't fail when optimizing the gradients of noinline functions
PiperOrigin-RevId:
189403170
Chris Leary [Fri, 16 Mar 2018 22:13:25 +0000 (15:13 -0700)]
[XLA] Fix forward for HLO profiling test, explicitly set profiling preference.
PiperOrigin-RevId:
189400869
Benoit Steiner [Fri, 16 Mar 2018 22:10:39 +0000 (15:10 -0700)]
Don't inline functions in the grappler item builder since this part of the code doesn't
support custom ops. Instead we will rely on the function optimizer.
PiperOrigin-RevId:
189400462
Skye Wanderman-Milne [Fri, 16 Mar 2018 21:39:08 +0000 (14:39 -0700)]
Downgrade run-after-mutation error to a log warning.
This is to ease the transition to the C API. Some tests mutate the
graph after running it but currently pass. This error was meant to guard
against existing behavior, so it's not a regression to make it a warning
instead for now.
PiperOrigin-RevId:
189395472
A. Unique TensorFlower [Fri, 16 Mar 2018 20:43:12 +0000 (13:43 -0700)]
Fixed a typo.
PiperOrigin-RevId:
189386932
Jianwei Xie [Fri, 16 Mar 2018 20:11:36 +0000 (13:11 -0700)]
Estimate prediction size and error out if it is larger than the protobuf limit.
PiperOrigin-RevId:
189382429
A. Unique TensorFlower [Fri, 16 Mar 2018 20:00:31 +0000 (13:00 -0700)]
Move if_op kernel to //third_party/tensorflow/compiler/tf2xla/kernels
PiperOrigin-RevId:
189381067
Yuanzhong Xu [Fri, 16 Mar 2018 19:56:26 +0000 (12:56 -0700)]
[XLA] BF16 conversion folding for CRS; remove no-op conversions in propagation.
If CRS has tuple output, it needs special handling in conversion folding.
BF16 propagation could result in BF16->BF16 conversions, which can be removed.
PiperOrigin-RevId:
189380578
A. Unique TensorFlower [Fri, 16 Mar 2018 19:55:18 +0000 (12:55 -0700)]
BREAKING_CHANGE: Remove SigmoidCentered bijector.
- SoftmaxCentered works solely on vector events and supports broadcasting.
- Sigmoid exists for event_ndims=0 cases.
PiperOrigin-RevId:
189380445
Chris Leary [Fri, 16 Mar 2018 19:34:34 +0000 (12:34 -0700)]
[XLA:python] Plumb hlo_profile flag.
PiperOrigin-RevId:
189377860
A. Unique TensorFlower [Fri, 16 Mar 2018 19:12:52 +0000 (12:12 -0700)]
Remove empty buckets in latency_stats, as they make the report unreadable. This is also consistent with how SummaryHistoOp in summary_op.cc works.
PiperOrigin-RevId:
189375113
Derek Murray [Fri, 16 Mar 2018 19:11:18 +0000 (12:11 -0700)]
[tf.data] Fix typo in `Dataset.prefetch()` docstring.
PiperOrigin-RevId:
189374898
Sanjoy Das [Fri, 16 Mar 2018 18:53:14 +0000 (11:53 -0700)]
[TF:XLA] Bump open source llvm revision to r327616
PiperOrigin-RevId:
189372065
Allen Lavoie [Fri, 16 Mar 2018 18:47:26 +0000 (11:47 -0700)]
Allow variable lists to change when saving repeatedly using tfe.Checkpoint
For example, this allows saving a checkpoint before slot variables have been created.
When graph building, restore() is still bound to a frozen set of variables.
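A sketch of the scenario this enables when executing eagerly (TF 1.x contrib.eager API; details such as the optimizer and the exact point where slot variables appear are simplified assumptions):
    # Sketch, assuming TF 1.x eager APIs.
    import tensorflow as tf
    import tensorflow.contrib.eager as tfe

    tfe.enable_eager_execution()

    v = tfe.Variable(1.0)
    opt = tf.train.AdamOptimizer(learning_rate=0.1)
    ckpt = tfe.Checkpoint(v=v, optimizer=opt)

    ckpt.save('/tmp/ckpt')           # Adam slot variables do not exist yet
    opt.minimize(lambda: v * v)      # first step creates the slot variables
    ckpt.save('/tmp/ckpt')           # saving again now includes the new variables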
PiperOrigin-RevId:
189371256
A. Unique TensorFlower [Fri, 16 Mar 2018 18:45:42 +0000 (11:45 -0700)]
Set the number of threads in the Java interpreter constructor so that Conv kernels can be selected properly.
Remove setNumThreads from the Java API, as its behavior is ambiguous.
PiperOrigin-RevId:
189370770
A. Unique TensorFlower [Fri, 16 Mar 2018 18:29:00 +0000 (11:29 -0700)]
Fixing constant output arrays by inserting synthetic reshapes.
PiperOrigin-RevId:
189368237
Frank Perbet [Fri, 16 Mar 2018 17:48:32 +0000 (10:48 -0700)]
Automated g4 rollback of changelist 189346024
PiperOrigin-RevId:
189361083
A. Unique TensorFlower [Fri, 16 Mar 2018 17:29:11 +0000 (10:29 -0700)]
Fix naming BatchNorm_Fold//batch_norm_correction -> BatchNorm_Fold/batch_norm_correction.
PiperOrigin-RevId:
189358090
Michael Case [Fri, 16 Mar 2018 17:23:41 +0000 (10:23 -0700)]
Fix sed invocation in copy_binary.py script for Mac.
The script was explicitly calling /bin/sed, which was not being found on macOS Kokoro builds. The "sed" call has been removed from the script.
PiperOrigin-RevId:
189357296
A. Unique TensorFlower [Fri, 16 Mar 2018 17:10:16 +0000 (10:10 -0700)]
Added StrContains, StartsWith, and EndsWith functions to str_util.h.
Marked the contains, starts_with, ends_with, and consume StringPiece methods as deprecated.
This will allow tensorflow::StringPiece to be more easily replaced with absl::string_view (once the deprecated methods are removed), as absl::string_view does not provide those methods.
PiperOrigin-RevId:
189355316
A. Unique TensorFlower [Fri, 16 Mar 2018 16:54:27 +0000 (09:54 -0700)]
Move CreateSubProcess from test.h to subprocess.h
PiperOrigin-RevId:
189353033
A. Unique TensorFlower [Fri, 16 Mar 2018 16:53:20 +0000 (09:53 -0700)]
Eliminate use of grpc::CoreCodegenInterface, which is marked as an internal interface
PiperOrigin-RevId:
189352860
A. Unique TensorFlower [Fri, 16 Mar 2018 16:42:45 +0000 (09:42 -0700)]
Doc grammar and style fixes for macOS installation.
PiperOrigin-RevId:
189351224
A. Unique TensorFlower [Fri, 16 Mar 2018 16:32:43 +0000 (09:32 -0700)]
Upgrade gRPC version used in OSS TensorFlow
PiperOrigin-RevId:
189349737
Frank Perbet [Fri, 16 Mar 2018 16:02:48 +0000 (09:02 -0700)]
Make the graph_editor C-API friendly: always construct ops with their inputs.
PiperOrigin-RevId:
189346024
A. Unique TensorFlower [Fri, 16 Mar 2018 15:59:02 +0000 (08:59 -0700)]
Fix a typo in docstring for index_table_from_tensor.
PiperOrigin-RevId:
189345585
A. Unique TensorFlower [Fri, 16 Mar 2018 14:54:08 +0000 (07:54 -0700)]
- Adds support for shared embedding layers (e.g. in RNNs), and shared Conv2D layers.
- Some minor refactoring of internal structure in fisher_blocks and layer_collection
PiperOrigin-RevId:
189338874
Chris Leary [Fri, 16 Mar 2018 14:40:49 +0000 (07:40 -0700)]
[XLA:python] Fix a bug where returning a status would not incref Py_None.
PiperOrigin-RevId:
189337748
A. Unique TensorFlower [Fri, 16 Mar 2018 12:53:38 +0000 (05:53 -0700)]
Clean up and clarify the 'install from source' page.
Remove references to CUDA and cuDNN versions for 'install from source' from the 'install Linux' documentation. The 'install from source' page should be the authoritative one for this.
PiperOrigin-RevId:
189328669
A. Unique TensorFlower [Fri, 16 Mar 2018 10:45:32 +0000 (03:45 -0700)]
[tf2xla] Introduce XlaTensorInfo
XlaTensorInfo is side-band data for Tensors. It can be used to store
information about Tensors that cannot be stored in the Tensor itself.
The XlaTensorInfos are managed by the XlaTensorInfoManager, which is an
Allocator; this allows it to release the XlaTensorInfos when the
underlying Tensor is released. Looking up an XlaTensorInfo for a Tensor
requires a hash-table lookup, so this implementation keeps that lookup
off the fast path and only performs it when an XlaTensorInfo is
actually required.
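A conceptual sketch of the side-band pattern described above, in plain Python (the real manager is C++ and hooks into the Allocator; names here are illustrative):
    # Conceptual illustration only; not the actual tf2xla classes.
    class SideBandInfoManager(object):
        """Stores extra per-tensor info in a table keyed by the tensor's buffer."""

        def __init__(self):
            self._infos = {}  # buffer id -> side-band info

        def get_or_create(self, buffer_id):
            # Hash-table lookup; callers only do this when the info is required,
            # keeping it off the fast path.
            return self._infos.setdefault(buffer_id, {})

        def deallocate(self, buffer_id):
            # Invoked when the underlying buffer is released, so the side-band
            # info never outlives the tensor it describes.
            self._infos.pop(buffer_id, None)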
PiperOrigin-RevId:
189319553
Suharsh Sivakumar [Fri, 16 Mar 2018 06:56:10 +0000 (23:56 -0700)]
Don't put quantization variables in EMA collection by default.
PiperOrigin-RevId:
189302082
A. Unique TensorFlower [Fri, 16 Mar 2018 05:28:26 +0000 (22:28 -0700)]
Propagate min-max when resolving constant Reshape op.
PiperOrigin-RevId:
189296593
Gunhan Gulsoy [Fri, 16 Mar 2018 04:39:15 +0000 (21:39 -0700)]
Automated g4 rollback of changelist 189234789
PiperOrigin-RevId:
189293458
Sanjoy Das [Fri, 16 Mar 2018 04:33:44 +0000 (21:33 -0700)]
Implement out of bounds behavior for gather in the HLO evaluator
This makes the OOB behavior of gather in the HLO evaluator consistent with
DynamicSlice while we figure out the semantics we want long term.
The HLO->HLO gather expander inherits the wrapping behavior of dynamic-slice
because it lowers the gather ops to loops of dynamic slices.
PiperOrigin-RevId:
189293175
A. Unique TensorFlower [Fri, 16 Mar 2018 01:22:09 +0000 (18:22 -0700)]
Internal cleanup.
PiperOrigin-RevId:
189278596
A. Unique TensorFlower [Fri, 16 Mar 2018 00:46:39 +0000 (17:46 -0700)]
Go: Update generated wrapper functions for TensorFlow ops.
PiperOrigin-RevId:
189274859
Benoit Steiner [Thu, 15 Mar 2018 23:52:23 +0000 (16:52 -0700)]
Symbolic gradient optimization
PiperOrigin-RevId:
189268327
A. Unique TensorFlower [Thu, 15 Mar 2018 23:29:14 +0000 (16:29 -0700)]
Add initial host transfer ops for XLA, supporting computation on the host during a compiled computation.
PiperOrigin-RevId:
189265297
Alexandre Passos [Thu, 15 Mar 2018 22:58:44 +0000 (15:58 -0700)]
Clarifying when it is possible to use a tape while it is still active.
PiperOrigin-RevId:
189260773
Sanjoy Das [Thu, 15 Mar 2018 22:47:09 +0000 (15:47 -0700)]
Rename CreateXyzHlo utilities to MakeXyzHlo as discussed on cr/188968478; NFC
The rationale here is that MakeXyzHlo is less likely to be confused with
HloInstruction::CreateXyz and we already have a convention of using a "Make"
prefix for ergonomic factory functions.
PiperOrigin-RevId:
189259036
A. Unique TensorFlower [Thu, 15 Mar 2018 22:44:56 +0000 (15:44 -0700)]
Automated g4 rollback of changelist 189231636
PiperOrigin-RevId:
189258641
A. Unique TensorFlower [Thu, 15 Mar 2018 22:29:39 +0000 (15:29 -0700)]
Pass error reporter to file copy allocation,
and avoid loading model from file twice
PiperOrigin-RevId:
189256489
A. Unique TensorFlower [Thu, 15 Mar 2018 22:16:38 +0000 (15:16 -0700)]
PIE binaries that depend on static libraries usually have text relocations in the final executable, which causes link warnings/errors (different linkers behave differently). The optimal way to fix this would be to link the binary against shared libraries; however, the libraries are NVIDIA-proprietary and not all of them have a shared version (for example, cuda_9_0/lib64/libculibos.a).
PiperOrigin-RevId:
189254317
Sanjoy Das [Thu, 15 Mar 2018 21:53:52 +0000 (14:53 -0700)]
Change ParseAndVerifyModule to take a StringPiece; NFC
PiperOrigin-RevId:
189250126
Pete Warden [Thu, 15 Mar 2018 21:45:34 +0000 (14:45 -0700)]
Check for very large chunk sizes in WAV decoding
Change how chunk sizes larger than 2GB are handled. They are stored as
unsigned int32s, so there are lots of ways for conversions to confuse the
decoding logic. The new behavior is to fail with an error, since such
large WAV files are not common and are unsupported by many readers.
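A minimal sketch of this kind of guard in plain Python (the threshold and error handling are assumptions, not the actual decoder's behavior): chunk sizes come out of the RIFF header as unsigned 32-bit values, so anything that does not fit in a signed int32 is rejected up front.
    # Illustrative only; not the TensorFlow WAV decoder.
    import struct

    MAX_CHUNK_BYTES = 2**31 - 1  # stay within signed int32 range

    def read_chunk_header(f):
        header = f.read(8)
        if len(header) < 8:
            raise ValueError('Truncated WAV chunk header')
        chunk_id, chunk_size = struct.unpack('<4sI', header)  # size is an unsigned int32
        if chunk_size > MAX_CHUNK_BYTES:
            raise ValueError('WAV chunk too large: %d bytes' % chunk_size)
        return chunk_id, chunk_size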
PiperOrigin-RevId:
189248857
A. Unique TensorFlower [Thu, 15 Mar 2018 21:40:12 +0000 (14:40 -0700)]
Adds a GPU kernel registration for PlaceholderWithDefault, so that it can be used
with a registered GPU device without requiring soft placement.
PiperOrigin-RevId:
189248024
Shivani Agrawal [Thu, 15 Mar 2018 21:38:25 +0000 (14:38 -0700)]
[Checkpointable] Make EagerIterator checkpointable.
Use object-based save/restore to make the dataset/iterator checkpointable in eager mode; this could potentially be extended to graph mode as well.
PiperOrigin-RevId:
189247720
A. Unique TensorFlower [Thu, 15 Mar 2018 21:36:53 +0000 (14:36 -0700)]
Internal cleanup.
PiperOrigin-RevId:
189247461
A. Unique TensorFlower [Thu, 15 Mar 2018 21:27:04 +0000 (14:27 -0700)]
Fix the HLO alias analysis and copy insertion to cope with the new kConditional instruction.
PiperOrigin-RevId:
189245979
Allen Lavoie [Thu, 15 Mar 2018 20:52:16 +0000 (13:52 -0700)]
Fix a bug which caused slot variables to be shared when executing eagerly
Threads the uid() name of a ResourceVariable through to the Optimizer. Tests
that slot variables are unique in several ways.
Previously ResourceVariable._shared_name was the Optimizer's slot key for a
variable, which for tfe.Variable() is just the non-uniquified name.
PiperOrigin-RevId:
189240115
A. Unique TensorFlower [Thu, 15 Mar 2018 20:46:52 +0000 (13:46 -0700)]
Go: Update generated wrapper functions for TensorFlow ops.
PiperOrigin-RevId:
189239161
A. Unique TensorFlower [Thu, 15 Mar 2018 20:19:16 +0000 (13:19 -0700)]
Update ops-related pbtxt files.
PiperOrigin-RevId:
189234789
Surya Bhupatiraju [Thu, 15 Mar 2018 20:02:23 +0000 (13:02 -0700)]
Add mean-only FID and diagonal-covariance-only FID variants to TFGAN.
PiperOrigin-RevId:
189232299
Yu-Cheng Ling [Thu, 15 Mar 2018 20:01:28 +0000 (13:01 -0700)]
TFLite iOS Makefile: Disable SSE4.1 for x86_64 build.
PiperOrigin-RevId:
189232136
Jacques Pienaar [Thu, 15 Mar 2018 19:58:08 +0000 (12:58 -0700)]
Merge changes from github.
PiperOrigin-RevId:
189231636
Mustafa Ispir [Thu, 15 Mar 2018 19:54:42 +0000 (12:54 -0700)]
Small code readability improvement.
PiperOrigin-RevId:
189231130
Anna R [Thu, 15 Mar 2018 19:43:01 +0000 (12:43 -0700)]
Remove underscore prefix from broadcast_gradient_args op.
PiperOrigin-RevId:
189229472
Derek Murray [Thu, 15 Mar 2018 19:33:10 +0000 (12:33 -0700)]
Warn when creating a `tf.InteractiveSession` if another is active.
Fixes #13202 (as far as possible without breaking backwards compatibility).
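A sketch of the situation that now triggers the warning (the exact warning text is not shown in this log):
    import tensorflow as tf

    sess1 = tf.InteractiveSession()  # installs itself as the default session
    sess2 = tf.InteractiveSession()  # a second active InteractiveSession now logs a warning
    sess1.close()
    sess2.close()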
PiperOrigin-RevId:
189228094
Benjamin Kramer [Thu, 15 Mar 2018 19:15:07 +0000 (12:15 -0700)]
Clarify that in_nodes and in_edges includes control edges.
PiperOrigin-RevId:
189225717
A. Unique TensorFlower [Thu, 15 Mar 2018 19:06:24 +0000 (12:06 -0700)]
Automated g4 rollback of changelist 189110935
PiperOrigin-RevId:
189224522
Benjamin Kramer [Thu, 15 Mar 2018 18:53:00 +0000 (11:53 -0700)]
[TF:XLA] Don't follow control dependencies for const analysis.
Those inputs are not actually required to be constants.
PiperOrigin-RevId:
189222308
A. Unique TensorFlower [Thu, 15 Mar 2018 18:24:46 +0000 (11:24 -0700)]
Prevent fusing activation functions that would remove non-discardable arrays.
PiperOrigin-RevId:
189217360
Bixia Zheng [Thu, 15 Mar 2018 18:22:58 +0000 (11:22 -0700)]
[XLA:CPU] Fix the parallel task assignment to not parallelize dot operations.
The IR emitter currently generates incorrect code for parallelized dot
operations.
Add test cases to check for dot operation parallelization.
PiperOrigin-RevId:
189216963
A. Unique TensorFlower [Thu, 15 Mar 2018 18:19:32 +0000 (11:19 -0700)]
Broadcast Sub and Div from #17123 except for quantization.
PiperOrigin-RevId:
189216312
A. Unique TensorFlower [Thu, 15 Mar 2018 18:16:21 +0000 (11:16 -0700)]
Refactor and enable loop optimizer tests.
PiperOrigin-RevId:
189215781
Alexandre Passos [Thu, 15 Mar 2018 18:03:42 +0000 (11:03 -0700)]
Internal change.
PiperOrigin-RevId:
189213372
Allen Lavoie [Thu, 15 Mar 2018 17:54:42 +0000 (10:54 -0700)]
Really delete old checkpoints this time.
Follows up on cl/188187349, which fixed checkpoint management for tf.train.Saver
when executing eagerly. However, I was recreating the tf.train.Saver objects on each
save, so tfe.Checkpoint and friends did not benefit from that change.
Keeps the same tf.train.Saver around when executing eagerly. This limits object
graph mutations just like when graph building; if there are complaints I can
assign to Saver._var_list instead, since eager tf.train.Saver is not specialized
to its var_list argument.
PiperOrigin-RevId:
189211552
Justin Lebar [Thu, 15 Mar 2018 17:39:55 +0000 (10:39 -0700)]
[XLA] Add --xla_hlo_profile_last_run flag to replay_computation.
When using replay_computation for profiling, you usually only want to
do one or two warmup runs and then profile the last run of your model.
This flag makes that possible.
PiperOrigin-RevId:
189208924
Akshay Agrawal [Thu, 15 Mar 2018 17:26:13 +0000 (10:26 -0700)]
TFE: Modify initialization of `ContextStack` to ensure eager context is kept.
When eager execution is enabled in the main thread, the fact that it was
enabled is propagated to subsequently created threads. This
change:
(1) ensures that the fact that eager was enabled is also propagated to the `ContextStack`, which is renamed to `_ContextSwitchStack` for clarity;
(2) adds a `_ContextSwitchStack` object to `Context` as a member, removing the global `context_stack`.
PiperOrigin-RevId:
189206207
A. Unique TensorFlower [Thu, 15 Mar 2018 16:40:28 +0000 (09:40 -0700)]
Add gRPC service stub for TPUProfilerAnalysis.
PiperOrigin-RevId:
189198495
A. Unique TensorFlower [Thu, 15 Mar 2018 16:33:03 +0000 (09:33 -0700)]
Enable constant folding optimizations in loop bodies by copying constants across Enter nodes.
PiperOrigin-RevId:
189197514
A. Unique TensorFlower [Thu, 15 Mar 2018 16:07:41 +0000 (09:07 -0700)]
Expose setNumThreads in the Java API.
PiperOrigin-RevId:
189193847
A. Unique TensorFlower [Thu, 15 Mar 2018 15:37:21 +0000 (08:37 -0700)]
Disable Add/AddN rewrite (temp).
PiperOrigin-RevId:
189189997
A. Unique TensorFlower [Thu, 15 Mar 2018 15:09:08 +0000 (08:09 -0700)]
Removed the pointer to the model from the interpreter.
PiperOrigin-RevId:
189186901
A. Unique TensorFlower [Thu, 15 Mar 2018 13:56:23 +0000 (06:56 -0700)]
Add ability to use feature_columns in KMeans Estimator.
PiperOrigin-RevId:
189179304
Justin Lebar [Thu, 15 Mar 2018 09:22:17 +0000 (02:22 -0700)]
[SE] [XLA:GPU] Inform --xla_hlo_profile of the GPU's memory bandwidth.
Add a memory_bandwidth() property to StreamExecutor's DeviceDescription,
and use this in the GPU's --xla_hlo_profile.
PiperOrigin-RevId:
189157407
A. Unique TensorFlower [Thu, 15 Mar 2018 03:39:10 +0000 (20:39 -0700)]
Enable Add/AddN tree rewrite for symbolically equal shapes.
1) Replace a tree of Add/AddN ops with a single AddN if all shapes are symbolically equal.
2) Look up shape properties using GraphProperties instead of accessing Node attributes directly.
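As an illustration of the pattern this targets (a sketch of the graph shapes involved, not the optimizer code): a tree of adds over tensors whose shapes are symbolically equal can be collapsed into a single AddN.
    # Sketch: the nested adds below are the kind of tree Grappler can now collapse into one AddN.
    import tensorflow as tf

    a = tf.placeholder(tf.float32, [None, 128])
    b = tf.placeholder(tf.float32, [None, 128])
    c = tf.placeholder(tf.float32, [None, 128])
    d = tf.placeholder(tf.float32, [None, 128])

    tree = (a + b) + (c + d)       # Add(Add(a, b), Add(c, d)); shapes are symbolically equal
    flat = tf.add_n([a, b, c, d])  # the equivalent single AddN the rewrite produces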
PiperOrigin-RevId:
189131726
Mingsheng Hong [Thu, 15 Mar 2018 03:35:42 +0000 (20:35 -0700)]
Internal change.
PiperOrigin-RevId:
189131526
A. Unique TensorFlower [Thu, 15 Mar 2018 03:23:36 +0000 (20:23 -0700)]
Make hash.h a public header.
PiperOrigin-RevId:
189130768
Jacques Pienaar [Thu, 15 Mar 2018 03:07:38 +0000 (20:07 -0700)]
Add int64 to randomized tests.
PiperOrigin-RevId:
189129641
Allen Lavoie [Thu, 15 Mar 2018 02:03:28 +0000 (19:03 -0700)]
Checkpointable: Add logic for late-naming of SaveableObjects
Previously SaveableObjects returned by _gather_saveables_for_checkpoint ran into a sanity check for naming, and would otherwise have been saved under the wrong key. This change has _gather_saveables_for_checkpoint returning factories for SaveableObjects instead, which are either passed the object-based saving key or are called with no arguments when using the global name.
Variables didn't run into this, since the Saver was generating their names.
PiperOrigin-RevId:
189124512
David Majnemer [Thu, 15 Mar 2018 02:02:29 +0000 (19:02 -0700)]
[XLA] Add a test for F32 -> U32 conversion
PiperOrigin-RevId:
189124355