Mark Daoust [Wed, 23 May 2018 17:01:15 +0000 (10:01 -0700)]
Keep column order in make_csv_dataset.
PiperOrigin-RevId: 197742412
Mark Daoust [Wed, 23 May 2018 16:59:25 +0000 (09:59 -0700)]
Add a "--no_search_hints" flag to the api-docs generator.
PiperOrigin-RevId: 197742114
A. Unique TensorFlower [Wed, 23 May 2018 16:58:30 +0000 (09:58 -0700)]
PiperOrigin-RevId: 197741984
Patrick Nguyen [Wed, 23 May 2018 16:54:06 +0000 (09:54 -0700)]
Fix typo in error message.
PiperOrigin-RevId: 197741341
Bjarke Hammersholt Roune [Wed, 23 May 2018 16:44:39 +0000 (09:44 -0700)]
Quick fix for Kokoro breakage.
PiperOrigin-RevId: 197739982
A. Unique TensorFlower [Wed, 23 May 2018 16:20:12 +0000 (09:20 -0700)]
Add 'platform_' libraries in core/BUILD.
PiperOrigin-RevId: 197736600
A. Unique TensorFlower [Wed, 23 May 2018 16:16:52 +0000 (09:16 -0700)]
Support batch size > 1 in L2Normalization 8 bit quantized implementations.
PiperOrigin-RevId: 197736184
Peter Hawkins [Wed, 23 May 2018 13:45:12 +0000 (06:45 -0700)]
Add a method XlaTensor:ReleaseShapedBuffer() to relinquish the shaped buffer owned by an XlaTensor.
Add an equality operator for xla::ShapeIndexView.
PiperOrigin-RevId: 197716313
Peter Hawkins [Wed, 23 May 2018 12:19:00 +0000 (05:19 -0700)]
[TF:XLA:GPU] Relax test tolerance due to flakiness.
PiperOrigin-RevId: 197708758
Benoit Steiner [Wed, 23 May 2018 04:57:14 +0000 (21:57 -0700)]
Use the right attributes when creating placeholder nodes.
PiperOrigin-RevId: 197673355
A. Unique TensorFlower [Wed, 23 May 2018 02:07:13 +0000 (19:07 -0700)]
Internal Change
PiperOrigin-RevId: 197661636
Bjarke Hammersholt Roune [Wed, 23 May 2018 01:22:37 +0000 (18:22 -0700)]
Add interfaces to Compiler that are sufficient to implement a backend-independent offline auto-tuner for backend configurations of ops as well as automatic testing across candidate configurations.
Also add a simple Scanner class that is handy for parsing things.
PiperOrigin-RevId: 197657512
A. Unique TensorFlower [Wed, 23 May 2018 00:17:17 +0000 (17:17 -0700)]
Fix an issue when mixing sparse and dense features in the same model.
PiperOrigin-RevId: 197650140
A. Unique TensorFlower [Wed, 23 May 2018 00:16:44 +0000 (17:16 -0700)]
Add convolution with NHWC layout to stream executor.
PiperOrigin-RevId: 197650067
Sanjoy Das [Tue, 22 May 2018 23:36:22 +0000 (16:36 -0700)]
[TF:XLA] Bump open source llvm revision to r333002
PiperOrigin-RevId: 197644290
Yu-Cheng Ling [Tue, 22 May 2018 23:31:32 +0000 (16:31 -0700)]
Fix the LSTM test in TFLite.
PiperOrigin-RevId: 197643581
A. Unique TensorFlower [Tue, 22 May 2018 23:03:16 +0000 (16:03 -0700)]
Expose the new collective reduce and broadcast ops as non-public
python interface functions. Note that they are not yet fully
implemented; this change is to facilitate further development.
PiperOrigin-RevId: 197639372
Ruoxin Sang [Tue, 22 May 2018 22:51:17 +0000 (15:51 -0700)]
Always append the trailing slash when looking up or inserting a directory path in the stat cache.
PiperOrigin-RevId: 197637482
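The fix above amounts to normalizing a directory path before it touches the cache, so both spellings of the same directory share one key. A minimal sketch of that normalization (the helper name is illustrative, not the actual GCS filesystem code):

```python
def normalize_dir_key(path):
    """Ensure a directory path always carries a trailing slash.

    Without normalization, "gs://bucket/dir" and "gs://bucket/dir/"
    would be two distinct cache keys for the same directory.
    """
    return path if path.endswith("/") else path + "/"

cache = {}
cache[normalize_dir_key("gs://bucket/dir")] = "stat-entry"
# A lookup with either spelling now hits the same entry.
hit = cache.get(normalize_dir_key("gs://bucket/dir/"))
```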
Justine Tunney [Tue, 22 May 2018 22:30:02 +0000 (15:30 -0700)]
Remove reservoir sampling from SummaryDbWriter
PiperOrigin-RevId: 197634162
A. Unique TensorFlower [Tue, 22 May 2018 22:24:01 +0000 (15:24 -0700)]
Adds a kernel that checks whether a vector is zero or not.
PiperOrigin-RevId: 197633182
Dimitris Vardoulakis [Tue, 22 May 2018 22:01:15 +0000 (15:01 -0700)]
[TF:XLA] Add clarification to the DFS scheduler.
PiperOrigin-RevId: 197629355
Sanjoy Das [Tue, 22 May 2018 21:55:12 +0000 (14:55 -0700)]
Extract out common code and make things safer; NFC
RowMajorMatrixVectorProductEmitter and ColumnMajorMatrixVectorProductEmitter
both cache* the generated LLVM IR by keying off the dimensions of the operation,
the primitive type etc. Before this CL the code computing the cache key lived
separately from the GEMV emitters. This pattern introduces a risk that the GEMV
emitters will end up with some state not modeled in the cache key, resulting in
a subtle bug.
This CL reduces the risk by encapsulating the cache key generation and the input
configuration to the GEMV emitters in a single class.
* In the sense that two different dot operations with the same M,K,N will share
a single LLVM IR function body.
PiperOrigin-RevId: 197628423
A. Unique TensorFlower [Tue, 22 May 2018 21:52:36 +0000 (14:52 -0700)]
[TF:XLA] Add a helper to update HLO reachability.
This can be used if the user does not care if reachability changed after an
update.
PiperOrigin-RevId: 197628007
Dimitris Vardoulakis [Tue, 22 May 2018 21:39:47 +0000 (14:39 -0700)]
[TF:XLA] Roll back the functionality change of cl/197458260 to unbreak test.
PiperOrigin-RevId: 197625888
Nick Desaulniers [Tue, 22 May 2018 21:08:57 +0000 (14:08 -0700)]
[TF:XLA] make miscomparison error messages more readable
PiperOrigin-RevId: 197620560
Yuanzhong Xu [Tue, 22 May 2018 20:59:48 +0000 (13:59 -0700)]
[XLA] Skip BF16 output conversion folding when CRS is the root.
PiperOrigin-RevId: 197618934
A. Unique TensorFlower [Tue, 22 May 2018 20:49:08 +0000 (13:49 -0700)]
Collective Ops Part 7
Complete just enough of the core implementation to run
multi-device collectives locally within a single process.
Interfaces are still private and not available for general use.
PiperOrigin-RevId: 197617132
Derek Murray [Tue, 22 May 2018 20:14:18 +0000 (13:14 -0700)]
Move executor_test.cc to tensorflow/core/common_runtime/.
PiperOrigin-RevId: 197611583
Akshay Modi [Tue, 22 May 2018 19:46:30 +0000 (12:46 -0700)]
Fix memory leak when going from the fast path to the slow path in eager
Fixes #19385
PiperOrigin-RevId: 197607384
Jianwei Xie [Tue, 22 May 2018 19:36:35 +0000 (12:36 -0700)]
Detect unknown batch size in predictions dict
PiperOrigin-RevId: 197606059
Benjamin Kramer [Tue, 22 May 2018 19:34:51 +0000 (12:34 -0700)]
[XLA:GPU] Emit fused reduces from batchnorm expander
This is an intermediate step until we have working multi-output fusion. Once
we have it, this change should be reverted as it might interfere with fusion.
PiperOrigin-RevId: 197605814
Benjamin Kramer [Tue, 22 May 2018 18:52:51 +0000 (11:52 -0700)]
[XLA:GPU] Add lowering for input fusions with multiple reduce outputs
This is limited to reduces that have the same shapes and reduced dimensions.
Most of the change is making the individual emission code emit multiple reductions
in the same loop. This requires multi-output fusion to provide a speedup.
PiperOrigin-RevId: 197599248
A. Unique TensorFlower [Tue, 22 May 2018 18:44:52 +0000 (11:44 -0700)]
Actually return the value from train_and_evaluate.
PiperOrigin-RevId: 197597953
A. Unique TensorFlower [Tue, 22 May 2018 18:02:30 +0000 (11:02 -0700)]
* Remove the bias centering graph if it is turned off.
* Create consts once. Otherwise each time the constant is passed to an Op, a new Const op is created.
* Speed up the graph construction by using functions to build splits.
PiperOrigin-RevId: 197590220
Mustafa Ispir [Tue, 22 May 2018 17:42:31 +0000 (10:42 -0700)]
Adding stop request capability to CheckpointSaverListener. An example usage is stopping training based on evaluation metrics:
my_estimator = tf.estimator.DNNClassifier(...)
stopper = StopTrainingBasedOnEvaluateMetrics(my_estimator)
my_estimator.train(..., saving_listeners=[stopper])
where:
class StopTrainingBasedOnEvaluateMetrics(tf.train.CheckpointSaverListener):
  """A saver listener that runs evaluate with every checkpoint."""

  def __init__(self, estimator):
    self._estimator = estimator

  def after_save(self, session, global_step_value):
    eval_results = self._estimator.evaluate(...)
    if stop_if_started_overfitting(eval_results):  # user-defined check
      return True
PiperOrigin-RevId: 197586515
Akshay Agrawal [Tue, 22 May 2018 17:26:00 +0000 (10:26 -0700)]
Make init_scope preserve the inner device stack when lifting into a graph.
Eager execution doesn't implement device stacks and in particular it doesn't support device functions (which determine the device on a per-op basis), so in general it's not possible to do the same when lifting into the eager context.
PiperOrigin-RevId: 197583446
Dan Moldovan [Tue, 22 May 2018 16:43:06 +0000 (09:43 -0700)]
Special case the 'dict' call, which trips other mechanisms for built-ins.
PiperOrigin-RevId: 197576297
Benjamin Kramer [Tue, 22 May 2018 16:08:06 +0000 (09:08 -0700)]
[TF:XLA] Fix xla_interpreter_device build
PiperOrigin-RevId: 197571618
A. Unique TensorFlower [Tue, 22 May 2018 15:18:11 +0000 (08:18 -0700)]
Contributing guidelines, style guide and README updates
PiperOrigin-RevId: 197564905
A. Unique TensorFlower [Tue, 22 May 2018 15:14:49 +0000 (08:14 -0700)]
Update calls to addPassesToEmitFile
PiperOrigin-RevId: 197564506
A. Unique TensorFlower [Tue, 22 May 2018 15:12:41 +0000 (08:12 -0700)]
Fix a couple of broken links in the Swift For TensorFlow page.
PiperOrigin-RevId: 197564254
A. Unique TensorFlower [Tue, 22 May 2018 15:02:39 +0000 (08:02 -0700)]
Automated g4 rollback of changelist 197527651
PiperOrigin-RevId: 197562826
Benjamin Kramer [Tue, 22 May 2018 14:06:08 +0000 (07:06 -0700)]
[XLA:TF] Run buildifier on llvm.BUILD
Buildifier recently started sorting load args (https://github.com/bazelbuild/buildtools/commit/3ac5f85b22bc44820c041d0cacd3bc2ed54e7742), which causes diffs in the output.
PiperOrigin-RevId: 197556554
A. Unique TensorFlower [Tue, 22 May 2018 12:50:34 +0000 (05:50 -0700)]
[XLA] Optimize ShapeTree<T>
This optimizes ShapeTree quite significantly. In particular this optimizes for the common case of querying/iterating, copying and moving ShapeTrees.
* Allocate all ShapeTreeNodes inside a single, owned, vector. This reduces the number of memory allocations and improves cache performance.
* Instead of storing children nodes as unique_ptrs, store them as indices into the owning container's vector. This allows cheap copy-construction (a std::vector POD copy) and doesn't change the fast path (dereferencing a pointer is just as fast as dereferencing a base + offset).
* Instead of a unique_ptr<Shape>, use a shared_ptr<Shape>. This removes a load of copy-construction overhead at the cost of a shared_ptr over a unique_ptr (one extra allocation).
* Instead of computing ShapeIndexes on-demand in the iterators/ForEach*, precompute them during construction time. This adds a few more bytes per ShapeTree, but now we can...
* ... store a std::pair<ShapeIndex, T> as the ShapeTreeNode's data element. This allows us to provide a std::pair<K,V>&, STL-like interface from iterators without going through any of the previous unique_ptr hacks around storage lifetimes.
* Because we no longer need to iterate from the beginning to build up the ShapeIndex, we can now offer a ::find() function to return an iterator for a ShapeIndex in O(K) time. As the iteration order is guaranteed to be pre-order, this can be used (and will be, later) to speed up the fast-path of mutating a subtree of a ShapeTree from tf2xla::ExtractSubBuffers.
* Similarly because we now have a very standard, cheap STL interface with no performance cliffs, we can hopefully improve ShapedBuffer's copy and move constructors to be cheaper.
PiperOrigin-RevId: 197548717
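The storage scheme described above can be shown with a toy sketch: all nodes live in one owned vector, children are referenced by index rather than by pointer, and each node carries its precomputed index path, so copying the tree is a plain vector copy and find() walks only the path's depth. This is an illustrative Python model, not the C++ ShapeTree:

```python
class FlatTree:
    """Nodes stored in one vector; children referenced by index, not pointer."""

    def __init__(self):
        # Each node: {"path": tuple, "data": ..., "children": [indices]}
        self.nodes = []

    def add(self, parent, data):
        idx = len(self.nodes)
        if parent is None:
            path = ()  # root
        else:
            # Precompute the index path at construction time.
            path = self.nodes[parent]["path"] + (
                len(self.nodes[parent]["children"]),)
        self.nodes.append({"path": path, "data": data, "children": []})
        if parent is not None:
            self.nodes[parent]["children"].append(idx)
        return idx

    def find(self, path):
        # Follow child indices directly instead of scanning from the start.
        idx = 0
        for step in path:
            idx = self.nodes[idx]["children"][step]
        return self.nodes[idx]
```

If nodes are appended in pre-order, iterating `self.nodes` yields the pre-order traversal directly, which mirrors the iteration-order guarantee the entry mentions.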
A. Unique TensorFlower [Tue, 22 May 2018 09:27:45 +0000 (02:27 -0700)]
internal change
PiperOrigin-RevId: 197533162
A. Unique TensorFlower [Tue, 22 May 2018 09:21:30 +0000 (02:21 -0700)]
batch_util.h is generally useful so moved to util/ from kernels/ where it will be included in the pip package.
PiperOrigin-RevId: 197532524
A. Unique TensorFlower [Tue, 22 May 2018 08:35:36 +0000 (01:35 -0700)]
Convert the Pow op into something more recognizable, so we can apply further
optimizations.
PiperOrigin-RevId: 197527651
A. Unique TensorFlower [Tue, 22 May 2018 08:01:01 +0000 (01:01 -0700)]
Automated g4 rollback of changelist 197487461
PiperOrigin-RevId: 197523867
A. Unique TensorFlower [Tue, 22 May 2018 07:44:47 +0000 (00:44 -0700)]
Unify the cuda toolchain definition of gcc/nvcc and cuda-clang.
gcc >= 7 changes how it treats -pie [1]: passing -pie after -shared on the
command line is no longer possible. Given that the legacy way to configure
flags in the gcc/nvcc toolchain does not allow control over where the flags go
or how to provide -pie only when linking binaries, we can prevent future
breakage by also using the new feature mechanism for gcc/nvcc.
In addition to moving the gcc-specific workarounds in the toolchain to
cuda_configure.bzl, document them, so we don't need to rediscover them in the
future.
[1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=77464
PiperOrigin-RevId: 197522719
A. Unique TensorFlower [Tue, 22 May 2018 06:37:12 +0000 (23:37 -0700)]
Enable tpu.rewrite to work on XLA CPU/GPU backends.
PiperOrigin-RevId: 197517946
Justin Lebar [Tue, 22 May 2018 03:41:26 +0000 (20:41 -0700)]
[XLA:GPU] Implement trivial (one-replica) cross-replica-sum on XLA:GPU.
Also fix the CPU implementation to work in the case when there are
multiple operands to the cross-replica-sum op.
PiperOrigin-RevId: 197506311
A. Unique TensorFlower [Tue, 22 May 2018 03:27:53 +0000 (20:27 -0700)]
Update scan benchmarks to have a range of 16K-128K iterations. As of https://github.com/tensorflow/tensorflow/commit/5802096c267c805f6a69798aac10aefef759bb9f, TensorFlow Eager no longer exhibits quadratic behavior. The benchmark is still ~5x slower in eager mode vs. graph mode, and maybe slightly worse than linear:
   n    Graph Time (s)   Eager Time (s)   Ratio
-----------------------------------------------
 16K         0.35             1.8          5.1
 32K         0.64             3.6          5.6
 64K         1.1              7.2          6.5
128K         2.4             14.8          6.2
PiperOrigin-RevId: 197505257
Michael Kuperstein [Tue, 22 May 2018 03:06:39 +0000 (20:06 -0700)]
Internal Change
PiperOrigin-RevId: 197503560
Michael Case [Tue, 22 May 2018 02:45:21 +0000 (19:45 -0700)]
Internal Change
PiperOrigin-RevId: 197501805
Asim Shankar [Tue, 22 May 2018 02:30:52 +0000 (19:30 -0700)]
s/tfe.GradientTape/tf.GradientTape/
s/tfe.enable_eager_execution/tf.enable_eager_execution/
PiperOrigin-RevId: 197500751
Akshay Modi [Tue, 22 May 2018 01:53:54 +0000 (18:53 -0700)]
Improvements to util/nest.py and data/util/nest.py
Changes:
- Add a cache for type -> is_sequence to speed up Flatten/IsSequence
- Update data/util/nest.py flatten to use C Flatten
Before:
entry {
  name: "EagerLinearRegressionBenchmark.eager_train_cpu"
  iters: 2000
  wall_time: 1.91852378845
  extras {
    key: "examples_per_sec"
    value {
      double_value: 66717.9634521
    }
  }
}
After:
entry {
  name: "EagerLinearRegressionBenchmark.eager_train_cpu"
  iters: 2000
  wall_time: 1.74479198456
  extras {
    key: "examples_per_sec"
    value {
      double_value: 73361.1806638
    }
  }
}
PiperOrigin-RevId: 197497854
Frank Chen [Tue, 22 May 2018 01:14:30 +0000 (18:14 -0700)]
Adds support for specifying a discovery_service_url (via either a parameter or an environment variable) within TPUClusterResolver
PiperOrigin-RevId: 197494335
Austin Anderson [Tue, 22 May 2018 00:45:22 +0000 (17:45 -0700)]
Split generated_examples test into multiple test targets
PiperOrigin-RevId: 197490872
Smit Hinsu [Tue, 22 May 2018 00:42:15 +0000 (17:42 -0700)]
Introduce an option to allocate CUDA unified memory
PiperOrigin-RevId: 197490523
Justin Lebar [Tue, 22 May 2018 00:34:56 +0000 (17:34 -0700)]
[XLA] Two minor style-guide fixups.
- Fix TODO(b/123) style.
- Use a value type rather than an rvalue reference for a "sink" param.
PiperOrigin-RevId: 197489597
A. Unique TensorFlower [Tue, 22 May 2018 00:18:06 +0000 (17:18 -0700)]
Make the quantize_and_dequantize op use the full quantized range when possible.
PiperOrigin-RevId: 197487461
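Using the full quantized range means deriving the scale from the complete [qmin, qmax] span of the integer type rather than a narrower sub-range. A simplified sketch of that scale computation (not the op's actual kernel, which also handles range nudging and clamping):

```python
def full_range_scale(min_val, max_val, num_bits=8, signed=True):
    """Scale that maps [min_val, max_val] onto the full integer range."""
    if signed:
        qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1  # -128..127
    else:
        qmin, qmax = 0, 2 ** num_bits - 1  # 0..255
    return (max_val - min_val) / (qmax - qmin)

def quantize_and_dequantize(x, min_val, max_val, **kwargs):
    # Round-trip through the quantized grid; error is at most scale / 2.
    scale = full_range_scale(min_val, max_val, **kwargs)
    return round(x / scale) * scale
```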
Petros Mol [Tue, 22 May 2018 00:15:39 +0000 (17:15 -0700)]
Improves documentation of the labels and logits arguments in hinge loss methods.
PiperOrigin-RevId: 197487120
A. Unique TensorFlower [Tue, 22 May 2018 00:03:40 +0000 (17:03 -0700)]
Supports initializing an Interpreter with a direct ByteBuffer of nativeOrder()
that contains the bytes of a valid TFLite model.
PiperOrigin-RevId: 197485253
Saurabh Saxena [Mon, 21 May 2018 23:43:53 +0000 (16:43 -0700)]
Ensure that saving/restoring iterator in CheckpointInputPipelineHook is performed *after* the _DatasetInitializerHook has been run.
In the TPUEstimator the _DatasetInitializerHook is present in the
EstimatorSpec.training_hooks. Since these are executed after the `hooks`
passed to Estimator.train, the input pipeline checkpointing hook fails
because it finds an uninitialized iterator.
PiperOrigin-RevId: 197482609
Alexandre Passos [Mon, 21 May 2018 23:37:17 +0000 (16:37 -0700)]
Fixes issue with gradient tape when asking for the gradient of an intermediate tensor.
PiperOrigin-RevId: 197481473
A. Unique TensorFlower [Mon, 21 May 2018 23:30:42 +0000 (16:30 -0700)]
Improve error message in tensor.cc when IsAligned() test fails
by logging offending ptr value.
PiperOrigin-RevId: 197480534
Igor Saprykin [Mon, 21 May 2018 23:26:11 +0000 (16:26 -0700)]
Support a better interface for the single option case in combinations.py.
If there's only one combination for combination-based tests, it doesn't have to
be a list.
PiperOrigin-RevId: 197479773
A. Unique TensorFlower [Mon, 21 May 2018 23:14:10 +0000 (16:14 -0700)]
Add arithmetic optimizer stage that removes LogicalNot that takes a comparison as input, i.e.
!(a == b) => a != b
!(a != b) => a == b
!(a < b) => a >= b
!(a <= b) => a > b
!(a > b) => a <= b
!(a >= b) => a < b
PiperOrigin-RevId: 197477959
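The rewrite table above is a straight lookup from each comparison op to its negation. A sketch of the stage's core mapping (op-name strings are illustrative, not Grappler's internal representation):

```python
# Each comparison op paired with its logical negation.
NEGATIONS = {
    "Equal": "NotEqual",
    "NotEqual": "Equal",
    "Less": "GreaterEqual",
    "LessEqual": "Greater",
    "Greater": "LessEqual",
    "GreaterEqual": "Less",
}

def remove_logical_not(op, input_op):
    """If `op` is LogicalNot over a comparison, fold it into the negated comparison."""
    if op == "LogicalNot" and input_op in NEGATIONS:
        return NEGATIONS[input_op]
    return None  # no rewrite applies
```

For example, `!(a < b)` (LogicalNot over Less) folds into a single GreaterEqual node, removing one op from the graph.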
A. Unique TensorFlower [Mon, 21 May 2018 23:13:28 +0000 (16:13 -0700)]
Expose partition_strategy option in embedding_lookup_unique
PiperOrigin-RevId: 197477853
Benoit Steiner [Mon, 21 May 2018 23:02:33 +0000 (16:02 -0700)]
Optimize more reductions
PiperOrigin-RevId: 197476067
Michael Case [Mon, 21 May 2018 22:55:46 +0000 (15:55 -0700)]
Internal Change.
PiperOrigin-RevId: 197475076
Sanjoy Das [Mon, 21 May 2018 22:49:14 +0000 (15:49 -0700)]
Extract out a MatrixMatrixBlockPanelEmitter::Dimensions struct; NFC
This gives me a convenient place to note that the m/k/n here are not the m/k/n
for the entire GEMM. I didn't rename m/k/n to mc/kc/nr since the latter seems
somewhat redundant to me -- we could read 'c' as 'column' and 'r' as 'row', but
that's the only possibility?
This refactoring will also be useful when implementing GEPP on top of GEBP.
PiperOrigin-RevId: 197474137
Allen Lavoie [Mon, 21 May 2018 22:43:20 +0000 (15:43 -0700)]
Remove object-based checkpointing probes from Python 3 tf.train.Saver "name not found" stack traces
PiperOrigin-RevId: 197473101
Jiri Simsa [Mon, 21 May 2018 22:32:45 +0000 (15:32 -0700)]
Disable flaky batch_dataset_op_test
PiperOrigin-RevId: 197471172
A. Unique TensorFlower [Mon, 21 May 2018 21:47:37 +0000 (14:47 -0700)]
Allow using DNN to only train the embeddings and using the tree model for the final prediction.
PiperOrigin-RevId: 197462585
Dimitris Vardoulakis [Mon, 21 May 2018 21:25:04 +0000 (14:25 -0700)]
[TF:XLA] Delete cumulative_total_size to simplify the DFS scheduler.
It's unclear why we would assign cumulative_total_size as the total size of a single HLO, and deleting it doesn't make a difference in practice.
PiperOrigin-RevId: 197458260
Alexandre Passos [Mon, 21 May 2018 20:00:20 +0000 (13:00 -0700)]
Always enter the handle graph before calling convert_to_tensor in resource variables.
This mimics the behavior of ref variable's assign which converts to tensor in the
right graph inside op_def_lib.apply_op.
PiperOrigin-RevId: 197441989
Benoit Steiner [Mon, 21 May 2018 19:43:52 +0000 (12:43 -0700)]
Turn on dead branch elimination, shape optimization, and remapping by default
PiperOrigin-RevId: 197439191
Benoit Steiner [Mon, 21 May 2018 18:12:55 +0000 (11:12 -0700)]
Optimize multiplications by constants in more cases.
PiperOrigin-RevId: 197422256
Sanjoy Das [Mon, 21 May 2018 18:11:48 +0000 (11:11 -0700)]
Add a kernel usable as a GEBP inner loop for an LLVM IR GEMM
This is not used in any real code path, but I've added an escape hatch that runs
regular matrix multiplies through this kernel for testing purposes.
As far as I can tell this is functionally correct, but I don't yet have a proper
apples-to-apples performance comparison -- that'll have to wait till the
implementation is complete.
PiperOrigin-RevId: 197422075
A. Unique TensorFlower [Mon, 21 May 2018 17:34:42 +0000 (10:34 -0700)]
Automated g4 rollback of changelist 197226707
PiperOrigin-RevId: 197415745
A. Unique TensorFlower [Mon, 21 May 2018 14:00:53 +0000 (07:00 -0700)]
Enhance error reporting.
PiperOrigin-RevId: 197390052
A. Unique TensorFlower [Mon, 21 May 2018 10:52:25 +0000 (03:52 -0700)]
Extend optimization of Slice operator to StridedSlice.
PiperOrigin-RevId: 197376217
Justin Lebar [Sun, 20 May 2018 21:50:49 +0000 (14:50 -0700)]
[XLA] Fix memory leak in ScopedShapedBuffer's move-assignment operator.
PiperOrigin-RevId: 197331197
Pete Warden [Sun, 20 May 2018 21:45:10 +0000 (14:45 -0700)]
Fixed Pi cross compilation
PiperOrigin-RevId: 197331034
A. Unique TensorFlower [Sun, 20 May 2018 20:14:55 +0000 (13:14 -0700)]
Rollforward of CL 197167501, without enabling CUDNN_FFT_TILING_FORWARD because that breaks XLA tests.
PiperOrigin-RevId: 197328103
Justin Lebar [Sat, 19 May 2018 07:37:18 +0000 (00:37 -0700)]
[XLA] Consistently apply gpu-max-kernel-unroll-factor = 1 in HloTestBase.
Previously we set it in CreateNewModule but not in
GetDebugOptionsFromFlags(), which seems wrong.
PiperOrigin-RevId: 197247863
A. Unique TensorFlower [Sat, 19 May 2018 04:44:09 +0000 (21:44 -0700)]
Fix compile error due to missing default case in switch statement.
PiperOrigin-RevId: 197240781
A. Unique TensorFlower [Sat, 19 May 2018 02:25:41 +0000 (19:25 -0700)]
Add a method to list op names in an ApiDefMap.
PiperOrigin-RevId: 197234301
Skye Wanderman-Milne [Sat, 19 May 2018 01:29:54 +0000 (18:29 -0700)]
Add 'src_graph' argument to gradients_impl._GradientsHelper.
This allows the gradient graph to be built in a _FuncGraph separate
from the forward graph (a _FuncGraph is necessary to capture needed
tensors from the forward graph; it's up to the capturing logic how to
feed the forward tensors to the gradient graph).
PiperOrigin-RevId: 197230736
A. Unique TensorFlower [Sat, 19 May 2018 01:18:21 +0000 (18:18 -0700)]
Delete unused and buggy code.
(Note the mistaken argument order of the string constructors.)
PiperOrigin-RevId: 197229855
Chris Leary [Sat, 19 May 2018 00:49:42 +0000 (17:49 -0700)]
[XLA] Regression test for missing virtual destructor.
PiperOrigin-RevId: 197227263
A. Unique TensorFlower [Sat, 19 May 2018 00:43:46 +0000 (17:43 -0700)]
Make the quantize_and_dequantize op use the full quantized range when possible.
PiperOrigin-RevId: 197226707
Igor Saprykin [Fri, 18 May 2018 23:33:19 +0000 (16:33 -0700)]
Automated g4 rollback of changelist 197070234
PiperOrigin-RevId: 197218170
A. Unique TensorFlower [Fri, 18 May 2018 23:28:59 +0000 (16:28 -0700)]
Improve import error messages.
PiperOrigin-RevId: 197217638
Igor Saprykin [Fri, 18 May 2018 22:49:01 +0000 (15:49 -0700)]
Skip tests that require unavailable hardware when not using DistributionStrategy
Right now combinations.py skips tests that do not have the hardware that's
required by the DistributionStrategy instance that is used in that test. After
this change, the user can trigger such a behavior even when they are not using
DistributionStrategy.
Two new special arguments are added: "required_tpu" and "required_gpus".
PiperOrigin-RevId: 197212466
Alexandre Passos [Fri, 18 May 2018 22:33:00 +0000 (15:33 -0700)]
Correct dtype in resource_strided_slice_assign
PiperOrigin-RevId: 197210273
A. Unique TensorFlower [Fri, 18 May 2018 22:15:06 +0000 (15:15 -0700)]
Remove unused BUILD dependencies
PiperOrigin-RevId: 197207799
Jianwei Xie [Fri, 18 May 2018 21:59:01 +0000 (14:59 -0700)]
Fixed an issue when adding context into params.
PiperOrigin-RevId: 197205327
Yu-Cheng Ling [Fri, 18 May 2018 20:12:41 +0000 (13:12 -0700)]
Revert a change to fix TFLite iOS demo app.
It depends on released CocoaPod.
PiperOrigin-RevId: 197189734