Ian Langmore [Wed, 14 Feb 2018 12:02:15 +0000 (04:02 -0800)]
effective_sample_size kwarg change (same default behavior).
* rename max_lags --> filter_beyond_lag
* rename max_lags_threshold --> filter_threshold
* Users can use both filters, and they combine in an "OR" manner
* None ==> turn off a filter.
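A minimal usage sketch of the renamed kwargs (assuming the function's tf.contrib.bayesflow.mcmc_diagnostics location at the time; the chain below is hypothetical):

    import tensorflow as tf
    from tensorflow.contrib.bayesflow import mcmc_diagnostics

    states = tf.random_normal([1000])  # hypothetical chain of MCMC draws
    # Before: effective_sample_size(states, max_lags=..., max_lags_threshold=...)
    # After: both filters can be active at once (combined with OR);
    # passing None turns a filter off.
    ess = mcmc_diagnostics.effective_sample_size(
        states, filter_beyond_lag=None, filter_threshold=0.0)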
PiperOrigin-RevId: 185666926
A. Unique TensorFlower [Wed, 14 Feb 2018 08:38:42 +0000 (00:38 -0800)]
Fix the comment for the return type of _dnn_model_fn and _dnn_linear_combined_model_fn.
PiperOrigin-RevId: 185651142
Shanqing Cai [Wed, 14 Feb 2018 05:49:28 +0000 (21:49 -0800)]
tfe SPINN example: Add inference; fix serialization
* Also de-flake a test.
PiperOrigin-RevId: 185637742
Peter Hawkins [Wed, 14 Feb 2018 05:45:08 +0000 (21:45 -0800)]
[TF:XLA] Tag reverse_sequence test as optonly, increase its timeout.
PiperOrigin-RevId: 185637431
Francois Chollet [Wed, 14 Feb 2018 04:39:33 +0000 (20:39 -0800)]
Fix tf.keras progbar display.
PiperOrigin-RevId: 185632155
Anjali Sridhar [Wed, 14 Feb 2018 03:16:08 +0000 (19:16 -0800)]
For models running in Eager mode, do not update the weights of the BatchNorm layer if the layer's trainable argument is False.
This change is required in Eager mode to freeze a layer's weights when we set the layer's trainable attribute to False.
This should not be confused with the "training" attribute which refers to a model's training or inference mode behavior.
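A short sketch of the distinction (assuming the contrib eager entry point of the time):

    import tensorflow as tf
    tf.contrib.eager.enable_eager_execution()

    layer = tf.keras.layers.BatchNormalization()
    layer.trainable = False      # freeze: gamma/beta and moving stats stay fixed
    x = tf.random_normal([4, 8])
    # "training" only selects batch vs. moving statistics for the forward pass;
    # with trainable=False the layer's weights are not updated.
    y = layer(x, training=True)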
PiperOrigin-RevId: 185625661
A. Unique TensorFlower [Wed, 14 Feb 2018 02:57:53 +0000 (18:57 -0800)]
Automated g4 rollback of changelist 185598764
PiperOrigin-RevId: 185623948
A. Unique TensorFlower [Wed, 14 Feb 2018 01:53:13 +0000 (17:53 -0800)]
Internal Change.
PiperOrigin-RevId: 185618034
Shanqing Cai [Wed, 14 Feb 2018 01:52:50 +0000 (17:52 -0800)]
TensorBoard debugger plugin: SIGINT handler for easier termination of the debugged runtime
from TensorBoardDebugWrapperSession and TensorBoardDebugHook.
PiperOrigin-RevId: 185617989
Rohan Jain [Wed, 14 Feb 2018 01:26:18 +0000 (17:26 -0800)]
Don't release kInvalidHandle.
Also added a little more debug information.
PiperOrigin-RevId: 185615169
Alexandre Passos [Wed, 14 Feb 2018 01:16:34 +0000 (17:16 -0800)]
Lazily reads from resource variables in eager mode.
PiperOrigin-RevId: 185614070
Justin Lebar [Wed, 14 Feb 2018 01:16:29 +0000 (17:16 -0800)]
[XLA:GPU] Don't crash when the root instruction of a computation is a multi-output fusion node, and avoid some pointer chasing with tuples.
Previously, the kernels we generated would have one argument per
*top-level* buffer of the input/output. This was fine for inputs. But
it doesn't work for outputs: Imagine you're a node that returns a tuple
-- e.g. multi-output fusion -- if all you get is a pointer to the
top-level buffer of your output (which should contain pointers to the
lower-level buffers at some point, but at the moment is just empty), how
are you supposed to figure out where to write your output?
(This usually worked because most of the time your output would live
inside of the big XLA temp buffer, and kernels always get a pointer to
that.)
Now we pass all the buffers, top-level and otherwise, to our kernel. In
addition, we're now willing to statically dereference tuples that live
entirely in XLA's temp buffer. Pointers in input tuples must still be
dereferenced dynamically, because the caller has the option of giving us
these values or not when invoking XLA.
This change makes some parts of BufferAssignment/BufferAllocations more
truthful. Previously, if you passed a tuple-shaped input to XLA, we'd
say in BufferAllocations that the pointer for some subshape of the param
was the *top-level tuple pointer*. XLA then knew that this was a lie
and would dereference it accordingly. Now we have an explicit notion of
a BufferAllocation pointing to a subshape of an input parameter.
PiperOrigin-RevId: 185614060
Sanjoy Das [Wed, 14 Feb 2018 00:55:54 +0000 (16:55 -0800)]
Improve type safety around float constants
Instead of passing floating point constants to the vector support library as
compiler-side floats, pass them as APFloats instead. This reduces the duration
during which these constants are semantically represented as floats on the host
side and are subject to fast-math-like behavior. This is especially important
in cases where the exact bit representation of the floating point constant is
significant, but also makes progress towards ensuring that e.g. building XLA
with -ffast-math does not change the IR we generate.
PiperOrigin-RevId: 185611301
Suharsh Sivakumar [Wed, 14 Feb 2018 00:43:21 +0000 (16:43 -0800)]
Fix incorrect is_training parameter.
And remove many is_training default values to prevent these mistakes from happening again.
PiperOrigin-RevId: 185609589
A. Unique TensorFlower [Wed, 14 Feb 2018 00:43:20 +0000 (16:43 -0800)]
Update TPU Profiler to be able to take a TPU name
PiperOrigin-RevId: 185609588
Akshay Modi [Wed, 14 Feb 2018 00:17:55 +0000 (16:17 -0800)]
CleanupFunc doesn't need to do cleanup if _py_funcs is already destroyed.
PiperOrigin-RevId: 185606203
A. Unique TensorFlower [Wed, 14 Feb 2018 00:09:00 +0000 (16:09 -0800)]
Add an option to delete the temporary file created at compilation on exit. It defaults to true, though we may want to keep it off during development.
PiperOrigin-RevId: 185604628
A. Unique TensorFlower [Tue, 13 Feb 2018 23:29:00 +0000 (15:29 -0800)]
Wire in support for XLA kConditional instruction.
PiperOrigin-RevId: 185598764
Sanjoy Das [Tue, 13 Feb 2018 22:55:24 +0000 (14:55 -0800)]
Change linkage type of modules to external after dropping initializers
It isn't legal to have private global variables without initializers. In the
current state the -noconst.ll LLVM IR cannot be passed to opt.
PiperOrigin-RevId: 185593073
Mingsheng Hong [Tue, 13 Feb 2018 22:52:31 +0000 (14:52 -0800)]
Code cleanup: Made Executor related API take std::unique_ptr<const Graph> instead of const Graph* as input.
PiperOrigin-RevId: 185592574
Yunxing Dai [Tue, 13 Feb 2018 22:51:25 +0000 (14:51 -0800)]
[TF-XLA] Disable Tensorflow's CSE in xla compiler
No need to do CSE in the TF-XLA bridge, as XLA already has its own CSE pass later in the compilation pipeline. This removes one source of nondeterminism.
RELNOTES: CSE pass from TensorFlow is now disabled in XLA.
PiperOrigin-RevId: 185592383
A. Unique TensorFlower [Tue, 13 Feb 2018 21:56:24 +0000 (13:56 -0800)]
Use a more advanced py_func wrapper, one that allows non-tensor args to be passed directly to the wrapped function.
PiperOrigin-RevId: 185583023
Sanjoy Das [Tue, 13 Feb 2018 21:54:52 +0000 (13:54 -0800)]
[TF:XLA] Bump open source llvm revision to r324990
PiperOrigin-RevId: 185582753
Zhixian Yan [Tue, 13 Feb 2018 21:42:11 +0000 (13:42 -0800)]
Internal Change
PiperOrigin-RevId: 185580787
A. Unique TensorFlower [Tue, 13 Feb 2018 21:30:20 +0000 (13:30 -0800)]
Add `Kumaraswamy` to distributions __init__.py
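A hedged usage sketch (the Beta-style concentration parameter names are an assumption):

    import tensorflow as tf
    tfd = tf.contrib.distributions

    # Kumaraswamy(a, b) has density a*b*x**(a-1) * (1 - x**a)**(b-1) on (0, 1).
    dist = tfd.Kumaraswamy(concentration1=2.0, concentration0=3.0)  # assumed names
    samples = dist.sample(5)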
PiperOrigin-RevId: 185578929
Sanjoy Das [Tue, 13 Feb 2018 20:45:27 +0000 (12:45 -0800)]
Make LLVM IR dumps more readable
Before this change, the IR we dumped out would contain constant tensors that were
sometimes huge, resulting in unwieldy IR files. With this change we dump out a
"noconst" IR file that has the constant initializers stripped out.
PiperOrigin-RevId: 185572700
Benoit Steiner [Tue, 13 Feb 2018 20:31:59 +0000 (12:31 -0800)]
Extract the filter and input shape for Conv2DBackpropFilter/Conv2DBackpropInput
from the corresponding op inputs whenever possible.
PiperOrigin-RevId: 185570750
Andrew Selle [Tue, 13 Feb 2018 20:24:02 +0000 (12:24 -0800)]
Fix bug in populating the execution plan sent to the delegate.
- memcpy was missing the array size.
- modified the unit test to verify that the execution plan is
trivial on first delegate invocation.
PiperOrigin-RevId: 185569606
Akshay Modi [Tue, 13 Feb 2018 20:10:40 +0000 (12:10 -0800)]
Add cache for _zeros in backprop
PiperOrigin-RevId: 185567508
Akshay Modi [Tue, 13 Feb 2018 19:56:59 +0000 (11:56 -0800)]
Minor eager-related performance improvements
- Add a cache for name_scope
- Skip some overhead in _MulGrad and _MatMulGrad
PiperOrigin-RevId: 185565363
Benoit Steiner [Tue, 13 Feb 2018 19:46:08 +0000 (11:46 -0800)]
Explicitly place the swap-in node: this ensures that subsequent rounds of
memory optimization have a more accurate picture of the placement.
PiperOrigin-RevId: 185563797
Jianwei Xie [Tue, 13 Feb 2018 19:33:12 +0000 (11:33 -0800)]
Error out or log a warning if user sets the TPUConfig.num_shards incorrectly.
Also improve TPU system metadata print out message.
PiperOrigin-RevId: 185561680
Igor Saprykin [Tue, 13 Feb 2018 19:18:15 +0000 (11:18 -0800)]
Allow other types of variables to act as a resource variable.
Introduce resource_variable_ops.is_resource_variable() function that returns true
if an _should_act_as_resource_variable attribute is set.
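A small sketch of the check (the module path comes from the message; the variables are illustrative):

    import tensorflow as tf
    from tensorflow.python.ops import resource_variable_ops

    v = tf.get_variable("v", shape=[], use_resource=True)
    w = tf.get_variable("w", shape=[])  # a plain ref variable
    resource_variable_ops.is_resource_variable(v)  # True
    # False unless w carries the _should_act_as_resource_variable attribute:
    resource_variable_ops.is_resource_variable(w)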
PiperOrigin-RevId: 185559202
Mark Daoust [Tue, 13 Feb 2018 18:54:09 +0000 (10:54 -0800)]
add missing blank line
PiperOrigin-RevId: 185554969
A. Unique TensorFlower [Tue, 13 Feb 2018 18:48:58 +0000 (10:48 -0800)]
Add empty scaffolding for loop optimizers in Grappler.
PiperOrigin-RevId: 185554126
A. Unique TensorFlower [Tue, 13 Feb 2018 18:48:43 +0000 (10:48 -0800)]
Clarify that the behavior of the iterator (advancing whenever any of the components is evaluated) is not magic, but a simple consequence of the dataflow graph structure.
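A sketch of that consequence: every session.run of any one component pulls a fresh element through the same iterator node.

    import tensorflow as tf

    dataset = tf.data.Dataset.from_tensor_slices(([1, 2, 3], [10, 20, 30]))
    x, y = dataset.make_one_shot_iterator().get_next()
    with tf.Session() as sess:
      print(sess.run(x))       # 1  (from element 0)
      print(sess.run(y))       # 20 (element 1: the iterator advanced again)
      print(sess.run([x, y]))  # [3, 30]: one run fetches one consistent element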
PiperOrigin-RevId: 185554089
A. Unique TensorFlower [Tue, 13 Feb 2018 18:36:37 +0000 (10:36 -0800)]
Fix documentation for the real shape of the output of crf_log_likelihood.
PiperOrigin-RevId: 185552171
Akshay Modi [Tue, 13 Feb 2018 18:23:53 +0000 (10:23 -0800)]
Use _set_attr instead of directly modifying the nodedef
PiperOrigin-RevId: 185550223
Anjali Sridhar [Tue, 13 Feb 2018 18:16:03 +0000 (10:16 -0800)]
Move two common utility functions used by training and training_eager classes to a utility class.
PiperOrigin-RevId: 185548922
Eugene Brevdo [Tue, 13 Feb 2018 17:46:36 +0000 (09:46 -0800)]
Tiny bugfix to eager TensorArray error message.
PiperOrigin-RevId: 185543699
A. Unique TensorFlower [Tue, 13 Feb 2018 17:07:12 +0000 (09:07 -0800)]
TF to XLA compiler to support FakeQuantWithMinMaxVars/Args.
PiperOrigin-RevId: 185538228
A. Unique TensorFlower [Tue, 13 Feb 2018 16:46:26 +0000 (08:46 -0800)]
Add gradient norm target arg to the Wasserstein gradient penalty function. This trick is used in the progressive GAN paper: https://arxiv.org/abs/1710.10196
PiperOrigin-RevId: 185535584
A. Unique TensorFlower [Tue, 13 Feb 2018 16:12:09 +0000 (08:12 -0800)]
Force the use of print function in generated code.
PiperOrigin-RevId: 185531979
A. Unique TensorFlower [Tue, 13 Feb 2018 09:08:22 +0000 (01:08 -0800)]
Mechanical variable renaming to improve consistency. No other changes.
Distinguishing between points and vectors: A point refers to a location in the tensor/filter, referred to by channel/row/col. A vector is the difference between two points (mostly 'one_past_the_end - begin'), referred to by depth/height/width.
PiperOrigin-RevId: 185496176
Gunhan Gulsoy [Tue, 13 Feb 2018 08:13:28 +0000 (00:13 -0800)]
Internal change
PiperOrigin-RevId: 185491705
Surya Bhupatiraju [Tue, 13 Feb 2018 07:31:38 +0000 (23:31 -0800)]
Add a test to ensure that the covariance terms of FID are being incorporated meaningfully.
PiperOrigin-RevId: 185488757
Gunhan Gulsoy [Tue, 13 Feb 2018 07:20:15 +0000 (23:20 -0800)]
Disable interleave_dataset_op_test.py and remove a duplicate entry in test blacklist in cmake build.
PiperOrigin-RevId: 185488210
A. Unique TensorFlower [Tue, 13 Feb 2018 06:54:48 +0000 (22:54 -0800)]
1. Add image_ops.is_jpeg Op to decide whether an input string is a jpeg string or not.
2. Change tfexample_decoder in slim/object_detection to accept different JPEG decompression methods.
Defaults to ""/None, which maps to a system-specific default. Currently valid values are ["INTEGER_FAST", "INTEGER_ACCURATE"]. The hint may be ignored (e.g., if the internal jpeg library changes to a version that does not have that specific option).
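A hedged sketch of both pieces at the public API level (the input path is hypothetical):

    import tensorflow as tf

    contents = tf.read_file("/tmp/maybe_a_photo")  # hypothetical file
    is_jpg = tf.image.is_jpeg(contents)            # scalar bool tensor
    # The hint may be ignored if the underlying jpeg library lacks the option:
    image = tf.image.decode_jpeg(contents, dct_method="INTEGER_ACCURATE")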
PiperOrigin-RevId: 185486653
Peter Hawkins [Tue, 13 Feb 2018 06:45:49 +0000 (22:45 -0800)]
[TF:XLA] Work around crash in Gather op on CPU backend by making loop bound a compile-time constant.
PiperOrigin-RevId: 185486148
Sanjoy Das [Tue, 13 Feb 2018 05:56:19 +0000 (21:56 -0800)]
[XLA:CPU] Implement vectorized Log in LLVM IR
This was the last vectorized intrinsic for which we had to call into
C++, so also remove the associated machinery.
PiperOrigin-RevId: 185482962
Peter Hawkins [Tue, 13 Feb 2018 02:34:36 +0000 (18:34 -0800)]
[TF:XLA] Implement ScatterNd.
Add a helper method for performing scatter operations. Share it between ScatterNd and UnsortedSegmentSum implementations. In passing, add support for negative indices to the UnsortedSegmentSum implementation.
Added helper methods for creating XLA while loops. Use the new helper in both Gather and Scatter ops.
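At the TensorFlow level, the op being implemented behaves as in this minimal sketch (whether it runs through XLA depends on the compilation path):

    import tensorflow as tf

    indices = tf.constant([[0], [2]])
    updates = tf.constant([10, 20])
    out = tf.scatter_nd(indices, updates, shape=[4])  # -> [10, 0, 20, 0]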
PiperOrigin-RevId: 185469229
Yu-Cheng Ling [Tue, 13 Feb 2018 01:58:27 +0000 (17:58 -0800)]
Fix TFLite examples/label_image
PiperOrigin-RevId: 185465716
Francois Chollet [Tue, 13 Feb 2018 01:44:01 +0000 (17:44 -0800)]
Enable Model subclassing, both in eager-mode and symbolic-mode.
PiperOrigin-RevId: 185464334
Jacques Pienaar [Tue, 13 Feb 2018 01:15:51 +0000 (17:15 -0800)]
Rollforward switch group identification with fixes.
Fixed computing the switch depth: with the erroneous switch depth, incorrect
clusters could be formed. Changed the way the switch depth is determined (the
switch depth is now on the output side, so a switch always has a switch depth
one higher than all its inputs) and added further checking during execution.
PiperOrigin-RevId: 185461054
A. Unique TensorFlower [Tue, 13 Feb 2018 01:07:27 +0000 (17:07 -0800)]
Fix a typo in the comments.
PiperOrigin-RevId: 185459972
Bixia Zheng [Tue, 13 Feb 2018 00:56:28 +0000 (16:56 -0800)]
[XLA:GPU] Extend the CustomCall for cudnn convolutions to represent
tensor_ops_enabled.
The convolution algorithms returned from the stream executor have a flag
for whether tensor_ops is enabled. This flag is used when running each
algorithm during auto-tuning. However, this flag is not currently represented
in the CustomCall representing the auto-tune result. As a result, the algorithm
may be run differently after auto-tune.
This change adds a constant to the CustomCall for cudnn convolution algorithm
selected by auto-tune, to represent whether tensor_ops is enabled during
auto-tune. This information is used by convolution thunk to ensure that the
algorithm is run with the same flag after auto-tune.
PiperOrigin-RevId: 185458497
Igor Saprykin [Tue, 13 Feb 2018 00:24:45 +0000 (16:24 -0800)]
Support trainable variables with a None gradient in replicate_model_fn.
This fixes #16829.
PiperOrigin-RevId: 185453911
Suharsh Sivakumar [Tue, 13 Feb 2018 00:20:07 +0000 (16:20 -0800)]
Automated g4 rollback of changelist 185420228
PiperOrigin-RevId: 185453293
Jianwei Xie [Mon, 12 Feb 2018 23:12:31 +0000 (15:12 -0800)]
Avoid setting `ConfigProto.cluster_def` when `run_config.cluster_def` is not set.
PiperOrigin-RevId: 185443115
Guangda Lai [Mon, 12 Feb 2018 22:52:47 +0000 (14:52 -0800)]
Add an option to tf_gen_op_wrapper_py to make it able to run the genrule
locally.
PiperOrigin-RevId: 185439892
A. Unique TensorFlower [Mon, 12 Feb 2018 22:38:37 +0000 (14:38 -0800)]
Add a caveat that pixel value range might not be preserved by ResizeArea.
PiperOrigin-RevId: 185437687
Jacques Pienaar [Mon, 12 Feb 2018 22:37:41 +0000 (14:37 -0800)]
Rename op name in comments to reflect renamed op names. NFC.
PiperOrigin-RevId: 185437550
Suharsh Sivakumar [Mon, 12 Feb 2018 22:05:08 +0000 (14:05 -0800)]
Add tests for visible api arguments in quantize_graph.
PiperOrigin-RevId: 185432142
A. Unique TensorFlower [Mon, 12 Feb 2018 22:01:51 +0000 (14:01 -0800)]
Return false instead of crashing in Tensor::SharesBufferWith if neither tensor has a buffer assigned yet, since that is a valid state. Returning
buf_ != nullptr && b.buf_ != nullptr && buf_->root_buffer() == b.buf_->root_buffer();
still satisfies the contract in the header, i.e. "True iff the two tensors use the same underlying refcounted storage."
PiperOrigin-RevId: 185431574
Benoit Steiner [Mon, 12 Feb 2018 21:37:37 +0000 (13:37 -0800)]
Filter out the fake XLA devices to avoid double counting the actual hardware
resources available on the machine
PiperOrigin-RevId: 185427665
Mark Daoust [Mon, 12 Feb 2018 21:26:35 +0000 (13:26 -0800)]
allow @{} links to break across lines.
PiperOrigin-RevId: 185426070
Benoit Steiner [Mon, 12 Feb 2018 21:25:57 +0000 (13:25 -0800)]
Extend the memory optimizations to also support accumulate_n ops
PiperOrigin-RevId: 185425999
Jianwei Xie [Mon, 12 Feb 2018 21:11:17 +0000 (13:11 -0800)]
Respect the cluster spec prop during TPU system auto query.
PiperOrigin-RevId: 185423314
Alexandre Passos [Mon, 12 Feb 2018 21:00:57 +0000 (13:00 -0800)]
Scope and decorator to automatically add control dependencies.
Should mimic the desired behavior of eager code.
For now supports only straight-line code and conditionals.
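The scope itself is internal; here is a hedged sketch of the ordering it is meant to add automatically, written with the existing manual API:

    import tensorflow as tf

    v = tf.get_variable("v", initializer=0.0, use_resource=True)
    assign = v.assign_add(1.0)
    # Eager code observes the increment before the read; in graph mode the
    # scope/decorator would insert this control edge for straight-line code:
    with tf.control_dependencies([assign]):
      read = v.read_value()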
PiperOrigin-RevId: 185421760
Suharsh Sivakumar [Mon, 12 Feb 2018 20:50:07 +0000 (12:50 -0800)]
Also add quantization step node to MODEL_VARIABLES collection.
PiperOrigin-RevId: 185420228
Sanjoy Das [Mon, 12 Feb 2018 20:30:01 +0000 (12:30 -0800)]
[TF:XLA] Bump open source llvm revision to r324889
PiperOrigin-RevId: 185417275
A. Unique TensorFlower [Mon, 12 Feb 2018 20:29:05 +0000 (12:29 -0800)]
Add missing feature in make_parse_example_spec documentation.
PiperOrigin-RevId: 185417163
A. Unique TensorFlower [Mon, 12 Feb 2018 20:23:16 +0000 (12:23 -0800)]
Add yield_single_examples arg to Estimator.predict
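A hedged sketch, assuming an already-constructed `estimator` and `predict_input_fn`:

    # Default yield_single_examples=True yields one example at a time;
    # False yields whole batches exactly as produced by the model_fn.
    for batch in estimator.predict(input_fn=predict_input_fn,
                                   yield_single_examples=False):
      pass  # each `batch` covers the full batch, not a single example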
PiperOrigin-RevId: 185416396
Asim Shankar [Mon, 12 Feb 2018 20:09:26 +0000 (12:09 -0800)]
[Java]: Add link to samples in the tensorflow/models repository.
PiperOrigin-RevId: 185414475
Benoit Steiner [Mon, 12 Feb 2018 20:05:49 +0000 (12:05 -0800)]
Enable the use of scheduling heuristics to reduce peak memory usage by default
PiperOrigin-RevId: 185413855
Yao Zhang [Mon, 12 Feb 2018 19:54:07 +0000 (11:54 -0800)]
Support reduction with true keep_dims and squeeze along NHW dimensions.
PiperOrigin-RevId: 185411786
A. Unique TensorFlower [Mon, 12 Feb 2018 19:49:54 +0000 (11:49 -0800)]
Adding support for tf.reduce_sum with keep_dims=True.
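For reference, the shape effect of the flag (keep_dims was the 1.x spelling):

    import tensorflow as tf

    x = tf.constant([[1., 2.], [3., 4.]])
    s1 = tf.reduce_sum(x, axis=1)                  # shape [2]
    s2 = tf.reduce_sum(x, axis=1, keep_dims=True)  # shape [2, 1]; rank preserved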
PiperOrigin-RevId: 185411141
Yuanzhong Xu [Mon, 12 Feb 2018 19:26:22 +0000 (11:26 -0800)]
[XLA] An HLO pass that folds BF16 F32 conversions: if an HLO already supports BF16 input/output, conversions before/after it will be removed and the HLO's input/output types will be converted to BF16.
Also updates HloVerifier to allow mixed precision if requested. If an HLO has both F32 and BF16 inputs, ShapeInference will use F32 as the output type.
PiperOrigin-RevId: 185407143
Sanjoy Das [Mon, 12 Feb 2018 19:12:04 +0000 (11:12 -0800)]
Make variable_ops_test optonly
variable_ops_test sometimes times out in fastbuild mode. So mark it as optonly.
Running this test with `bazel test -c opt` passes all 1000 of 1000 reruns.
Running it with just `bazel test` fails 5 out of 300 reruns.
PiperOrigin-RevId: 185404726
Neal Wu [Mon, 12 Feb 2018 18:47:26 +0000 (10:47 -0800)]
Change the column name in tutorials/wide.md from 'income' to 'income_bracket' to match the code
PiperOrigin-RevId: 185400490
A. Unique TensorFlower [Mon, 12 Feb 2018 18:34:20 +0000 (10:34 -0800)]
Add support for scalars in `tf.contrib.all_reduce`.
PiperOrigin-RevId: 185398372
Peter Hawkins [Mon, 12 Feb 2018 18:34:18 +0000 (10:34 -0800)]
[TF:XLA] Add additional test case for tf.gather.
PiperOrigin-RevId: 185398368
Alexandre Passos [Mon, 12 Feb 2018 18:27:18 +0000 (10:27 -0800)]
Fix shape inference bug in tensorlist
PiperOrigin-RevId: 185397219
Derek Murray [Mon, 12 Feb 2018 17:57:40 +0000 (09:57 -0800)]
Update `tf.contrib.data` API docstring.
PiperOrigin-RevId: 185392564
Jacques Pienaar [Mon, 12 Feb 2018 17:28:47 +0000 (09:28 -0800)]
ParseNodeName fix.
ParseNodeName was skipping ops that started with an underscore, leading to warnings that input of an op was undefined and stopping grappler optimizations from being run on the graph.
PiperOrigin-RevId: 185388749
A. Unique TensorFlower [Mon, 12 Feb 2018 16:38:17 +0000 (08:38 -0800)]
Internal Change
PiperOrigin-RevId: 185382594
Brian Patton [Mon, 12 Feb 2018 14:40:26 +0000 (06:40 -0800)]
For debugging purposes, it can be useful to know which ops are considered non-pure / non-constant.
PiperOrigin-RevId: 185371882
A. Unique TensorFlower [Mon, 12 Feb 2018 13:34:05 +0000 (05:34 -0800)]
[XLA] Support generating tuple shaped fake data in client testing
The previous implementation fell over in the case of a tuple-shaped input,
which broke the replay computation tool whenever the input is a tuple.
PiperOrigin-RevId: 185366228
Vijay Vasudevan [Mon, 12 Feb 2018 05:19:37 +0000 (21:19 -0800)]
Provide more diagnostic shape information in output window error message.
PiperOrigin-RevId: 185331713
Guangda Lai [Mon, 12 Feb 2018 02:09:11 +0000 (18:09 -0800)]
Automated g4 rollback of changelist 185233116
PiperOrigin-RevId: 185324160
Jianwei Xie [Sun, 11 Feb 2018 23:54:39 +0000 (15:54 -0800)]
[TPUEstimator] Automatically detect the TPU system information, including topology for model parallelism.
PiperOrigin-RevId: 185318852
A. Unique TensorFlower [Sun, 11 Feb 2018 11:44:24 +0000 (03:44 -0800)]
Disable flaky halton_sequence_test
PiperOrigin-RevId: 185294455
A. Unique TensorFlower [Sun, 11 Feb 2018 04:48:19 +0000 (20:48 -0800)]
Add support for kConditional to the module group scheduler.
PiperOrigin-RevId: 185279412
A. Unique TensorFlower [Sat, 10 Feb 2018 20:45:12 +0000 (12:45 -0800)]
Getting rid of unnecessary GPUDevice typedef.
Passing DepthwiseArgs by reference in host code.
PiperOrigin-RevId: 185263307
A. Unique TensorFlower [Sat, 10 Feb 2018 19:22:55 +0000 (11:22 -0800)]
Add python/util/is_in_graph_mode.py
PiperOrigin-RevId: 185260675
A. Unique TensorFlower [Sat, 10 Feb 2018 11:47:15 +0000 (03:47 -0800)]
Automated g4 rollback of changelist 185073515
PiperOrigin-RevId: 185246348
Yao Zhang [Sat, 10 Feb 2018 09:45:11 +0000 (01:45 -0800)]
Do not convert layout for Select if condition input is of unknown shape.
PiperOrigin-RevId: 185242138
Guangda Lai [Sat, 10 Feb 2018 06:47:30 +0000 (22:47 -0800)]
Fix grappler to use CudaGpuId instead of TfGpuId to query device states.
PiperOrigin-RevId: 185233116
Sanjoy Das [Sat, 10 Feb 2018 01:30:03 +0000 (17:30 -0800)]
Add a test that exhaustively checks Log/Exp/Tanh
PiperOrigin-RevId: 185216684
Skye Wanderman-Milne [Sat, 10 Feb 2018 01:14:30 +0000 (17:14 -0800)]
import_graph_def: support "absolute" names with the C API enabled.
Passing a name with a trailing '/' to import_graph_def causes that
name to be used as-is (i.e. it is not appended to the existing name
scope and not de-duped with any existing name scopes. This is in order
to re-use an existing name scope). This didn't work with the C API
enabled because it was set to always have the C API uniquify the
prefix.
The fix is to not uniquify the prefix, since calling name_scope in
import_graph_def already has the logic to uniquify the prefix if
necessary. I'm not sure why I thought we needed the C API to do this
to begin with.
In addition, this changes the graph_constructor.cc logic to uniquify
names if the prefix cannot be guaranteed unique (see the new test case
in graph_constructor_test.cc for why/when this is necessary).
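A short sketch of the trailing-'/' behavior described above:

    import tensorflow as tf

    g1 = tf.Graph()
    with g1.as_default():
      tf.constant(1.0, name="c")

    with tf.Graph().as_default() as g2:
      # A trailing '/' means: use this prefix as-is, without appending it to the
      # current name scope or uniquifying it against existing scopes.
      tf.import_graph_def(g1.as_graph_def(), name="pre/")
      g2.get_tensor_by_name("pre/c:0")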
PiperOrigin-RevId: 185215326