Allen Lavoie [Thu, 17 May 2018 21:11:14 +0000 (14:11 -0700)]
Automated g4 rollback of changelist
197026249
PiperOrigin-RevId:
197049255
Michael Kuperstein [Thu, 17 May 2018 20:37:57 +0000 (13:37 -0700)]
[TF:XLA] Do not rely on implementation-defined semantics of DynamicSlice.
ReverseSequence relies on DynamicSlice wrapping around, which is implementation-defined behavior, and is not guaranteed. Pad the input instead.
PiperOrigin-RevId:
197043307
A. Unique TensorFlower [Thu, 17 May 2018 20:12:51 +0000 (13:12 -0700)]
Allows users to specify allow_custom_ops when calling tf.contrib.lite.toco_convert().
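A minimal sketch of the intended usage, following the documented toco_convert() calling pattern of the time; the tiny identity graph below is only a placeholder:
import tensorflow as tf

img = tf.placeholder(name="img", dtype=tf.float32, shape=(1, 64, 64, 3))
out = tf.identity(img, name="out")

with tf.Session() as sess:
  # allow_custom_ops=True asks the converter to keep unrecognized ops as
  # custom ops in the flatbuffer instead of failing the conversion.
  tflite_model = tf.contrib.lite.toco_convert(
      sess.graph_def, [img], [out], allow_custom_ops=True)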
PiperOrigin-RevId:
197039477
Michael Case [Thu, 17 May 2018 20:02:08 +0000 (13:02 -0700)]
Remove -DGEMMLOWP_ALLOW_SLOW_SCALAR_FALLBACK being added by default to all builds.
PiperOrigin-RevId:
197037867
A. Unique TensorFlower [Thu, 17 May 2018 19:55:02 +0000 (12:55 -0700)]
Support functools.partial as callable object in tf_inspect.getargspec.
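A small sketch of what this enables (module path tensorflow.python.util.tf_inspect assumed; the exact argspec returned for bound arguments is not shown here):
import functools
from tensorflow.python.util import tf_inspect

def decay(learning_rate, global_step, power=2.0):
  return learning_rate / (1.0 + global_step) ** power

partial_fn = functools.partial(decay, 0.1)

# Standard inspect.getargspec() rejects functools.partial objects with a
# TypeError; tf_inspect.getargspec() now resolves the wrapped function.
print(tf_inspect.getargspec(partial_fn))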
PiperOrigin-RevId:
197036874
A. Unique TensorFlower [Thu, 17 May 2018 19:31:17 +0000 (12:31 -0700)]
[XLA] Redesign: delete Client::LoadSnapshot(SessionModule). This is a precondition for deleting xla::Computation.
PiperOrigin-RevId:
197033641
Akshay Modi [Thu, 17 May 2018 19:29:54 +0000 (12:29 -0700)]
Test some distributions stuff in Eager as well as Graph
PiperOrigin-RevId:
197033485
A. Unique TensorFlower [Thu, 17 May 2018 18:52:49 +0000 (11:52 -0700)]
Internal Change
PiperOrigin-RevId:
197028096
A. Unique TensorFlower [Thu, 17 May 2018 18:47:16 +0000 (11:47 -0700)]
Support 1x1x1xN bias sizes in TFLite's convolution and FC layers.
PiperOrigin-RevId:
197027135
A. Unique TensorFlower [Thu, 17 May 2018 18:42:18 +0000 (11:42 -0700)]
Internal change
PiperOrigin-RevId:
197026249
Skye Wanderman-Milne [Thu, 17 May 2018 18:33:36 +0000 (11:33 -0700)]
Remove C API staging from importer.py.
PiperOrigin-RevId:
197024708
Alexandre Passos [Thu, 17 May 2018 18:17:58 +0000 (11:17 -0700)]
Fix the fizzbuzz example
PiperOrigin-RevId:
197021930
A. Unique TensorFlower [Thu, 17 May 2018 18:13:00 +0000 (11:13 -0700)]
Use integral power function rather than floating point version. The integral version is faster.
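A generic illustration of the idea (not the actual TF code): an integer exponent can be handled by repeated squaring, avoiding the exp/log machinery behind a floating-point pow():
def ipow(base, exponent):
  # Exponentiation by squaring for a non-negative integer exponent.
  assert exponent >= 0
  result = 1
  while exponent:
    if exponent & 1:
      result *= base
    base *= base
    exponent >>= 1
  return result

assert ipow(3, 13) == 3 ** 13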
PiperOrigin-RevId:
197021020
Benjamin Kramer [Thu, 17 May 2018 18:06:05 +0000 (11:06 -0700)]
[XLA:GPU] Unroll multi-output loop fusions
This is easier than I thought because we can assume that all tuple members have
the same number of elements. LLVM doesn't do a great job of vectorizing the
resulting stores, but otherwise this is working fine.
PiperOrigin-RevId:
197019718
A. Unique TensorFlower [Thu, 17 May 2018 17:56:36 +0000 (10:56 -0700)]
Change loop variable type to be deduced.
PiperOrigin-RevId:
197017789
A. Unique TensorFlower [Thu, 17 May 2018 17:15:45 +0000 (10:15 -0700)]
Change traverse_test.test_module to traverse a constructed dummy module rather than testcase itself.
PiperOrigin-RevId:
197010681
Alexandre Passos [Thu, 17 May 2018 17:02:06 +0000 (10:02 -0700)]
Allows the fizzbuzz example to work when called as fizzbuzz(tf.constant(10)).
Fixes #18960
PiperOrigin-RevId:
197008373
Tom Hennigan [Thu, 17 May 2018 16:57:16 +0000 (09:57 -0700)]
Rename the private push/pop API and use it from the `stop_recording` method.
PiperOrigin-RevId:
197007561
Nick Desaulniers [Thu, 17 May 2018 16:54:17 +0000 (09:54 -0700)]
[TF:XLA] remove re-initializations of Literals
It's an antipattern to have:
auto x = Literal::CreateFromShape(my_shape);
x->Populate();
as that results in initialization followed by reinitialization. Can be replaced
with:
auto x = MakeUnique<Literal>(my_shape);
x->Populate();
Suggested-by: Kay Zhu <kayzhu@google.com>
PiperOrigin-RevId:
197007127
Derek Murray [Thu, 17 May 2018 16:32:02 +0000 (09:32 -0700)]
Improve the error message printed when a WorkerService::GetStatus() call fails on session creation.
PiperOrigin-RevId:
197003951
Skye Wanderman-Milne [Thu, 17 May 2018 16:28:35 +0000 (09:28 -0700)]
Improvements to function._FuncGraph.
* Adds 'inputs', 'outputs', and 'name' field to _FuncGraph. This
allows _FuncGraph to encapsulate all the information needed to
convert it to a FunctionDef.
* Refactor logic for converting a Python callable to a _FuncGraph into
a new method, func_graph_from_py_func().
These changes are in preparation for converting tf.cond to emit an If
op. By exposing _FuncGraph functionality outside of _DefinedFunction,
_FuncGraphs can be used to represent functions that are manipulated
(e.g. to output intermediate tensors) before being converted to
FunctionDef protos.
PiperOrigin-RevId:
197003496
A. Unique TensorFlower [Thu, 17 May 2018 16:26:18 +0000 (09:26 -0700)]
Remove misleading declaration-as-default that results in a deleted constructor, and a misguided comment.
PiperOrigin-RevId:
197003162
A. Unique TensorFlower [Thu, 17 May 2018 16:25:14 +0000 (09:25 -0700)]
[XLA] Adds HloLivenessAnalysis and HloModuleDCE.
HloLivenessAnalysis marks all live instruction outputs (i.e. tuple elements) for all instructions in an HloModule, propagating live values across computation boundaries.
HloModuleDCE sweeps through each instruction's dead tuple elements, eliminating dead code (it currently removes dead tuple elements from while loops, but could be extended to do the same for call instructions).
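Illustrative Python pseudocode of the mark-and-sweep idea described above (the real passes are C++ and operate on HLO; the toy graph representation here is made up for the example):
from collections import deque

def mark_live(root_values, uses):
  # uses maps each (instruction, tuple_index) value to the values it reads.
  live = set(root_values)
  worklist = deque(root_values)
  while worklist:
    value = worklist.popleft()
    for operand_value in uses.get(value, ()):
      if operand_value not in live:
        live.add(operand_value)
        worklist.append(operand_value)
  return live

def dead_tuple_elements(tuple_arity, live):
  # Per instruction, the tuple indices that no live value ever reads.
  return {inst: [i for i in range(arity) if (inst, i) not in live]
          for inst, arity in tuple_arity.items()}

# Toy module: 'loop' produces a 3-tuple but the root only reads elements 0 and 2.
uses = {("root", 0): [("loop", 0), ("loop", 2)],
        ("loop", 0): [("param", 0)],
        ("loop", 2): [("param", 1)]}
live = mark_live([("root", 0)], uses)
print(dead_tuple_elements({"loop": 3}, live))  # {'loop': [1]}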
PiperOrigin-RevId:
197003043
Skye Wanderman-Milne [Thu, 17 May 2018 16:22:24 +0000 (09:22 -0700)]
Update SessionTest.testFeedShapeCompatibility to work with C API enabled.
This test got lost in the transition. Prior to enabling the C API,
a constant node whose value was used for shape inference would be
marked as unfeedable in tensor_util.constant_value
(https://github.com/tensorflow/tensorflow/blob/r1.8/tensorflow/python/framework/tensor_util.py#L810).
This shape inference path is no longer used with the C API enabled, so
the constant node is successfully fed, triggering a runtime shape error.
This is arguably a regression, but given that the Python code wouldn't
mark all nodes evaluated during shape inference as unfeedable, it
seems ok to relax the check a little more.
PiperOrigin-RevId:
197002741
A. Unique TensorFlower [Thu, 17 May 2018 16:11:16 +0000 (09:11 -0700)]
Avoid accessing platform/default directly.
PiperOrigin-RevId:
197001347
A. Unique TensorFlower [Thu, 17 May 2018 15:52:36 +0000 (08:52 -0700)]
Fixed bug in tf.pad shape logic that made it more restrictive than necessary by interpreting 0 as False.
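The bug class being fixed, shown as a generic sketch (not the actual tf.pad source): a padding amount of 0 is falsy in Python, so a truthiness check wrongly treats it like an unknown value.
def padded_dim_buggy(dim, before, after):
  # Treats a value of 0 the same as "unknown" because 0 is falsy.
  if dim and before and after:
    return dim + before + after
  return None  # unknown dimension

def padded_dim_fixed(dim, before, after):
  # Only None means "unknown".
  if dim is not None and before is not None and after is not None:
    return dim + before + after
  return None

print(padded_dim_buggy(4, 0, 1))  # None -> needlessly restrictive shape
print(padded_dim_fixed(4, 0, 1))  # 5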
PiperOrigin-RevId:
196998883
Alexandre Passos [Thu, 17 May 2018 15:23:10 +0000 (08:23 -0700)]
Methods to stop and reset tf.GradientTape()
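A minimal sketch of the two methods in use (eager execution assumed; under TF 1.x call tf.enable_eager_execution() first):
import tensorflow as tf

x = tf.constant(3.0)

with tf.GradientTape() as tape:
  tape.watch(x)
  y = x * x
  with tape.stop_recording():      # ops in this block are not traced
    tf.reduce_sum(tf.ones([10, 10]))
  z = y * y
print(tape.gradient(z, x))         # d(x^4)/dx at 3.0 -> 108.0

with tf.GradientTape() as tape:
  tape.watch(x)
  _ = x * x
  tape.reset()                     # discard everything recorded so far
  tape.watch(x)
  z = 2.0 * x
print(tape.gradient(z, x))         # 2.0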
PiperOrigin-RevId:
196995160
Benjamin Kramer [Thu, 17 May 2018 15:00:54 +0000 (08:00 -0700)]
[TF:XLA] Bump open source llvm revision to r332584
PiperOrigin-RevId:
196992500
A. Unique TensorFlower [Thu, 17 May 2018 14:34:24 +0000 (07:34 -0700)]
Drop some old dump_graphviz options
PiperOrigin-RevId:
196989899
Eric Liu [Thu, 17 May 2018 12:57:31 +0000 (05:57 -0700)]
Adapt LLVM ORC interface change in r332541.
PiperOrigin-RevId:
196978634
A. Unique TensorFlower [Thu, 17 May 2018 09:47:24 +0000 (02:47 -0700)]
Add evaluation metrics and export results in the new train_and_evaluate API (for local mode).
PiperOrigin-RevId:
196962253
A. Unique TensorFlower [Thu, 17 May 2018 05:33:38 +0000 (22:33 -0700)]
Move BUILD file to OVIC folder and add model validation.
PiperOrigin-RevId:
196940252
A. Unique TensorFlower [Thu, 17 May 2018 05:22:36 +0000 (22:22 -0700)]
Update installation documentation to reflect that CUDA 8 and cuDNN 6 are the minimum supported versions.
Remove GPU instructions for macOS because GPUs are no longer supported there.
PiperOrigin-RevId:
196939548
A. Unique TensorFlower [Thu, 17 May 2018 04:40:06 +0000 (21:40 -0700)]
Sort tags before logging.
PiperOrigin-RevId:
196936678
Derek Murray [Thu, 17 May 2018 04:15:44 +0000 (21:15 -0700)]
[tf.data] Accept NumPy dtype objects in `Dataset.from_generator(..., output_types=...)`.
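Sketch of the relaxed argument type; output_types previously had to be tf.DType values:
import numpy as np
import tensorflow as tf

def gen():
  for i in range(3):
    yield np.array([i, i + 1], dtype=np.int64)

# np.int64 is now accepted where tf.int64 was previously required.
ds = tf.data.Dataset.from_generator(gen, output_types=np.int64,
                                    output_shapes=(2,))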
PiperOrigin-RevId:
196935179
James Qin [Thu, 17 May 2018 03:52:32 +0000 (20:52 -0700)]
Add more logging in BaseGPUDevice::ComputeHelper for kernel completion.
PiperOrigin-RevId:
196933479
A. Unique TensorFlower [Thu, 17 May 2018 03:31:29 +0000 (20:31 -0700)]
Making GetOptionalInput from kernel_util.h return a pointer to const data.
PiperOrigin-RevId:
196932028
Sanjoy Das [Thu, 17 May 2018 03:15:53 +0000 (20:15 -0700)]
[TF:XLA] Bump open source llvm revision to r332236
PiperOrigin-RevId:
196930996
Justine Tunney [Thu, 17 May 2018 02:27:28 +0000 (19:27 -0700)]
Internal change.
PiperOrigin-RevId:
196927869
A. Unique TensorFlower [Thu, 17 May 2018 02:12:18 +0000 (19:12 -0700)]
Enhance DenseLayer + XLA compatibility test cases to cover compilation behavior differences in different jit modes.
PiperOrigin-RevId:
196926896
James Qin [Thu, 17 May 2018 01:41:39 +0000 (18:41 -0700)]
Append device name in executor logging
PiperOrigin-RevId:
196924318
A. Unique TensorFlower [Thu, 17 May 2018 01:23:20 +0000 (18:23 -0700)]
[TF:XLA] Make noinline functions work with control flow.
1) Make the local function library for control flow self-contained. The control flow function could refer to a noinline function not defined in the local library. Copy the missing FunctionDefs from the global library to the local one.
2) Fix the index used to get the output shapes for functional nodes.
PiperOrigin-RevId:
196922649
A. Unique TensorFlower [Thu, 17 May 2018 01:13:00 +0000 (18:13 -0700)]
[XLA] Remove XlaOp::GetShape. It only works when the builder of the XlaOp has not been freed.
PiperOrigin-RevId:
196921647
Skye Wanderman-Milne [Thu, 17 May 2018 01:05:52 +0000 (18:05 -0700)]
Remove _USE_C_API staging in tests now that the C API is enabled by default.
This is in preparation for removing the _USE_C_API toggle altogether.
PiperOrigin-RevId:
196920890
Skye Wanderman-Milne [Thu, 17 May 2018 01:03:01 +0000 (18:03 -0700)]
Remove _USE_C_API staging in tests now that the C API is enabled by default.
This is in preparation for removing the _USE_C_API toggle altogether.
PiperOrigin-RevId:
196920481
Igor Ganichev [Thu, 17 May 2018 00:48:22 +0000 (17:48 -0700)]
Fix typo in TensorHandle
PiperOrigin-RevId:
196919119
Jiri Simsa [Thu, 17 May 2018 00:25:36 +0000 (17:25 -0700)]
Re-enabling a test after a previous fix.
PiperOrigin-RevId:
196916467
A. Unique TensorFlower [Thu, 17 May 2018 00:24:39 +0000 (17:24 -0700)]
Remove no-op statement. tf_additional_lib_srcs only selects .cc files. When we
do tf_additional_lib_srcs(exclude=[**/*.cc]) we are selecting zero files, and
the statement can be safely removed.
PiperOrigin-RevId:
196916359
Jianwei Xie [Thu, 17 May 2018 00:23:02 +0000 (17:23 -0700)]
Adds basic TPU replicate training support for Keras.
PiperOrigin-RevId:
196916177
Justin Lebar [Thu, 17 May 2018 00:21:22 +0000 (17:21 -0700)]
[XLA] Improve documentation on HloModule, HloComputation, and HloInstruction.
PiperOrigin-RevId:
196915982
Justin Lebar [Thu, 17 May 2018 00:09:42 +0000 (17:09 -0700)]
[XLA] Add documentation explaining FusionKind.
PiperOrigin-RevId:
196914484
A. Unique TensorFlower [Thu, 17 May 2018 00:05:33 +0000 (17:05 -0700)]
Remove unused inclusions
PiperOrigin-RevId:
196913890
Justin Lebar [Wed, 16 May 2018 23:56:48 +0000 (16:56 -0700)]
[XLA:GPU] Add op-tracing to XLA:GPU.
PiperOrigin-RevId:
196912575
Akshay Modi [Wed, 16 May 2018 23:43:29 +0000 (16:43 -0700)]
Allow for remote eager execution.
PiperOrigin-RevId:
196910675
A. Unique TensorFlower [Wed, 16 May 2018 23:16:46 +0000 (16:16 -0700)]
Remove unused inclusions
PiperOrigin-RevId:
196906815
Jeremy Lau [Wed, 16 May 2018 22:54:49 +0000 (15:54 -0700)]
Move DoesNotUseOperandBuffer and CanShareOperandBufferWithUser from
liveness_util to methods on TuplePointsToAnalysis and HloDataflowAnalysis.
PiperOrigin-RevId:
196903216
Dimitris Vardoulakis [Wed, 16 May 2018 22:53:34 +0000 (15:53 -0700)]
[TF:XLA] Take subcomputations into account during HLO scheduling.
In the List scheduler, if an instruction calls subcomputations, we count the memory usage of the subcomputation towards the memory usage of the parent instruction.
PiperOrigin-RevId:
196903042
A. Unique TensorFlower [Wed, 16 May 2018 22:28:11 +0000 (15:28 -0700)]
Fixing test for TopK kernel in TFLite
PiperOrigin-RevId:
196899232
Mustafa Ispir [Wed, 16 May 2018 22:27:34 +0000 (15:27 -0700)]
Make sparse_cross operations publicly available.
PiperOrigin-RevId:
196899145
Jacques Pienaar [Wed, 16 May 2018 22:02:06 +0000 (15:02 -0700)]
Add test for 64-bit clz and sign.
PiperOrigin-RevId:
196894702
Sanjoy Das [Wed, 16 May 2018 22:01:22 +0000 (15:01 -0700)]
Fix typo in comment
PiperOrigin-RevId:
196894582
A. Unique TensorFlower [Wed, 16 May 2018 21:53:11 +0000 (14:53 -0700)]
Add a parameter to the adaptive shared batcher which allows the user to set a lower bound for in_flight_batches_limit.
This can help prevent overloads which may occur during large traffic shifts - a small value learned during a period of low load can be unsuitable at high load.
PiperOrigin-RevId:
196893320
Allen Lavoie [Wed, 16 May 2018 20:52:54 +0000 (13:52 -0700)]
Checkpointable: move python/training/checkpointable_* to python/training/checkpointable/
Need to add some new checkpointable files in core (specifically I had some checkpointable data structures in mind), and prefixing more files with "checkpointable_" in python/training/ seems dirty.
No functional changes, just some branching and build/import fiddling.
PiperOrigin-RevId:
196883136
Peter Hawkins [Wed, 16 May 2018 20:34:10 +0000 (13:34 -0700)]
Automated g4 rollback of changelist
196691101
PiperOrigin-RevId:
196879933
A. Unique TensorFlower [Wed, 16 May 2018 20:27:49 +0000 (13:27 -0700)]
BUILD cleanup in contrib/lite/...
PiperOrigin-RevId:
196878865
Brian Patton [Wed, 16 May 2018 20:22:53 +0000 (13:22 -0700)]
Fix the gradient of reduce_prod for complex dtypes.
Fixes #12514
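A quick NumPy check of the identity the gradient must satisfy for complex inputs (illustrative only, not the code of the fix): d(prod(x))/dx_i is the product of all other elements, and since prod is linear in each x_i a one-sided difference quotient recovers it exactly up to rounding.
import numpy as np

x = np.array([1 + 2j, 3 - 1j, 0.5 + 0.5j], dtype=np.complex64)
y = np.prod(x)

analytic = np.array([np.prod(np.delete(x, i)) for i in range(x.size)])

h = 1e-3
numeric = np.array([(np.prod(x + h * np.eye(x.size)[i]) - y) / h
                    for i in range(x.size)])
print(np.allclose(analytic, numeric, atol=1e-3))  # True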
PiperOrigin-RevId:
196878148
Jacques Pienaar [Wed, 16 May 2018 20:12:25 +0000 (13:12 -0700)]
Remove sorted() since the types are not sortable.
PiperOrigin-RevId:
196876502
A. Unique TensorFlower [Wed, 16 May 2018 19:54:32 +0000 (12:54 -0700)]
Fix broken link.
PiperOrigin-RevId:
196873792
Igor Ganichev [Wed, 16 May 2018 19:50:41 +0000 (12:50 -0700)]
Add a test for compiled tfe.defun in GradientTape
PiperOrigin-RevId:
196873235
Ayush Dubey [Wed, 16 May 2018 19:32:04 +0000 (12:32 -0700)]
Remove redundant initialization of collective params.
subdiv_permutations is being resized twice in GenerateSubdivParams.
PiperOrigin-RevId:
196870781
A. Unique TensorFlower [Wed, 16 May 2018 19:24:59 +0000 (12:24 -0700)]
Use sequence_length arg for dynamic_rnn within RNNEstimator
This does not affect correctness, but improves performance by skipping padded
parts of the sequence. Also correct documentation in rnn.py that states the
opposite.
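The underlying tf.nn.dynamic_rnn behavior being relied on, in a standalone sketch (RNNEstimator internals not shown):
import tensorflow as tf

batch_size, max_time, input_size = 4, 10, 8
inputs = tf.random_normal([batch_size, max_time, input_size])
lengths = tf.constant([10, 7, 3, 5])  # true, unpadded length of each example

cell = tf.nn.rnn_cell.BasicLSTMCell(16)
outputs, state = tf.nn.dynamic_rnn(cell, inputs,
                                   sequence_length=lengths,
                                   dtype=tf.float32)
# With sequence_length set, computation stops at each example's true length;
# outputs past that point are zeroed and the final state is copied through.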
PiperOrigin-RevId:
196869793
Nick Desaulniers [Wed, 16 May 2018 19:21:35 +0000 (12:21 -0700)]
[TF:XLA:CPU] enable s32 reduce-window
PiperOrigin-RevId:
196869296
A. Unique TensorFlower [Wed, 16 May 2018 19:16:33 +0000 (12:16 -0700)]
Fix the CCFLAGS mismatch.
PiperOrigin-RevId:
196868601
Jacques Pienaar [Wed, 16 May 2018 19:15:37 +0000 (12:15 -0700)]
Expand tests to include int64 output type.
PiperOrigin-RevId:
196868485
A. Unique TensorFlower [Wed, 16 May 2018 19:04:13 +0000 (12:04 -0700)]
Turn off MirroredStrategy Dataset prefetching in tests when using the
combinations library. It adds some small non-determinism to the input
batches which can make tests flaky.
Also add a default DistributionStrategy combination.
PiperOrigin-RevId:
196866569
Michael Case [Wed, 16 May 2018 18:51:48 +0000 (11:51 -0700)]
Internal Change.
PiperOrigin-RevId:
196864489
Benjamin Kramer [Wed, 16 May 2018 18:45:56 +0000 (11:45 -0700)]
[XLA:GPU] Emit the final write of the tuple pointers
Turns out this doesn't matter when the fusion is emitted as a kernel, but does
when the whole thing is inlined. Oops.
PiperOrigin-RevId:
196863545
Nupur Garg [Wed, 16 May 2018 17:46:46 +0000 (10:46 -0700)]
Fixes tflite_diff script.
PiperOrigin-RevId:
196852157
Benjamin Kramer [Wed, 16 May 2018 17:40:57 +0000 (10:40 -0700)]
[XLA:GPU] Teach ir_emitter_nested how to deal with multi output loop fusion
Most of the plumbing is there already, just set up a loop emitter with a target
for each tuple element. For a simple case the output looks reasonable, though I
haven't checked correctness of anything complex.
PiperOrigin-RevId:
196850926
Michael Case [Wed, 16 May 2018 17:39:46 +0000 (10:39 -0700)]
Remove more Estimator dependencies from core TensorFlow.
Cleaning up some imports and deps I left behind that are no
longer used.
PiperOrigin-RevId:
196850661
A. Unique TensorFlower [Wed, 16 May 2018 17:20:37 +0000 (10:20 -0700)]
Removed C++ ABSL includes from tensorflow/core and tensorflow/compiler.
This is necessary to prevent undefined behavior caused by ODR violations, which could occur if different versions of ABSL are linked together.
PiperOrigin-RevId:
196847315
Jianwei Xie [Wed, 16 May 2018 17:18:05 +0000 (10:18 -0700)]
Add TPUContext for input_fn invocation.
PiperOrigin-RevId:
196846795
Nick Desaulniers [Wed, 16 May 2018 17:03:12 +0000 (10:03 -0700)]
[TF:XLA:INTERPRETER] speed up select and scatter by avoiding memory allocation in loops
HandleSelectAndScatter() has 2 IterateThroughWindow() blocks. Before, we spent (in percent total program time):
11.98% Literal::CreateR0() = 10.82% (block1) + 1.16% (block2)
4.91% Literal::~Literal() = 4.44% (block1) + 0.51% (block2)
1.52% operator delete = 1.38% (block1) + 0.14% (block2)
=====
18.41% total
After:
1.99% Literal::~Literal() = 1.83% (block1) + 0.16% (block2)
0.68% operator delete = 0.61% (block1) + 0.07% (block2)
=====
2.67% total
PiperOrigin-RevId:
196844177
A. Unique TensorFlower [Wed, 16 May 2018 17:02:30 +0000 (10:02 -0700)]
Add tf.contrib.data.make_tf_record_dataset() like make_csv_dataset() and
make_batched_features_dataset(), that is easy and fast by default.
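A hedged sketch of the intended call; the argument names below are assumed by analogy with make_csv_dataset() and make_batched_features_dataset():
import tensorflow as tf

dataset = tf.contrib.data.make_tf_record_dataset(
    file_pattern="/path/to/records-*.tfrecord",  # placeholder path
    batch_size=32)
# The helper is meant to pick reasonable defaults for shuffling, parallel
# reads, and batching, in the spirit of make_csv_dataset().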
PiperOrigin-RevId:
196844073
Younghee Kwon [Wed, 16 May 2018 16:44:48 +0000 (09:44 -0700)]
boosted_trees: accept integer labels properly, the same as float labels; added tests for labels.
PiperOrigin-RevId:
196841265
A. Unique TensorFlower [Wed, 16 May 2018 16:26:51 +0000 (09:26 -0700)]
Don't initialize GPUs if none will be used.
PiperOrigin-RevId:
196838739
A. Unique TensorFlower [Wed, 16 May 2018 16:17:15 +0000 (09:17 -0700)]
Migrating BestModelExportStrategy to core library.
PiperOrigin-RevId:
196837506
A. Unique TensorFlower [Wed, 16 May 2018 15:45:28 +0000 (08:45 -0700)]
Modify tf.contrib.distributions.BatchReshape to behave a bit more like
tf.reshape: accept a single unknown dimension and infer partial shape
information statically.
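Sketch of the tf.reshape-like behavior described above; -1 marks the single unknown dimension, and the argument names are assumed:
import tensorflow as tf
tfd = tf.contrib.distributions

base = tfd.Normal(loc=tf.zeros([12]), scale=tf.ones([12]))  # batch shape [12]
reshaped = tfd.BatchReshape(distribution=base, batch_shape=[3, -1])
# The -1 is inferred as 4, so the reshaped batch shape is [3, 4].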
PiperOrigin-RevId:
196833267
A. Unique TensorFlower [Wed, 16 May 2018 15:35:29 +0000 (08:35 -0700)]
Resolved inconsistency with shape inference for tf.reduce_join when passing non-Tensor values.
Removed deprecated arguments in tf.reduce_join test.
PiperOrigin-RevId:
196832183
Peter Hawkins [Wed, 16 May 2018 15:10:02 +0000 (08:10 -0700)]
[XLA] Expose MinimumMemoryForComputation in hlo_scheduling.h
PiperOrigin-RevId:
196829414
A. Unique TensorFlower [Wed, 16 May 2018 14:47:44 +0000 (07:47 -0700)]
Add support libraries in core/platform.
PiperOrigin-RevId:
196826860
A. Unique TensorFlower [Wed, 16 May 2018 14:01:04 +0000 (07:01 -0700)]
Migrate HloExecutionProfileTest to textual HLO
Also add lhs_contracting_dims and rhs_contracting_dims to make the test more realistic.
Before, the dot operation was created with CreateBinary instead of CreateCanonicalDot.
PiperOrigin-RevId:
196822255
A. Unique TensorFlower [Wed, 16 May 2018 13:30:19 +0000 (06:30 -0700)]
Employ array flat sizes more directly in optimized_ops, some places in reference_ops.h.
PiperOrigin-RevId:
196819423
A. Unique TensorFlower [Wed, 16 May 2018 12:20:47 +0000 (05:20 -0700)]
internal change
PiperOrigin-RevId:
196813574
A. Unique TensorFlower [Wed, 16 May 2018 12:11:33 +0000 (05:11 -0700)]
Refactor HloInstruction::Fuse and add a method for multi-output fusion.
PiperOrigin-RevId:
196813042
Alexandre Passos [Wed, 16 May 2018 10:56:17 +0000 (03:56 -0700)]
Improving variable_scope documentation.
PiperOrigin-RevId:
196807465
A. Unique TensorFlower [Wed, 16 May 2018 10:43:10 +0000 (03:43 -0700)]
Implementation of transpose_conv
PiperOrigin-RevId:
196806646
Tom Hennigan [Wed, 16 May 2018 06:54:39 +0000 (23:54 -0700)]
Add performance notes for in-context gradient calls.
Also:
* Add _{start,stop}_recording methods to GradientTape.
* Add performance notes when calling gradient in recording context for
persistent tapes.
* s/tfe.GradientTape/tf.GradientTape/ in docstrings.
PiperOrigin-RevId:
196786148
David Majnemer [Wed, 16 May 2018 06:00:32 +0000 (23:00 -0700)]
[TF:XLA] Make softplus more accurate
The softplus function computes log(exp(x) + 1).
We computed it this way but with special cases to handle underflow and
overflow.
This was done by comparing the input against a quantity with the magnitude
13.94238515. Note that this quantity is not representable as a single
precision float and is instead rounded to 13.9423847.
If softplus would overflow, it will be approximated as x.
If softplus would underflow, it will be approximated as exp(x).
Unfortunately, this can provide inaccurate results for negative floats close to
the threshold.
For example: consider x = -13.9274826049805. softplus(x) is ~8.94068849e-7;
rounded to the nearest single precision float, this is 8.940689e-7.
In this case, x is quite close to the underflow threshold but not close enough
to be approximated by exp(x) == 8.94069273e-7.
Rather, it gets calculated using the canonical definition of softplus and comes
to 8.34464686e-7.
This result comes out to be wrong by 1,048,568 ULPs.
Instead, we can compute it the way one would compute LogSumExp(x, 0):
max(x, 0) + log(exp(x - max(x, 0)) + exp(0 - max(x, 0)))
When x is positive, this is:
x + log(exp(0) + exp(-x))
When x is negative, this is:
log(exp(x) + exp(0))
When x is 0, this is:
log(exp(0) + exp(0))
exp(0) evaluates to 1 which gives us:
if x is positive, x + log(1 + exp(-x))
if x is negative, log(exp(x) + 1)
if x is zero, log(2)
These three cases can be combined like so:
max(x, 0) + log(exp(-abs(x)) + 1)
Further, we can increase the fidelity of the log calculation by using log1p:
max(x, 0) + log1p(exp(-abs(x)))
This computation naturally handles underflow and overflow while also providing
more numerically accurate results for a few small, positive, floating point
values.
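A quick NumPy check of the identity derived above, using the example value from this commit message (float32 throughout):
import numpy as np

def softplus_naive(x):
  return np.log(np.exp(x) + 1.0)

def softplus_stable(x):
  return np.maximum(x, 0.0) + np.log1p(np.exp(-np.abs(x)))

x = np.array([-13.9274826049805], dtype=np.float32)
print(softplus_naive(x))   # ~8.3446e-07: the inaccurate result described above
print(softplus_stable(x))  # ~8.9407e-07: close to the correctly rounded value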
PiperOrigin-RevId:
196782814
Derek Murray [Wed, 16 May 2018 05:55:44 +0000 (22:55 -0700)]
Fix bug in `WorkerService::Logging()` handler.
Since transitioning to proto3, it was not possible to distinguish between the absence of
LoggingRequest::rpc_logging and it being set to false. This led to a bug that ignored
log-disabling messages in some implementations, which meant that logging was never
disabled. This fix adds explicit fields in LoggingRequest for enabling and disabling RPC
logging.
PiperOrigin-RevId:
196782547