Use the `use_gpu` and `force_gpu` options to control where ops are run. If
`force_gpu` is True, all ops are pinned to `/device:GPU:0`. Otherwise, if
`use_gpu` is True, TensorFlow tries to run as many ops on the GPU as
possible. If both `force_gpu` and `use_gpu` are False, all ops are pinned to
the CPU.
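
These options are typically exercised through a `tf.test.TestCase` test
session. A minimal sketch, assuming the TF 1.x `test_session` API and
graph-mode execution (the test class and values here are illustrative):

```python
import tensorflow as tf


class SquareTest(tf.test.TestCase):

  def testSquareOnGpu(self):
    # use_gpu=True lets TensorFlow place as many ops as possible on the
    # GPU, falling back to the CPU when none is available; force_gpu=True
    # would instead pin every op to /device:GPU:0.
    with self.test_session(use_gpu=True):
      x = tf.square([2, 3])
      self.assertAllEqual(x.eval(), [4, 9])


if __name__ == "__main__":
  tf.test.main()
```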
To run such a test on GPU under Bazel, declare it with `cuda_py_test`. Example BUILD file:
```python
load("//tensorflow:tensorflow.bzl", "cuda_py_test", "py_test")

package(default_visibility = ["//visibility:public"])

config_setting(
    name = "empty_condition",
    # (condition values elided in the original snippet)
)

cuda_py_test(
    name = "multi_gpu_utils_test",
    srcs = ["_impl/keras/utils/multi_gpu_utils_test.py"],
    additional_deps = [
        ":keras",
        "//third_party/py/numpy",
        "//tensorflow/python:client_testlib",
    ],
    tags = [
        "guitar",
        "multi_gpu",
    ],
)
```
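
`cuda_py_test` builds the Python test with GPU support and, unlike `py_test`,
takes its dependencies via `additional_deps`; tags such as `multi_gpu` let the
test infrastructure schedule the target on multi-GPU machines. A target like
this is typically run with `bazel test --config=cuda <target>`.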