# Neural Style Transfer Python* Sample
This topic demonstrates how to run the Neural Style Transfer sample application, which performs
inference of style transfer models.
> **NOTE**: The OpenVINO™ toolkit does not include a pre-trained model to run the Neural Style Transfer sample. A public model from the [Zhaw's Neural Style Transfer repository](https://github.com/zhaw/neural_style) can be used. Read the [Converting a Style Transfer Model from MXNet*](./docs/MO_DG/prepare_model/convert_model/mxnet_specific/Convert_Style_Transfer_From_MXNet.md) topic from the [Model Optimizer Developer Guide](./docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) to learn about how to get the trained model and how to convert it to the Inference Engine format (\*.xml + \*.bin).
Running the application with the `-h` option yields the following usage message:
```
usage: style_transfer_sample.py [-h] -m MODEL -i INPUT [INPUT ...]
                                [-l CPU_EXTENSION] [-d DEVICE]
                                [-nt NUMBER_TOP]
                                [--mean_val_r MEAN_VAL_R]
                                [--mean_val_g MEAN_VAL_G]
                                [--mean_val_b MEAN_VAL_B]
Options:
  -h, --help            Show this help message and exit.
  -m MODEL, --model MODEL
                        Required. Path to an .xml file with a trained model.
  -i INPUT [INPUT ...], --input INPUT [INPUT ...]
                        Required. Path to a folder with images or path to
                        image files
  -l CPU_EXTENSION, --cpu_extension CPU_EXTENSION
                        Optional. Required for CPU custom layers. Absolute
                        path to a shared library with the MKLDNN
                        (CPU)-targeted custom layer implementations
  -d DEVICE, --device DEVICE
                        Optional. Specify the target device to infer on; CPU,
                        GPU, FPGA, HDDL or MYRIAD is acceptable. Sample will
                        look for a suitable plugin for device specified.
                        Default value is CPU
  -nt NUMBER_TOP, --number_top NUMBER_TOP
                        Optional. Number of top results
  --mean_val_r MEAN_VAL_R, -mean_val_r MEAN_VAL_R
                        Optional. Mean value of red channel for mean value
                        subtraction in postprocessing
  --mean_val_g MEAN_VAL_G, -mean_val_g MEAN_VAL_G
                        Optional. Mean value of green channel for mean value
                        subtraction in postprocessing
  --mean_val_b MEAN_VAL_B, -mean_val_b MEAN_VAL_B
                        Optional. Mean value of blue channel for mean value
                        subtraction in postprocessing
```
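The `--mean_val_r/g/b` options supply the per-channel mean values that were subtracted from the input during training, so they can be added back to the network output during postprocessing. The following is a minimal sketch of that step for a single pixel, not the sample's actual code; the (B, G, R) channel order and the 8-bit clamping are assumptions here.

```python
def postprocess_pixel(pixel, mean_val_b=0.0, mean_val_g=0.0, mean_val_r=0.0):
    """Add per-channel means back to a (B, G, R) output pixel and clamp to [0, 255]."""
    means = (mean_val_b, mean_val_g, mean_val_r)
    return tuple(int(min(max(value + mean, 0), 255)) for value, mean in zip(pixel, means))

# Raw network outputs can fall outside the displayable range; clamping keeps
# the restored values valid 8-bit intensities.
print(postprocess_pixel((-10.0, 100.5, 300.0),
                        mean_val_b=104.0, mean_val_g=117.0, mean_val_r=123.0))
# → (94, 217, 255)
```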
## Demo Output
The application outputs an image (`out1.bmp`) or a sequence of images (`out1.bmp`, ..., `out<N>.bmp`), which are redrawn in the style of the style transfer model used for the sample.
## See Also
* [Using Inference Engine Samples](./docs/IE_DG/Samples_Overview.md)
The sample builds its command-line parser as follows:

```python
from argparse import ArgumentParser, SUPPRESS


def build_argparser():
    parser = ArgumentParser(add_help=False)
    args = parser.add_argument_group('Options')
    args.add_argument('-h', '--help', action='help', default=SUPPRESS,
                      help='Show this help message and exit.')
    args.add_argument("-m", "--model", help="Required. Path to an .xml file with a trained model.",
                      required=True, type=str)
    args.add_argument("-i", "--input", help="Required. Path to a folder with images or path to image files",
                      required=True, type=str, nargs="+")
    args.add_argument("-l", "--cpu_extension",
                      help="Optional. Required for CPU custom layers. Absolute path to a shared "
                           "library with the MKLDNN (CPU)-targeted custom layer implementations",
                      type=str, default=None)
    args.add_argument("-d", "--device",
                      help="Optional. Specify the target device to infer on; CPU, GPU, FPGA, HDDL "
                           "or MYRIAD is acceptable. Sample will look for a suitable plugin for "
                           "device specified. Default value is CPU", default="CPU", type=str)
    args.add_argument("-nt", "--number_top", help="Optional. Number of top results",
                      default=10, type=int)
    args.add_argument("--mean_val_r", "-mean_val_r",
                      help="Optional. Mean value of red channel for mean value subtraction in "
                           "postprocessing", default=0, type=float)
    args.add_argument("--mean_val_g", "-mean_val_g",
                      help="Optional. Mean value of green channel for mean value subtraction in "
                           "postprocessing", default=0, type=float)
    args.add_argument("--mean_val_b", "-mean_val_b",
                      help="Optional. Mean value of blue channel for mean value subtraction in "
                           "postprocessing", default=0, type=float)
    return parser
```
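As a quick sanity check, the required options and the defaults can be exercised directly with `parse_args`. The snippet below rebuilds a reduced version of the parser (only a few of the sample's arguments, so the definitions here mirror but do not replace the ones above) and parses a hypothetical command line; `model.xml` and the image names are placeholders.

```python
from argparse import ArgumentParser, SUPPRESS

# Reduced version of the sample's parser, for demonstration only.
parser = ArgumentParser(add_help=False)
args = parser.add_argument_group('Options')
args.add_argument('-h', '--help', action='help', default=SUPPRESS)
args.add_argument("-m", "--model", required=True, type=str)
args.add_argument("-i", "--input", required=True, type=str, nargs="+")
args.add_argument("-d", "--device", default="CPU", type=str)
args.add_argument("--mean_val_r", "-mean_val_r", default=0, type=float)

# -m and -i are required; -d and --mean_val_r fall back to their defaults.
parsed = parser.parse_args(['-m', 'model.xml', '-i', 'img1.bmp', 'img2.bmp'])
print(parsed.model, parsed.input, parsed.device, parsed.mean_val_r)
# → model.xml ['img1.bmp', 'img2.bmp'] CPU 0
```

Because `-i` is declared with `nargs="+"`, all image paths following it are collected into a single list, which is how the sample supports processing several inputs in one run.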