--- /dev/null
+.. _Retina_Model:
+
+Discovering the human retina and its use for image processing
+*************************************************************
+
+Goal
+=====
+
+I present here a model of the human retina that shows some interesting properties for image preprocessing and enhancement.
+In this tutorial you will learn how to:
+
+.. container:: enumeratevisibleitemswithsquare
+
+ + discover the two main channels coming out of your retina
+
+ + see the basics to use the retina model
+
+ + discover some parameter tweaks
+
+
+General overview
+================
+
+The proposed model originates from Jeanny Herault's research [herault2010]_ at `Gipsa <http://www.gipsa-lab.inpg.fr>`_. It is used in image processing applications at the `Listic <http://www.listic.univ-savoie.fr>`_ lab (code maintainer and user). This is not a complete model, but it already presents interesting properties that can be used for enhanced image processing. The model allows the following human retina properties to be used :
+
+* spectral whitening that has 3 important effects: high spatio-temporal frequency signal cancelling (noise), mid-frequency detail enhancement and low frequency luminance energy reduction. This *all in one* property directly allows visual signals to be cleaned of the classical undesired distortions introduced by image sensors and the input luminance range.
+
+* local logarithmic luminance compression allows details to be enhanced even in low light conditions.
+
+* decorrelation of the details information (Parvocellular output channel) and transient information (events, motion made available at the Magnocellular output channel).
+
+The first two points are illustrated below :
+
+The figure below shows the OpenEXR image sample *CrissyField.exr*, a High Dynamic Range image. In order to make it visible on this web page, the original input image is linearly rescaled to the classical image luminance range [0-255] and converted to 8bit/channel format. Such a strong conversion hides many details because of too strong local contrasts. Furthermore, noise energy is also strong and pollutes visual information.
+
+.. image:: images/retina_TreeHdr_small.jpg
+ :alt: A High dynamic range image linearly rescaled within range [0-255].
+ :align: center
+
+In the following image, applying the ideas proposed in [benoit2010]_, as your retina does, local luminance adaptation, spatial noise removal and spectral whitening work together and transmit accurate information on lower range 8bit data channels. In this picture, noise is significantly removed and local details hidden by strong luminance contrasts are enhanced. The output image keeps its naturalness and the visual content is enhanced. Color processing is based on the color multiplexing/demultiplexing method proposed in [chaix2007]_.
+
+.. image:: images/retina_TreeHdr_retina.jpg
+ :alt: A High dynamic range image compressed within range [0-255] using the retina.
+ :align: center
+
+
+*Note :* the image sample can be downloaded from the `OpenEXR website <http://www.openexr.com>`_. For this demonstration, before retina processing, the input image has been linearly rescaled within [0-255], keeping its channels in float format, and 5% of its histogram ends has been cut (this mostly removes wrong HDR pixels). Check out the sample *opencv/samples/cpp/OpenEXRimages_HighDynamicRange_Retina_toneMapping.cpp* for similar processing. The following demonstration only considers classical 8bit/channel images.
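+
+As an illustration, here is a minimal sketch of such a preprocessing step. It is a hypothetical helper (not the exact code of the referenced sample) that assumes a CV_32FC3 BGR input and linearly rescales it to [0-255] after clipping 5% of the luminance histogram at each end :
+
+.. code-block:: cpp
+
+ // hypothetical helper (sketch only) : rescale a float BGR image to [0-255]
+ // after clipping 'clipRatio' of the luminance histogram at each end (e.g. 0.05 for 5%)
+ static cv::Mat rescaleForRetina(const cv::Mat &hdrImage, float clipRatio=0.05f)
+ {
+     // estimate the clipping bounds on a sorted gray level copy
+     cv::Mat gray, sorted;
+     cv::cvtColor(hdrImage, gray, cv::COLOR_BGR2GRAY);
+     cv::sort(gray.reshape(1, 1), sorted, cv::SORT_EVERY_ROW + cv::SORT_ASCENDING);
+     const int n = sorted.cols;
+     const float lowBound  = sorted.at<float>(cvRound(clipRatio * (n - 1)));
+     const float highBound = sorted.at<float>(cvRound((1.f - clipRatio) * (n - 1)));
+     const float range = (highBound > lowBound) ? (highBound - lowBound) : 1.f;
+     // linear rescaling, keeping the float format expected before retina processing
+     cv::Mat rescaled = (hdrImage - cv::Scalar::all(lowBound)) * (255.f / range);
+     rescaled = cv::max(rescaled, 0.f);
+     rescaled = cv::min(rescaled, 255.f);
+     return rescaled;
+ }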
+
+The retina model output channels
+================================
+
+The retina model presents two outputs that benefit from the above cited behaviors.
+
+* The first one is called the Parvocellular channel. It is mainly active in the foveal retina area (high resolution central vision with color sensitive photo-receptors); its aim is to provide accurate color vision for visual details remaining static on the retina. On the other hand, objects moving across the retina projection are blurred.
+
+* The second well known channel is the Magnocellular channel. It is mainly active in the retina peripheral vision and sends signals related to change events (motion, transient events, etc.). These outgoing signals also help the visual system to focus/center the retina on 'transient'/moving areas for more detailed analysis, thus improving visual scene context and object classification.
+
+**NOTE :** regarding the proposed model, contrary to the real retina, we apply these two channels on the entire input image using the same resolution. This allows enhanced visual details and motion information to be extracted from all the considered images... but remember that these two channels are complementary. For example, if the Magnocellular channel gives strong energy in an area, then the Parvocellular channel is certainly blurred there since there is a transient event.
+
+As an illustration, we apply the retina model below to a webcam video stream of a dark visual scene. In this visual scene, captured in a university amphitheater, some students are moving while talking to the teacher.
+
+In this video sequence, because of the dark ambiance, the signal to noise ratio is low and color artifacts are present on visual feature edges because of the low quality image capture tool-chain.
+
+.. image:: images/studentsSample_input.jpg
+ :alt: an input video stream extract sample
+ :align: center
+
+Below, the retina foveal vision is applied on the entire image. In the retina configuration used here, global luminance is preserved and local contrasts are enhanced. The signal to noise ratio is also improved : since high frequency spatio-temporal noise is reduced, the enhanced details are not corrupted by any enhanced noise.
+
+.. image:: images/studentsSample_parvo.jpg
+ :alt: the retina Parvocellular output. Enhanced details, luminance adaptation and noise removal. A processing tool for image analysis.
+ :align: center
+
+Below is the output of the Magnocellular channel of the retina model. Its signals are strong where transient events occur. Here, a student is moving at the bottom of the image, thus generating high energy. The rest of the image is static; however, it is corrupted by strong noise. Here, the retina filters out most of the noise, thus generating few false motion area 'alarms'. This channel can be used as a transient/moving areas detector : it would provide relevant information for a low cost segmentation tool that would highlight areas in which an event is occurring (a minimal detector sketch is given after the figure below).
+
+.. image:: images/studentsSample_magno.jpg
+ :alt: the retina Magnocellular output. Enhanced transient signals (motion, etc.). A preprocessing tool for event detection.
+ :align: center
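+
+As a complement, here is a minimal sketch of such a detector (assuming a *myRetina* instance and a processing loop as in the code tutorial below; the threshold value is an arbitrary choice) :
+
+.. code-block:: cpp
+
+ // sketch only : build a rough 'transient areas' mask from the Magnocellular output
+ cv::Mat magnoOutput, transientAreasMask;
+ myRetina->getMagno(magnoOutput); // 8bit output, rescaled within [0-255]
+ cv::threshold(magnoOutput, transientAreasMask, 64, 255, cv::THRESH_BINARY); // 64 : arbitrary energy threshold
+ cv::imshow("transient areas mask", transientAreasMask);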
+
+Retina use case
+===============
+
+This model can be used for basic spatio-temporal video effects, but also with the aim of :
+
+* performing texture analysis with an enhanced signal to noise ratio and enhanced details that are robust against input image luminance ranges (check out the Parvocellular retina channel output)
+
+* performing motion analysis that also benefits from the previously cited properties.
+
+Literature
+==========
+For more information, refer to the following papers :
+
+.. [benoit2010] Benoit A., Caplier A., Durette B., Herault, J., "Using Human Visual System Modeling For Bio-Inspired Low Level Image Processing", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773. DOI <http://dx.doi.org/10.1016/j.cviu.2010.01.011>
+
+* Please have a look at the reference work of Jeanny Herault that you can read in his book :
+
+.. [herault2010] Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing),By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
+
+This retina filter code includes the research contributions of PhD/research colleagues whose code has been redrawn by the author :
+
+* take a look at the *retinacolor.hpp* module to discover Brice Chaix de Lavarene's PhD work on color mosaicing/demosaicing and his reference paper:
+
+.. [chaix2007] B. Chaix de Lavarene, D. Alleysson, B. Durette, J. Herault (2007). "Efficient demosaicing through recursive filtering", IEEE International Conference on Image Processing ICIP 2007
+
+* take a look at *imagelogpolprojection.hpp* to discover the retina spatial log sampling which originates from Barthelemy Durette's PhD work with Jeanny Herault. A Retina / V1 cortex projection is also proposed, originating from Jeanny's discussions. More information can be found in Jeanny Herault's book cited above.
+
+Code tutorial
+=============
+
+Please refer to the original tutorial source code in file *opencv_folder/samples/cpp/tutorial_code/bioinspired/retina_tutorial.cpp*.
+
+To compile it, assuming OpenCV is correctly installed, use the following command. It requires the opencv_core *(cv::Mat and friends objects management)*, opencv_highgui *(display and image/video read)* and opencv_bioinspired *(Retina description)* libraries to link.
+
+.. code-block:: bash
+
+ # compile
+ g++ retina_tutorial.cpp -o Retina_tuto -lopencv_core -lopencv_highgui -lopencv_bioinspired
+
+ # Run commands : add 'log' as a last parameter to apply a spatial log sampling (simulates retina sampling)
+ # run on webcam
+ ./Retina_tuto -video
+ # run on video file
+ ./Retina_tuto -video myVideo.avi
+ # run on an image
+ ./Retina_tuto -image myPicture.jpg
+ # run on an image with log sampling
+ ./Retina_tuto -image myPicture.jpg log
+
+Here is a code explanation :
+
+The Retina definition is present in the bioinspired package and a simple include allows you to use it
+
+.. code-block:: cpp
+
+ #include "opencv2/opencv.hpp"
+
+Provide the user with some hints on how to run the program, using a help function
+
+.. code-block:: cpp
+
+ // the help procedure
+ static void help(std::string errorMessage)
+ {
+ std::cout<<"Program init error : "<<errorMessage<<std::endl;
+ std::cout<<"\nProgram call procedure : retinaDemo [processing mode] [Optional : media target] [Optional LAST parameter: \"log\" to activate retina log sampling]"<<std::endl;
+ std::cout<<"\t[processing mode] :"<<std::endl;
+ std::cout<<"\t -image : for still image processing"<<std::endl;
+ std::cout<<"\t -video : for video stream processing"<<std::endl;
+ std::cout<<"\t[Optional : media target] :"<<std::endl;
+ std::cout<<"\t if processing an image or video file, then, specify the path and filename of the target to process"<<std::endl;
+ std::cout<<"\t leave empty if processing video stream coming from a connected video device"<<std::endl;
+ std::cout<<"\t[Optional : activate retina log sampling] : an optional last parameter can be specified for retina spatial log sampling"<<std::endl;
+ std::cout<<"\t set \"log\" without quotes to activate this sampling, output frame size will be divided by 4"<<std::endl;
+ std::cout<<"\nExamples:"<<std::endl;
+ std::cout<<"\t-Image processing : ./retinaDemo -image lena.jpg"<<std::endl;
+ std::cout<<"\t-Image processing with log sampling : ./retinaDemo -image lena.jpg log"<<std::endl;
+ std::cout<<"\t-Video processing : ./retinaDemo -video myMovie.mp4"<<std::endl;
+ std::cout<<"\t-Live video processing : ./retinaDemo -video"<<std::endl;
+ std::cout<<"\nPlease start again with new parameters"<<std::endl;
+ std::cout<<"****************************************************"<<std::endl;
+ std::cout<<" NOTE : this program generates the default retina parameters file 'RetinaDefaultParameters.xml'"<<std::endl;
+ std::cout<<" => you can use this to fine tune parameters and load them if you save to file 'RetinaSpecificParameters.xml'"<<std::endl;
+ }
+
+Then, start the main program and first declare a *cv::Mat* matrix in which input images will be loaded. Also allocate a *cv::VideoCapture* object ready to load video streams (if necessary)
+
+.. code-block:: cpp
+
+ int main(int argc, char* argv[]) {
+ // declare the retina input buffer... that will be fed differently depending on the input media
+ cv::Mat inputFrame;
+ cv::VideoCapture videoCapture; // in case a video media is used, its manager is declared here
+
+
+In the main program, before processing, first check the input command parameters. Here a first input image is loaded, either from a single image (if the user chose the *-image* command) or from a video stream (if the user chose the *-video* command). Also, if the user added the *log* command at the end of the program call, the spatial logarithmic image sampling performed by the retina is taken into account through the Boolean flag *useLogSampling*.
+
+.. code-block:: cpp
+
+ // welcome message
+ std::cout<<"****************************************************"<<std::endl;
+ std::cout<<"* Retina demonstration : demonstrates the use of is a wrapper class of the Gipsa/Listic Labs retina model."<<std::endl;
+ std::cout<<"* This demo will try to load the file 'RetinaSpecificParameters.xml' (if exists).\nTo create it, copy the autogenerated template 'RetinaDefaultParameters.xml'.\nThen twaek it with your own retina parameters."<<std::endl;
+ // basic input arguments checking
+ if (argc<2)
+ {
+ help("bad number of parameter");
+ return -1;
+ }
+
+ bool useLogSampling = !strcmp(argv[argc-1], "log"); // check if user wants retina log sampling processing
+
+ std::string inputMediaType=argv[1];
+
+ //////////////////////////////////////////////////////////////////////////////
+ // checking input media type (still image, video file, live video acquisition)
+ if (!strcmp(inputMediaType.c_str(), "-image") && argc >= 3)
+ {
+ std::cout<<"RetinaDemo: processing image "<<argv[2]<<std::endl;
+ // image processing case
+ inputFrame = cv::imread(std::string(argv[2]), 1); // load image in color (BGR) mode
+ }else
+ if (!strcmp(inputMediaType.c_str(), "-video"))
+ {
+ if (argc == 2 || (argc == 3 && useLogSampling)) // attempt to grab images from a video capture device
+ {
+ videoCapture.open(0);
+ }else// attempt to grab images from a video filestream
+ {
+ std::cout<<"RetinaDemo: processing video stream "<<argv[2]<<std::endl;
+ videoCapture.open(argv[2]);
+ }
+
+ // grab a first frame to check if everything is ok
+ videoCapture>>inputFrame;
+ }else
+ {
+ // bad command parameter
+ help("bad command parameter");
+ return -1;
+ }
+
+Once all input parameters are processed, a first image should have been loaded. If not, display an error and stop the program :
+
+.. code-block:: cpp
+
+ if (inputFrame.empty())
+ {
+ help("Input media could not be loaded, aborting");
+ return -1;
+ }
+
+Now, everything is ready to run the retina model. I propose here to allocate a retina instance and to manage the optional log sampling option. The Retina constructor expects at least a *cv::Size* object that specifies the input data size that will have to be managed. One can activate other options such as color and its related color multiplexing strategy (here Bayer multiplexing is chosen using the enum cv::RETINA_COLOR_BAYER). If using log sampling, the image reduction factor (smaller output images) and the log sampling strength can be adjusted.
+
+.. code-block:: cpp
+
+ // pointer to a retina object
+ cv::Ptr<Retina> myRetina;
+
+ // if the last parameter is 'log', then activate log sampling (favour foveal vision and subsamples peripheral vision)
+ if (useLogSampling)
+ {
+ myRetina = createRetina(inputFrame.size(), true, RETINA_COLOR_BAYER, true, 2.0, 10.0);
+ }
+ else// -> else allocate "classical" retina :
+ myRetina = createRetina(inputFrame.size());
+
+Once done, the proposed code writes a default xml file that contains the default parameters of the retina. This is useful for making your own configuration from this template. The generated template xml file is called *RetinaDefaultParameters.xml*.
+
+.. code-block:: cpp
+
+ // save default retina parameters file in order to let you see this and maybe modify it and reload using method "setup"
+ myRetina->write("RetinaDefaultParameters.xml");
+
+In the following line, the retina attempts to load another xml file called *RetinaSpecificParameters.xml*. If you created it and introduced your own setup, it will be loaded; otherwise, default retina parameters are used.
+
+.. code-block:: cpp
+
+ // load parameters if file exists
+ myRetina->setup("RetinaSpecificParameters.xml");
+
+It is not required here, but just to show that it is possible, you can reset the retina buffers to zero to force it to forget past events.
+
+.. code-block:: cpp
+
+ // reset all retina buffers (imagine you close your eyes for a long time)
+ myRetina->clearBuffers();
+
+Now, it is time to run the retina ! First create some output buffers ready to receive the two retina channel outputs
+
+.. code-block:: cpp
+
+ // declare retina output buffers
+ cv::Mat retinaOutput_parvo;
+ cv::Mat retinaOutput_magno;
+
+Then, run the retina in a loop, load new frames from the video sequence if necessary and get the retina outputs back into dedicated buffers.
+
+.. code-block:: cpp
+
+ // processing loop with no stop condition
+ while(true)
+ {
+ // if using a video stream, grab a new frame, otherwise the input remains the same
+ if (videoCapture.isOpened())
+ videoCapture>>inputFrame;
+
+ // run retina filter on the loaded input frame
+ myRetina->run(inputFrame);
+ // Retrieve and display retina output
+ myRetina->getParvo(retinaOutput_parvo);
+ myRetina->getMagno(retinaOutput_magno);
+ cv::imshow("retina input", inputFrame);
+ cv::imshow("Retina Parvo", retinaOutput_parvo);
+ cv::imshow("Retina Magno", retinaOutput_magno);
+ cv::waitKey(10);
+ }
+
+That's done ! But if you want to make the system more robust, take care to manage exceptions. The retina can throw some when it encounters irrelevant data (no input frame, wrong setup, etc.).
+I therefore recommend surrounding all the retina code with a try/catch block like this :
+
+.. code-block:: cpp
+
+ try{
+ // pointer to a retina object
+ cv::Ptr<cv::Retina> myRetina;
+ [---]
+ // processing loop with no stop condition
+ while(true)
+ {
+ [---]
+ }
+
+ }catch(const cv::Exception &e)
+ {
+ std::cerr<<"Error using Retina : "<<e.what()<<std::endl;
+ }
+
+Retina parameters, what to do ?
+===============================
+
+First, it is recommended to read the reference paper :
+
+* Benoit A., Caplier A., Durette B., Herault, J., *"Using Human Visual System Modeling For Bio-Inspired Low Level Image Processing"*, Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773. DOI <http://dx.doi.org/10.1016/j.cviu.2010.01.011>
+
+Once done, open the configuration file *RetinaDefaultParameters.xml* generated by the demo and let's have a look at it.
+
+.. code-block:: xml
+
+ <?xml version="1.0"?>
+ <opencv_storage>
+ <OPLandIPLparvo>
+ <colorMode>1</colorMode>
+ <normaliseOutput>1</normaliseOutput>
+ <photoreceptorsLocalAdaptationSensitivity>7.5e-01</photoreceptorsLocalAdaptationSensitivity>
+ <photoreceptorsTemporalConstant>9.0e-01</photoreceptorsTemporalConstant>
+ <photoreceptorsSpatialConstant>5.7e-01</photoreceptorsSpatialConstant>
+ <horizontalCellsGain>0.01</horizontalCellsGain>
+ <hcellsTemporalConstant>0.5</hcellsTemporalConstant>
+ <hcellsSpatialConstant>7.</hcellsSpatialConstant>
+ <ganglionCellsSensitivity>7.5e-01</ganglionCellsSensitivity></OPLandIPLparvo>
+ <IPLmagno>
+ <normaliseOutput>1</normaliseOutput>
+ <parasolCells_beta>0.</parasolCells_beta>
+ <parasolCells_tau>0.</parasolCells_tau>
+ <parasolCells_k>7.</parasolCells_k>
+ <amacrinCellsTemporalCutFrequency>2.0e+00</amacrinCellsTemporalCutFrequency>
+ <V0CompressionParameter>9.5e-01</V0CompressionParameter>
+ <localAdaptintegration_tau>0.</localAdaptintegration_tau>
+ <localAdaptintegration_k>7.</localAdaptintegration_k></IPLmagno>
+ </opencv_storage>
+
+Here are some hints, but the best parameter setup actually depends more on what you want to do with the retina than on the input images that you give to it. Apart from the more specific case of High Dynamic Range (HDR) images, which require a specific setup depending on the luminance compression objective, the retina behaviors should be rather stable from content to content. Note that OpenCV is able to manage such HDR formats thanks to the OpenEXR image compatibility.
+
+Then, if the application target requires details enhancement prior to specific image processing, you need to know whether mean luminance information is required or not. If not, the retina can cancel or significantly reduce its energy, thus giving more visibility to higher spatial frequency details.
+
+
+Basic parameters
+----------------
+
+The simplest parameters are the following :
+
+* **colorMode** : lets the retina process color information (if 1) or gray scale images (if 0). In the latter case, only the first channel of the input will be processed.
+
+* **normaliseOutput** : each channel has this parameter; if its value is 1, then the considered channel output is rescaled between 0 and 255. Be careful in this case with the Magnocellular output (motion/transient channel detection) : residual noise will also be rescaled !
+
+**Note :** using color requires color channel multiplexing/demultiplexing which requires more processing. You can expect much faster processing using gray levels : it requires around 30 products per pixel for all the retina processes and it has recently been parallelized for multicore architectures. A minimal allocation sketch of a gray level retina is shown below.
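+
+For instance, here is a minimal allocation sketch of a gray level retina (following the allocation code of the tutorial above; the second *createRetina* argument is the color mode) :
+
+.. code-block:: cpp
+
+ // sketch : allocate a gray level retina (colorMode=false), faster than the color version
+ cv::Ptr<Retina> myGrayRetina = createRetina(inputFrame.size(), false);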
+
+Photo-receptors parameters
+--------------------------
+
+The following parameters act on the entry point of the retina - the photo-receptors - and impact all the following processes. These sensors are low pass spatio-temporal filters that smooth temporal and spatial data and also adjust their sensitivity to local luminance, thus improving detail extraction and high frequency noise cancelling.
+
+* **photoreceptorsLocalAdaptationSensitivity** takes a value between 0 and 1. Values close to 1 allow a strong luminance log compression effect at the photo-receptors level; values closer to 0 give a more linear sensitivity. Increased alone, it can burn the *Parvo (details channel)* output image. If adjusted together with **ganglionCellsSensitivity**, images can be very contrasted whatever the local luminance... at the price of a decrease in naturalness.
+
+* **photoreceptorsTemporalConstant** sets the temporal constant of the low pass filter effect at the entry of the retina. High values lead to a strong temporal smoothing effect : moving objects are blurred and can disappear while static objects are favored. But when starting the retina processing, the stable state is reached later.
+
+* **photoreceptorsSpatialConstant** specifies the spatial constant related to the photo-receptors low pass filter effect. This parameter specifies the minimum allowed spatial signal period in the following stages. Typically, this filter should cut high frequency noise. A value of 0 does not cut any noise, while higher values start to cut high spatial frequencies and then lower and lower frequencies... So do not go too high if you want to see some details of the input images ! A good compromise for color images is 0.53 since it won't affect the color spectrum too much. Higher values would lead to gray and blurred output images.
+
+Horizontal cells parameters
+---------------------------
+
+This parameter set tunes the neural network connected to the photo-receptors, the horizontal cells. It modulates photo-receptors sensitivity and completes the processing for final spectral whitening (part of the spatial band pass effect thus favoring visual details enhancement).
+
+* **horizontalCellsGain** is a critical parameter ! If you are not interested in the mean luminance and focus on details enhancement, then set it to zero. But if you want to keep some environment luminance data, let some low spatial frequencies pass into the system by setting a higher value (<1).
+
+* **hcellsTemporalConstant** similar to the photo-receptors, this acts on the temporal constant of a low pass temporal filter that smooths input data. Here, a high value generates a strong retina after-effect while a lower value makes the retina more reactive. This value should be lower than **photoreceptorsTemporalConstant** to limit strong retina after-effects.
+
+* **hcellsSpatialConstant** is the spatial constant of the low pass filter of these cells. It specifies the lowest spatial frequency allowed in the following stages. Visually, a high value leads to very low spatial frequency processing and to salient halo effects. Lower values reduce this effect, but the limit is : do not go lower than the value of **photoreceptorsSpatialConstant**. These 2 parameters actually specify the spatial band-pass of the retina. A sketch showing how to set all these parameters programmatically is given below.
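+
+If you prefer setting these values programmatically rather than through the XML file, here is a sketch using the Retina class *setupOPLandIPLParvoChannel* method (the values below are only illustrative starting points close to the defaults) :
+
+.. code-block:: cpp
+
+ // sketch : programmatic setup of the photo-receptors and horizontal cells stages
+ myRetina->setupOPLandIPLParvoChannel(true,  // colorMode
+                                      true,  // normaliseOutput
+                                      0.7f,  // photoreceptorsLocalAdaptationSensitivity
+                                      0.5f,  // photoreceptorsTemporalConstant
+                                      0.53f, // photoreceptorsSpatialConstant
+                                      0.f,   // horizontalCellsGain : 0 cancels the mean luminance
+                                      0.5f,  // hcellsTemporalConstant
+                                      7.f,   // hcellsSpatialConstant
+                                      0.7f); // ganglionCellsSensitivity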
+
+**NOTE :** after the processing managed by the previous parameters, the input data is cleaned from noise and the luminance is already partly enhanced. The following parameters act on the last processing stages of the two outgoing retina signals.
+
+Parvo (details channel) dedicated parameter
+-------------------------------------------
+
+* **ganglionCellsSensitivity** specifies the strength of the final local adaptation occurring at the output of this details dedicated channel. Parameter values remain between 0 and 1. Low values tend to give a linear response while higher values enhance the remaining low contrasted areas.
+
+**Note :** this parameter can correct potentially burned (saturated) images by favoring the low energy details of the visual scene, even in bright areas.
+
+IPL Magno (motion/transient channel) parameters
+-----------------------------------------------
+
+Once the image information is cleaned, this channel acts as a high pass temporal filter that only selects signals related to transient events (motion, etc.). A low pass spatial filter smooths the extracted transient data and a final logarithmic compression enhances low transient events, thus enhancing event sensitivity. A programmatic setup sketch is given after the parameter list below.
+
+* **parasolCells_beta** can be considered as an amplifier gain at the entry point of this processing stage. Generally set to 0.
+
+* **parasolCells_tau** sets the temporal smoothing effect that can be added
+
+* **parasolCells_k** is the spatial constant of the spatial filtering effect; set it to a high value to favor low spatial frequency signals that are less subject to residual noise.
+
+* **amacrinCellsTemporalCutFrequency** specifies the temporal constant of the high pass filter. High values let slow transient events be selected.
+
+* **V0CompressionParameter** specifies the strength of the log compression. The behavior is similar to the previous description, but here it increases the sensitivity to transient events.
+
+* **localAdaptintegration_tau** generally set to 0, no real use here actually
+
+* **localAdaptintegration_k** specifies the size of the area on which local adaptation is performed. Low values lead to short range local adaptation (higher sensitivity to noise), high values secure log compression.
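+
+As for the Parvocellular channel, these values can also be set programmatically; here is a sketch using the Retina class *setupIPLMagnoChannel* method (illustrative values close to the defaults) :
+
+.. code-block:: cpp
+
+ // sketch : programmatic setup of the motion/transient (Magnocellular) channel
+ myRetina->setupIPLMagnoChannel(true,   // normaliseOutput
+                                0.f,    // parasolCells_beta
+                                0.f,    // parasolCells_tau
+                                7.f,    // parasolCells_k
+                                1.2f,   // amacrinCellsTemporalCutFrequency
+                                0.95f,  // V0CompressionParameter
+                                0.f,    // localAdaptintegration_tau
+                                7.f);   // localAdaptintegration_k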
--- /dev/null
+.. _Table-Of-Content-Bioinspired:
+
+*bioinspired* module. Algorithms inspired from biological models
+----------------------------------------------------------------
+
+Here you will learn how to use the additional functionality of OpenCV defined in the "bioinspired" module.
+
+ .. include:: ../../definitions/tocDefinitions.rst
+
++
+ .. tabularcolumns:: m{100pt} m{300pt}
+ .. cssclass:: toctableopencv
+
+ =============== ======================================================
+ |RetinaDemoImg| **Title:** :ref:`Retina_Model`
+
+ *Compatibility:* > OpenCV 2.4
+
+ *Author:* |Author_AlexB|
+
+ You will learn how to process images and video streams with a model of retina filter for details enhancement, spatio-temporal noise removal, luminance correction and spatio-temporal events detection.
+
+ =============== ======================================================
+
+ .. |RetinaDemoImg| image:: images/retina_TreeHdr_small.jpg
+ :height: 90pt
+ :width: 90pt
+
+ .. raw:: latex
+
+ \pagebreak
+
+.. toctree::
+ :hidden:
+
+ ../retina_model/retina_model
--- /dev/null
+set(the_description "Biologically inspired algorithms")
+ocv_define_module(bioinspired opencv_core OPTIONAL opencv_highgui)
--- /dev/null
+**********************************************************************
+bioinspired. Biologically inspired vision models and derived tools
+**********************************************************************
+
+The module provides biological visual system models (human visual system and others). It also provides derived objects that take advantage of those bio-inspired models.
+
+.. toctree::
+ :maxdepth: 2
+
+ Human retina documentation <retina/index>
--- /dev/null
+Retina : a biomimetic human retina model
+*****************************************
+
+.. highlight:: cpp
+
+Retina
+======
+.. ocv:class:: Retina : public Algorithm
+
+Introduction
+++++++++++++
+
+Class which provides the main controls to the Gipsa/Listic labs human retina model. This is a non separable spatio-temporal filter modelling the two main retina information channels :
+
+* foveal vision for detailed color vision : the parvocellular pathway.
+
+* peripheral vision for sensitive transient signals detection (motion and events) : the magnocellular pathway.
+
+From a general point of view, this filter whitens the image spectrum and corrects luminance thanks to local adaptation. Another important property is its ability to filter out spatio-temporal noise while enhancing details.
+This model originates from Jeanny Herault's work [Herault2010]_. It has been involved in Alexandre Benoit's PhD and his current research [Benoit2010]_, [Strat2013]_ (he currently maintains this module within OpenCV). It includes the work of other PhD students of Jeanny, such as [Chaix2007]_, and the log polar transformations of Barthelemy Durette described in Jeanny's book.
+
+**NOTES :**
+
+* For ease of use in computer vision applications, the two retina channels are applied homogeneously on all the input images. This does not follow the real retina topology but this can still be done using the log sampling capabilities proposed within the class.
+
+* See the retina tutorial in the tutorial/contrib section for complementary explanations on the retina description and code use.
+
+Preliminary illustration
+++++++++++++++++++++++++
+
+As a preliminary presentation, let's start with a visual example. We propose to apply the filter on a low quality color jpeg image with backlight problems. Here is the considered input... *"Well, my eyes were able to see more than this strange black shadow..."*
+
+.. image:: images/retinaInput.jpg
+ :alt: a low quality color jpeg image with backlight problems.
+ :align: center
+
+Below, the retina foveal model is applied on the entire image with default parameters. Here contours are enhanced and halo effects are deliberately visible with this configuration. See the parameters discussion below and increase horizontalCellsGain towards 1 to remove them.
+
+.. image:: images/retinaOutput_default.jpg
+ :alt: the retina foveal model applied on the entire image with default parameters. Here contours are enforced, luminance is corrected and halo effects are voluntary visible with this configuration, increase horizontalCellsGain near 1 to remove them.
+ :align: center
+
+Below, a second retina foveal model output is applied on the entire image with a parameter setup focused on naturalness perception. *"Hey, I now recognize my cat, looking at the mountains at the end of the day !"*. Here contours are enhanced and luminance is corrected, but halos are avoided with this configuration. The backlight effect is corrected and highlight details are still preserved. So, even on a low quality jpeg image, if some luminance information remains, the retina is able to reconstruct a proper visual signal. Such a configuration is also useful for the compression of High Dynamic Range (*HDR*) images to 8bit images, as discussed in [Benoit2010]_ and in the demonstration codes discussed below.
+As shown at the end of the page, the parameter changes from the defaults are :
+
+* horizontalCellsGain=0.3
+
+* photoreceptorsLocalAdaptationSensitivity=ganglioncellsSensitivity=0.89.
+
+.. image:: images/retinaOutput_realistic.jpg
+ :alt: the retina foveal model applied on the entire image with 'naturalness' parameters. Here contours are enforced but are avoided with this configuration, horizontalCellsGain is 0.3 and photoreceptorsLocalAdaptationSensitivity=ganglioncellsSensitivity=0.89.
+ :align: center
+
+As observed in this preliminary demo, the retina can be set up with various parameters. By default, as shown in the figure above, the retina strongly reduces mean luminance energy and enhances all details of the visual scene. Luminance energy and halo effects can be modulated (from exaggerated to cancelled, as shown in the two examples). In order to use your own parameters, you can call the *write(String fs)* method at least once; it will write a proper XML file with all default parameters. Then, tweak it on your own and reload it at any time using the *setup(String fs)* method. These methods update a *Retina::RetinaParameters* member structure that is described hereafter. XML parameters file samples are shown at the end of the page.
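+
+A typical parameters round-trip could look like the following sketch (assuming *myRetina* is an already allocated Retina instance)::
+
+ // write the default parameters once to get an editable template
+ myRetina->write("RetinaDefaultParameters.xml");
+ // ... tweak the XML file by hand (e.g. raise horizontalCellsGain to 0.3) ...
+ // then reload your own setup at any time
+ myRetina->setup("RetinaSpecificParameters.xml");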
+
+Here is an overview of the abstract Retina interface. Allocate one instance with the *createRetina* functions. ::
+
+ class Retina : public Algorithm
+ {
+ public:
+ // parameters setup instance
+ struct RetinaParameters; // this structure is detailed later
+
+ // main method for input frame processing (general use method, can also perform High Dynamic Range tone mapping)
+ void run (InputArray inputImage);
+
+ // specific method aiming at correcting luminance only (faster High Dynamic Range tone mapping)
+ void applyFastToneMapping(InputArray inputImage, OutputArray outputToneMappedImage);
+
+ // output buffers retrieval methods
+ // -> foveal color vision details channel with luminance and noise correction
+ void getParvo (OutputArray retinaOutput_parvo);
+ void getParvoRAW (OutputArray retinaOutput_parvo);// retrieve original output buffers without any normalisation
+ const Mat getParvoRAW () const;// retrieve original output buffers without any normalisation
+ // -> peripheral monochrome motion and events (transient information) channel
+ void getMagno (OutputArray retinaOutput_magno);
+ void getMagnoRAW (OutputArray retinaOutput_magno); // retrieve original output buffers without any normalisation
+ const Mat getMagnoRAW () const;// retrieve original output buffers without any normalisation
+
+ // reset retina buffers... equivalent to closing your eyes for some seconds
+ void clearBuffers ();
+
+ // retrieve input and output buffers sizes
+ Size getInputSize ();
+ Size getOutputSize ();
+
+ // setup methods with specific parameters specification or global xml config file loading/write
+ void setup (String retinaParameterFile="", const bool applyDefaultSetupOnFailure=true);
+ void setup (FileStorage &fs, const bool applyDefaultSetupOnFailure=true);
+ void setup (RetinaParameters newParameters);
+ struct Retina::RetinaParameters getParameters ();
+ const String printSetup ();
+ virtual void write (String fs) const;
+ virtual void write (FileStorage &fs) const;
+ void setupOPLandIPLParvoChannel (const bool colorMode=true, const bool normaliseOutput=true, const float photoreceptorsLocalAdaptationSensitivity=0.7, const float photoreceptorsTemporalConstant=0.5, const float photoreceptorsSpatialConstant=0.53, const float horizontalCellsGain=0, const float HcellsTemporalConstant=1, const float HcellsSpatialConstant=7, const float ganglionCellsSensitivity=0.7);
+ void setupIPLMagnoChannel (const bool normaliseOutput=true, const float parasolCells_beta=0, const float parasolCells_tau=0, const float parasolCells_k=7, const float amacrinCellsTemporalCutFrequency=1.2, const float V0CompressionParameter=0.95, const float localAdaptintegration_tau=0, const float localAdaptintegration_k=7);
+ void setColorSaturation (const bool saturateColors=true, const float colorSaturationValue=4.0);
+ void activateMovingContoursProcessing (const bool activate);
+ void activateContoursProcessing (const bool activate);
+ };
+
+ // Allocators
+ cv::Ptr<Retina> createRetina (Size inputSize);
+ cv::Ptr<Retina> createRetina (Size inputSize, const bool colorMode, RETINA_COLORSAMPLINGMETHOD colorSamplingMethod=RETINA_COLOR_BAYER, const bool useRetinaLogSampling=false, const double reductionFactor=1.0, const double samplingStrenght=10.0);
+
+
+Description
++++++++++++
+
+Class which allows the `Gipsa <http://www.gipsa-lab.inpg.fr>`_ (preliminary work) / `Listic <http://www.listic.univ-savoie.fr>`_ (code maintainer and user) labs retina model to be used. This class allows human retina spatio-temporal image processing to be applied on still images, images sequences and video sequences. Briefly, here are the main human retina model properties:
+
+* spectral whitening (mid-frequency details enhancement)
+
+* high frequency spatio-temporal noise reduction (temporal noise and high frequency spatial noise are minimized)
+
+* low frequency luminance reduction (luminance range compression) : high luminance regions do not hide details in darker regions anymore
+
+* local logarithmic luminance compression allows details to be enhanced even in low light conditions
+
+Use : this model can be used for basic spatio-temporal video effects, but also with the aim of :
+
+* performing texture analysis with an enhanced signal to noise ratio and enhanced details that are robust against input image luminance ranges (check out the parvocellular retina channel output, by using the provided **getParvo** methods)
+
+* performing motion analysis that also benefits from the previously cited properties (check out the magnocellular retina channel output, by using the provided **getMagno** methods)
+
+* general image/video sequence description using either one or both channels. An example of the use of Retina in a Bag of Words approach is given in [Strat2013]_.
+
+Literature
+==========
+For more information, refer to the following papers :
+
+* Model description :
+
+.. [Benoit2010] Benoit A., Caplier A., Durette B., Herault, J., "Using Human Visual System Modeling For Bio-Inspired Low Level Image Processing", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773. DOI <http://dx.doi.org/10.1016/j.cviu.2010.01.011>
+
+* Model use in a Bag of Words approach :
+
+.. [Strat2013] Strat S., Benoit A., Lambert P., "Retina enhanced SIFT descriptors for video indexing", CBMI2013, Veszprém, Hungary, 2013.
+
+* Please have a look at the reference work of Jeanny Herault that you can read in his book :
+
+.. [Herault2010] Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing),By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
+
+This retina filter code includes the research contributions of PhD/research colleagues whose code has been redrawn by the author :
+
+* take a look at the *retinacolor.hpp* module to discover Brice Chaix de Lavarene's PhD work on color mosaicing/demosaicing and his reference paper:
+
+.. [Chaix2007] B. Chaix de Lavarene, D. Alleysson, B. Durette, J. Herault (2007). "Efficient demosaicing through recursive filtering", IEEE International Conference on Image Processing ICIP 2007
+
+* take a look at *imagelogpolprojection.hpp* to discover the retina spatial log sampling which originates from Barthelemy Durette's PhD work with Jeanny Herault. A Retina / V1 cortex projection is also proposed, originating from Jeanny's discussions. More information can be found in Jeanny Herault's book cited above.
+
+* Meylan&al work on HDR tone mapping that is implemented as a specific method within the model :
+
+.. [Meylan2007] L. Meylan , D. Alleysson, S. Susstrunk, "A Model of Retinal Local Adaptation for the Tone Mapping of Color Filter Array Images", Journal of Optical Society of America, A, Vol. 24, N 9, September, 1st, 2007, pp. 2807-2816
+
+Demos and experiments !
+=======================
+
+**NOTE : In addition to the following examples, have a look at the Retina tutorial in the tutorial/contrib section for complementary explanations.**
+
+Take a look at the C++ examples provided with OpenCV :
+
+* **samples/cpp/retinademo.cpp** shows how to use the retina module for details enhancement (Parvo channel output) and transient maps observation (Magno channel output). You can play with images, video sequences and webcam video.
+ Typical uses are (provided your OpenCV installation is situated in folder *OpenCVReleaseFolder*)
+
+ * image processing : **OpenCVReleaseFolder/bin/retinademo -image myPicture.jpg**
+
+ * video processing : **OpenCVReleaseFolder/bin/retinademo -video myMovie.avi**
+
+ * webcam processing: **OpenCVReleaseFolder/bin/retinademo -video**
+
+ **Note :** This demo generates the file *RetinaDefaultParameters.xml* which contains the default parameters of the retina. Then, rename this as *RetinaSpecificParameters.xml*, adjust the parameters the way you want and reload the program to check the effect.
+
+
+* **samples/cpp/OpenEXRimages_HighDynamicRange_Retina_toneMapping.cpp** shows how to use the retina to perform High Dynamic Range (HDR) luminance compression
+
+ Take an HDR image using bracketing with your camera, generate an OpenEXR image and then process it using the demo.
+
+ Typical use, supposing that you have the OpenEXR image such as *memorial.exr* (present in the samples/cpp/ folder)
+
+ **OpenCVReleaseFolder/bin/OpenEXRimages_HighDynamicRange_Retina_toneMapping memorial.exr [optional: 'fast']**
+
+ Note that some sliders are made available to allow you to play with luminance compression.
+
+ If not using the 'fast' option, then tone mapping is performed using the full retina model [Benoit2010]_. It includes spectral whitening that allows luminance energy to be reduced. When using the 'fast' option, a simpler method is used; it is an adaptation of the algorithm presented in [Meylan2007]_. This method also gives good results and is faster to process, but it sometimes requires a bit more parameter adjustment.
+
+
+Methods description
+===================
+
+Here are detailed the main methods to control the retina model
+
+createRetina
++++++++++++++++++++++++++
+
+.. ocv:function:: Ptr<Retina> createRetina(Size inputSize)
+.. ocv:function:: Ptr<Retina> createRetina(Size inputSize, const bool colorMode, RETINA_COLORSAMPLINGMETHOD colorSamplingMethod = RETINA_COLOR_BAYER, const bool useRetinaLogSampling = false, const double reductionFactor = 1.0, const double samplingStrenght = 10.0 )
+
+ Constructors from standardized interfaces : retrieve a smart pointer to a Retina instance
+
+ :param inputSize: the input frame size
+ :param colorMode: the chosen processing mode : with or without color processing
+ :param colorSamplingMethod: specifies which kind of color sampling will be used :
+
+ * RETINA_COLOR_RANDOM: each pixel position is either R, G or B in a random choice
+
+ * RETINA_COLOR_DIAGONAL: color sampling is RGBRGBRGB..., line 2 BRGBRGBRG..., line 3, GBRGBRGBR...
+
+ * RETINA_COLOR_BAYER: standard bayer sampling
+
+ :param useRetinaLogSampling: activate retina log sampling, if true, the 2 following parameters can be used
+ :param reductionFactor: only useful if useRetinaLogSampling=true; specifies the reduction factor of the output frame (as the center (fovea) is high resolution and corners can be undersampled, a reduction of the output is allowed without precision loss)
+ :param samplingStrenght: only useful if useRetinaLogSampling=true; specifies the strength of the log scale that is applied
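+
+ For example, a minimal allocation sketch (assuming an already loaded *cv::Mat inputImage*; namespaces follow the class overview and tutorial code)::
+
+   // standard color retina working at the input resolution
+   cv::Ptr<cv::Retina> myRetina = cv::createRetina(inputImage.size());
+   // retina with spatial log sampling : reduction factor 2, log sampling strength 10
+   cv::Ptr<cv::Retina> myLogRetina = cv::createRetina(inputImage.size(), true, cv::RETINA_COLOR_BAYER, true, 2.0, 10.0);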
+
+Retina::activateContoursProcessing
+++++++++++++++++++++++++++++++++++
+
+.. ocv:function:: void Retina::activateContoursProcessing(const bool activate)
+
+ Activate/deactivate the Parvocellular pathway processing (contours information extraction), by default, it is activated
+
+ :param activate: true if Parvocellular (contours information extraction) output should be activated, false if not... if activated, the Parvocellular output can be retrieved using the **getParvo** methods
+
+Retina::activateMovingContoursProcessing
+++++++++++++++++++++++++++++++++++++++++
+
+.. ocv:function:: void Retina::activateMovingContoursProcessing(const bool activate)
+
+ Activate/deactivate the Magnocellular pathway processing (motion information extraction), by default, it is activated
+
+ :param activate: true if Magnocellular output should be activated, false if not... if activated, the Magnocellular output can be retrieved using the **getMagno** methods
+
+Retina::clearBuffers
+++++++++++++++++++++
+
+.. ocv:function:: void Retina::clearBuffers()
+
+ Clears all retina buffers (equivalent to opening the eyes after a long period with the eyes closed ;o). Watch out for the temporal transition occurring just after this method call.
+
+Retina::getParvo
+++++++++++++++++
+
+.. ocv:function:: void Retina::getParvo( OutputArray retinaOutput_parvo )
+.. ocv:function:: void Retina::getParvoRAW( OutputArray retinaOutput_parvo )
+.. ocv:function:: const Mat Retina::getParvoRAW() const
+
+ Accessor of the details channel of the retina (models foveal vision). Warning, getParvoRAW methods return buffers that are not rescaled within range [0;255] while the non RAW method allows a normalized matrix to be retrieved.
+
+ :param retinaOutput_parvo: the output buffer (reallocated if necessary), format can be :
+
+ * a Mat, this output is rescaled for standard 8bits image processing use in OpenCV
+
+ * RAW methods actually return a 1D matrix (encoding is R1, R2, ... Rn, G1, G2, ..., Gn, B1, B2, ...Bn), this output is the original retina filter model output, without any quantification or rescaling.
+
+Retina::getMagno
+++++++++++++++++
+
+.. ocv:function:: void Retina::getMagno( OutputArray retinaOutput_magno )
+.. ocv:function:: void Retina::getMagnoRAW( OutputArray retinaOutput_magno )
+.. ocv:function:: const Mat Retina::getMagnoRAW() const
+
+ Accessor of the motion channel of the retina (models peripheral vision). Warning, getMagnoRAW methods return buffers that are not rescaled within range [0;255] while the non RAW method allows a normalized matrix to be retrieved.
+
+ :param retinaOutput_magno: the output buffer (reallocated if necessary), format can be :
+
+ * a Mat, this output is rescaled for standard 8bits image processing use in OpenCV
+
+ * RAW methods actually return a 1D matrix (encoding is M1, M2,... Mn), this output is the original retina filter model output, without any quantification or rescaling.
+
+Retina::getInputSize
+++++++++++++++++++++
+
+.. ocv:function:: Size Retina::getInputSize()
+
+ Retrieve the retina input buffer size
+
+ :return: the retina input buffer size
+
+Retina::getOutputSize
++++++++++++++++++++++
+
+.. ocv:function:: Size Retina::getOutputSize()
+
+ Retrieve the retina output buffer size, which can be different from the input size if a spatial log transformation is applied
+
+ :return: the retina output buffer size
+
+Retina::printSetup
+++++++++++++++++++
+
+.. ocv:function:: const String Retina::printSetup()
+
+ Outputs a string showing the used parameters setup
+
+ :return: a string which contains formatted parameters information
+
+Retina::run
++++++++++++
+
+.. ocv:function:: void Retina::run(InputArray inputImage)
+
+ Method which allows the retina to be applied on an input image. After a run, the encapsulated retina module is ready to deliver its outputs using dedicated accessors; see the getParvo and getMagno methods
+
+ :param inputImage: the input Mat image to be processed, can be gray level or BGR coded in any format (from 8bit to 16bits)
+
+Retina::applyFastToneMapping
+++++++++++++++++++++++++++++
+
+.. ocv:function:: void Retina::applyFastToneMapping(InputArray inputImage, OutputArray outputToneMappedImage)
+
+ Method which processes an image with the aim of correcting its luminance : it corrects backlight problems and enhances details in shadows. It is designed to perform High Dynamic Range image tone mapping (compressing >8bit/pixel images to 8bit/pixel). This is a simplified version of the Retina Parvocellular model (a simplified version of the run/getParvo methods call) since it does not include the spatio-temporal filter modelling the Outer Plexiform Layer of the retina, which performs spectral whitening among other processing. However, it works great for tone mapping and in a faster way.
+
+ Check the demos and experiments section to see examples and the way to perform tone mapping using the original retina model and the method.
+
+ :param inputImage: the input image to process (should be coded in float format : CV_32F, CV_32FC1, CV_32FC3, CV_32FC4, the 4th channel won't be considered).
+ :param outputToneMappedImage: the output 8bit/channel tone mapped image (CV_8U or CV_8UC3 format).
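+
+ A minimal usage sketch (assuming an OpenEXR float image on disk and the *createRetina* allocator described above)::
+
+   // load a float HDR image (OpenEXR), keeping its original depth
+   cv::Mat hdrInput = cv::imread("memorial.exr", -1);
+   hdrInput.convertTo(hdrInput, CV_32F); // make sure the format is CV_32F as expected
+   cv::Ptr<cv::Retina> myRetina = cv::createRetina(hdrInput.size());
+   cv::Mat toneMappedOutput;
+   myRetina->applyFastToneMapping(hdrInput, toneMappedOutput); // 8bit/channel result
+   cv::imshow("tone mapped output", toneMappedOutput);
+   cv::waitKey(0);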
+
+Retina::setColorSaturation
+++++++++++++++++++++++++++
+
+.. ocv:function:: void Retina::setColorSaturation(const bool saturateColors = true, const float colorSaturationValue = 4.0 )
+
+ Activate color saturation as the final step of the color demultiplexing process -> this saturation is a sigmoid function applied to each channel of the demultiplexed image.
+
+ :param saturateColors: boolean that activates color saturation (if true) or deactivates it (if false)
+ :param colorSaturationValue: the saturation factor : a simple factor applied on the chrominance buffers
+
+
+Retina::setup
++++++++++++++
+
+.. ocv:function:: void Retina::setup(String retinaParameterFile = "", const bool applyDefaultSetupOnFailure = true )
+.. ocv:function:: void Retina::setup(FileStorage & fs, const bool applyDefaultSetupOnFailure = true )
+.. ocv:function:: void Retina::setup(RetinaParameters newParameters)
+
+ Tries to open an XML retina parameters file to adjust the current retina instance setup. If the XML file does not exist, then the default setup is applied. Warning : exceptions are thrown if the read XML file is not valid.
+
+ :param retinaParameterFile: the parameters filename
+ :param applyDefaultSetupOnFailure: set to true if the default setup must be applied when loading fails
+ :param fs: the open Filestorage which contains retina parameters
+ :param newParameters: a parameters structure updated with the new target configuration. You can retrieve the current parameters structure using the method *Retina::RetinaParameters Retina::getParameters()* and update it before calling *setup*, as sketched below.
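+
+ For instance, a sketch of the structure based setup (assuming *myRetina* is an allocated Retina instance; field names follow the RetinaParameters structure detailed below)::
+
+   // retrieve the current parameters, tweak a field, then apply the new setup
+   cv::Retina::RetinaParameters newParameters = myRetina->getParameters();
+   newParameters.OPLandIplParvo.horizontalCellsGain = 0.3f; // keep part of the mean luminance
+   myRetina->setup(newParameters);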
+
+Retina::write
++++++++++++++
+
+.. ocv:function:: void Retina::write( String fs ) const
+.. ocv:function:: void Retina::write( FileStorage& fs ) const
+
+ Write xml/yml formatted parameters information
+
+ :param fs: the filename of the xml file that will be opened and written with formatted parameters information
+
+Retina::setupIPLMagnoChannel
+++++++++++++++++++++++++++++
+
+.. ocv:function:: void Retina::setupIPLMagnoChannel(const bool normaliseOutput = true, const float parasolCells_beta = 0, const float parasolCells_tau = 0, const float parasolCells_k = 7, const float amacrinCellsTemporalCutFrequency = 1.2, const float V0CompressionParameter = 0.95, const float localAdaptintegration_tau = 0, const float localAdaptintegration_k = 7 )
+
+ Set parameter values for the Inner Plexiform Layer (IPL) magnocellular channel. This channel processes the signals output from the OPL processing stage in peripheral vision and allows motion information enhancement. It is decorrelated from the details channel. See the reference papers for more details.
+
+ :param normaliseOutput: specifies if (true) output is rescaled between 0 and 255 or not (false)
+ :param parasolCells_beta: the low pass filter gain used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), typical value is 0
+ :param parasolCells_tau: the low pass filter time constant used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), unit is frame, typical value is 0 (immediate response)
+ :param parasolCells_k: the low pass filter spatial constant used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), unit is pixels, typical value is 5
+ :param amacrinCellsTemporalCutFrequency: the time constant of the first order high pass filter of the magnocellular way (motion information channel), unit is frames, typical value is 1.2
+ :param V0CompressionParameter: the compression strength of the ganglion cells local adaptation output, set a value between 0.6 and 1 for best results, a high value increases the low value sensitivity more... and the output saturates faster, recommended value: 0.95
+ :param localAdaptintegration_tau: specifies the temporal constant of the low pass filter involved in the computation of the local "motion mean" for the local adaptation computation
+ :param localAdaptintegration_k: specifies the spatial constant of the low pass filter involved in the computation of the local "motion mean" for the local adaptation computation
+
+Retina::setupOPLandIPLParvoChannel
+++++++++++++++++++++++++++++++++++
+
+.. ocv:function:: void Retina::setupOPLandIPLParvoChannel(const bool colorMode = true, const bool normaliseOutput = true, const float photoreceptorsLocalAdaptationSensitivity = 0.7, const float photoreceptorsTemporalConstant = 0.5, const float photoreceptorsSpatialConstant = 0.53, const float horizontalCellsGain = 0, const float HcellsTemporalConstant = 1, const float HcellsSpatialConstant = 7, const float ganglionCellsSensitivity = 0.7 )
+
+ Setup the OPL and IPL parvo channels (see the biological model). OPL stands for Outer Plexiform Layer of the retina; it allows the spatio-temporal filtering which whitens the spectrum and reduces spatio-temporal noise while attenuating global luminance (low frequency energy). IPL parvo is the next processing stage after the OPL; it refers to a part of the Inner Plexiform layer of the retina and allows high contour sensitivity in foveal vision. See the reference papers for more information.
+
+ :param colorMode: specifies if (true) color is processed or not (false); in the latter case, gray level images are processed
+ :param normaliseOutput: specifies if (true) output is rescaled between 0 and 255 or not (false)
+ :param photoreceptorsLocalAdaptationSensitivity: the photoreceptors sensitivity range is 0-1 (more log compression effect when the value increases)
+ :param photoreceptorsTemporalConstant: the time constant of the first order low pass filter of the photoreceptors, use it to cut high temporal frequencies (noise or fast motion), unit is frames, typical value is 1 frame
+ :param photoreceptorsSpatialConstant: the spatial constant of the first order low pass filter of the photoreceptors, use it to cut high spatial frequencies (noise or thick contours), unit is pixels, typical value is 1 pixel
+ :param horizontalCellsGain: gain of the horizontal cells network, if 0, then the mean value of the output is zero, if the parameter is near 1, then, the luminance is not filtered and is still reachable at the output, typical value is 0
+ :param HcellsTemporalConstant: the time constant of the first order low pass filter of the horizontal cells, use it to cut low temporal frequencies (local luminance variations), unit is frames, typical value is 1 frame, as the photoreceptors
+ :param HcellsSpatialConstant: the spatial constant of the first order low pass filter of the horizontal cells, use it to cut low spatial frequencies (local luminance), unit is pixels, typical value is 5 pixel, this value is also used for local contrast computing when computing the local contrast adaptation at the ganglion cells level (Inner Plexiform Layer parvocellular channel model)
+ :param ganglionCellsSensitivity: the compression strength of the ganglion cells local adaptation output, set a value between 0.6 and 1 for best results, a high value increases the low value sensitivity more... and the output saturates faster, recommended value: 0.7
+
+
+Retina::RetinaParameters
+========================
+
+.. ocv:struct:: Retina::RetinaParameters
+
+ This structure merges all the parameters that can be adjusted through the **Retina::setup()**, **Retina::setupOPLandIPLParvoChannel** and **Retina::setupIPLMagnoChannel** setup methods.
+ For better clarity, check the explanations in the comments of the setupOPLandIPLParvoChannel and setupIPLMagnoChannel methods. ::
+
+ class RetinaParameters{
+ struct OPLandIplParvoParameters{ // Outer Plexiform Layer (OPL) and Inner Plexiform Layer Parvocellular (IplParvo) parameters
+ OPLandIplParvoParameters():colorMode(true),
+ normaliseOutput(true), // specifies if (true) output is rescaled between 0 and 255 or not (false)
+ photoreceptorsLocalAdaptationSensitivity(0.7f), // the photoreceptors sensitivity range is 0-1 (more log compression effect when the value increases)
+ photoreceptorsTemporalConstant(0.5f),// the time constant of the first order low pass filter of the photoreceptors, use it to cut high temporal frequencies (noise or fast motion), unit is frames, typical value is 1 frame
+ photoreceptorsSpatialConstant(0.53f),// the spatial constant of the first order low pass filter of the photoreceptors, use it to cut high spatial frequencies (noise or thick contours), unit is pixels, typical value is 1 pixel
+ horizontalCellsGain(0.0f),//gain of the horizontal cells network, if 0, then the mean value of the output is zero, if the parameter is near 1, then, the luminance is not filtered and is still reachable at the output, typicall value is 0
+ hcellsTemporalConstant(1.f),// the time constant of the first order low pass filter of the horizontal cells, use it to cut low temporal frequencies (local luminance variations), unit is frames, typical value is 1 frame, as the photoreceptors. Reduce to 0.5 to limit retina after effects.
+ hcellsSpatialConstant(7.f),//the spatial constant of the first order low pass filter of the horizontal cells, use it to cut low spatial frequencies (local luminance), unit is pixels, typical value is 5 pixel, this value is also used for local contrast computing when computing the local contrast adaptation at the ganglion cells level (Inner Plexiform Layer parvocellular channel model)
+ ganglionCellsSensitivity(0.7f)//the compression strengh of the ganglion cells local adaptation output, set a value between 0.6 and 1 for best results, a high value increases more the low value sensitivity... and the output saturates faster, recommended value: 0.7
+ {};// default setup
+ bool colorMode, normaliseOutput;
+ float photoreceptorsLocalAdaptationSensitivity, photoreceptorsTemporalConstant, photoreceptorsSpatialConstant, horizontalCellsGain, hcellsTemporalConstant, hcellsSpatialConstant, ganglionCellsSensitivity;
+ };
+ struct IplMagnoParameters{ // Inner Plexiform Layer Magnocellular channel (IplMagno)
+ IplMagnoParameters():
+ normaliseOutput(true), //specifies if (true) output is rescaled between 0 and 255 or not (false)
+ parasolCells_beta(0.f), // the low pass filter gain used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), typical value is 0
+ parasolCells_tau(0.f), //the low pass filter time constant used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), unit is frame, typical value is 0 (immediate response)
+ parasolCells_k(7.f), //the low pass filter spatial constant used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), unit is pixels, typical value is 5
+ amacrinCellsTemporalCutFrequency(1.2f), //the time constant of the first order high pass filter of the magnocellular way (motion information channel), unit is frames, typical value is 1.2
+ V0CompressionParameter(0.95f), //the compression strength of the ganglion cells local adaptation output, set a value between 0.6 and 1 for best results, a higher value increases the sensitivity to low values and makes the output saturate faster, recommended value: 0.95
+ localAdaptintegration_tau(0.f), // specifies the temporal constant of the low pass filter involved in the computation of the local "motion mean" for the local adaptation computation
+ localAdaptintegration_k(7.f) // specifies the spatial constant of the low pass filter involved in the computation of the local "motion mean" for the local adaptation computation
+ {};// default setup
+ bool normaliseOutput;
+ float parasolCells_beta, parasolCells_tau, parasolCells_k, amacrinCellsTemporalCutFrequency, V0CompressionParameter, localAdaptintegration_tau, localAdaptintegration_k;
+ };
+ struct OPLandIplParvoParameters OPLandIplParvo;
+ struct IplMagnoParameters IplMagno;
+ };
+
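+Once filled, the whole structure can be applied in a single call. Here is a minimal sketch (*myRetina* is an illustrative name for a valid retina instance; depending on your OpenCV version the classes live in the *cv* or *cv::hvstools* namespace):
+
+.. code-block:: cpp
+
+ // start from the default values shown above, tweak a few fields, then apply everything at once
+ Retina::RetinaParameters myParams;
+ myParams.OPLandIplParvo.horizontalCellsGain = 0.3f;       // keep more of the mean luminance
+ myParams.IplMagno.amacrinCellsTemporalCutFrequency = 2.f; // make the motion channel more reactive
+ myRetina->setup(myParams);
+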
+Retina parameters files examples
+++++++++++++++++++++++++++++++++
+
+Here is the default configuration file of the retina module. It gives results such as the first retina output shown at the top of this page.
+
+ .. code-block:: xml
+
+ <?xml version="1.0"?>
+ <opencv_storage>
+ <OPLandIPLparvo>
+ <colorMode>1</colorMode>
+ <normaliseOutput>1</normaliseOutput>
+ <photoreceptorsLocalAdaptationSensitivity>7.5e-01</photoreceptorsLocalAdaptationSensitivity>
+ <photoreceptorsTemporalConstant>9.0e-01</photoreceptorsTemporalConstant>
+ <photoreceptorsSpatialConstant>5.3e-01</photoreceptorsSpatialConstant>
+ <horizontalCellsGain>0.01</horizontalCellsGain>
+ <hcellsTemporalConstant>0.5</hcellsTemporalConstant>
+ <hcellsSpatialConstant>7.</hcellsSpatialConstant>
+ <ganglionCellsSensitivity>7.5e-01</ganglionCellsSensitivity></OPLandIPLparvo>
+ <IPLmagno>
+ <normaliseOutput>1</normaliseOutput>
+ <parasolCells_beta>0.</parasolCells_beta>
+ <parasolCells_tau>0.</parasolCells_tau>
+ <parasolCells_k>7.</parasolCells_k>
+ <amacrinCellsTemporalCutFrequency>2.0e+00</amacrinCellsTemporalCutFrequency>
+ <V0CompressionParameter>9.5e-01</V0CompressionParameter>
+ <localAdaptintegration_tau>0.</localAdaptintegration_tau>
+ <localAdaptintegration_k>7.</localAdaptintegration_k></IPLmagno>
+ </opencv_storage>
+
+Here is the "realistic" setup used to obtain the second retina output shown at the top of this page.
+
+ .. code-block:: xml
+
+ <?xml version="1.0"?>
+ <opencv_storage>
+ <OPLandIPLparvo>
+ <colorMode>1</colorMode>
+ <normaliseOutput>1</normaliseOutput>
+ <photoreceptorsLocalAdaptationSensitivity>8.9e-01</photoreceptorsLocalAdaptationSensitivity>
+ <photoreceptorsTemporalConstant>9.0e-01</photoreceptorsTemporalConstant>
+ <photoreceptorsSpatialConstant>5.3e-01</photoreceptorsSpatialConstant>
+ <horizontalCellsGain>0.3</horizontalCellsGain>
+ <hcellsTemporalConstant>0.5</hcellsTemporalConstant>
+ <hcellsSpatialConstant>7.</hcellsSpatialConstant>
+ <ganglionCellsSensitivity>8.9e-01</ganglionCellsSensitivity></OPLandIPLparvo>
+ <IPLmagno>
+ <normaliseOutput>1</normaliseOutput>
+ <parasolCells_beta>0.</parasolCells_beta>
+ <parasolCells_tau>0.</parasolCells_tau>
+ <parasolCells_k>7.</parasolCells_k>
+ <amacrinCellsTemporalCutFrequency>2.0e+00</amacrinCellsTemporalCutFrequency>
+ <V0CompressionParameter>9.5e-01</V0CompressionParameter>
+ <localAdaptintegration_tau>0.</localAdaptintegration_tau>
+ <localAdaptintegration_k>7.</localAdaptintegration_k></IPLmagno>
+ </opencv_storage>
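+
+Such a file, saved for instance as *RetinaSpecificParameters.xml* next to your application, can be reloaded at runtime. Here is a minimal sketch (assuming the *Retina::setup()* overload of your OpenCV version accepts a parameter file path; *inputFrame* is an illustrative name for an already loaded cv::Mat, and the classes may live in the *cv* or *cv::hvstools* namespace depending on the version):
+
+.. code-block:: cpp
+
+ // create a retina sized for the incoming frames and load the parameter file shown above
+ cv::Ptr<Retina> myRetina = createRetina(inputFrame.size());
+ myRetina->setup("RetinaSpecificParameters.xml");
+ // process a frame and retrieve the two output channels
+ myRetina->run(inputFrame);
+ cv::Mat parvoOutput, magnoOutput;
+ myRetina->getParvo(parvoOutput);
+ myRetina->getMagno(magnoOutput);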
--- /dev/null
+/*M///////////////////////////////////////////////////////////////////////////////////////
+//
+// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
+//
+// By downloading, copying, installing or using the software you agree to this license.
+// If you do not agree to this license, do not download, install,
+// copy or use the software.
+//
+//
+// License Agreement
+// For Open Source Computer Vision Library
+//
+// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
+// Copyright (C) 2009, Willow Garage Inc., all rights reserved.
+// Third party copyrights are property of their respective owners.
+//
+// Redistribution and use in source and binary forms, with or without modification,
+// are permitted provided that the following conditions are met:
+//
+// * Redistribution's of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+//
+// * Redistribution's in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+//
+// * The name of the copyright holders may not be used to endorse or promote products
+// derived from this software without specific prior written permission.
+//
+// This software is provided by the copyright holders and contributors "as is" and
+// any express or implied warranties, including, but not limited to, the implied
+// warranties of merchantability and fitness for a particular purpose are disclaimed.
+// In no event shall the Intel Corporation or contributors be liable for any direct,
+// indirect, incidental, special, exemplary, or consequential damages
+// (including, but not limited to, procurement of substitute goods or services;
+// loss of use, data, or profits; or business interruption) however caused
+// and on any theory of liability, whether in contract, strict liability,
+// or tort (including negligence or otherwise) arising in any way out of
+// the use of this software, even if advised of the possibility of such damage.
+//
+//M*/
+
+#ifndef __OPENCV_BIOINSPIRED_HPP__
+#define __OPENCV_BIOINSPIRED_HPP__
+
+#include "opencv2/core.hpp"
+#include "opencv2/bioinspired/retina.hpp"
+#include "opencv2/bioinspired/retinafasttonemapping.hpp"
+
+using namespace cv::hvstools;
+#endif
--- /dev/null
+/*M///////////////////////////////////////////////////////////////////////////////////////
+//
+// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
+//
+// By downloading, copying, installing or using the software you agree to this license.
+// If you do not agree to this license, do not download, install,
+// copy or use the software.
+//
+//
+// License Agreement
+// For Open Source Computer Vision Library
+//
+// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
+// Copyright (C) 2009, Willow Garage Inc., all rights reserved.
+// Copyright (C) 2013, OpenCV Foundation, all rights reserved.
+// Third party copyrights are property of their respective owners.
+//
+// Redistribution and use in source and binary forms, with or without modification,
+// are permitted provided that the following conditions are met:
+//
+// * Redistribution's of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+//
+// * Redistribution's in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+//
+// * The name of the copyright holders may not be used to endorse or promote products
+// derived from this software without specific prior written permission.
+//
+// This software is provided by the copyright holders and contributors "as is" and
+// any express or implied warranties, including, but not limited to, the implied
+// warranties of merchantability and fitness for a particular purpose are disclaimed.
+// In no event shall the Intel Corporation or contributors be liable for any direct,
+// indirect, incidental, special, exemplary, or consequential damages
+// (including, but not limited to, procurement of substitute goods or services;
+// loss of use, data, or profits; or business interruption) however caused
+// and on any theory of liability, whether in contract, strict liability,
+// or tort (including negligence or otherwise) arising in any way out of
+// the use of this software, even if advised of the possibility of such damage.
+//
+//M*/
+
+#ifdef __OPENCV_BUILD
+#error this is a compatibility header which should not be used inside the OpenCV library
+#endif
+
+#include "opencv2/bioinspired.hpp"
**
** Maintainers : Listic lab (code author current affiliation & applications) and Gipsa Lab (original research origins & applications)
**
- ** Creation - enhancement process 2007-2011
+ ** Creation - enhancement process 2007-2013
** Author: Alexandre Benoit (benoit.alexandre.vision@gmail.com), LISTIC lab, Annecy le vieux, France
**
** Theses algorithm have been developped by Alexandre BENOIT since his thesis with Alice Caplier at Gipsa-Lab (www.gipsa-lab.inpg.fr) and the research he pursues at LISTIC Lab (www.listic.univ-savoie.fr).
** the use of this software, even if advised of the possibility of such damage.
*******************************************************************************/
-#ifndef __OPENCV_CONTRIB_RETINA_HPP__
-#define __OPENCV_CONTRIB_RETINA_HPP__
+#ifndef __OPENCV_BIOINSPIRED_RETINA_HPP__
+#define __OPENCV_BIOINSPIRED_RETINA_HPP__
/*
* Retina.hpp
#include "opencv2/core.hpp" // for all OpenCV core functionalities access, including cv::Exception support
#include <valarray>
-namespace cv
-{
+namespace cv{
+namespace hvstools{
enum RETINA_COLORSAMPLINGMETHOD
{
virtual void run(InputArray inputImage)=0;
/**
+ * method that applies a luminance correction (initially High Dynamic Range (HDR) tone mapping) using only the 2 local adaptation stages of the retina parvo channel : photoreceptors level and ganglion cells level. Spatio-temporal filtering is applied but limited to temporal smoothing and possibly high frequency attenuation. This is a lighter method than the one available using the regular run method. It is therefore faster, but it does not include complete temporal filtering nor retina spectral whitening. As a consequence, it can have a more limited effect on images with a very high dynamic range. This is an adaptation of the original still image HDR tone mapping algorithm of David Alleysson, Sabine Susstrunk and Laurence Meylan's work, please cite:
+ * -> Meylan L., Alleysson D., and Susstrunk S., A Model of Retinal Local Adaptation for the Tone Mapping of Color Filter Array Images, Journal of Optical Society of America, A, Vol. 24, N 9, September, 1st, 2007, pp. 2807-2816
+ @param inputImage the input image to process RGB or gray levels
+ @param outputToneMappedImage the output tone mapped image
+ */
+ virtual void applyFastToneMapping(InputArray inputImage, OutputArray outputToneMappedImage)=0;
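+
+ /* Example (a minimal usage sketch; "myRetina" and "hdrFrame" are illustrative names for an
+  * already allocated Retina instance and an already loaded cv::Mat) :
+  *    cv::Mat toneMappedFrame;
+  *    myRetina->applyFastToneMapping(hdrFrame, toneMappedFrame);
+  */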
+
+ /**
* accessor of the details channel of the retina (models foveal vision)
* @param retinaOutput_parvo : the output buffer (reallocated if necessary), this output is rescaled for standard 8bits image processing use in OpenCV
*/
CV_EXPORTS Ptr<Retina> createRetina(Size inputSize);
CV_EXPORTS Ptr<Retina> createRetina(Size inputSize, const bool colorMode, RETINA_COLORSAMPLINGMETHOD colorSamplingMethod=RETINA_COLOR_BAYER, const bool useRetinaLogSampling=false, const double reductionFactor=1.0, const double samplingStrenght=10.0);
+
+ /**
+ * exports a valarray buffer output from HVStools objects to a cv::Mat in CV_8UC1 (gray level picture) or CV_8UC3 (color) format
+ * @param grayMatrixToConvert the valarray to export to OpenCV
+ * @param nbRows : the number of rows of the flattened valarray matrix
+ * @param nbColumns : the number of columns of the flattened valarray matrix
+ * @param colorMode : a flag which mentions if matrix is color (true) or graylevel (false)
+ * @param outBuffer : the output matrix which is reallocated to satisfy Retina output buffer dimensions
+ */
+ void _convertValarrayBuffer2cvMat(const std::valarray<float> &grayMatrixToConvert, const unsigned int nbRows, const unsigned int nbColumns, const bool colorMode, OutputArray outBuffer);
+
+ /**
+ * convert a cv::Mat to a valarray buffer in float format
+ * @param inputMatToConvert : the OpenCV cv::Mat that has to be converted to gray or RGB valarray buffer that will be processed by the retina model
+ * @param outputValarrayMatrix : the output valarray
+ * @return the input image color mode (color=true, gray levels=false)
+ */
+ bool _convertCvMat2ValarrayBuffer(InputArray inputMatToConvert, std::valarray<float> &outputValarrayMatrix);
+
+}
}
-#endif /* __OPENCV_CONTRIB_RETINA_HPP__ */
+#endif /* __OPENCV_BIOINSPIRED_RETINA_HPP__ */
--- /dev/null
+
+/*#******************************************************************************
+ ** IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
+ **
+ ** By downloading, copying, installing or using the software you agree to this license.
+ ** If you do not agree to this license, do not download, install,
+ ** copy or use the software.
+ **
+ **
+ ** HVStools : interfaces allowing OpenCV users to integrate Human Vision System models. Presented models originate from Jeanny Herault's original research and have been reused and adapted by the author & collaborators for computer vision applications since his thesis with Alice Caplier at Gipsa-Lab.
+ **
+ ** Maintainers : Listic lab (code author current affiliation & applications) and Gipsa Lab (original research origins & applications)
+ **
+ ** Creation - enhancement process 2007-2013
+ ** Author: Alexandre Benoit (benoit.alexandre.vision@gmail.com), LISTIC lab, Annecy le vieux, France
+ **
+ ** These algorithms have been developed by Alexandre BENOIT since his thesis with Alice Caplier at Gipsa-Lab (www.gipsa-lab.inpg.fr) and the research he pursues at LISTIC Lab (www.listic.univ-savoie.fr).
+ ** Refer to the following research paper for more information:
+ ** Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
+ ** This work has been carried out thanks to Jeanny Herault, whose research and great discussions are the basis of all this work; please take a look at his book:
+ ** Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing),By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
+ **
+ **
+ **
+ **
+ **
+ ** This class is based on image processing tools of the author and is already used within the Retina class (this is the same code as the method retina::applyFastToneMapping, but in an independent class; it is light from a memory requirement point of view). It implements an adaptation of the efficient tone mapping algorithm proposed by David Alleysson, Sabine Susstrunk and Laurence Meylan's work, please cite:
+ ** -> Meylan L., Alleysson D., and Susstrunk S., A Model of Retinal Local Adaptation for the Tone Mapping of Color Filter Array Images, Journal of Optical Society of America, A, Vol. 24, N 9, September, 1st, 2007, pp. 2807-2816
+ **
+ **
+ ** License Agreement
+ ** For Open Source Computer Vision Library
+ **
+ ** Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
+ ** Copyright (C) 2008-2011, Willow Garage Inc., all rights reserved.
+ **
+ ** For Human Visual System tools (hvstools)
+ ** Copyright (C) 2007-2011, LISTIC Lab, Annecy le Vieux and GIPSA Lab, Grenoble, France, all rights reserved.
+ **
+ ** Third party copyrights are property of their respective owners.
+ **
+ ** Redistribution and use in source and binary forms, with or without modification,
+ ** are permitted provided that the following conditions are met:
+ **
+ ** * Redistributions of source code must retain the above copyright notice,
+ ** this list of conditions and the following disclaimer.
+ **
+ ** * Redistributions in binary form must reproduce the above copyright notice,
+ ** this list of conditions and the following disclaimer in the documentation
+ ** and/or other materials provided with the distribution.
+ **
+ ** * The name of the copyright holders may not be used to endorse or promote products
+ ** derived from this software without specific prior written permission.
+ **
+ ** This software is provided by the copyright holders and contributors "as is" and
+ ** any express or implied warranties, including, but not limited to, the implied
+ ** warranties of merchantability and fitness for a particular purpose are disclaimed.
+ ** In no event shall the Intel Corporation or contributors be liable for any direct,
+ ** indirect, incidental, special, exemplary, or consequential damages
+ ** (including, but not limited to, procurement of substitute goods or services;
+ ** loss of use, data, or profits; or business interruption) however caused
+ ** and on any theory of liability, whether in contract, strict liability,
+ ** or tort (including negligence or otherwise) arising in any way out of
+ ** the use of this software, even if advised of the possibility of such damage.
+ *******************************************************************************/
+
+#ifndef __OPENCV_CONTRIB_RETINAFASTTONEMAPPING_HPP__
+#define __OPENCV_CONTRIB_RETINAFASTTONEMAPPING_HPP__
+
+/*
+ * retinafasttonemapping.hpp
+ *
+ * Created on: May 26, 2013
+ * Author: Alexandre Benoit
+ */
+
+#include "opencv2/core.hpp" // for all OpenCV core functionalities access, including cv::Exception support
+#include <valarray>
+
+namespace cv{
+namespace hvstools{
+
+/**
+ * @class RetinaFastToneMapping a wrapper class which allows the tone mapping algorithm of Meylan&al(2007) to be used with OpenCV.
+ * This algorithm is already implemented in the Retina class (retina::applyFastToneMapping) but using this class does not require the whole retina model to be allocated. This allows a light memory use on low memory devices (smartphones, etc.).
+ * As a summary, these are the model properties:
+ * => 2 stages of local luminance adaptation with a different local neighborhood for each.
+ * => first stage models the retina photoreceptors local luminance adaptation
+ * => second stage models the ganglion cells local information adaptation
+ * => compared to the initial publication, this class uses spatio-temporal low pass filters instead of spatial only filters.
+ * ====> this can help noise robustness and temporal stability for video sequence use cases.
+ * for more information, refer to the following papers :
+ * Meylan L., Alleysson D., and Susstrunk S., A Model of Retinal Local Adaptation for the Tone Mapping of Color Filter Array Images, Journal of Optical Society of America, A, Vol. 24, N 9, September, 1st, 2007, pp. 2807-2816
+ * Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
+ * regarding spatio-temporal filter and the bigger retina model :
+ * Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing),By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
+ */
+class CV_EXPORTS RetinaFastToneMapping : public Algorithm
+{
+public:
+
+ /**
+ * method that applies a luminance correction (initially High Dynamic Range (HDR) tone mapping) using only the 2 local adaptation stages of the retina parvocellular channel : photoreceptors level and ganglion cells level. Spatio-temporal filtering is applied but limited to temporal smoothing and possibly high frequency attenuation. This is a lighter method than the one available using the regular retina::run method. It is therefore faster, but it does not include complete temporal filtering nor retina spectral whitening. As a consequence, it can have a more limited effect on images with a very high dynamic range. This is an adaptation of the original still image HDR tone mapping algorithm of David Alleysson, Sabine Susstrunk and Laurence Meylan's work, please cite:
+ * -> Meylan L., Alleysson D., and Susstrunk S., A Model of Retinal Local Adaptation for the Tone Mapping of Color Filter Array Images, Journal of Optical Society of America, A, Vol. 24, N 9, September, 1st, 2007, pp. 2807-2816
+ @param inputImage the input image to process RGB or gray levels
+ @param outputToneMappedImage the output tone mapped image
+ */
+ virtual void applyFastToneMapping(InputArray inputImage, OutputArray outputToneMappedImage)=0;
+
+ /**
+ * setup method that updates tone mapping behaviors by adjusting the local luminance computation areas
+ * @param photoreceptorsNeighborhoodRadius the first stage local adaptation area
+ * @param ganglioncellsNeighborhoodRadius the second stage local adaptation area
+ * @param meanLuminanceModulatorK the factor applied to modulate the meanLuminance information (default is 1, see reference paper)
+ */
+ virtual void setup(const float photoreceptorsNeighborhoodRadius=3.f, const float ganglioncellsNeighborhoodRadius=1.f, const float meanLuminanceModulatorK=1.f)=0;
+};
+
+CV_EXPORTS Ptr<RetinaFastToneMapping> createRetinaFastToneMapping(Size inputSize);
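+
+/* Example (a minimal usage sketch; "hdrImage" is an illustrative name for an already loaded cv::Mat) :
+ *    cv::Ptr<cv::hvstools::RetinaFastToneMapping> toneMapper =
+ *        cv::hvstools::createRetinaFastToneMapping(hdrImage.size());
+ *    toneMapper->setup(3.f, 1.f, 1.f); // optional, these are the default adaptation areas
+ *    cv::Mat toneMappedImage;
+ *    toneMapper->applyFastToneMapping(hdrImage, toneMappedImage);
+ */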
+
+}
+}
+#endif /* __OPENCV_CONTRIB_RETINAFASTTONEMAPPING_HPP__ */
+
namespace cv
{
-
+namespace hvstools
+{
// @author Alexandre BENOIT, benoit.alexandre.vision@gmail.com, LISTIC : www.listic.univ-savoie.fr Gipsa-Lab, France: www.gipsa-lab.inpg.fr/
//////////////////////////////////////////////////////////
}
}
-}
+}// end of namespace hvstools
+}// end of namespace cv
namespace cv
{
+namespace hvstools
+{
class BasicRetinaFilter
{
public:
* @param maxInputValue: the maximum amplitude value measured after local adaptation processing (c.f. function runFilter_LocalAdapdation & runFilter_LocalAdapdation_autonomous)
* @param meanLuminance: the a priori meann luminance of the input data (should be 128 for 8bits images but can vary greatly in case of High Dynamic Range Images (HDRI)
*/
- void setV0CompressionParameterToneMapping(const float v0, const float maxInputValue, const float meanLuminance=128.0f){ _v0=v0*maxInputValue; _localLuminanceFactor=1.0f; _localLuminanceAddon=meanLuminance*_v0; _maxInputValue=maxInputValue;};
+ void setV0CompressionParameterToneMapping(const float v0, const float maxInputValue, const float meanLuminance=128.0f){ _v0=v0*maxInputValue; _localLuminanceFactor=1.0f; _localLuminanceAddon=meanLuminance*v0; _maxInputValue=maxInputValue;};
/**
* update compression parameters while keeping v0 parameter value
};
-}
+}// end of namespace hvstools
+}// end of namespace cv
#endif
namespace cv
{
-
+namespace hvstools
+{
// constructor
ImageLogPolProjection::ImageLogPolProjection(const unsigned int nbRows, const unsigned int nbColumns, const PROJECTIONTYPE projection, const bool colorModeCapable)
:BasicRetinaFilter(nbRows, nbColumns),
return _sampledFrame;
}
-}
+}// end of namespace hvstools
+}// end of namespace cv
namespace cv
{
+namespace hvstools
+{
class ImageLogPolProjection:public BasicRetinaFilter
{
};
-}
+}// end of namespace hvstools
+}// end of namespace cv
#endif /*IMAGELOGPOLPROJECTION_H_*/
namespace cv
{
+namespace hvstools
+{
// Constructor and Desctructor of the OPL retina filter
MagnoRetinaFilter::MagnoRetinaFilter(const unsigned int NBrows, const unsigned int NBcolumns)
:BasicRetinaFilter(NBrows, NBcolumns, 2),
return (*_magnoYOutput);
}
-}
+}// end of namespace hvstools
+}// end of namespace cv
namespace cv
{
-
+namespace hvstools
+{
class MagnoRetinaFilter: public BasicRetinaFilter
{
public:
#endif
};
-}
+}// end of namespace hvstools
+}// end of namespace cv
#endif /*MagnoRetinaFilter_H_*/
namespace cv
{
+namespace hvstools
+{
//////////////////////////////////////////////////////////
// OPL RETINA FILTER
//////////////////////////////////////////////////////////
}
#endif
}
-}
+}// end of namespace hvstools
+}// end of namespace cv
namespace cv
{
+namespace hvstools
+{
//retina classes that derivate from the Basic Retrina class
class ParvoRetinaFilter: public BasicRetinaFilter
{
#endif
};
-}
+}// end of namespace hvstools
+}// end of namespace cv
#endif
--- /dev/null
+/*M///////////////////////////////////////////////////////////////////////////////////////
+//
+// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
+//
+// By downloading, copying, installing or using the software you agree to this license.
+// If you do not agree to this license, do not download, install,
+// copy or use the software.
+//
+//
+// Intel License Agreement
+// For Open Source Computer Vision Library
+//
+// Copyright (C) 2000, Intel Corporation, all rights reserved.
+// Third party copyrights are property of their respective owners.
+//
+// Redistribution and use in source and binary forms, with or without modification,
+// are permitted provided that the following conditions are met:
+//
+// * Redistribution's of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+//
+// * Redistribution's in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+//
+// * The name of Intel Corporation may not be used to endorse or promote products
+// derived from this software without specific prior written permission.
+//
+// This software is provided by the copyright holders and contributors "as is" and
+// any express or implied warranties, including, but not limited to, the implied
+// warranties of merchantability and fitness for a particular purpose are disclaimed.
+// In no event shall the Intel Corporation or contributors be liable for any direct,
+// indirect, incidental, special, exemplary, or consequential damages
+// (including, but not limited to, procurement of substitute goods or services;
+// loss of use, data, or profits; or business interruption) however caused
+// and on any theory of liability, whether in contract, strict liability,
+// or tort (including negligence or otherwise) arising in any way out of
+// the use of this software, even if advised of the possibility of such damage.
+//
+//M*/
+
+#include "precomp.hpp"
+
+/* End of file. */
--- /dev/null
+/*M///////////////////////////////////////////////////////////////////////////////////////
+//
+// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
+//
+// By downloading, copying, installing or using the software you agree to this license.
+// If you do not agree to this license, do not download, install,
+// copy or use the software.
+//
+//
+// License Agreement
+// For Open Source Computer Vision Library
+//
+// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
+// Copyright (C) 2009, Willow Garage Inc., all rights reserved.
+// Third party copyrights are property of their respective owners.
+//
+// Redistribution and use in source and binary forms, with or without modification,
+// are permitted provided that the following conditions are met:
+//
+// * Redistribution's of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+//
+// * Redistribution's in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation
+// and/or other materials provided with the distribution.
+//
+// * The name of the copyright holders may not be used to endorse or promote products
+// derived from this software without specific prior written permission.
+//
+// This software is provided by the copyright holders and contributors "as is" and
+// any express or implied warranties, including, but not limited to, the implied
+// warranties of merchantability and fitness for a particular purpose are disclaimed.
+// In no event shall the Intel Corporation or contributors be liable for any direct,
+// indirect, incidental, special, exemplary, or consequential damages
+// (including, but not limited to, procurement of substitute goods or services;
+// loss of use, data, or profits; or business interruption) however caused
+// and on any theory of liability, whether in contract, strict liability,
+// or tort (including negligence or otherwise) arising in any way out of
+// the use of this software, even if advised of the possibility of such damage.
+//
+//M*/
+
+#ifndef __OPENCV_PRECOMP_H__
+#define __OPENCV_PRECOMP_H__
+
+#include "opencv2/bioinspired.hpp"
+#include "opencv2/core/utility.hpp"
+
+#include "opencv2/core/private.hpp"
+
+namespace cv
+{
+
+// special function to get pointer to constant valarray elements, since
+// simple &arr[0] does not compile on VS2005/VS2008.
+template<typename T> inline const T* get_data(const std::valarray<T>& arr)
+{ return &((std::valarray<T>&)arr)[0]; }
+
+}
+
+#endif
namespace cv
{
+namespace hvstools
+{
class RetinaImpl : public Retina
{
void run(InputArray inputImage);
/**
+ * method that applies a luminance correction (initially High Dynamic Range (HDR) tone mapping) using only the 2 local adaptation stages of the retina parvo channel : photoreceptors level and ganglion cells level. Spatio-temporal filtering is applied but limited to temporal smoothing and possibly high frequency attenuation. This is a lighter method than the one available using the regular run method. It is therefore faster, but it does not include complete temporal filtering nor retina spectral whitening. This is an adaptation of the original still image HDR tone mapping algorithm of David Alleysson, Sabine Susstrunk and Laurence Meylan's work, please cite:
+ * -> Meylan L., Alleysson D., and Susstrunk S., A Model of Retinal Local Adaptation for the Tone Mapping of Color Filter Array Images, Journal of Optical Society of America, A, Vol. 24, N 9, September, 1st, 2007, pp. 2807-2816
+ @param inputImage the input image to process RGB or gray levels
+ @param outputToneMappedImage the output tone mapped image
+ */
+ void applyFastToneMapping(InputArray inputImage, OutputArray outputToneMappedImage);
+
+ /**
* accessor of the details channel of the retina (models foveal vision)
* @param retinaOutput_parvo : the output buffer (reallocated if necessary), this output is rescaled for standard 8bits image processing use in OpenCV
*/
// Parameteres setup members
RetinaParameters _retinaParameters; // structure of parameters
- // Retina model related modules
+ // Retina model related modules
std::valarray<float> _inputBuffer; //!< buffer used to convert input cv::Mat to internal retina buffers format (valarrays)
// pointer to retina model
RetinaFilter* _retinaFilter; //!< the pointer to the retina module, allocated with instance construction
- /**
- * exports a valarray buffer outing from HVStools objects to a cv::Mat in CV_8UC1 (gray level picture) or CV_8UC3 (color) format
- * @param grayMatrixToConvert the valarray to export to OpenCV
- * @param nbRows : the number of rows of the valarray flatten matrix
- * @param nbColumns : the number of rows of the valarray flatten matrix
- * @param colorMode : a flag which mentions if matrix is color (true) or graylevel (false)
- * @param outBuffer : the output matrix which is reallocated to satisfy Retina output buffer dimensions
- */
- void _convertValarrayBuffer2cvMat(const std::valarray<float> &grayMatrixToConvert, const unsigned int nbRows, const unsigned int nbColumns, const bool colorMode, OutputArray outBuffer);
-
- /**
- *
- * @param inputMatToConvert : the OpenCV cv::Mat that has to be converted to gray or RGB valarray buffer that will be processed by the retina model
- * @param outputValarrayMatrix : the output valarray
- * @return the input image color mode (color=true, gray levels=false)
- */
- bool _convertCvMat2ValarrayBuffer(InputArray inputMatToConvert, std::valarray<float> &outputValarrayMatrix);
-
//! private method called by constructors, gathers their parameters and use them in a unified way
void _init(const Size inputSize, const bool colorMode, RETINA_COLORSAMPLINGMETHOD colorSamplingMethod=RETINA_COLOR_BAYER, const bool useRetinaLogSampling=false, const double reductionFactor=1.0, const double samplingStrenght=10.0);
Ptr<Retina> createRetina(Size inputSize){ return new RetinaImpl(inputSize); }
Ptr<Retina> createRetina(Size inputSize, const bool colorMode, RETINA_COLORSAMPLINGMETHOD colorSamplingMethod, const bool useRetinaLogSampling, const double reductionFactor, const double samplingStrenght){return new RetinaImpl(inputSize, colorMode, colorSamplingMethod, useRetinaLogSampling, reductionFactor, samplingStrenght);}
+
// RetinaImpl code
RetinaImpl::RetinaImpl(const cv::Size inputSz)
{
printf("%s\n", printSetup().c_str());
}
-void RetinaImpl::setup(cv::Retina::RetinaParameters newConfiguration)
+void RetinaImpl::setup(Retina::RetinaParameters newConfiguration)
{
// simply copy structures
- memcpy(&_retinaParameters, &newConfiguration, sizeof(cv::Retina::RetinaParameters));
+ memcpy(&_retinaParameters, &newConfiguration, sizeof(Retina::RetinaParameters));
// apply setup
setupOPLandIPLParvoChannel(_retinaParameters.OPLandIplParvo.colorMode, _retinaParameters.OPLandIplParvo.normaliseOutput, _retinaParameters.OPLandIplParvo.photoreceptorsLocalAdaptationSensitivity, _retinaParameters.OPLandIplParvo.photoreceptorsTemporalConstant, _retinaParameters.OPLandIplParvo.photoreceptorsSpatialConstant, _retinaParameters.OPLandIplParvo.horizontalCellsGain, _retinaParameters.OPLandIplParvo.hcellsTemporalConstant, _retinaParameters.OPLandIplParvo.hcellsSpatialConstant, _retinaParameters.OPLandIplParvo.ganglionCellsSensitivity);
setupIPLMagnoChannel(_retinaParameters.IplMagno.normaliseOutput, _retinaParameters.IplMagno.parasolCells_beta, _retinaParameters.IplMagno.parasolCells_tau, _retinaParameters.IplMagno.parasolCells_k, _retinaParameters.IplMagno.amacrinCellsTemporalCutFrequency,_retinaParameters.IplMagno.V0CompressionParameter, _retinaParameters.IplMagno.localAdaptintegration_tau, _retinaParameters.IplMagno.localAdaptintegration_k);
throw cv::Exception(-1, "RetinaImpl cannot be applied, wrong input buffer size", "RetinaImpl::run", "RetinaImpl.h", 0);
}
+void RetinaImpl::applyFastToneMapping(InputArray inputImage, OutputArray outputToneMappedImage)
+{
+ // first convert input image to the compatible format :
+ const bool colorMode = _convertCvMat2ValarrayBuffer(inputImage.getMat(), _inputBuffer);
+ const unsigned int nbPixels=_retinaFilter->getOutputNBrows()*_retinaFilter->getOutputNBcolumns();
+
+ // process tone mapping
+ if (colorMode)
+ {
+ std::valarray<float> imageOutput(nbPixels*3);
+ _retinaFilter->runRGBToneMapping(_inputBuffer, imageOutput, true, _retinaParameters.OPLandIplParvo.photoreceptorsLocalAdaptationSensitivity, _retinaParameters.OPLandIplParvo.ganglionCellsSensitivity);
+ _convertValarrayBuffer2cvMat(imageOutput, _retinaFilter->getOutputNBrows(), _retinaFilter->getOutputNBcolumns(), true, outputToneMappedImage);
+ }else
+ {
+ std::valarray<float> imageOutput(nbPixels);
+ _retinaFilter->runGrayToneMapping(_inputBuffer, imageOutput, _retinaParameters.OPLandIplParvo.photoreceptorsLocalAdaptationSensitivity, _retinaParameters.OPLandIplParvo.ganglionCellsSensitivity);
+ _convertValarrayBuffer2cvMat(imageOutput, _retinaFilter->getOutputNBrows(), _retinaFilter->getOutputNBcolumns(), false, outputToneMappedImage);
+ }
+
+}
+
void RetinaImpl::getParvo(OutputArray retinaOutput_parvo)
{
if (_retinaFilter->getColorMode())
{
// basic error check
if (inputSz.height*inputSz.width <= 0)
- throw cv::Exception(-1, "Bad retina size setup : size height and with must be superior to zero", "RetinaImpl::setup", "RetinaImpl.h", 0);
+ throw cv::Exception(-1, "Bad retina size setup : height and width must be greater than zero", "RetinaImpl::setup", "Retina.cpp", 0);
unsigned int nbPixels=inputSz.height*inputSz.width;
// resize buffers if size does not match
_retinaFilter = new RetinaFilter(inputSz.height, inputSz.width, colorMode, colorSamplingMethod, useRetinaLogSampling, reductionFactor, samplingStrenght);
// prepare the default parameter XML file with default setup
- setup(_retinaParameters);
+ setup(_retinaParameters);
// init retina
_retinaFilter->clearAllBuffers();
printf("%s\n", printSetup().c_str());
}
-void RetinaImpl::_convertValarrayBuffer2cvMat(const std::valarray<float> &grayMatrixToConvert, const unsigned int nbRows, const unsigned int nbColumns, const bool colorMode, OutputArray outBuffer)
+void _convertValarrayBuffer2cvMat(const std::valarray<float> &grayMatrixToConvert, const unsigned int nbRows, const unsigned int nbColumns, const bool colorMode, OutputArray outBuffer)
{
// fill output buffer with the valarray buffer
const float *valarrayPTR=get_data(grayMatrixToConvert);
}
}else
{
- const unsigned int doubleNBpixels=_retinaFilter->getOutputNBpixels()*2;
+ const unsigned int nbPixels=nbColumns*nbRows;
+ const unsigned int doubleNBpixels=nbColumns*nbRows*2;
outBuffer.create(cv::Size(nbColumns, nbRows), CV_8UC3);
Mat outMat = outBuffer.getMat();
for (unsigned int i=0;i<nbRows;++i)
cv::Point2d pixel(j,i);
cv::Vec3b pixelValues;
pixelValues[2]=(unsigned char)*(valarrayPTR);
- pixelValues[1]=(unsigned char)*(valarrayPTR+_retinaFilter->getOutputNBpixels());
+ pixelValues[1]=(unsigned char)*(valarrayPTR+nbPixels);
pixelValues[0]=(unsigned char)*(valarrayPTR+doubleNBpixels);
outMat.at<cv::Vec3b>(pixel)=pixelValues;
}
}
-bool RetinaImpl::_convertCvMat2ValarrayBuffer(InputArray inputMat, std::valarray<float> &outputValarrayMatrix)
+bool _convertCvMat2ValarrayBuffer(InputArray inputMat, std::valarray<float> &outputValarrayMatrix)
{
const Mat inputMatToConvert=inputMat.getMat();
// first check input consistency
const int dsttype = DataType<T>::depth; // output buffer is float format
+ const unsigned int nbPixels=inputMat.getMat().rows*inputMat.getMat().cols;
+ const unsigned int doubleNBpixels=inputMat.getMat().rows*inputMat.getMat().cols*2;
if(imageNumberOfChannels==4)
{
// create a cv::Mat table (for RGBA planes)
cv::Mat planes[4] =
{
- cv::Mat(inputMatToConvert.size(), dsttype, &outputValarrayMatrix[_retinaFilter->getInputNBpixels()*2]),
- cv::Mat(inputMatToConvert.size(), dsttype, &outputValarrayMatrix[_retinaFilter->getInputNBpixels()]),
+ cv::Mat(inputMatToConvert.size(), dsttype, &outputValarrayMatrix[doubleNBpixels]),
+ cv::Mat(inputMatToConvert.size(), dsttype, &outputValarrayMatrix[nbPixels]),
cv::Mat(inputMatToConvert.size(), dsttype, &outputValarrayMatrix[0])
};
planes[3] = cv::Mat(inputMatToConvert.size(), dsttype); // last channel (alpha) does not point on the valarray (not usefull in our case)
// create a cv::Mat table (for RGB planes)
cv::Mat planes[] =
{
- cv::Mat(inputMatToConvert.size(), dsttype, &outputValarrayMatrix[_retinaFilter->getInputNBpixels()*2]),
- cv::Mat(inputMatToConvert.size(), dsttype, &outputValarrayMatrix[_retinaFilter->getInputNBpixels()]),
+ cv::Mat(inputMatToConvert.size(), dsttype, &outputValarrayMatrix[doubleNBpixels]),
+ cv::Mat(inputMatToConvert.size(), dsttype, &outputValarrayMatrix[nbPixels]),
cv::Mat(inputMatToConvert.size(), dsttype, &outputValarrayMatrix[0])
};
// split color cv::Mat in 3 planes... it fills valarray directely
void RetinaImpl::activateContoursProcessing(const bool activate){_retinaFilter->activateContoursProcessing(activate);}
-} // end of namespace cv
+}// end of namespace hvstools
+}// end of namespace cv
namespace cv
{
-
+namespace hvstools
+{
// init static values
static float _LMStoACr1Cr2[]={1.0, 1.0, 0.0, 1.0, -1.0, 0.0, -0.5, -0.5, 1.0};
//static double _ACr1Cr2toLMS[]={0.5, 0.5, 0.0, 0.5, -0.5, 0.0, 0.5, 0.0, 1.0};
}
}
-}
+}// end of namespace hvstools
+}// end of namespace cv
namespace cv
{
-
+namespace hvstools
+{
class RetinaColor: public BasicRetinaFilter
{
public:
* @param NBcolumns: number of columns of the input image
* @param samplingMethod: the chosen color sampling method
*/
- RetinaColor(const unsigned int NBrows, const unsigned int NBcolumns, const RETINA_COLORSAMPLINGMETHOD samplingMethod=RETINA_COLOR_DIAGONAL);
+ RetinaColor(const unsigned int NBrows, const unsigned int NBcolumns, const RETINA_COLORSAMPLINGMETHOD samplingMethod=RETINA_COLOR_BAYER);
/**
* standard destructor
#endif
};
-}
+}// end of namespace hvstools
+}// end of namespace cv
#endif /*RETINACOLOR_HPP_*/
--- /dev/null
+
+/*#******************************************************************************
+ ** IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
+ **
+ ** By downloading, copying, installing or using the software you agree to this license.
+ ** If you do not agree to this license, do not download, install,
+ ** copy or use the software.
+ **
+ **
+ ** HVStools : interfaces allowing OpenCV users to integrate Human Vision System models. Presented models originate from Jeanny Herault's original research and have been reused and adapted by the author & collaborators for computer vision applications since his thesis with Alice Caplier at Gipsa-Lab.
+ **
+ ** Maintainers : Listic lab (code author current affiliation & applications) and Gipsa Lab (original research origins & applications)
+ **
+ ** Creation - enhancement process 2007-2013
+ ** Author: Alexandre Benoit (benoit.alexandre.vision@gmail.com), LISTIC lab, Annecy le vieux, France
+ **
+ ** These algorithms have been developed by Alexandre BENOIT since his thesis with Alice Caplier at Gipsa-Lab (www.gipsa-lab.inpg.fr) and the research he pursues at LISTIC Lab (www.listic.univ-savoie.fr).
+ ** Refer to the following research paper for more information:
+ ** Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
+ ** This work has been carried out thanks to Jeanny Herault, whose research and great discussions are the basis of all this work; please take a look at his book:
+ ** Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing),By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
+ **
+ **
+ ** This class is based on image processing tools of the author and is already used within the Retina class (this is the same code as the method retina::applyFastToneMapping, but in an independent class; it is light from a memory requirement point of view). It implements an adaptation of the efficient tone mapping algorithm proposed by David Alleysson, Sabine Susstrunk and Laurence Meylan's work, please cite:
+ ** -> Meylan L., Alleysson D., and Susstrunk S., A Model of Retinal Local Adaptation for the Tone Mapping of Color Filter Array Images, Journal of Optical Society of America, A, Vol. 24, N 9, September, 1st, 2007, pp. 2807-2816
+ **
+ **
+ ** License Agreement
+ ** For Open Source Computer Vision Library
+ **
+ ** Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
+ ** Copyright (C) 2008-2011, Willow Garage Inc., all rights reserved.
+ **
+ ** For Human Visual System tools (hvstools)
+ ** Copyright (C) 2007-2011, LISTIC Lab, Annecy le Vieux and GIPSA Lab, Grenoble, France, all rights reserved.
+ **
+ ** Third party copyrights are property of their respective owners.
+ **
+ ** Redistribution and use in source and binary forms, with or without modification,
+ ** are permitted provided that the following conditions are met:
+ **
+ ** * Redistributions of source code must retain the above copyright notice,
+ ** this list of conditions and the following disclaimer.
+ **
+ ** * Redistributions in binary form must reproduce the above copyright notice,
+ ** this list of conditions and the following disclaimer in the documentation
+ ** and/or other materials provided with the distribution.
+ **
+ ** * The name of the copyright holders may not be used to endorse or promote products
+ ** derived from this software without specific prior written permission.
+ **
+ ** This software is provided by the copyright holders and contributors "as is" and
+ ** any express or implied warranties, including, but not limited to, the implied
+ ** warranties of merchantability and fitness for a particular purpose are disclaimed.
+ ** In no event shall the Intel Corporation or contributors be liable for any direct,
+ ** indirect, incidental, special, exemplary, or consequential damages
+ ** (including, but not limited to, procurement of substitute goods or services;
+ ** loss of use, data, or profits; or business interruption) however caused
+ ** and on any theory of liability, whether in contract, strict liability,
+ ** or tort (including negligence or otherwise) arising in any way out of
+ ** the use of this software, even if advised of the possibility of such damage.
+ *******************************************************************************/
+
+/*
+ * retinafasttonemapping.cpp
+ *
+ * Created on: May 26, 2013
+ * Author: Alexandre Benoit
+ */
+
+#include "precomp.hpp"
+#include "basicretinafilter.hpp"
+#include "retinacolor.hpp"
+#include <cstdio>
+#include <sstream>
+#include <valarray>
+
+namespace cv
+{
+namespace hvstools
+{
+/**
+ * @class RetinaFastToneMappingImpl a wrapper class which allows the tone mapping algorithm of Meylan&al(2007) to be used with OpenCV.
+ * This algorithm is already implemented in the Retina class (retina::applyFastToneMapping) but using this class does not require the whole retina model to be allocated. This allows a light memory use on low memory devices (smartphones, etc.).
+ * As a summary, these are the model properties:
+ * => 2 stages of local luminance adaptation with a different local neighborhood for each.
+ * => first stage models the retina photoreceptors local luminance adaptation
+ * => second stage models the ganglion cells local information adaptation
+ * => compared to the initial publication, this class uses spatio-temporal low pass filters instead of spatial only filters.
+ * ====> this can help noise robustness and temporal stability for video sequence use cases.
+ * for more information, refer to the following papers :
+ * Meylan L., Alleysson D., and Susstrunk S., A Model of Retinal Local Adaptation for the Tone Mapping of Color Filter Array Images, Journal of Optical Society of America, A, Vol. 24, N 9, September, 1st, 2007, pp. 2807-2816
+ * Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
+ * regarding spatio-temporal filter and the bigger retina model :
+ * Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing),By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
+ */
+
+class RetinaFastToneMappingImpl : public RetinaFastToneMapping
+{
+public:
+ /**
+ * constructor
+ * @param imageInput: the size of the images to process
+ */
+ RetinaFastToneMappingImpl(Size imageInput)
+ {
+ unsigned int nbPixels=imageInput.height*imageInput.width;
+
+ // basic error check
+ if (nbPixels <= 0)
+ throw cv::Exception(-1, "Bad retina size setup : height and width must be greater than zero", "RetinaImpl::setup", "retinafasttonemapping.cpp", 0);
+
+ // resize buffers
+ _inputBuffer.resize(nbPixels*3); // buffer supports gray images but also 3 channels color buffers... (larger is better...)
+ _imageOutput.resize(nbPixels*3);
+ _temp2.resize(nbPixels);
+ // allocate the main filter with 2 sets of setup properties (one for each low pass filter)
+ _multiuseFilter = new BasicRetinaFilter(imageInput.height, imageInput.width, 2);
+ // allocate the color manager (multiplexer/demultiplexer)
+ _colorEngine = new RetinaColor(imageInput.height, imageInput.width);
+ // setup filter behaviors with default values
+ setup();
+ }
+
+ /**
+ * basic destructor
+ */
+ virtual ~RetinaFastToneMappingImpl(){};
+
+ /**
+ * method that applies a luminance correction (initially High Dynamic Range (HDR) tone mapping) using only the 2 local adaptation stages of the retina parvocellular channel : photoreceptors level and ganglion cells level. Spatio-temporal filtering is applied but limited to temporal smoothing and possibly high frequency attenuation. This is a lighter method than the one available using the regular retina::run method. It is therefore faster, but it does not include complete temporal filtering nor retina spectral whitening. As a consequence, it can have a more limited effect on images with a very high dynamic range. This is an adaptation of the original still image HDR tone mapping algorithm of David Alleysson, Sabine Susstrunk and Laurence Meylan's work, please cite:
+ * -> Meylan L., Alleysson D., and Susstrunk S., A Model of Retinal Local Adaptation for the Tone Mapping of Color Filter Array Images, Journal of Optical Society of America, A, Vol. 24, N 9, September, 1st, 2007, pp. 2807-2816
+ @param inputImage the input image to process RGB or gray levels
+ @param outputToneMappedImage the output tone mapped image
+ */
+ virtual void applyFastToneMapping(InputArray inputImage, OutputArray outputToneMappedImage)
+ {
+ // first convert input image to the compatible format :
+ const bool colorMode = _convertCvMat2ValarrayBuffer(inputImage.getMat(), _inputBuffer);
+
+ // process tone mapping
+ if (colorMode)
+ {
+ _runRGBToneMapping(_inputBuffer, _imageOutput, true);
+ _convertValarrayBuffer2cvMat(_imageOutput, _multiuseFilter->getNBrows(), _multiuseFilter->getNBcolumns(), true, outputToneMappedImage);
+ }else
+ {
+ _runGrayToneMapping(_inputBuffer, _imageOutput);
+ _convertValarrayBuffer2cvMat(_imageOutput, _multiuseFilter->getNBrows(), _multiuseFilter->getNBcolumns(), false, outputToneMappedImage);
+ }
+
+ }
+
+ /**
+ * setup method that updates tone mapping behaviors by adjusting the local luminance computation areas
+ * @param photoreceptorsNeighborhoodRadius the first stage local adaptation area
+ * @param ganglioncellsNeighborhoodRadius the second stage local adaptation area
+ * @param meanLuminanceModulatorK the factor applied to modulate the meanLuminance information (default is 1, see reference paper)
+ */
+ virtual void setup(const float photoreceptorsNeighborhoodRadius=3.f, const float ganglioncellsNeighborhoodRadius=1.f, const float meanLuminanceModulatorK=1.f)
+ {
+ // setup the spatio-temporal properties of each filter
+ _meanLuminanceModulatorK = meanLuminanceModulatorK;
+ _multiuseFilter->setV0CompressionParameter(1.f, 255.f, 128.f);
+ _multiuseFilter->setLPfilterParameters(0.f, 0.f, photoreceptorsNeighborhoodRadius, 1);
+ _multiuseFilter->setLPfilterParameters(0.f, 0.f, ganglioncellsNeighborhoodRadius, 2);
+ }
+
+private:
+ // a filter able to perform local adaptation and low pass spatio-temporal filtering
+ cv::Ptr <BasicRetinaFilter> _multiuseFilter;
+ cv::Ptr <RetinaColor> _colorEngine;
+
+ //!< buffer used to convert input cv::Mat to internal retina buffers format (valarrays)
+ std::valarray<float> _inputBuffer;
+ std::valarray<float> _imageOutput;
+ std::valarray<float> _temp2;
+ float _meanLuminanceModulatorK;
+
+ // run the initialized retina filter in order to perform gray image tone mapping; after this call all retina outputs are updated
+ void _runGrayToneMapping(const std::valarray<float> &grayImageInput, std::valarray<float> &grayImageOutput)
+ {
+ // apply tone mapping on the multiplexed image
+ // -> photoreceptors local adaptation (large area adaptation)
+ _multiuseFilter->runFilter_LPfilter(grayImageInput, grayImageOutput, 0); // compute low pass filtering modeling the horizontal cells filtering to access local luminance
+ _multiuseFilter->setV0CompressionParameterToneMapping(1.f, grayImageOutput.max(), _meanLuminanceModulatorK*grayImageOutput.sum()/(float)_multiuseFilter->getNBpixels());
+ _multiuseFilter->runFilter_LocalAdapdation(grayImageInput, grayImageOutput, _temp2); // adapt contrast to local luminance
+
+ // -> ganglion cells local adaptation (short area adaptation)
+ _multiuseFilter->runFilter_LPfilter(_temp2, grayImageOutput, 1); // compute low pass filtering (high cut frequency, removes spatio-temporal noise)
+ _multiuseFilter->setV0CompressionParameterToneMapping(1.f, _temp2.max(), _meanLuminanceModulatorK*grayImageOutput.sum()/(float)_multiuseFilter->getNBpixels());
+ _multiuseFilter->runFilter_LocalAdapdation(_temp2, grayImageOutput, grayImageOutput); // adapt contrast to local luminance
+
+ }
+
+ // run the initialized retina filter in order to perform color tone mapping; after this call all retina outputs are updated
+ void _runRGBToneMapping(const std::valarray<float> &RGBimageInput, std::valarray<float> &RGBimageOutput, const bool useAdaptiveFiltering)
+ {
+ // multiplex the image with the color sampling method specified in the constructor
+ _colorEngine->runColorMultiplexing(RGBimageInput);
+
+ // apply tone mapping on the multiplexed image
+ _runGrayToneMapping(_colorEngine->getMultiplexedFrame(), RGBimageOutput);
+
+ // demultiplex the tone mapped image
+ _colorEngine->runColorDemultiplexing(RGBimageOutput, useAdaptiveFiltering, _multiuseFilter->getMaxInputValue());//_ColorEngine->getMultiplexedFrame());//_ParvoRetinaFilter->getPhotoreceptorsLPfilteringOutput());
+
+ // rescaling result between 0 and 255
+ _colorEngine->normalizeRGBOutput_0_maxOutputValue(255.0);
+
+ // return the result
+ RGBimageOutput=_colorEngine->getDemultiplexedColorFrame();
+ }
+
+};
+
+CV_EXPORTS Ptr<RetinaFastToneMapping> createRetinaFastToneMapping(Size inputSize)
+{
+ return new RetinaFastToneMappingImpl(inputSize);
+}
+
+}// end of namespace hvstools
+}// end of namespace cv
namespace cv
{
+namespace hvstools
+{
// standard constructor without any log sampling of the input frame
RetinaFilter::RetinaFilter(const unsigned int sizeRows, const unsigned int sizeColumns, const bool colorMode, const RETINA_COLORSAMPLINGMETHOD samplingMethod, const bool useRetinaLogSampling, const double reductionFactor, const double samplingStrenght)
:
// apply tone mapping on the multiplexed image
// -> photoreceptors local adaptation (large area adaptation)
_photoreceptorsPrefilter.runFilter_LPfilter(grayImageInput, grayImageOutput, 2); // compute low pass filtering modeling the horizontal cells filtering to acess local luminance
- _photoreceptorsPrefilter.setV0CompressionParameterToneMapping(PhotoreceptorsCompression, grayImageOutput.sum()/(float)_photoreceptorsPrefilter.getNBpixels());
+ _photoreceptorsPrefilter.setV0CompressionParameterToneMapping(1.f-PhotoreceptorsCompression, grayImageOutput.max(), 1.f*grayImageOutput.sum()/(float)_photoreceptorsPrefilter.getNBpixels());
_photoreceptorsPrefilter.runFilter_LocalAdapdation(grayImageInput, grayImageOutput, temp2); // adapt contrast to local luminance
- // high pass filter
- //_spatiotemporalLPfilter(_localBuffer, _filterOutput, 2); // compute low pass filtering (high cut frequency (remove spatio-temporal noise)
-
- //for (unsigned int i=0;i<_NBpixels;++i)
- // _localBuffer[i]-= _filterOutput[i]/2.0;
-
// -> ganglion cells local adaptation (short area adaptation)
_photoreceptorsPrefilter.runFilter_LPfilter(temp2, grayImageOutput, 1); // compute low pass filtering (high cut frequency (remove spatio-temporal noise)
- _photoreceptorsPrefilter.setV0CompressionParameterToneMapping(ganglionCellsCompression, temp2.max(), temp2.sum()/(float)_photoreceptorsPrefilter.getNBpixels());
+ _photoreceptorsPrefilter.setV0CompressionParameterToneMapping(1.f-ganglionCellsCompression, temp2.max(), 1.f*temp2.sum()/(float)_photoreceptorsPrefilter.getNBpixels());
_photoreceptorsPrefilter.runFilter_LocalAdapdation(temp2, grayImageOutput, grayImageOutput); // adapt contrast to local luminance
-
}
+
// run the initilized retina filter in order to perform color tone mapping, after this call all retina outputs are updated
void RetinaFilter::runRGBToneMapping(const std::valarray<float> &RGBimageInput, std::valarray<float> &RGBimageOutput, const bool useAdaptiveFiltering, const float PhotoreceptorsCompression, const float ganglionCellsCompression)
{
return true;
}
-}
+}// end of namespace hvstools
+}// end of namespace cv
//#define __RETINADEBUG // define RETINADEBUG to display debug data
namespace cv
{
-
+namespace hvstools
+{
// retina class that process the 3 outputs of the retina filtering stages
class RetinaFilter//: public BasicRetinaFilter
{
};
-}
+}// end of namespace hvstools
+}// end of namespace cv
+
#endif /*RETINACLASSES_H_*/
#include <cmath>
+//#define __TEMPLATEBUFFERDEBUG //define TEMPLATEBUFFERDEBUG in order to display debug information
+
+namespace cv
+{
+namespace hvstools
+{
//// If a parallelization method is available then, you should define MAKE_PARALLEL, in the other case, the classical serial code will be used
#define MAKE_PARALLEL
// ==> then include required includes
};
#endif
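
For orientation, here is a hedged sketch of the kind of cv::parallel_for_ body that the MAKE_PARALLEL switch mentioned above enables; the real Parallel_* functors used by the retina filters are not shown in this hunk, only the pattern is illustrated.

#include <opencv2/core/utility.hpp>
#include <valarray>

// illustrative functor: each worker scales its own index range of the buffer
class Parallel_amplifySketch : public cv::ParallelLoopBody
{
    std::valarray<float> *buffer;
    float gain;
public:
    Parallel_amplifySketch(std::valarray<float> &b, float g) : buffer(&b), gain(g) {}
    virtual void operator()(const cv::Range &r) const
    {
        for (int i = r.start; i < r.end; ++i)
            (*buffer)[i] *= gain;
    }
};

// usage: cv::parallel_for_(cv::Range(0, (int)buf.size()), Parallel_amplifySketch(buf, 2.f));
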
-//#define __TEMPLATEBUFFERDEBUG //define TEMPLATEBUFFERDEBUG in order to display debug information
-
-namespace cv
-{
/**
* @class TemplateBuffer
* @brief this class is a simple template memory buffer which contains basic functions to get information on or normalize the buffer content
return std::fabs(x);
}
-}
+}// end of namespace hvstools
+}// end of namespace cv
#endif
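
Since the TemplateBuffer brief above only names the normalization operation, here is a minimal sketch of the min/max rescaling such a buffer typically exposes; the actual TemplateBuffer method names are not shown in this hunk and may differ.

#include <valarray>
#include <algorithm>

// sketch: affine rescale of the buffer content into [0, maxOutputValue]
static void normalizeBufferSketch(std::valarray<float> &buffer, float maxOutputValue = 255.f)
{
    const float minVal = buffer.min();
    const float maxVal = buffer.max();
    const float range  = std::max(maxVal - minVal, 1e-6f); // avoid division by zero on flat buffers
    buffer = (buffer - minVal) * (maxOutputValue / range);
}
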
--- /dev/null
+#include "test_precomp.hpp"
+
+CV_TEST_MAIN("cv")
--- /dev/null
+#include "test_precomp.hpp"
--- /dev/null
+#ifdef __GNUC__
+# pragma GCC diagnostic ignored "-Wmissing-declarations"
+# if defined __clang__ || defined __APPLE__
+# pragma GCC diagnostic ignored "-Wmissing-prototypes"
+# pragma GCC diagnostic ignored "-Wextra"
+# endif
+#endif
+
+#ifndef __OPENCV_TEST_PRECOMP_HPP__
+#define __OPENCV_TEST_PRECOMP_HPP__
+
+#include "opencv2/ts.hpp"
+#include "opencv2/bioinspired.hpp"
+#include <iostream>
+
+#endif
+
#include <iostream>
#include <cstring>
-#include "opencv2/contrib.hpp"
+#include "opencv2/bioinspired.hpp"
#include "opencv2/highgui.hpp"
static void help(std::string errorMessage)
try
{
// create a retina instance with default parameters setup, uncomment the initialisation you wanna test
- cv::Ptr<cv::Retina> myRetina;
+ cv::Ptr<Retina> myRetina;
// if the last parameter is 'log', then activate log sampling (favour foveal vision and subsamples peripheral vision)
if (useLogSampling)
{
- myRetina = cv::createRetina(inputFrame.size(), true, cv::RETINA_COLOR_BAYER, true, 2.0, 10.0);
+ myRetina = createRetina(inputFrame.size(), true, RETINA_COLOR_BAYER, true, 2.0, 10.0);
}
else// -> else allocate "classical" retina :
{
- myRetina = cv::createRetina(inputFrame.size());
+ myRetina = createRetina(inputFrame.size());
}
// save default retina parameters file in order to let you see this and maybe modify it and reload using method "setup"
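
The excerpt of the sample stops here; for completeness, a hedged sketch of how such a sample typically continues (parameter dump, then retrieval of the two retina channels). File and window names are illustrative, and the method names (run, getParvo, getMagno, write, setup) are assumed from the Retina interface used by this module.

#include <opencv2/bioinspired.hpp>
#include <opencv2/highgui.hpp>

int main()
{
    cv::Mat inputFrame = cv::imread("input.jpg");    // any 8-bit color image; name is only an example
    if (inputFrame.empty())
        return -1;

    cv::Ptr<cv::hvstools::Retina> myRetina = cv::hvstools::createRetina(inputFrame.size());
    myRetina->write("RetinaDefaultParameters.xml");  // inspect/edit, then reload with myRetina->setup(...)

    myRetina->run(inputFrame);                       // feed the retina with the current frame

    cv::Mat parvo, magno;
    myRetina->getParvo(parvo);                       // Parvocellular output (details)
    myRetina->getMagno(magno);                       // Magnocellular output (transient/motion)

    cv::imshow("retina parvo", parvo);
    cv::imshow("retina magno", magno);
    cv::waitKey();
    return 0;
}
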