Light scattering in disordered media poses a significant challenge to achieving deep penetration and high resolution simultaneously in biomedical optical imaging. Wavefront shaping has recently emerged as one of the most promising methods to tackle this problem. Numerous algorithms have been reported so far, each with its own pros and cons. In this article, we exploit the idea that one algorithm can be reinforced by another, complementary algorithm, since the two effectively compensate for each other's weaknesses, yielding a more efficient hybrid algorithm. As a proof of concept, we introduce a systematic approach named GeneNN (Genetic Neural Network). Preliminary light focusing is achieved by a deep neural network, whose result is fed to a genetic algorithm as an initial condition. The genetic algorithm then furthers the optimization, evolving toward the global optimum. Experimental results demonstrate that with the proposed GeneNN, the optimization speed is almost doubled and the wavefront shaping performance is improved by up to 40% over conventional methods. The reinforced hybrid algorithm shows great potential for facilitating a variety of biomedical and optical imaging techniques.

Light focusing and imaging through disordered media are of great significance for biomedical imaging, but they have been considered challenging for decades due to the inevitable multiple scattering of light in biological tissues. Traditional wisdom in the field has held that image information is carried only by unscattered ballistic or quasiballistic light, whose intensity decays exponentially in heterogeneous media.1 After propagating more than one transport mean free path, light is totally scrambled and, as a consequence, image information is lost. If the light is coherent, scattered light from different optical paths interferes randomly, forming speckles. Recently, researchers have successfully overcome the effect of multiple scattering and realized light focusing inside or through disordered media with various techniques, such as optical time reversal2–10 and iterative wavefront shaping.11–15 Built upon the former, Time-Reversed Ultrasonically Encoded (TRUE) focusing2–4 and Time Reversal of Variance-Encoded light (TROVE)5 adopt ultrasound as a virtual guide star: diffused light encoded by ultrasonic waves is time reversed and focused inside the medium. By contrast, wavefront shaping algorithms modulate the phases of light incident on the scattering medium based on the transmission matrix11 or on adaptive feedback signals provided by light intensity12,13 or photoacoustic signals.14,16 Neural networks have also been introduced recently to focus light through scattering media, for example, in Refs. 17–19. So far, wavefront shaping has been employed in numerous areas, such as fluorescence imaging,20–22 photoacoustic imaging,23–25 and optical coherence tomography.26–28

In the field of wavefront shaping, many algorithms have been reported to guide the modulation of the incident light phase patterns, including the stepwise sequential algorithm,12 the continuous sequential algorithm,29 the partitioning algorithm,30 the phase retrieval algorithm,31 the genetic algorithm (GA),32,33 and neural networks.17 The stepwise and continuous sequential algorithms compute the incident phase values one by one; thus, the optimization process is slow, and their performance is susceptible to noise.32 The partitioning algorithm offers a higher signal-to-noise ratio, but it requires much more time to reach an optimum than the continuous sequential algorithm.34 Phase retrieval, in particular, has been applied to digital optical phase conjugation (DOPC) and transmission matrix measurement. Implementing binary phase retrieval in DOPC can shorten the optimization time to several milliseconds when working with a digital micromirror device (DMD) and a field-programmable gate array (FPGA),35,36 making in vivo light focusing possible. In contrast, when a phase retrieval algorithm is used in transmission matrix measurement, which requires retrieving more complicated phase patterns, the computation time increases significantly, from several minutes to hours.31,37 Moreover, the experimental setup for such phase-retrieval-based wavefront shaping is more complicated, placing high demands on optical alignment and workflow control. The initial guess involved in the algorithm can also lead to artifacts in reconstructed images.38,39 Last but not least, like some other iterative algorithms, the phase retrieval algorithm cannot guarantee convergence to the global optimum and tends to stagnate after several iterations.40 As an adaptive method, the GA demonstrates faster convergence and is also well suited to noisy environments. Nevertheless, GA results can easily be trapped in a local minimum, and the optimization results differ considerably each time the GA is run because of the GA's high sensitivity to its initial solutions.41 Neural networks, in contrast, directly establish the mapping from transmitted speckles to the phase patterns of the incident light through training, without iteration, making the approach quite straightforward. That said, the performance of supervised learning relies heavily on the quality and quantity of training samples, which suggests that achieving globally optimal focusing with supervised learning alone is hardly possible: a finite set of training samples cannot practically represent all scattering conditions.

While the aforementioned representative algorithms are all able to focus light through scattering media, each has its own limitations, and pursuing the globally optimal solution with a single algorithm may consume an extremely long time and enormous computational resources. To alleviate this burden, we propose an enhanced algorithm built as a hybrid of two or more selected algorithms, taking advantage of their complementarity. Among these algorithms, deep learning, usually realized as deep neural networks (DNNs), and the GA precisely compensate for each other's drawbacks. Hence, hybridizing them into a systematic approach, named GeneNN (Genetic Neural Network), shows great potential as a route to a reinforced, more efficient optimization algorithm. Under the proposed framework, a pretrained DNN first forms a focused speckle through the disordered medium, and the DNN results are then treated as good initial patterns for the GA. The GA continues the optimization toward the global optimum, enhancing performance in several respects: the possibility of being stuck in local minima is largely reduced, and the optimization process is sped up compared with using the GA or DNNs alone.

In this article, we introduce the systematic approach of GeneNN to demonstrate the merits of reinforced hybrid algorithms over the single algorithm. The rest of this paper is organized as follows. In Sec. II, the working principle of both the GA and DNNs in the context of wavefront shaping is briefly introduced. After the analysis of their advantages and shortcomings, the roadmap to hybridize them is proposed. In Sec. III, experimental results and comparisons with individual DNNs and the GA are shown. In Sec. IV, factors that influence GeneNN performance are discussed, including prefocusing performance reached by DNNs, mutation rate, and the number of phase patterns used in the GA. In Sec. V, the work is summarized.

The forward multiple scattering process is described by the following linear model relating incident optical modes and transmitted optical modes:30 

Em = Σn tmn En = Σn |tmn| |En| exp[i(ϕmn + ϕn)],   (1)

where En is the nth complex incident mode with amplitude |En| and phase ϕn, while Em is the mth complex optical mode transmitted from the scattering medium. tmn = |tmn| exp(iϕmn) is one element of the complex transmission matrix, which represents the light scattering paths. The phase values in the globally optimal phase pattern satisfy ϕn = −ϕmn.42 Under this condition, all terms in Eq. (1) add in phase, and light is perfectly focused at the chosen position.
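
To make the focusing condition concrete, here is a minimal NumPy sketch (all values illustrative) that simulates Eq. (1) for one output mode with a random transmission-matrix row; setting ϕn = −ϕmn brings every term in phase, so the focal intensity grows roughly in proportion to the number of controlled modes.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1024                                           # number of controlled incident modes
t = rng.normal(size=N) + 1j * rng.normal(size=N)   # one row t_mn of the transmission matrix
A = np.ones(N)                                     # incident amplitudes |E_n|

phi_rand = rng.uniform(0, 2 * np.pi, N)            # unshaped wavefront: a speckle-level field
I_rand = abs(np.sum(t * A * np.exp(1j * phi_rand))) ** 2

phi_opt = -np.angle(t)                             # optimal phases: phi_n = -phi_mn
I_opt = abs(np.sum(t * A * np.exp(1j * phi_opt))) ** 2

print(I_opt / I_rand)                              # enhancement on the order of N
```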

The process of employing the GA for wavefront shaping roughly consists of five steps: initialization, ranking, breeding, mutation, and iteration, as shown in Fig. 1. First, a certain number G of phase patterns are created, with each phase value drawn from a uniform pseudorandom distribution.32 These patterns are then scored by a specially designed fitness function. Using Eq. (1), the fitness function is defined as the light intensity at a chosen point,32

Im = |Em|² = |Σn tmn An exp(iϕn)|²,   (2)

where An is the amplitude of En. The phase patterns are ranked according to their fitness scores: the higher the score, the higher the ranking. The next step is breeding. An offspring is generated through offspring = T × ma + (1 − T) × pa, where T is a random binary template and ma and pa are the parent patterns. The two parents are selected such that patterns with higher rankings are more likely to be chosen. After breeding, some segments of the offspring are mutated, i.e., randomly changed. The mutation rate R decreases as the number of generations n increases, to avoid over-mutation, following R = (R0 − Rend) × exp(−n/λ) + Rend, where R0, Rend, and λ are the initial mutation rate, the final mutation rate, and the decay factor, respectively.32 Each offspring is also evaluated by the fitness function. In each generation, a particular number (here, G/2) of offspring are reproduced to replace the G/2 existing patterns with the lowest rankings.32 Afterward, all G phase patterns are ranked again by their scores. The breeding and mutation procedures are iterated until the end condition is met; in general, the iteration stops when a predefined number of generations have been reproduced or the fitness score reaches a threshold.
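
The following NumPy sketch outlines one such generation (ranking, rank-weighted parent selection, breeding with a binary template, and mutation at the decaying rate R). The parameter defaults and the linear rank-selection weights are illustrative assumptions, not the exact settings used in our experiments.

```python
import numpy as np

rng = np.random.default_rng(1)

def ga_generation(population, fitness, n, R0=0.1, Rend=0.01, lam=200):
    """One GA generation. `population` is a (G, N) array of phase patterns in
    [0, 2*pi); `fitness` scores one pattern (experimentally, the measured
    intensity at the chosen focal point); `n` is the generation index."""
    G, N = population.shape
    scores = np.array([fitness(p) for p in population])
    population = population[np.argsort(scores)[::-1]]      # rank: best pattern first

    R = (R0 - Rend) * np.exp(-n / lam) + Rend              # decaying mutation rate
    prob = np.arange(G, 0, -1, dtype=float)
    prob /= prob.sum()                                     # higher rank -> more likely parent

    parents = population.copy()
    for k in range(G // 2, G):                             # replace the lower-ranked half
        ma, pa = parents[rng.choice(G, size=2, replace=False, p=prob)]
        T = rng.integers(0, 2, N)                          # random binary template
        child = T * ma + (1 - T) * pa                      # breeding
        mutate = rng.random(N) < R                         # mutate a fraction R of segments
        child[mutate] = rng.uniform(0, 2 * np.pi, mutate.sum())
        population[k] = child
    return population
```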

FIG. 1.

Working principle of employing the genetic algorithm for wavefront shaping. First, a specific number G of initial phase patterns are created and ranked according to their fitness scores. Two patterns are selected as parents to generate an offspring. Several segments of the offspring are mutated (indicated by white circles), and the offspring is then evaluated by the fitness function. In each generation, G/2 offspring are reproduced to replace the G/2 lowest-ranked patterns, and all phase patterns are ranked again by their fitness scores. The whole breeding and mutation process is iterated until the end condition is met; typically, a predefined number of generations have been reproduced or the fitness score reaches a threshold.

The benefits of the GA are significant: it can find a fit solution in a short time, and it is robust to noise, since it updates many pixels together instead of adjusting them one by one. However, GA results are strongly influenced by many factors, such as the mutation and breeding rates, the fitness function, and especially the number of phase patterns used in each generation, i.e., the population size G;43 finding proper parameters is nontrivial and requires both time and experience. More importantly, randomly created initial patterns may lie in the neighborhood of one or several local minima. Since the GA is a stepwise optimization process in which offspring are reproduced by breeding and mutating existing patterns, it is prone to getting stuck in a local minimum, at the risk of leaving potentially better solutions unexplored. Consequently, a good initialization is crucial for reaching the global optimum or a near-global optimum.44 In addition, with random initializations, the optimization results can vary significantly each time the GA is run, owing to the uncertainty in the randomly created initial solutions.

As a data-driven approach, deep learning uses a totally different strategy to compute the phase pattern required for light focusing. The ill-posedness and nonlinearity of inverse scattering problems indicate that direct inversion is impractical, motivating iterative algorithms with regularization.45 The typical form of the inverse problem is given as46

x̂ = Wp̂,  p̂ = arg minp ||y − HWp||² + λ||p||₁,   (3)

where H is the forward scattering model, y is the recorded speckle intensity, W is a transformation, p represents the transformation coefficients, and λ weights the regularization, so that x̂ = Wp̂ is the desired reconstruction.

Almost all state-of-the-art iterative algorithms for inverse scattering problems are cascades of linear convolutions and pointwise nonlinear operations,47 a structure similar to that of convolutional neural networks (CNNs). As a representative example, the popular iterative shrinkage-thresholding algorithm (ISTA) is based on the following block model:46

p_{k+1} = Aθ[(I − (1/L)W*H*HW) p_k + (1/L)W*H*y],   (4)

where L is the Lipschitz constant of the data-fidelity gradient and the asterisk denotes the adjoint. The iterative optimization governed by Eq. (4) can be treated as a convolutional process with kernel I − (1/L)W*H*HW and bias (1/L)W*H*y, followed by a nonlinear activation function Aθ (soft-thresholding). Hence, CNNs can be considered intrinsically suited to solving inverse scattering problems.45,48–50 The success of conventional iterative algorithms thus supports employing CNNs to focus light through scattering media.
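
To make the correspondence explicit, here is a minimal sketch of one ISTA step in matrix form (real-valued quantities and illustrative names assumed): the linear part is exactly the kernel-plus-bias structure of Eq. (4), and the soft-thresholding plays the role of Aθ.

```python
import numpy as np

def soft_threshold(x, tau):
    """Pointwise nonlinearity A_theta: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista_step(p, H, W, y, L, lam):
    """One ISTA iteration for min_p (1/2)*||y - H W p||^2 + lam*||p||_1."""
    linear = p - (W.T @ (H.T @ (H @ (W @ p) - y))) / L  # kernel * p + bias, as in Eq. (4)
    return soft_threshold(linear, lam / L)
```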

Known as universal approximators, deep CNNs (DCNNs) have proven powerful at solving inverse problems.48,50,51 Hence, DCNNs are well suited to modeling the inverse scattering process H−1 via supervised learning, revealing the relationship between transmitted speckles y and incident optical phase patterns x. As shown in Fig. 2, the DCNN inputs are the intensity distributions of transmitted speckles recorded with a camera, while the outputs are their corresponding incident phase patterns modulated by a spatial light modulator (SLM). After training, the DCNN establishes an accurate mapping from speckles to incident phase patterns and is thus capable of predicting the phase pattern required to focus light through a specific scattering medium.

FIG. 2.

Working principle of applying deep learning to wavefront shaping. First, a DCNN is trained with a number of samples: the inputs are speckles recorded with a camera, and the outputs are the corresponding incident phase patterns modulated by an SLM. After training, the DCNN can accurately map speckles to their corresponding phase patterns. A focused speckle is then sent to the DCNN as the desired pattern, and the phase pattern predicted by the DCNN focuses light through the medium when loaded on the SLM.

Compared with iterative wavefront shaping methods, the deep learning approach is simpler, since the relationship between speckles and incident phase patterns is learned directly through training, circumventing the need for interferometers or iterations. Nevertheless, its performance is strongly affected by the training samples. Only when the training set is sufficiently large, and the samples are representative enough of the entire scattering process, can the DCNN predict the optimal phase pattern for light focusing. Unfortunately, it is impractical to include all possible phase patterns and speckles in training, and the typicality of the samples is also difficult to measure; thus, supervised learning genuinely struggles to reach the global optimum. Notwithstanding this, DCNN results can serve as a good initialization for the GA: the GA converges to the global optimum on the condition that the initial value lies near it.44 Hence, the possibility of the GA optimization becoming trapped in local minima is largely reduced, thanks to the initial patterns provided by the DCNN. Because the DCNN and the GA complement each other, their hybrid, named the Genetic Neural Network (GeneNN), demonstrates significant improvements over the GA or deep learning alone in terms of convergence speed and focusing efficiency.

The proposed GeneNN consists of two parts. The first part is collecting samples to train a DCNN; after training, an initial focused speckle can be obtained with the phase pattern predicted by the DCNN. The second part is adopting the GA to further optimize the focus. We put forward two methods to create the initial phase patterns, both utilizing the DCNN results. The first method, named GeneNNv1, takes the pattern predicted by the DCNN as one of the initial patterns, while all the other patterns are created from a uniform pseudorandom distribution,32 as shown in Fig. 3(a). The pattern from the DCNN will certainly have the highest ranking, so it has the highest chance of being chosen for breeding. In the other method, named GeneNNv2, all initial patterns are created from the DCNN-predicted pattern by adding various phase patterns to this common base, where the additional patterns are also drawn from a uniform pseudorandom distribution, as shown in Fig. 3(b). In Sec. III, the experimental results of both methods are presented in detail.
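
A minimal sketch of the two initialization schemes might look as follows; the per-pattern min-max rescaling to [0, 2π] in GeneNNv2 is one plausible reading of the scaling step used in the experiments (Sec. III).

```python
import numpy as np

rng = np.random.default_rng(2)

def init_v1(dcnn_pattern, G):
    """GeneNNv1: the DCNN prediction is one seed; the rest are uniform random."""
    pop = rng.uniform(0, 2 * np.pi, (G, dcnn_pattern.size))
    pop[0] = dcnn_pattern.ravel()
    return pop

def init_v2(dcnn_pattern, G):
    """GeneNNv2: every seed is the DCNN prediction plus a random perturbation
    drawn from [-pi/2, pi/2], with the sum rescaled back to [0, 2*pi]."""
    pop = dcnn_pattern.ravel() + rng.uniform(-np.pi / 2, np.pi / 2, (G, dcnn_pattern.size))
    pop -= pop.min(axis=1, keepdims=True)                  # per-pattern min-max rescale
    return pop * (2 * np.pi) / pop.max(axis=1, keepdims=True)
```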

FIG. 3.

Two methods of applying DCNN results to the GA in the Genetic Neural Network (GeneNN). (a) GeneNNv1: the phase pattern predicted by the DCNN is one of the initial phase patterns of the GA. This pattern will have the highest ranking, so it is very likely to be chosen to breed the next generation. (b) GeneNNv2: the SLM pattern from the DCNN serves as the common base, and the initial phase patterns of the GA are created by adding various random patterns to it.

The experimental setup is illustrated in Fig. 4. Light emitted from a He-Ne CW laser (633 nm, Melles Griot) passes through an ND filter (NDC-50C-4M, Thorlabs), and the beam is then expanded and collimated 4.3× by a telescope. A half-wave plate (HWP) (WPH10M-633, Thorlabs) and a polarizer (LPVISC100-MP2, Thorlabs) are employed together to align the polarization of the incident light parallel to the long axis of the spatial light modulator (SLM, X13138-01, Hamamatsu). Light is modulated and reflected by the SLM; hence, the SLM patterns represent the phase patterns of the incident light. The light is focused onto the front surface of a diffuser (120 Grit, 83-419, Edmund) by an objective lens (OBJ) (50X/0.80, Nikon TU Plan Fluor) and is scattered inside the diffuser. The transmitted optical speckles are collected by another objective lens (20X/0.45, Nikon TU Plan Fluor) placed behind the diffuser, and the intensity distribution of the speckles is finally recorded with a camera (Zyla 4.2 sCMOS, Andor). The SLM input is quantized to 32 levels, i.e., 32 gray levels represent phase values from 0 to 2π. The resolution of the SLM screen is 1280 × 1024, while the SLM pattern size used in the experiment is 32 × 32, i.e., the whole SLM screen is divided into 32 × 32 macropixels, each containing 40 × 32 pixels. The size of the speckle patterns is 64 × 64 pixels.
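
For illustration, the mapping from a 32 × 32 phase pattern to a full SLM frame can be sketched as below; treating the 1280-pixel dimension as rows of the 40 × 32 macropixel blocks is an assumption about the device orientation.

```python
import numpy as np

pattern = np.random.default_rng(3).uniform(0, 2 * np.pi, (32, 32))        # macropixel phases
gray = np.floor(pattern / (2 * np.pi) * 32).clip(0, 31).astype(np.uint8)  # 32 gray levels
slm_frame = np.kron(gray, np.ones((40, 32), dtype=np.uint8))              # (1280, 1024) frame
```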

FIG. 4.

Experimental setup. Light emitted from the laser passes through an ND filter (F) and is then expanded and collimated by a telescope (L1 and L2). A half-wave plate (HWP) and a polarizer (P) are used together to adjust the polarization state of the light. The light is modulated and reflected by the SLM. The following two lenses (L3 and L4) reduce the beam size so that the light can enter an objective lens (OBJ1). OBJ1 focuses the light on the surface of the diffuser (D), and another objective lens (OBJ2) placed behind the diffuser collects the scattered light and projects it onto the camera.

As stated above, CNNs are intrinsically related to iterative algorithms for inverse scattering problems. Moreover, DCNNs are known to be good at processing images.47,52–54 These properties make the DCNN suited to wavefront shaping problems. The structure of the DCNN employed in our work is shown in Fig. 5. The DCNN has three inputs and one output, which work collaboratively to ensure that the proposed DCNN learns sufficient information about the scattering medium, so that an accurate mapping from speckles to their corresponding SLM patterns is established. Input 2 consists of randomly generated SLM patterns, and the speckles obtained with them are denoted Input 1. Input 3 is the target speckle, and the DCNN output is the SLM pattern required to generate the desired speckle. In general, the last few layers of a network are task specific, while the earlier parts are modality specific.55 With Input 1, Input 2, the first three convolutional layers (Conv1, Conv2, and Conv3), and the first fully connected layer (FC1), the DCNN is supposed to learn the information about the scattering processes in the medium. The last fully connected layer (FC2) is trained to establish the mapping from speckles to SLM patterns. Conv1, Conv2, and Conv3 work together to extract and abstract image features from Input 1, and these features are then flattened to a 1D array. Input 2, which is a 2D SLM pattern, is also flattened and concatenated with the features from Input 1, followed by a fully connected layer (FC1). The output of FC1 is concatenated with the flattened image features from Input 3, and the final fully connected layer (FC2) then predicts the SLM pattern required to obtain the desired speckle given as Input 3. The convolutional layers (Conv1, Conv2, and Conv3) consist of 16, 32, and 48 filters, with filter sizes of 7 × 7, 5 × 5, and 3 × 3 and strides of 3 × 3, 2 × 2, and 1 × 1, respectively. Conv1 and Conv4, Conv2 and Conv5, and Conv3 and Conv6 have the same structures. FC1 consists of 512 neurons with a dropout rate of 0.5; FC2 has 1024 neurons. The activation function of the last output layer is sigmoid, while that of all other layers is tanh. The mean squared error is employed as the loss function, and Adam is used as the optimizer with alpha, beta1, beta2, and epsilon set to 0.0005, 0.9, 0.99, and 0.0001, respectively. The TensorFlow Keras library is used to construct the CNN model, and the Graphics Processing Unit (GPU) is an NVIDIA GeForce GTX 980.
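
A minimal TensorFlow Keras sketch consistent with this description is given below. It reproduces the stated layer sizes, strides, activations, and optimizer settings; details such as the absence of padding and the exact concatenation order are assumptions, and how training triples are paired across the three inputs is not fully specified above, so the sketch covers only the architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_stack(x):
    # 16 filters 7x7 stride 3; 32 filters 5x5 stride 2; 48 filters 3x3 stride 1
    x = layers.Conv2D(16, 7, strides=3, activation="tanh")(x)
    x = layers.Conv2D(32, 5, strides=2, activation="tanh")(x)
    x = layers.Conv2D(48, 3, strides=1, activation="tanh")(x)
    return layers.Flatten()(x)

speckle_in = tf.keras.Input(shape=(64, 64, 1))    # Input 1: recorded speckle
slm_in = tf.keras.Input(shape=(32, 32))           # Input 2: its SLM pattern
target_in = tf.keras.Input(shape=(64, 64, 1))     # Input 3: desired (focused) speckle

h = layers.concatenate([conv_stack(speckle_in), layers.Flatten()(slm_in)])
h = layers.Dropout(0.5)(layers.Dense(512, activation="tanh")(h))   # FC1
h = layers.concatenate([h, conv_stack(target_in)])                 # Conv4-6: same structure
out = layers.Dense(1024, activation="sigmoid")(h)                  # FC2: flat 32x32 pattern

model = tf.keras.Model([speckle_in, slm_in, target_in], out)
model.compile(loss="mse",
              optimizer=tf.keras.optimizers.Adam(learning_rate=5e-4, beta_1=0.9,
                                                 beta_2=0.99, epsilon=1e-4))
```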

FIG. 5.

The structure of the proposed deep convolutional neural network (DCNN). Input 1 is the speckle patterns, and their corresponding SLM patterns are denoted Input 2. Input 3 is the desired speckle, and the DCNN output is the SLM pattern required to produce Input 3. With Input 1, Input 2, the first three convolutional layers (Conv1, Conv2, and Conv3), and the first fully connected layer (FC1), the DCNN is supposed to learn the information about the scattering processes in the medium. The last fully connected layer (FC2) is trained to establish the mapping from speckles to SLM patterns.

Experimental results are reported here. First, samples were collected for training: 10 000 SLM patterns were randomly created and sequentially sent to the SLM to modulate the incident light phases. Each time an SLM pattern was loaded, the camera recorded the corresponding speckle pattern. Both the SLM patterns and the speckles were then normalized to between 0 and 1. The number of training epochs was set to 15, and the training time was approximately 3 min. After training, a focused speckle pattern was sent to the DCNN as the desired pattern; the predicted SLM pattern was then sent to the SLM for light modulation, resulting in the transmitted speckle shown in Fig. 6(a), in which a focal point is already observed. The enhancement η is introduced to quantitatively evaluate the light focusing performance, defined as the ratio of the intensity at the focal point after optimization to the average intensity before focusing.12 The enhancement η achieved by the DCNN is 33. Next, the initial phase patterns for the GA were produced according to the two proposed algorithms (GeneNNv1 and GeneNNv2). For GeneNNv2, the additional phase values were selected from [−π/2, π/2], also following a uniform pseudorandom distribution, and the summed patterns were rescaled to [0, 2π]. The optimization performance of both algorithms was tested with population sizes G of 10, 20, 30, 40, and 50, with the number of measurements varying with the population size. Each algorithm was run 10 times for each population size, and the results were averaged. For comparison, the GA was also run 10 times with a random initialization. The experimental results are shown in Figs. 6(c)–6(g). Figure 6(b) is a representative focused speckle achieved with GeneNNv2 and a population size G of 50. The focal point in Fig. 6(b) is much brighter, and the background darker, than in Fig. 6(a): the enhancement η is improved more than fourfold, reaching 148. Figures 6(a) and 6(b) use the same color bar and scale. As illustrated in Figs. 6(c)–6(g), the enhancements η reached by the two hybrid algorithms (GeneNNv1 and GeneNNv2) are always higher than that of the randomly initialized GA, regardless of the number of measurements. As more generations are reproduced, the merits of the hybrid algorithms become more significant, as their enhancement η rises faster. To achieve the same enhancement η, the hybrid algorithms need only about half the time of the randomly initialized GA, regardless of the population size. The percentage increase in enhancement η over the randomly initialized GA varies with the hybrid algorithm and population size, as listed in Table I; it is defined as [(η achieved by GeneNN/η achieved by the randomly initialized GA) − 1]. Apart from the difference in initial conditions, all other parameters are the same. The improvement in η ranges from about 20% to 40%, depending on the algorithm and population size, but in all cases, the hybrid algorithms contribute significantly to better focusing.
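
For reference, the enhancement η can be computed from two recorded intensity images as in this short sketch (function and variable names are illustrative):

```python
import numpy as np

def enhancement(speckle_before, speckle_after, focus_rc):
    """Enhancement eta (Ref. 12): focal intensity after optimization divided by
    the mean speckle intensity before focusing."""
    r, c = focus_rc
    return speckle_after[r, c] / np.mean(speckle_before)
```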

FIG. 6.

Experimental results of GeneNNv1, GeneNNv2, and the randomly initialized GA. (a) A focused speckle obtained with the SLM pattern predicted by the pretrained DCNN. (b) Focusing result with GeneNNv2 and a population size of 50. (a) and (b) use the same color bar and scale. (c)–(g) Enhancement η achieved by the three algorithms (GeneNNv1, GeneNNv2, and the randomly initialized GA) with population sizes G = 10, 20, 30, 40, and 50, respectively. (h) Different amounts of training samples for the DCNN result in different enhancements η after light focusing; the percentage increase in η over the result with 10 000 training samples is shown as a function of the training sample size.

TABLE I.

Increase percentage in enhancement η compared with the randomly initialized GA.

              G = 10   G = 20   G = 30   G = 40   G = 50
GeneNNv1 (%)    18       27       21       44       35
GeneNNv2 (%)    20       23       21       31       33

In the GeneNN, the DCNN was trained with 10 000 samples, and the GA then continued the reinforced optimization based on the DCNN prediction. To compare with optimization relying on deep learning alone, the training set was enlarged, improving the DCNN performance accordingly. The results are illustrated in Fig. 6(h); the percentage increase in enhancement η is calculated with respect to the η reached with 10 000 samples. As expected, the larger the sample size, the higher the enhancement η. Enlarging the training set by about 60% improves the focusing performance by almost 50%. Nevertheless, limited by the SLM frame rate, collecting the additional 6000 samples takes nearly 18 min. In contrast, as seen from Figs. 6(c)–6(g), improving the DCNN performance by 50% with the GeneNN requires only 50–200 measurements, depending on the algorithm and population size. In all cases, the optimization time is less than 10 min, almost half of the time spent on deep learning alone. Our hybrid GeneNN is therefore a more efficient approach to optimizing SLM patterns than deep learning by itself.

The experimental results confirm the efficacy of the GeneNN in improving the optimization performance in various aspects. The hybrid of the DCNN and the GA successfully leads to a reinforced algorithm because the two compensate for each other's drawbacks. Instead of a random initialization, the initial phase patterns of the GA are created with the help of the pretrained DCNN. The phase patterns from the DCNN have already realized light focusing, indicating that they are close to the globally optimal SLM pattern. During breeding, offspring are generated by amending the SLM pattern from the DCNN; thus, the optimization process is effectively guided, avoiding the time-consuming random trials of the initial stage of the GA. In addition, since the GA inherently favors patterns with higher fitness scores, a good initialization decreases the risk that the gene pool becomes trapped around one local minimum, which could prevent the globally optimal solution from being explored. The experimental results show that the GeneNN achieves faster and smoother convergence, indicating that a better convergence direction has been found. However, the focusing performance achieved by the two proposed hybrid algorithms (GeneNNv1 and GeneNNv2) is almost the same, which suggests that the two methods let the DCNN exert almost equal influence on phase pattern generation and contribute robustly to the performance improvement of the subsequent GA.

Focusing light and imaging inside or through disordered media is an important yet challenging problem. As a widely adopted solution, wavefront shaping achieves light focusing by virtue of various algorithms, each with its own pros and cons. In this article, we propose hybridizing complementary algorithms so that they benefit each other, toward a reinforced algorithm. As a proof of concept, the GeneNN has demonstrated effective enhancement in many aspects, including convergence speed and light focusing performance. By applying the DCNN results to assist the creation of initial patterns in the GA, a much better focused speckle was observed, and the optimization was also faster than with the GA or CNN algorithm alone.

The performance of the GeneNN algorithm is affected by many factors, among which the population size G and the mutation rate R are particularly influential. First, the impact of the population size G is discussed. The number of measurements required to raise the enhancement η to 60, 70, and 80 varies with G, as listed in Table II. In general, the larger the population size G, the fewer the measurements needed to achieve the same η, regardless of the hybrid algorithm. With GeneNNv1 and G = 50, the number of measurements needed to raise η to 60 is almost 1/5 of that needed when G is only 10. Owing to the larger diversity of a bigger population, the overall probability of generating better offspring increases. Moreover, with more patterns, better light focusing can be achieved without significantly increasing the computational time.56 For instance, with GeneNNv2 and a population size of 10, nearly 7 min (about 150 measurements) were spent raising the enhancement η to 60 [Fig. 6(c)], whereas fewer than 50 measurements (under 6 min) sufficed once G was enlarged to 50 [Fig. 6(g)]. In this circumstance, a larger population size not only contributes to better light focusing but also saves computational time. Besides, in Figs. 6(f) and 6(g), the enhancement η rises much more smoothly with the number of measurements than in Figs. 6(c)–6(e), which implies that as the population size G becomes larger, the GeneNN is more stable and more robust to environmental disturbance. In addition, the improvement percentage realized by the GeneNN over the randomly initialized GA is also affected by the population size G: in Table I, when G is increased from 10 to 50, the improvement percentage rises from nearly 20% to 40%, while the way of applying the DCNN results to the GA (GeneNNv1 or GeneNNv2) does not induce an obvious difference in the final optimization results. Despite this, a larger population size is not always helpful. Chen et al. have discussed that in some situations a large population may degrade algorithm performance.57 Increasing the population size can improve the accuracy, but the gain saturates beyond a certain size; adopting more phase patterns then only consumes more computational resources without any improvement in accuracy.58 Finding a suitable population size is nontrivial, requiring both experience and time.

TABLE II.

The number of required measurements to reach different enhancements η with varying population sizes G and hybrid algorithms.

        Population size G   GeneNNv1   GeneNNv2
η = 60         10              173        161
               20              100         89
               30               81         71
               40               58         68
               50               36         42
η = 70         10              214        177
               20              118        106
               30              106         87
               40               76         79
               50               44         50
η = 80         10              282        269
               20              146        136
               30              123        106
               40               93        105
               50               49         59

In the GeneNN, breeding drives convergence toward a specific minimum, while mutation expands the search space and counteracts premature convergence. The mutation rate R therefore plays an important role in the optimization process, and finding a balance between exploration and exploitation is vital. An experimental comparison of the enhancement η achieved with various mutation rates R using the two GeneNN algorithms is shown in Fig. 7. The initial mutation rate R0, the final mutation rate Rend, and the decay factor λ were adjusted separately to examine their effects on the GeneNN optimization. The population size G was set to 30. Comparing the results under different conditions, a lower mutation rate R generally leads to faster convergence, regardless of the hybrid algorithm. Among the three parameters, the initial mutation rate R0 has a larger influence on the GeneNN optimization than the other two, since the mutation rate decays as R = (R0 − Rend)·exp(−n/λ) + Rend, so a reduction in R0 leads to a faster decrease in R. Nevertheless, a smaller mutation rate R risks missing the global optimum because of insufficient exploration, whereas if the mutation rate R is too high, the optimization degrades into a random trial. The best mutation rate R is always problem-specific, and adaptive steering of the mutation rate is preferred.59 In our work, a gradually decreasing mutation rate R is adopted: the larger initial rate induces high diversity in the phase patterns to effectively avoid converging to local optima, and as the iterations proceed, the mutation rate keeps decreasing adaptively so that the search settles on a specific, best solution.

FIG. 7.

GeneNN optimization results with different mutation rates R using GeneNNv1 and GeneNNv2.

The experimental results presented in Sec. III demonstrate that the DCNN efficiently contributes to improving the light focusing. Here, we further explore the performance of the two GeneNN algorithms as a function of the quality of the DCNN predictions; the results are shown in Fig. 8. The DCNN was trained on five sample sets of different sizes, resulting in different enhancements η after light focusing. With each sample set, the DCNN was trained five times. After each training, light was modulated with the SLM pattern output by the DCNN and a focused speckle was recorded; the results were then averaged. The enhancement η achieved by the DCNN with the different sample sets was 42, 37, 32, 26, and 19, respectively. As expected, the fewer the training samples, the lower the enhancement η. Afterward, using the different DCNN results as initializations, each GeneNN algorithm was run ten times with population sizes G of 10, 20, and 30, and the mean values are presented in Fig. 8. At the initial stage of the GA, the phase pattern from the DCNN is randomly modified to some extent by breeding and mutation, which causes a drop in the enhancement η, as observed in Fig. 8. For better visualization, the initial enhancement η offered by the DCNN is shown as the first measured value in Fig. 8, while the second value corresponds to the first GA generation. Steep drops appear in all panels of Fig. 8, but the performance is soon recovered and exceeded. As the population size G increases, the recovery of the enhancement η becomes faster: fully restoring the DCNN performance requires approximately 100 measurements when G is 10, whereas fewer than 50 measurements suffice when G is enlarged to 30. Besides, under the same population size, the drop in enhancement η after the first GA measurement is more significant with GeneNNv1 than with GeneNNv2, as shown in Table III. The drop percentage of the enhancement is defined as [1 − (η obtained with the first-generation offspring in the GA/η achieved by the pretrained DCNN)]. For GeneNNv1, the drop is 80%–90% of the original DCNN performance, whereas for GeneNNv2 it is 60%–80%. This implies that larger changes in the phase patterns are induced by GeneNNv1, which is reasonable: with GeneNNv2, all the initial phase patterns are created from the SLM pattern of the DCNN, so no matter which two patterns are selected for breeding, the next generation is still influenced and guided, more or less, by the DCNN result. In contrast, in GeneNNv1, the SLM pattern from the DCNN serves as only one initial phase pattern; the offspring therefore receive less information from the DCNN and are more susceptible to random changes. In addition, for the same hybrid algorithm and number of measurements, better preliminary focusing by the DCNN leads to higher GeneNN performance. As shown in Fig. 8, when the enhancement η achieved by the DCNN is 42 or 37, the η reached by the GeneNN after reinforced optimization is approximately twice that obtained when the DCNN performance is only 19 or 26. In general, the higher the enhancement η achieved by the DCNN, the better the focusing performance of the GeneNN algorithms, regardless of the population size. A higher enhancement η suggests that the phase pattern from the DCNN is closer to the optimal one and less likely to lie near local optima. During the reinforced optimization, better SLM patterns are then generated by adjusting this initial SLM pattern, and improved focused speckles are observed.

FIG. 8.

The focusing performance of GeneNNv1 and GeneNNv2 for various enhancements η achieved by the DCNN under different population sizes G. (a) Enhancement η achieved by GeneNNv1 when G = 10 for different initial η offered by the DCNN: 42 (gray line), 37 (red line), 32 (blue line), 26 (green line), and 19 (purple line). (b) Same as (a) for GeneNNv2, G = 10. (c) GeneNNv1, G = 20. (d) GeneNNv2, G = 20. (e) GeneNNv1, G = 30. (f) GeneNNv2, G = 30.

TABLE III.

The drop percentage of the enhancement η after the first GA measurement, relative to the initial η offered by the DCNN, for different population sizes G and hybrid algorithms.

η achieved by the DCNN       42 (%)   37 (%)   32 (%)   26 (%)   19 (%)
G = 10   GeneNNv1             77.8     81.7     85.5     79.3     87.9
         GeneNNv2             75.5     76.4     76.9     76.9     61.7
G = 20   GeneNNv1             84.8     89.8     83.5     91.3     89.6
         GeneNNv2             57.2     80.6     89.0     70.9     84.2
G = 30   GeneNNv1             88.6     83.4     83.1     92.6     89.4
         GeneNNv2             61.4     80.7     74.3     67.2     71.1

In summary, we have introduced a new strategy for developing hybrid algorithms for adaptive wavefront shaping, reinforcing a single algorithm by compensating for its limitations. As an intuitive realization, we propose the hybrid of deep neural networks and the genetic algorithm, named GeneNN, considering that they effectively compensate for each other's drawbacks. The phase pattern output by a pretrained DCNN already realizes a preliminary focused speckle, and the focusing performance is then reinforced by successive optimization with the GA. We put forward two different hybrid algorithms for applying DCNN results to the GA. Although both demonstrate similar improvements, GeneNNv2 leads to a smaller drop in the enhancement η at the initial stage, when the GA takes over from the DCNN. It has been shown that the GeneNN reliably achieves a higher enhancement η and a faster convergence rate than the individual GA or DCNN. The prefocusing performance reached by the DCNN and the population size G significantly influence the reinforced optimization performance of the GeneNN. In general, the higher the enhancement η reached by the DCNN, the better the performance of the GeneNN. As the population size G increases, the optimization process becomes more stable, and the performance improvement realized by the GeneNN over the single algorithm also grows. This work demonstrates that the hybrid of a supervised learning algorithm and an evolutionary algorithm can effectively enhance the individual algorithm, with great potential for boosting global optimization efficiency in various respects.

This work was supported by the A*STAR SERC AME Program: Nanoantenna Spatial Light Modulators for Next Generation Display Technologies (Grant No. A18A7b0058), the National Natural Science Foundation of China (Grant Nos. 81671726, 81627805, and 81930048), the Hong Kong Research Grant Council (Grant No. 25204416), the Hong Kong Innovation and Technology Commission (Grant No. ITS/022/18), and the Shenzhen Science and Technology Innovation Commission (Grant No. JCYJ20170818104421564).

1. L. Shi and R. R. Alfano, Deep Imaging in Tissue and Biomedical Materials: Using Linear and Nonlinear Optical Methods (Jenny Stanford Publishing, 2017).
2. X. Xu, H. Liu, and L. V. Wang, Nat. Photonics 5(3), 154 (2011).
3. P. Lai, X. Xu, H. Liu, Y. Suzuki, and L. V. Wang, J. Biomed. Opt. 16(8), 080505 (2011).
4. Y. Liu, P. Lai, C. Ma, X. Xu, A. A. Grabar, and L. V. Wang, Nat. Commun. 6, 5904 (2015).
5. B. Judkewitz, Y. M. Wang, R. Horstmeyer, A. Mathy, and C. Yang, Nat. Photonics 7(4), 300–305 (2013).
6. Z. Yaqoob, D. Psaltis, M. S. Feld, and C. Yang, Nat. Photonics 2(2), 110 (2008).
7. Z. Yu, J. Huangfu, F. Zhao, M. Xia, X. Wu, X. Niu, D. Li, P. Lai, and D. Wang, Sci. Rep. 8(1), 2927 (2018).
8. Z. Yu, M. Xia, H. Li, T. Zhong, F. Zhao, H. Deng, Z. Li, D. Li, D. Wang, and P. Lai, Sci. Rep. 9(1), 1537 (2019).
9. J. Yang, J. Li, S. He, and L. V. Wang, Optica 6(3), 250–256 (2019).
10. J. Yang, Y. Shen, Y. Liu, A. S. Hemphill, and L. V. Wang, Appl. Phys. Lett. 111(20), 201108 (2017).
11. S. M. Popoff, G. Lerosey, R. Carminati, M. Fink, A. C. Boccara, and S. Gigan, Phys. Rev. Lett. 104(10), 100601 (2010).
12. I. M. Vellekoop and A. P. Mosk, Opt. Lett. 32(16), 2309–2311 (2007).
13. D. Akbulut, T. J. Huisman, E. G. van Putten, W. L. Vos, and A. P. Mosk, Opt. Express 19(5), 4017–4029 (2011).
14. P. Lai, L. Wang, J. W. Tay, and L. V. Wang, Nat. Photonics 9(2), 126–132 (2015).
15. O. Tzang, E. Niv, S. Singh, S. Labouesse, G. Myatt, and R. Piestun, Nat. Photonics 13, 788–793 (2019).
16. F. Kong, R. H. Silverman, L. Liu, P. V. Chitnis, K. K. Lee, and Y.-C. Chen, Opt. Lett. 36(11), 2053–2055 (2011).
17. A. Turpin, I. Vishniakou, and J. D. Seelig, Opt. Express 26(23), 30911–30929 (2018).
18. S. Cheng, H. Li, Y. Luo, Y. Zheng, and P. Lai, J. Innovative Opt. Health Sci. 12(4), 1930006 (2019).
19. Y. Luo, S. Yan, H. Li, P. Lai, and Y. Zheng, preprint arXiv:1909.00210 (2019).
20. I. M. Vellekoop and C. M. Aegerter, Opt. Lett. 35(8), 1245–1247 (2010).
21. Y. M. Wang, B. Judkewitz, C. A. DiMarzio, and C. Yang, Nat. Commun. 3, 928 (2012).
22. H. Yu, J. Park, K. Lee, J. Yoon, K. Kim, S. Lee, and Y. Park, Curr. Appl. Phys. 15(5), 632–641 (2015).
23. R. Horstmeyer, H. Ruan, and C. Yang, Nat. Photonics 9(9), 563 (2015).
24. D. B. Conkey, A. M. Caravaca-Aguirre, J. D. Dove, H. Ju, T. W. Murray, and R. Piestun, Nat. Commun. 6, 7902 (2015).
25. T. Chaigne, O. Katz, A. C. Boccara, M. Fink, E. Bossy, and S. Gigan, Nat. Photonics 8(1), 58 (2014).
26. J. Jang, J. Lim, H. Yu, H. Choi, J. Ha, J.-H. Park, W.-Y. Oh, W. Jang, S. Lee, and Y. Park, Opt. Express 21(3), 2890–2902 (2013).
27. H. Yu, J. Jang, J. Lim, J.-H. Park, W. Jang, J.-Y. Kim, and Y. Park, Opt. Express 22(7), 7514–7523 (2014).
28. H. Yu, P. Lee, K. Lee, J. Jang, J. Lim, W. Jang, Y. Jeong, and Y. Park, J. Biomed. Opt. 21(10), 101406 (2016).
29. J. Thompson, B. Hokr, and V. Yakovlev, J. Mod. Opt. 63(1), 80–84 (2016).
30. I. M. Vellekoop and A. Mosk, Opt. Commun. 281(11), 3071–3080 (2008).
31. A. Drémeau, A. Liutkus, D. Martina, O. Katz, C. Schülke, F. Krzakala, S. Gigan, and L. Daudet, Opt. Express 23(9), 11898–11911 (2015).
32. D. B. Conkey, A. N. Brown, A. M. Caravaca-Aguirre, and R. Piestun, Opt. Express 20(5), 4840–4849 (2012).
33. X. Zhang and P. Kner, J. Opt. 16(12), 125704 (2014).
34. Z. Fayyaz, N. Mohammadian, and M. R. Avanaki, paper presented at Photons Plus Ultrasound: Imaging and Sensing 2018, 2018.
35. Y. Liu, C. Ma, Y. Shen, J. Shi, and L. V. Wang, Optica 4(2), 280–288 (2017).
36. D. Wang, E. H. Zhou, J. Brake, H. Ruan, M. Jang, and C. Yang, Optica 2(8), 728–735 (2015).
37. M. N'Gom, M.-B. Lien, N. M. Estakhri, T. B. Norris, E. Michielssen, and R. R. Nadakuditi, Sci. Rep. 7(1), 2518 (2017).
38. A. Sanjeev, Y. Kapellner, N. Shabairou, E. Gur, M. Sinvani, and Z. Zalevsky, Sci. Rep. 9(1), 12275 (2019).
39. B. Hwang, T. Woo, and J.-H. Park, Opt. Lett. 44(24), 5985–5988 (2019).
40. J. R. Fienup, Appl. Opt. 21(15), 2758–2769 (1982).
41. S. N. Sivanandam and S. N. Deepa, Introduction to Genetic Algorithms (Springer, Berlin, Heidelberg, 2008), pp. 15–37.
42. E. Bossy and S. Gigan, Photoacoustics 4(1), 22–35 (2016).
43. S. M. Shorman and S. A. Pitchay, ARPN J. Eng. Appl. Sci. 10(2), 585–893 (2015).
44. U. Bodenhofer, Genetic Algorithms: Theory and Applications, 2nd ed., Lecture Notes (Fuzzy Logic Laboratorium Linz-Hagenberg, Winter, 2003).
45. Z. Wei and X. Chen, IEEE Trans. Geosci. Remote Sens. 57(4), 1849–1860 (2018).
46. K. H. Jin, M. T. McCann, E. Froustey, and M. Unser, IEEE Trans. Image Process. 26(9), 4509–4522 (2017).
47. M. T. McCann, K. H. Jin, and M. Unser, IEEE Signal Process. Mag. 34(6), 85–95 (2017).
48. L. Xu, J. S. Ren, C. Liu, and J. Jia, paper presented at Advances in Neural Information Processing Systems, 2014.
49. Z. Yuan and H. Wang, preprint arXiv:1806.09968 (2018).
50. S. Li, M. Deng, J. Lee, A. Sinha, and G. Barbastathis, Optica 5(7), 803–813 (2018).
51. A. Lucas, M. Iliadis, R. Molina, and A. K. Katsaggelos, IEEE Signal Process. Mag. 35(1), 20–36 (2018).
52. Y. LeCun, Y. Bengio, and G. Hinton, Nature 521(7553), 436 (2015).
53. C. Dong, C. C. Loy, K. He, and X. Tang, IEEE Trans. Pattern Anal. Mach. Intell. 38(2), 295–307 (2015).
54. A. Kappeler, S. Yoo, Q. Dai, and A. K. Katsaggelos, IEEE Trans. Comput. Imaging 2(2), 109–122 (2016).
55. J. Yosinski, J. Clune, Y. Bengio, and H. Lipson, paper presented at Advances in Neural Information Processing Systems, 2014.
56. S. P. T. P. Phyu and G. Srijuntongsiri, paper presented at the 2016 11th International Conference on Knowledge, Information and Creativity Support Systems (KICSS), 2016.
57. T. Chen, K. Tang, G. Chen, and X. Yao, Theor. Comput. Sci. 436, 54–70 (2012).
58. O. Roeva, S. Fidanova, and M. Paprzycki, Recent Advances in Computational Optimization (Springer, 2015), pp. 107–120.
59. D. Thierens, paper presented at the Proceedings of the 2002 Congress on Evolutionary Computation, CEC'02 (Cat. No. 02TH8600), 2002.