Review  |  Open Access  |  21 Feb 2023

An overview of intelligent image segmentation using active contour models

Intell Robot 2023;3(1):23-55.
10.20517/ir.2023.02 |  © The Author(s) 2023.

Abstract

The active contour model (ACM) approach to image segmentation is regarded as a research hotspot in computer vision and is widely applied in practice, for example in medical image processing. The essence of ACM is to use an enclosed, smooth curve to signify the target boundary, which is usually accomplished by minimizing the associated energy function through the standard descent method. This paper presents an overview of ACMs for handling image segmentation problems in various fields. It begins with an introduction briefly reviewing different ACMs with their pros and cons. Then, some basic knowledge of the theory of ACMs is explained, and several popular ACMs in three categories, including region-based ACMs, edge-based ACMs, and hybrid ACMs, are reviewed in detail with their advantages and disadvantages. After that, twelve ACMs are chosen from the literature to conduct three sets of segmentation experiments on different kinds of images, comparing segmentation efficiency and accuracy across methods. Next, two deep learning-based algorithms are implemented to segment different types of images so their results can be compared with several ACMs. Experimental results confirm some useful conclusions about their shared strengths and weaknesses. Lastly, this paper points out some promising research directions that need to be further studied in the future.

Keywords

Active contour model, level set, energy function, intensity inhomogeneity, deep learning

1. INTRODUCTION

Image segmentation is a significant component in image processing, and serves as the foundation for image analysis and image understanding. The accuracy of image segmentation hugely affects the quality of subsequent image processing procedures. Its major role is to separate the input images into a series of disjoint sub-regions with unique features, and extract objects of interest. Therefore, image segmentation has been extensively employed in a variety of areas such as medical image processing [1-4], target recognition [5-8], moving target tracking [9-12], etc.

In the last decade, the active contour model (ACM) using the level set approach has become one of the most efficient tools for image segmentation and has been extensively employed in segmentation tasks. The image segmentation algorithm based on ACM is an image processing technique that combines upper-level and various prior knowledge for stable image segmentation, and can incorporate image grayscale and edge information during the optimization process. It provides a piece-wise smooth closed contour as the final outcome, with superior properties such as diverse forms and flexible structures. ACM converts the image segmentation problem into the minimization of an energy function. The contour of the target object is expressed by the zero level set during execution, which is convenient for dealing with topological deformation during curve evolution. Nevertheless, the fact that the topology of the segmented region changes automatically and uncontrollably can be either an advantage or an inconvenience depending on the application. The essence of ACM is to employ a continuous and closed curve to represent the object boundary, which is achieved through the standard gradient descent approach to minimize the associated energy function.

ACMs generally fall into two categories: parametric ACMs and geometric ACMs. In parametric ACMs [13,14], the evolution curve is described in parametric form to obtain the object boundary. However, parametric ACMs can only deal with images that include a sole target object with an obvious boundary through the process of parameterization. Most importantly, they cannot automatically handle topology changes during the process of curve evolution. The geometric ACM is also known as the level set method, which guides evolution curves towards the target boundary through geometric measurement parameters. The introduction of level set methods makes it possible to segment images with multiple target objects simultaneously, and solves the issue of topology changes (merging or broken curves) caused by the process of parameterization in parametric ACMs. This paper mainly pays attention to existing geometric ACMs, which can be further categorized into three types: region-based ACMs [15-17], which aim at identifying each region of interest using a defined region descriptor to guide the evolution of the active contour; edge-based ACMs [18,19], which utilize gradient information of the target boundary as the major driving force to attract the active contour to the object boundary; and hybrid ACMs [20-24], which combine local region and edge information to guide the evolution curve towards the target boundary.

The Chan-Vese (CV) model [15] utilized the average gray values of the outside and inside areas of the contour to characterize the foreground and background of the input image, respectively. As a classical region-based ACM, the CV model does not need image gradient information, which makes it very suitable for segmenting images with blurred or discontinuous edges. However, for images with uneven grayscale, such as images subjected to uneven illumination, the CV model may produce undesired segmentation results, having difficulty extracting meaningful objects and falling into local minima [25]. To solve this issue, the piecewise smooth (PS) model [26] was developed to segment images with intensity non-uniformity to some degree owing to its consideration of local image attributes. Nevertheless, the PS model is sensitive to different initial contours and inefficient due to its complex computation process. The geometric active contours (GAC) model [27,28] translated curve evolution into the evolution of a level set function through variational methods, which effectively solves topology change problems. However, this model has to continuously re-initialize the level set function to a signed distance function, which results in inefficient segmentation and possible boundary leakage. In addition, its segmentation results on medical images, which usually contain noise and blurred or discontinuous edges, are poor. The fast global minimization (FGM) model [29] defined global minimizers to overcome the drawback of falling into local minima in local optimization approaches such as the CV model [15], which makes the FGM model independent of the initial contour position and removes the frequent re-initialization of the distance function required in the GAC model [28].
The model [30] was able to obtain the global minimum of an ACM between two end points, which makes the initialization process easier and reduces the chance of falling into a local minimum at a false edge. The key idea of this model is to use a novel numerical method to compute the minimal path, which is the global minimum of the associated energy function among all paths connecting the two end points.

To make the level set function inherently stable, the distance regularized level set evolution (DRLSE) model [31] added a distance regularization term, which controls the deviation between the level set function and the standard signed distance function during curve evolution. In addition, this model avoids the problem of constant re-initialization during curve evolution. Nevertheless, it has no self-adjustment ability during the process of energy minimization due to its uni-directional area term, and remains sensitive to different selections of initial contours. The bias correction (BC) model [32] was designed to segment the image and compute the bias field simultaneously to correct unevenly distributed intensity in medical images, which is more precise and takes less segmentation time than the famous PS model. However, this model is nowadays inefficient and less accurate than many newly developed ACMs. In addition, it is not very effective in segmenting natural images. The local binary fitting (LBF) [33] and region-scalable fitting (RSF) [34] models were constructed to segment images with intensity non-uniformity; they use a kernel function to design a local binary fitting energy and embed local region information to guide the motion of the level set function. In addition, these two models incorporate a penalty term in the energy function, which avoids the periodic re-initialization process and greatly improves algorithm efficiency. However, the introduced kernel function only calculates the grayscale values of the image locally, which makes it possible to get trapped in a local minimum during energy minimization. In other words, these two models are sensitive to initial contours. In addition, it takes time to calculate the two fitting functions that need to be continuously updated during each iteration, resulting in the inefficient segmentation of the RSF model.
The local image fitting (LIF) model [35] combined Gaussian filtering and local image information to segment images with intensity non-uniformity, and it segments images faster than the RSF model because it needs only two convolution operations during each iteration. However, this model still remains susceptible to different initial contours. Specifically, an inappropriate initial contour may result in a wrong segmentation due to the fact that the majority of existing ACMs have non-convex energy functions. To solve the issue of non-convex functions, the approach [36] was designed to translate a non-convex function into a convex one, which handles the problem of local minima frequently occurring in non-convex functions. Nevertheless, this approach is too complex and time-consuming to be applied in practice. In addition, the method [37] numerically tracked an accurate approximation of the optimal solution for some relaxed problems, which is capable of providing a close bound between the calculated solution and the real minimizer. Nevertheless, this model is not guaranteed to obtain a global minimizer of the minimal partition problem (also known as the spatially continuous Potts model).

The local and global intensity fitting (LGIF) model [38] was defined as a linear combination of local image fitting (LIF) energy and global image fitting (GIF) energy. By choosing appropriate weights to control the ratio of LIF energy to GIF energy, this model can effectively handle grayscale non-uniformity and has good initialization robustness. However, the weights of the LIF and GIF terms are unpredictable for different images and often need to be manually calibrated with respect to the degree of grayscale non-uniformity; segmentation will fail if they are chosen poorly [39]. The local Gaussian distribution fitting (LGDF) model [40] defined a fitting energy based on the mean and variance of local gray values. Compared with the RSF model, this model is able to segment local areas with the same mean gray value but different variances. However, it is less efficient than the RSF model because more time is consumed computing the variances. In addition, this model is also sensitive to different initial contours [41].

The core of the local region Chan-Vese (LRCV) model [42] was to replace the two fitting constants in the CV model with the two fitting functions in the RSF model. In addition, this model utilizes the segmentation result of a degraded CV model as the initial contour, which reduces the dependence on the initial contour to a certain extent and accelerates segmentation at the same time. Considering that many targets and backgrounds in real images are random, the local histogram fitting (LHF) model [43] took advantage of two fitted histograms to approximate the distributions of the target and background, which can be used to segment regions with unpredictable distributions. However, it is inefficient because it needs to calculate the histogram distribution for each gray level (0-255). Similarly, it is sensitive to initial contours. The local and global Gaussian distribution fitting (LGGDF) model [44] constructed a linear combination of local and global Gaussian fitting energies with a changeable weight to balance the local and global energies, which further decreases the dependence on the choice of initial contours. However, it is computationally intensive, and the adaptive weight does not work well for some images. The local likelihood image fitting (LLIF) model [45] mainly utilized the mean intensity and variance information of the local region. In fact, the LLIF model is a combination of the LIF and LGDF models, which enhances its applicability for segmenting images. However, its segmentation efficiency is relatively low, and its robustness to initialization is not appealing [46].

The RSF&LoG model [39] combined the RSF model with an optimized Laplacian of Gaussian (LoG) energy to improve segmentation results, which further reduces sensitivity to different initial contours. Nevertheless, the segmentation time of this model is relatively long [47] due to its unoptimized computation procedure. The local pre-fitting (LPF) model [48] pre-calculated the mean intensities of local regions ahead of iteration to obtain faster segmentation speed. Nevertheless, this model still faces some common issues such as stagnation at false boundaries and under-segmentation [49]. Therefore, the segmentation accuracy of this model still has room for improvement. The LPF & FCM model [41] locally fitted two fuzzy center points inside and outside the evolution curve ahead of iteration through the fuzzy c-means (FCM) clustering algorithm, which reduces computation cost and improves segmentation efficiency. In addition, this model combines an adaptive edge indicator function and an adaptive sign function to resolve the issue of the single evolution direction of the contour and realize bidirectional motion.

The super-pixel based model with a local similarity factor and saliency (SLSFS) [50] linked super-pixels with the FCM clustering algorithm to create initial contours, which makes it possible to create an adaptive initial contour in the neighborhood of the target and effectively protect weak edge information. The model [51] constructed an adaptive weight ratio to calibrate the relationship between the local and global energy parts, which is capable of automatically calibrating the direction of curve evolution with respect to the location of the target region. Nevertheless, the initial contour still has to be manually labeled during the process of curve evolution. The approach [52] associated the level set method (LSE) model [32] with the region and edge synergetic level set framework (RESLS) model [53] to improve segmentation results; it is able to efficiently segment images with unevenly distributed intensity and extends the two-phase model to a multi-phase model. However, this model is sensitive to the choice of parameters and incapable of effectively processing natural images with complicated background information. The method [54] employed self-organizing maps (SOM) to cluster the input image into foreground and background regions, which decreases the interference of noise and enhances system robustness. However, compared with the K-means clustering algorithm, the SOM algorithm may achieve lower computational precision owing to the update of neighborhood nodes. The global and local fuzzy image fitting (GLFIF) model [55] utilized a combination of global and local fitting energies to process images with noise and non-uniform intensity, which greatly decreases the influence of background noise and intensity non-uniformity to obtain accurate segmentation results.

The additive bias correction (ABC) model [56] employed the theory of bias field correction to effectively segment images with unevenly distributed intensity and achieved good segmentation results. However, under-segmentation may occur while segmenting images with Gaussian noise interference, as described in Section 4, which means that the anti-noise robustness of this model still has room to be optimized. The pre-fitting energy (PFE) model [47] calculated the median intensities of local regions before iteration begins to decrease segmentation time. In addition, this model contains a novel single-well potential function and its corresponding evolution speed function to accelerate the evolution of the level set function and achieve fast image segmentation. However, stagnation at false boundaries and under-segmentation may occur during the evolution process; these issues are illustrated and explained in detail in Section 4. Therefore, this model still has room for improvement in terms of system robustness and segmentation accuracy. The pre-fitting bias correction (PBC) model [57] utilized an optimized FCM algorithm to pre-calculate the bias field before iteration, which is able to effectively segment images with unevenly distributed intensity and greatly reduces segmentation time. However, the segmentation accuracy and efficiency may be adversely affected if the FCM algorithm performs poorly. The local and global Jeffreys divergence (LGJD) model [58] put local and global data fitting energies together to measure the difference between the input image and the fitted image through the theory of Jeffreys divergence, which can effectively segment natural and medical images with intensity non-uniformities. However, this model has a long segmentation time [47] due to its unoptimized computation process.
The adaptive pre-fitting energy function based on Jeffreys divergence (APFJD) model [49] embedded two pre-fitting functions to construct an enhanced energy function. This model replaces the traditional Euclidean distance with Jeffreys divergence to measure the distance between the real image and the fitted image, which proves more capable of segmenting images with intensity non-uniformity efficiently and effectively. Nevertheless, under-segmentation sometimes happens when segmenting images with Gaussian noise, as described and explained in detail in Section 4, which indicates that this model still has room for improvement regarding robustness against noise interference.

At the beginning of this paper, the authors briefly reviewed the diverse ACMs (region-based ACMs, edge-based ACMs, and hybrid ACMs) in the area of image segmentation with their pros and cons. Then, several typical models in each of these three categories were reviewed with their advantages and disadvantages, respectively. After that, 12 typical ACMs chosen from the literature review were selected to conduct three comparison experiments on different kinds of images (synthetic images, medical images, and natural images). Next, two deep learning-based algorithms were implemented to segment double-phase images and multi-phase images, whose experimental results are compared with several ACMs to demonstrate their strengths and weaknesses. Lastly, some promising research directions and works are recommended to subsequent researchers. The rest of this paper is arranged as follows: Section 2 explains some basic knowledge of ACM theory. Section 3 reviews several popular ACMs in three categories, including region-based ACMs, edge-based ACMs, and hybrid ACMs, with their advantages and disadvantages. Section 4 describes experimental results with respect to segmentation experiments on synthetic images, medical images, and natural images. Section 5 presents several possible research directions.

2. RELATED KNOWLEDGE

2.1. Curve evolution

Geometric ACMs [59-62] are mainly based on partial differential equations (PDEs) and variational methods, whose essence is to continuously evolve toward the direction of minimum energy under the constraint of image information and given conditions. The segmentation process is generally as follows: a closed curve is initialized on the given image. Then, the curve evolves under the combined effect of internal and external energies, and stops evolving when the energy function achieves a minimal value through the gradient descent method. Lastly, the zero level set coincides with the target edge to complete segmentation.

The goal of the level set approach is to find the zero level set, which represents the target boundary as the energy function is minimized through the standard descent method. In other words, the level set method utilizes the zero level set of a function one dimension higher to express the evolution result of the lower-dimensional target. During curve evolution, the points on the curve move along their normal directions at certain velocities, with time as the evolution variable. In addition, the speed and direction of the motion are mainly controlled by two parameters: the curvature and the unit normal vector.

A closed and smooth curve $$ C $$ is defined in two dimensions [63] as follows:

$$ \begin{equation} \frac{d C}{d p}=T, \quad \frac{d^{2} C}{d p^{2}}=k N, \end{equation} $$

where $$ p $$ is the curve parameter, $$ k $$ denotes the curvature, $$ T $$ signifies the tangent line, and $$ N $$ represents the normal line. Note that $$ T(p) $$ and $$ N(p) $$ are perpendicular to each other, so the direction and magnitude of the motion of any point on the curve C can be represented by these two vectors. By adding the time variable $$ t $$ , the evolution of the curve is represented as

$$ \begin{equation} \frac{d C(t)}{d t}=\alpha_{1} T+\alpha_{2} N, \end{equation} $$

where $$ \alpha_{1} $$ denotes the point speed on the curve in the tangential direction, and $$ \alpha_{2} $$ signifies the point speed on the curve in the normal direction. Since the shape change and geometric properties of the curve during the evolution process are related only to the speed in the normal direction, only the normal speed is taken into consideration, while the velocity component in the tangential direction is ignored for better segmentation efficiency in practical applications.

Therefore, Equation (2) can be simplified as

$$ \begin{equation} \frac{d C(t)}{d t}=F_{n} N, \end{equation} $$

where $$ F_{n} $$ is the speed function used to represent the motion speed of all points on the curve.
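To make the normal-speed evolution of Equation (3) concrete, the following minimal sketch (written in Python with NumPy; the function name, grid size, and the constant speed $$ F_{n}=1 $$ are illustrative assumptions, not part of any model reviewed here) evolves a level set function under $$ \partial \phi / \partial t=F_{n}|\nabla \phi| $$ , the level set form of Equation (3):

```python
import numpy as np

def evolve_level_set(phi, F_n, dt=0.5, steps=1):
    """Evolve phi under d(phi)/dt = F_n * |grad(phi)|: every point of the
    zero level set moves along its normal direction at speed F_n."""
    for _ in range(steps):
        gy, gx = np.gradient(phi)                  # gradients along rows, columns
        phi = phi + dt * F_n * np.sqrt(gx**2 + gy**2)
    return phi

# toy example: a signed distance function whose zero level set is a circle
n = 64
Y, X = np.mgrid[:n, :n]
phi0 = np.sqrt((X - n / 2)**2 + (Y - n / 2)**2) - 15.0
phi1 = evolve_level_set(phi0, F_n=1.0, dt=0.5, steps=10)
```

With a positive constant speed and a signed distance initialization, the zero level set (here a circle of radius 15) shrinks uniformly, which corresponds to motion along the inward normal.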

2.2. Level set function

The fundamental idea of level set method is to express the evolution of a closed curve $$ C $$ in the plane as the evolution of the intersection of a higher dimensional function with the horizontal plane by using the expression of an implicit function, which performs interface tracing and shape modeling by solving the zero level set function [64]. Specifically, the level set method employs a level set function a dimension higher to implicitly express a two-dimensional closed curve, or a three-dimensional surface, or a multi-dimensional hyper-surface, which transforms the process of curve evolution into the evolution problem of level set function one dimension higher.

The level set function is always a valid function when the topology of the closed curve or surface embedded in the level set function changes. Instead of tracking the position of the evolved curve, the level set function is continuously updated under the action of solving a partial differential evolution equation to figure out its zero level set when image segmentation is performed by the level set method. The zero level set at that moment is derived when the evolution process stops under some certain criteria, which means the position of the zero level set is the location of the object contour after segmentation.
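As a small illustration of this implicit representation (a NumPy sketch under the assumption of two circular objects; not taken from the reviewed literature), a single level set function can encode several disjoint contours at once, so merging or splitting during evolution requires no explicit contour bookkeeping:

```python
import numpy as np

n = 100
Y, X = np.mgrid[:n, :n]
# signed distance functions of two disjoint circles
d1 = np.sqrt((X - 30)**2 + (Y - 50)**2) - 12.0
d2 = np.sqrt((X - 70)**2 + (Y - 50)**2) - 12.0
# one implicit function whose zero level set is the union of both contours
phi = np.minimum(d1, d2)
inside = phi < 0   # interiors of both objects, recovered without any tracking
```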

2.3. Energy function

The internal energy is determined by the internal properties of the curve, which defines an enlargeable and bendable curve deformation energy term, and maintains the continuity and smoothness of the contour curve by adjusting the weights to control the consistency of the elastic tensor of curve bending and the rigid tensor of stretching. The external energy determined by image information consists of image constraint energy term and image potential energy term [65]. There is no fixed expression formula for the constraint energy term, which is usually constructed according to users' demands or image features. The external energy determines the evolution direction of the active contour, which guides the evolution contour line to evolve towards the target boundary.

3. ACTIVE CONTOUR MODEL

In this section, some representative ACMs of three types (region-based ACMs, edge-based ACMs, and hybrid ACMs) are reviewed with their pros and cons in detail.

3.1. Region-based ACMs

3.1.1. Mumford-Shah model

The Mumford-Shah (MS) model [26] unifies image data, initial estimation and target contour in a feature extraction process under the constraint of knowledge, and is capable of autonomously converging to the minimum energy state after proper initialization. This model converts the image segmentation problem into the minimization of the following energy function:

$$ \begin{equation} {e}^{M S}(v, K)= p \int_{\Omega}(v-I)^{2} d x+q \int_{\Omega \backslash K}|\nabla v|^{2} d x+r |K|, \end{equation} $$

where $$ v $$ is the fitted image, $$ I $$ is the original input image, $$ \nabla $$ denotes gradient operator, $$ |K| $$ is the length of contour line $$ K $$ , and $$ p $$ , $$ q $$ , $$ r $$ are positive coefficients to control the associated segments.

The energy function in Equation (4) is comprised of three terms: the data fidelity term ($$ p \int_{\Omega}(v-I)^{2} d x $$ ) maintains the similarity between the original input image and the segmentation result; the curve smoothing term ($$ q \int_{\Omega \backslash K}|\nabla v|^{2} d x $$ ) makes the segmentation result smooth; and the length constraint term ($$ r |K| $$ ) constrains the curve length. Among these terms, the data fidelity term and the curve smoothing term utilize local region information to get rid of unnecessary contours. The optimal contour $$ K $$ is obtained through the minimization of the Mumford-Shah energy function in Equation (4), which segments the original input image $$ I $$ into several non-overlapping areas, together with a smoothed fitted image $$ v $$ . However, the minimization may suffer from several local minima since $$ {e}^{M S}(v, K) $$ is not convex. In addition, it is time-consuming and inefficient to solve Equation (4) because of the incompatible dimensions of $$ v $$ and $$ K $$ [66].

3.1.2. Chan-Vese model

CV model [15] considers the image global characteristics and image statistical information inside and outside the evolution curve to drive the curve to approach the contour of the target area, which achieves success in the segmentation of images with blurred edges and small gradient changes and remains insensitive to noise. The CV energy function is proposed as

$$ \begin{equation} {e}^{C V}\left(c_{1}, c_{2}, C\right)= \int_{\text {outside( } C)}\left(I-c_{1}\right)^{2} d x +\int_{\text {inside }(C)}\left(I-c_{2}\right)^{2} d x+ a|C|, \end{equation} $$

where $$ a $$ is a constant; $$ c_{1} $$ and $$ c_{2} $$ denote the grayscale averages of the outer and inner regions of the curve, respectively; and $$ |C| $$ represents the length of the evolution curve. The first two terms in Equation (5) are data-driven terms that guide the curve to evolve towards the target boundary, and the last term is a length constraint term that controls the curve length and smooths it. According to Equation (5), the energy function $$ e^{C V} $$ reaches its minimum value when the curve $$ C $$ is on the target boundary. In the process of minimizing the CV energy, the curve $$ C $$ can be represented by the level set function $$ \phi $$ , which yields the rewritten energy function

$$ \begin{equation} E^{C V}\left(\phi, c_{1}, c_{2}\right)= \int_{\Omega}\left|I-c_{1}\right|^{2} H_{\varepsilon}(\phi(x)) d x+ \int_{\Omega}\left|I-c_{2}\right|^{2}\left[1-H_{\varepsilon}(\phi(x))\right] d x +u \int_{\Omega} \delta_{\varepsilon}(\phi(x))|\nabla \phi(x)| d x, \end{equation} $$

where $$ H_{\varepsilon}(\phi(x)) $$ and $$ \delta_{\varepsilon}(\phi(x)) $$ are approximated Heaviside and Dirac functions defined as

$$ \begin{equation} H_{\varepsilon}(x)=\frac{1}{2}\left(1+\frac{2}{\pi} \arctan \left(\frac{x}{\varepsilon}\right)\right), \end{equation} $$

$$ \begin{equation} \delta_{\varepsilon}(x)=H_{\varepsilon}^{\prime}(x)=\frac{\varepsilon}{\pi\left(\varepsilon^{2}+x^{2}\right)}. \end{equation} $$
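The two approximations in Equations (7) and (8) translate directly into code. The following NumPy sketch (the function names are illustrative) implements them:

```python
import numpy as np

def heaviside_eps(x, eps=1.0):
    """Smoothed Heaviside function of Eq. (7)."""
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(x / eps))

def dirac_eps(x, eps=1.0):
    """Smoothed Dirac delta of Eq. (8), the derivative of heaviside_eps."""
    return eps / (np.pi * (eps**2 + x**2))
```

The parameter $$ \varepsilon $$ controls the width of the smoothing: as $$ \varepsilon \rightarrow 0 $$ , $$ H_{\varepsilon} $$ approaches the ideal step function and $$ \delta_{\varepsilon} $$ concentrates its mass near zero.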

Using the standard gradient descent approach to minimize the energy function in Equation (6), the minimization problem is transformed into solving the gradient descent equation, which yields the following gradient flow (level set evolution) function:

$$ \begin{equation} \frac{\partial \phi}{\partial t}= -\delta_{\varepsilon}(\phi)\left[\left(I(x)-c_{1}\right)^{2}-\left(I(x)-c_{2}\right)^{2}\right]+a \delta_{\varepsilon}(\phi) \operatorname{div}\left(\frac{\nabla \phi}{|\nabla \phi|}\right), \end{equation} $$

with $$ c_{1} $$ and $$ c_{2} $$ being

$$ \begin{equation} \left\{\begin{array}{l} c_{1}=\frac{\int_{\Omega} I(x) \cdot H_{\varepsilon}(\phi(x)) d x}{\int_{\Omega} H_{\varepsilon}(\phi(x)) d x}, \\ c_{2}=\frac{\int_{\Omega} I(x) \cdot\left[1-H_{\varepsilon}(\phi(x))\right] d x}{\int_{\Omega}\left[1-H_{\varepsilon}(\phi(x))\right] d x}. \end{array}\right. \end{equation} $$

Lastly, the zero level set can be obtained through iteratively solving $$ \phi^{k+1}=\phi^{k}+\Delta t \cdot \partial \phi / \partial t $$ . The iteration process will stop either when the convergence criteria are satisfied or the maximum iteration number is reached.
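The whole iteration described above can be sketched as follows (a minimal NumPy illustration of Equations (9) and (10), not the authors' implementation; the curvature discretization, the step size, and stopping after a fixed number of iterations are simplifying assumptions):

```python
import numpy as np

def heaviside(x, eps=1.0):
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(x / eps))

def dirac(x, eps=1.0):
    return eps / (np.pi * (eps**2 + x**2))

def curvature(phi):
    """div(grad(phi)/|grad(phi)|), discretized with central differences."""
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx**2 + gy**2) + 1e-8
    return np.gradient(gy / norm, axis=0) + np.gradient(gx / norm, axis=1)

def chan_vese(I, phi, iters=300, dt=0.5, a=0.2, eps=1.0):
    for _ in range(iters):
        H = heaviside(phi, eps)
        c1 = (I * H).sum() / (H.sum() + 1e-8)            # mean where H ~ 1
        c2 = (I * (1 - H)).sum() / ((1 - H).sum() + 1e-8)
        d = dirac(phi, eps)
        # gradient flow of Eq. (9)
        phi = phi + dt * (-d * ((I - c1)**2 - (I - c2)**2)
                          + a * d * curvature(phi))
    return phi, c1, c2

# toy image: bright square on dark background, circle initialization
n = 64
Y, X = np.mgrid[:n, :n]
I = np.zeros((n, n)); I[20:44, 20:44] = 1.0
phi0 = 12.0 - np.sqrt((X - 32)**2 + (Y - 32)**2)   # positive inside a circle
phi, c1, c2 = chan_vese(I, phi0)
```

On this toy image, the fitted means $$ c_{1} $$ and $$ c_{2} $$ separate toward the foreground and background intensities as the contour expands from the initial circle toward the square boundary.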

The CV model has fair segmentation speed and initialization robustness [42,67]. However, $$ c_{1} $$ and $$ c_{2} $$ are related only to the global gray values of the input image. Therefore, the segmentation result will be wrong if the gray values are not homogeneous within the regions inside and outside the curve C. In other words, this model cannot segment images with intensity non-uniformity, which limits its application scope.

3.1.3. Region scalable fitting model

To overcome this drawback of the CV model, the RSF model [34] was proposed. It employs a Gaussian kernel function to extract image characteristics locally, which allows it to effectively process images with uneven grayscale. The RSF energy function based on local gray values is defined as

$$ \begin{equation} \varepsilon_{x}^{\text {Fit}}\left(C, f_{1}(x), f_{2}(x)\right)= \lambda_{1} \int_{\text {outside }(C)} K(x-y)\left|I(y)-f_{1}(x)\right|^{2} d y +\lambda_{2} \int_{\text {inside }(C)} K(x-y)\left|I(y)-f_{2}(x)\right|^{2} d y, \end{equation} $$

where $$ \lambda_{1} $$ and $$ \lambda_{2} $$ are constant values; $$ f_{1}(x) $$ and $$ f_{2}(x) $$ signify the local fitting functions outside and inside the curve C; and $$ I(y) $$ denotes the image intensity in a local region centered at point $$ x $$ , whose size is controlled by the Gaussian kernel function $$ K $$ . In fact, $$ \varepsilon_{x}^{\text {Fit}} $$ denotes the weighted average squared error between the fitted values $$ f_{1}(x) $$ and $$ f_{2}(x) $$ and the true grayscale values. Therefore, given a centroid $$ x $$ , the fitted energy $$ \varepsilon_{x}^{\text {Fit}} $$ is minimized when the fitted values $$ f_{1}(x) $$ and $$ f_{2}(x) $$ are the best approximation of the local image grayscale values on both sides of the contour C, which means that the contour C is on the target boundary. For all points $$ x $$ in the image domain, the total energy $$ e^{RSF} $$ can be computed by integrating $$ \int \varepsilon_{x}^{F i t}\left(C, f_{1}(x), f_{2}(x)\right) d x $$ , which is expressed as follows:

$$ \begin{equation} \begin{array}{r} \begin{aligned} e^{R S F}\left(\phi, f_{1}(x), f_{2}(x)\right)=&\lambda_{1} \int_{\Omega}\left(\int_{\Omega} K_{\sigma}(x-y)\left|I(y)-f_{1}(x)\right|^{2} H_{\varepsilon}(\phi(y)) d y\right) d x \\ &+\lambda_{2} \int_{\Omega}\left(\int_{\Omega} K_{\sigma}(x-y)\left|I(y)-f_{2}(x)\right|^{2}\left[1-H_{\varepsilon}(\phi(y))\right] d y\right) d x, \end{aligned} \end{array} \end{equation} $$

where $$ H_{\varepsilon}(\phi(x)) $$ and $$ \delta_{\varepsilon}(\phi(x)) $$ are approximated Heaviside and Dirac functions defined in Equation (7) and Equation (8), respectively. In addition, length constraint term $$ L(\phi) $$ is added to smooth and shorten the contour C while distance regularization term $$ P(\phi) $$ is introduced to maintain the regularity of level set function $$ \phi $$ to avoid its re-initialization. Therefore, the total energy of RSF model is defined as

$$ \begin{equation} F^{R S F}\left(\phi, f_{1}(x), f_{2}(x)\right)=e^{R S F}\left(\phi, f_{1}(x), f_{2}(x)\right)+ a_{1} L(\phi)+ a_{2} P(\phi), \end{equation} $$

where $$ a_{1} $$ and $$ a_{2} $$ are constant values related to the length constraint term $$ L(\phi) $$ and the distance regularization term $$ P(\phi) $$ , respectively, which are defined as

$$ \begin{equation} L(\phi)=\int_{\Omega} \delta_{\varepsilon}(\phi(x))|\nabla \phi(x)| d x, \end{equation} $$

$$ \begin{equation} P(\phi)=\int_{\Omega} \frac{1}{2}(|\nabla \phi(x)|-1)^{2} d x. \end{equation} $$

The standard gradient descent method [68] is applied to minimize the energy $$ F^{R S F} $$ . First, the level set function $$ \phi $$ is fixed and $$ F^{R S F} $$ is minimized with respect to $$ f_{1}(x) $$ and $$ f_{2}(x) $$ by taking the corresponding partial derivatives, which generates the following functions:

$$ \begin{equation} \large \left\{\begin{array}{l} f_{1}(x)=\frac{\int_{\Omega} K_{\sigma}(x-y) H_{\varepsilon}(\phi(y)) I(y) d y}{\int_{\Omega} K_{\sigma}(x-y) H_{\varepsilon}(\phi(y)) d y}, \\ f_{2}(x)=\frac{\int_{\Omega} K_{\sigma}(x-y)\left[1-H_{\varepsilon}(\phi(y))\right] I(y) d y}{\int_{\Omega} K_{\sigma}(x-y)\left[1-H_{\varepsilon}(\phi(y))\right] d y}. \end{array}\right. \end{equation} $$

Second, $$ f_{1}(x) $$ and $$ f_{2}(x) $$ are fixed and $$ F^{R S F} $$ is minimized with respect to the level set function $$ \phi $$ , which generates the following gradient flow function:

$$ \begin{equation} \frac{\partial \phi^{RSF}}{\partial t}=-\delta_{\varepsilon}(\phi)\left(\lambda_{1} e_{1}-\lambda_{2} e_{2}\right)+a_{1} \delta_{\varepsilon}(\phi) \operatorname{div}\left(\frac{\nabla \phi}{|\nabla \phi|}\right) +a_{2} \left(\nabla^{2} \phi-\operatorname{div}\left(\frac{\nabla \phi}{|\nabla \phi|}\right)\right), \end{equation} $$

with $$ e_{1}(x) $$ and $$ e_{2}(x) $$ being

$$ \begin{equation} \left\{\begin{array}{l} e_{1}(x)=\int_{\Omega} K_{\sigma}(y-x)\left|I(y)-f_{1}(x)\right|^{2} d y, \\ e_{2}(x)=\int_{\Omega} K_{\sigma}(y-x)\left|I(y)-f_{2}(x)\right|^{2} d y. \end{array}\right. \end{equation} $$

In Equation (17), $$ a_{1}, a_{2} $$ are positive constants, and the gradient flow is composed of three terms: the first term $$ -\delta_{\varepsilon}(\phi)\left(\lambda_{1} e_{1}-\lambda_{2} e_{2}\right) $$ represents the data-driven term that drives curve C towards target boundary to complete segmentation; the second term $$ a_{1}\delta_{\varepsilon}(\phi) \operatorname{div}\left(\frac{\nabla \phi}{|\nabla \phi|}\right) $$ signifies arc length of the contour C, which is used to smooth or shorten the length of the contour C; the third term $$ a_{2}\left(\nabla^{2} \phi-\operatorname{div}\left(\frac{\nabla \phi}{|\nabla \phi|}\right)\right) $$ denotes the regularization term of level set function, which is utilized to maintain the regularity of level set function.

RSF model sufficiently takes advantage of local image information through the Gaussian kernel function, which enables it to effectively segment images with intensity non-uniformity. However, the incorporated kernel function only calculates the grayscale values of local image regions, which makes the energy function $$ F^{R S F} $$ prone to falling into local minima during the iteration process. Accordingly, this model is very susceptible to the selection of the initial contour. In addition, at least $$ 4 $$ convolutions must be performed to update the $$ 2 $$ fitting functions $$ f_{1}(x) $$ and $$ f_{2}(x) $$ in Equation (16) during each iteration, which leads to inefficient segmentation.
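As a sketch of the fitting-function update in Equation (16), the four integrals can be evaluated as Gaussian convolutions; the smoothed Heaviside, toy test image, and parameter values below are illustrative assumptions rather than the paper's settings:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def heaviside_eps(phi, eps=1.0):
    """Smoothed Heaviside, standing in for H_eps of Equation (7)."""
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))

def rsf_fitting_functions(image, phi, sigma=3.0):
    """Update f1, f2 of Equation (16): four Gaussian convolutions per call."""
    h = heaviside_eps(phi)
    f1_num = gaussian_filter(h * image, sigma)           # K_sigma * (H * I)
    f1_den = gaussian_filter(h, sigma)                   # K_sigma * H
    f2_num = gaussian_filter((1.0 - h) * image, sigma)   # K_sigma * ((1 - H) * I)
    f2_den = gaussian_filter(1.0 - h, sigma)             # K_sigma * (1 - H)
    eps = 1e-10                                          # avoid division by zero
    return f1_num / (f1_den + eps), f2_num / (f2_den + eps)

# Two-phase toy image: bright square on a dark background.
img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0
phi = np.where(img > 0.5, 1.0, -1.0)                     # contour on the square edge
f1, f2 = rsf_fitting_functions(img, phi)                 # f1 fits the phi > 0 side
```

Since all four convolutions depend on $$ \phi $$ , they must be recomputed at every iteration, which is the cost LIF model later removes.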

3.1.4. Local image fitting model

To reduce the computation time in RSF model, LIF model [35] is put forward to modify and optimize the calculation procedure of fitting functions in RSF model, which greatly reduces the number of convolution operations required to update the fitting functions.

The LIF energy function is constructed to minimize the difference between the fitted image and the actual one, which is expressed as

$$ \begin{equation} e^{\mathrm{LIF}}(\phi)=\frac{1}{2} \int_{\Omega}\left|I(x)-I_{f}(x)\right|^{2} d x, \end{equation} $$

where $$ I_{f}(x) $$ is the local fitted image defined as

$$ \begin{equation} I_{f}(x)=m_{1}(x) H(\phi(x))+m_{2}(x)\left(1-H(\phi(x))\right). \end{equation} $$

Note that $$ H_{\varepsilon}(\phi) $$ and $$ \delta_{\varepsilon}(\phi) $$ are the approximated Heaviside and Dirac functions defined in Equation (7) and Equation (8), respectively; the local fitting functions $$ m_{1}(x) $$ and $$ m_{2}(x) $$ are

$$ \begin{equation} \left\{\begin{array}{l} m_{1}(x)=\operatorname{mean}\left(I \in\left(\{y \in \Omega \mid \phi(y)<0\} \cap \Omega_{k}(x)\right)\right), \\ m_{2}(x)=\operatorname{mean}\left(I \in\left(\{y \in \Omega \mid \phi(y)>0\} \cap \Omega_{k}(x)\right)\right), \end{array}\right. \end{equation} $$

where $$ x $$ signifies the center point of the local region while $$ y $$ denotes all points in that region; $$ \Omega_{k}(x) $$ is a truncated Gaussian window of size $$ (4k+1) \times (4k+1) $$ with standard deviation $$ \sigma $$ . In fact, $$ m_{1}(x) $$ and $$ m_{2}(x) $$ serve as two local fitting functions outside and inside the contour C. The parameter $$ \sigma $$ is used to control the local region size with respect to image features. Because of the localization property of the Gaussian kernel function, the contribution of the image intensity $$ I(y) $$ fades away as the distance between the center point $$ x $$ and the point $$ y $$ grows. In other words, mainly the image intensities of points $$ y $$ in the vicinity of the center point $$ x $$ contribute to the LIF energy. Therefore, this model is capable of precisely handling images with unevenly distributed intensity.
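A sketch of the windowed means in Equation (21); for brevity a box window (`uniform_filter`) replaces the truncated Gaussian window $$ \Omega_{k}(x) $$ of the original model, and the window size is an illustrative choice:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lif_fitting_functions(image, phi, window=9):
    """Windowed means of Equation (21): m1 averages I over {phi < 0} and
    m2 over {phi > 0}, restricted to a (window x window) box around x."""
    mask_neg = (phi < 0).astype(float)
    mask_pos = (phi > 0).astype(float)
    eps = 1e-10                                   # guard against empty windows
    m1 = uniform_filter(image * mask_neg, window) / (uniform_filter(mask_neg, window) + eps)
    m2 = uniform_filter(image * mask_pos, window) / (uniform_filter(mask_pos, window) + eps)
    return m1, m2

# Bright square on a dark background; phi is positive inside the square.
img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0
phi = np.where(img > 0.5, 1.0, -1.0)
m1, m2 = lif_fitting_functions(img, phi)
```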

Utilizing the steepest descent method [68] to minimize the energy function $$ e^{\mathrm{LIF}}(\phi) $$ in Equation (19) generates the gradient flow equation as follows:

$$ \begin{equation} \frac{\partial \phi^{LIF}}{\partial t}=\left(I-I_{f}(x)\right)\left(m_1(x)-m_2(x)\right) \delta_{\varepsilon}(\phi), \end{equation} $$

where $$ \delta_{\varepsilon}(\phi) $$ is the approximated Dirac function described in Equation (8).

The main contribution of LIF model is to re-write the data-driven term $$ \lambda_1 e_1-\lambda_2 e_2 $$ of RSF model in Equation (17), which reduces the number of convolutions required to update the fitting functions in each iteration from $$ 4 $$ to $$ 2 $$ and thus saves a huge amount of computation time. The data-driven term is re-written as follows:

$$ \begin{equation} \lambda_1 e_1-\lambda_2 e_2=\left(\lambda_1-\lambda_2\right) I^2(y)\left[K_\sigma(x) * \textbf{1}\right]-2 I(y)\left[K_\sigma(x) *\left(\lambda_1 f_1(x)-\lambda_2 f_2(x)\right)\right] +K_\sigma(x) *\left(\lambda_1 f_1^2(x)-\lambda_2 f_2^2(x)\right), \end{equation} $$

with $$ e_1(x) $$ and $$ e_2(x) $$

$$ \begin{equation} e_1(x) =\int_{\Omega} K_\sigma(y-x)\left|I(y)-f_1(x)\right|^2 \mathrm{\; d} y =I^2(y)\left[K_\sigma(x) * \textbf{1}\right]-2 I(y)\left[K_\sigma(x) * f_1(x)\right]+K_\sigma(x) * f_1^2(x), \end{equation} $$

$$ \begin{equation} e_2(x)=\int_{\Omega} K_\sigma(y-x)\left|I(y)-f_2(x)\right|^2 \mathrm{\; d} y =I^2(y)\left[K_\sigma(x) * \textbf{1}\right]-2 I(y)\left[K_\sigma(x) * f_2(x)\right]+K_\sigma(x) * f_2^2(x). \end{equation} $$

In Equation (23), the first convolution term $$ K_\sigma(x) * \textbf{1} $$ only needs to be calculated once before the iteration begins. Note that $$ \textbf{1} $$ is a matrix full of ones, and $$ K_\sigma(x) * \textbf{1}=\int K_\sigma(y-x) d y $$ , which equals the constant $$ 1 $$ everywhere except near the edge of the image region $$ \Omega $$ . Therefore, only two convolution terms, $$ K_\sigma(x) *\left(\lambda_1 f_1(x)-\lambda_2 f_2(x)\right) $$ and $$ K_\sigma(x) *\left(\lambda_1 f_1^2(x)-\lambda_2 f_2^2(x)\right) $$ , need to be calculated in each iteration.
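The reduced-convolution evaluation can be sketched and checked against the direct four-convolution form of Equations (24) and (25); `gaussian_filter` again stands in for $$ K_{\sigma} $$ , the test data are synthetic, and in a real implementation the $$ K_\sigma * \textbf{1} $$ term would be cached across iterations:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lif_data_term(image, f1, f2, lam1=1.0, lam2=1.0, sigma=3.0):
    """Evaluate lambda1*e1 - lambda2*e2 as in Equation (23): K_sigma * 1
    is fixed, leaving only two convolutions that change per iteration."""
    k_one = gaussian_filter(np.ones_like(image), sigma)             # K_sigma * 1
    conv_lin = gaussian_filter(lam1 * f1 - lam2 * f2, sigma)        # convolution 1
    conv_quad = gaussian_filter(lam1 * f1**2 - lam2 * f2**2, sigma) # convolution 2
    return (lam1 - lam2) * image**2 * k_one - 2.0 * image * conv_lin + conv_quad

# Sanity check against the direct four-convolution form of Equations (24)-(25).
rng = np.random.default_rng(1)
img = rng.random((16, 16))
f1 = np.full_like(img, 0.2)
f2 = np.full_like(img, 0.8)
fast = lif_data_term(img, f1, f2)
k1 = gaussian_filter(np.ones_like(img), 3.0)
e1 = img**2 * k1 - 2*img*gaussian_filter(f1, 3.0) + gaussian_filter(f1**2, 3.0)
e2 = img**2 * k1 - 2*img*gaussian_filter(f2, 3.0) + gaussian_filter(f2**2, 3.0)
direct = e1 - e2
```

By linearity of the convolution, `fast` and `direct` agree up to floating-point rounding, while the former needs half as many filtered arrays per iteration.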

Compared with RSF model, although LIF model does not contain length constraint and distance regularization terms, it utilizes Gaussian filtering to smooth the curve $$ C $$ as well as regularize the level set function, which reduces the possibility of getting stuck in local minima. In addition, only $$ 2 $$ convolutions are needed to update the fitting functions in Equation (23) in LIF model during each iteration instead of the $$ 4 $$ convolutions required by Equation (16) in RSF model, which saves a great amount of computation time. However, the incorporated Gaussian kernel function again only computes the grayscale values of a local image area, which still makes it easy to get stuck at a local minimum. That is to say, the model remains sensitive to different initial contours.

3.2. Edge-based ACMs

3.2.1. Geodesic active contour model

GAC model [28] constructively integrates the concept of edge indicator function into energy function, which can flexibly deal with topology changes and guide the contour line to converge at the target boundary.

GAC energy function [28] based on edge indicator function is defined as

$$ \begin{equation} e(C)=\int_{0}^{1}\left(e_{\text {int }}\left(C^{\prime}(q)\right)+e_{\text {ext }}(C(q))\right) d q, \end{equation} $$

where $$ e_{\text {int }} $$ is length constraint term and $$ e_{\text {ext }} $$ is area term that are defined respectively as

$$ \begin{equation} e_{\text {int }}(C^{\prime}(q))=\alpha_{1} \left|C^{\prime}(q)\right|^{2}, \end{equation} $$

$$ \begin{equation} e_{\text {ext }}(C(q))=\gamma_{1} g_{\beta}(I)\left|\nabla I\left(C(q)\right)\right|^{2}. \end{equation} $$

Note that $$ \alpha_{1} $$ and $$ \gamma_{1} $$ are constant values; $$ g_{\beta} $$ is the edge indicator defined as

$$ \begin{equation} g_{\beta}(I)=\frac{1}{1+\left|\nabla\left(K_{\sigma} * I\right)\right|^{2}}, \end{equation} $$

where $$ K_{\sigma} $$ is the Gaussian kernel function with standard deviation $$ \sigma $$ . Utilizing the standard gradient method to minimize the energy function in Equation (26) generates the gradient flow function as

$$ \begin{equation} \frac{\partial \phi}{\partial t}=\alpha_{1}|\nabla \phi| \operatorname{div}\left(g_{\beta} \frac{\nabla \phi}{|\nabla \phi|}\right)+\gamma_{1} g_{\beta}(I)|\nabla \phi|. \end{equation} $$

GAC model obtains a closed curve (the zero level set) by continuously updating the level set function under certain rules, which allows it to flexibly handle changes in curve topology. However, this model cannot realize adaptive segmentation and requires human intervention. Specifically, the sign and magnitude of the evolution speed need to be determined manually with respect to the location of the initial contour (inside or outside the target boundary), which leads to the issue of repetitive re-initialization of the level set function during the iteration process and possible boundary leaking. In addition, this model highly depends on the boundary gradient as well as the initial position, which means that only those boundary pixels with relatively strong gradient changes are likely to be detected.
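A minimal sketch of the edge indicator in Equation (29), with `scipy.ndimage.gaussian_filter` standing in for $$ K_{\sigma} $$ and a toy step-edge image of our own choosing: the indicator stays near 1 in flat regions and drops toward 0 on strong edges.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_indicator(image, sigma=1.5):
    """g_beta of Equation (29): 1 / (1 + |grad(K_sigma * I)|^2)."""
    smoothed = gaussian_filter(image, sigma)
    gy, gx = np.gradient(smoothed)          # gradients along rows and columns
    return 1.0 / (1.0 + gx**2 + gy**2)

img = np.zeros((32, 32)); img[:, 16:] = 100.0   # vertical step edge
g = edge_indicator(img)                         # small on the edge, ~1 elsewhere
```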

3.2.2. Distance regularized level set evolution model

To solve the problem of repetitive re-initialization of the level set function in GAC model, DRLSE model [31] incorporates a distance regularization term into the classic ACM to calibrate the deviation between the level set function and the standard signed distance function (SDF) during curve evolution, so that the level set function can maintain its internal stability, which finally avoids the problem of continuous re-initialization in the curve evolution process.

DRLSE energy function is described as

$$ \begin{equation} E^{DRLSE}(\phi)=\mu_{1} \int_{\Omega} p_{2}(|\nabla \phi|) d x+ \mu_{2} \int_{\Omega} g_{\beta} \delta_{\varepsilon}(\phi)|\nabla \phi| d x + \mu_{3} \int_{\Omega} g_{\beta} H_{\varepsilon}(-\phi) d x, \end{equation} $$

where $$ \mu_{1} $$ , $$ \mu_{2} $$ , $$ \mu_{3} $$ are constant values; $$ \nabla $$ denotes the gradient operator; $$ \phi $$ is the level set function; $$ g_{\beta} $$ is the edge indicator function defined in Equation (29); $$ H_{\varepsilon}(x) $$ and $$ \delta_{\varepsilon}(x) $$ are the regularized Heaviside and Dirac functions denoted in Equation (7) and Equation (8), respectively; and the double well potential function $$ p_{2}(s) $$ and its associated derivative $$ p_{2}^{\prime}(s) $$ are defined respectively as follows:

$$ \begin{equation} p_{2}(s)= \begin{cases}\frac{1}{(2 \pi)^{2}}(1-\cos (2 \pi s)), & s \leq 1, \\ \frac{1}{2}(s-1)^{2}, & s \geq 1, \end{cases} \end{equation} $$

$$ \begin{equation} p_{2}^{\prime}(s)= \begin{cases}\frac{1}{2 \pi} \sin (2 \pi s), &s \leq 1, \\ s-1, &s \geq 1.\end{cases} \end{equation} $$

Employing the steepest gradient descent method to minimize the energy function in Equation (31) obtains the following gradient descent flow equation:

$$ \begin{equation} \frac{\partial \phi}{\partial t}=\mu_{1} \operatorname{div}\left(d_{p_{2}}(|\nabla \phi|) \nabla \phi\right)+\mu_{2} \delta_{\varepsilon}(\phi) \operatorname{div}\left(g \frac{\nabla \phi}{|\nabla \phi|}\right)+\mu_{3} g \delta_{\varepsilon}(\phi), \end{equation} $$

where div($$ s $$ ) denotes vector divergence; the evolution speed function $$ d_{p_{2}}(s) $$ is defined as

$$ \begin{equation} d_{p_{2}}(s)=\frac{p_{2}^{\prime}(s)}{s}=\begin{cases}\frac{1}{2 \pi s} \sin (2 \pi s), & s \leq 1, \\ 1- \frac{1}{s}, & s \geq 1.\end{cases} \end{equation} $$

DRLSE model incorporates the distance regularization function to correct the deviation between the level set function and the signed distance function, which means that the level set function no longer requires the re-initialization operation in the iterative process. However, this model has several drawbacks, as follows:

● The area term utilized to facilitate the evolution speed of the zero level set has a single sign (positive or negative), which means the contour can only shrink or only expand during the process of energy minimization. In a word, this model has no self-adjustment ability and cannot realize adaptive segmentation.

● The area and length terms are highly dependent on the edge indicator function constructed from the gradient of the input image. The edge indicator function will be almost zero where the gradient is large, while Gaussian filtering renders the target boundary blurry and wider. In this case, the target boundaries may become interconnected due to Gaussian smoothing when the distance between targets is very small, which results in segmentation failure.

● The constant $$ \mu_{3} $$ must be set manually, which has a great influence on the segmentation results. The evolution speed will be slowed down if $$ \mu_{3} $$ is chosen too small, and it will be so large that the target boundary leaks if $$ \mu_{3} $$ is set too big.

● The evolution speed function $$ d_{p_{2}}(s) $$ has a maximum value of $$ 1 $$ when $$ s = 0 $$ , which makes the evolution curve move so quickly that it may intrude into the target. In addition, the evolution speed function $$ d_{p_{2}}(s) $$ has a small slope when $$ s = 1 $$ , which leads to slow evolution speed.
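The two properties noted in the last item can be checked numerically; the following sketch simply evaluates $$ d_{p_{2}}(s) $$ from Equation (35), with the $$ s=0 $$ limit handled explicitly (sample points are illustrative):

```python
import numpy as np

def d_p2(s):
    """Evolution speed d_{p2}(s) = p2'(s)/s of Equation (35); the s -> 0
    limit of sin(2*pi*s)/(2*pi*s) equals 1."""
    s = np.asarray(s, dtype=float)
    out = np.empty_like(s)
    small = s <= 1.0
    with np.errstate(invalid="ignore", divide="ignore"):
        out[small] = np.sin(2*np.pi*s[small]) / (2*np.pi*s[small])
    out[~small] = 1.0 - 1.0 / s[~small]
    out[s == 0] = 1.0                   # remove the 0/0 at s = 0
    return out

# Maximum value 1 at s = 0, and zero speed at the well |grad phi| = 1.
vals = d_p2(np.array([0.0, 0.25, 1.0, 2.0]))
```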

3.2.3. Adaptive level set evolution model

To solve the issue of unidirectional motion of area term in DRLSE model, ALSE model [69] adds an adaptive sign variable parameter to the area term of the energy function, so that the evolution curve can iterate according to the current position and choose the direction independently. Its corresponding gradient flow function is defined as

$$ \begin{equation} \frac{\partial \phi^{ALSE}}{\partial t}=\mu\left(\Delta \phi-\operatorname{div}\left(\frac{\nabla \phi}{|\nabla \phi|}\right)\right)+\lambda g \operatorname{div}\left(\frac{\nabla \phi}{|\nabla \phi|}\right) \delta(\phi)+v\left(I, c_3, c_4\right) g \delta(\phi), \end{equation} $$

where $$ \mu $$ and $$ \lambda $$ are constants; div$$ (s) $$ signifies the divergence of vector; the edge indicator function $$ g $$ is denoted as

$$ \begin{equation} g(I)=\exp \left(-\frac{\left|\nabla I_{\sigma}\right|}{k_{3}}\right), \end{equation} $$

where $$ k_{3} $$ is a positive constant used to control the slope of the edge indicator function; the sign function $$ v\left(I, c_{3}, c_{4}\right) $$ in the area term $$ v\left(I, c_{3}, c_{4}\right) g \delta_{\varepsilon}(\phi) $$ is defined as

$$ \begin{equation} v\left(I, c_{3}, c_{4}\right)=\alpha \operatorname{sign}\left(I(x, y)-\frac{c_{3}+c_{4}}{2}\right), \end{equation} $$

with

$$ \begin{equation} \large \left\{\begin{array}{l} c_{3}=\frac{\int_{\Omega} I(x, y) H_{\varepsilon}(-\phi) \mathrm{d} x \mathrm{\; d} y}{\int_{\Omega} H_{\varepsilon}(-\phi) \mathrm{d} x \mathrm{\; d} y}, \\ c_{4}=\frac{\int_{\Omega} I(x, y) H_{\varepsilon}(\phi) \mathrm{d} x \mathrm{\; d} y}{\int_{\Omega} H_{\varepsilon}(\phi) \mathrm{d} x \mathrm{\; d} y}. \end{array}\right. \end{equation} $$

In Equation (36), the gradient flow function is composed of three parts: the first part $$ \mu\left(\Delta \phi-\operatorname{div}\left(\frac{\nabla \phi}{|\nabla \phi|}\right)\right) $$ based on the distance rule term is used to reduce the error between the level set function and the signed distance function, which gets rid of re-initialization during the process of iteration; the second part $$ \lambda g \operatorname{div}\left(\frac{\nabla \phi}{|\nabla \phi|}\right) \delta(\phi) $$ is the length constraint term that is utilized to enhance the effect of shortening and smoothing the zero-level contour, which effectively maintains the regularity of the evolution curve; the third part $$ v\left(I, c_3, c_4\right) g \delta(\phi) $$ is the area term with variable coefficients, which is used to adjust the magnitude and direction of the evolution contour line.

Compared with DRLSE model, this model introduces the weighted coefficient $$ v\left(I, c_{3}, c_{4}\right) $$ in Equation (38) to substitute the constant value $$ \mu_{3} $$ in the area term in Equation (34). The direction of this sign function $$ v\left(I, c_{3}, c_{4}\right) $$ is determined by the difference between $$ I(x, y) $$ and the midpoint of the mean image values outside and inside the contour line. Therefore, the gradient flow function can adjust the direction of evolution motion with respect to the image grayscale information inside and outside the initial contour, which solves the issue of unidirectional motion of the zero level set in DRLSE model and improves the robustness of the initial contour. However, issues such as the tendency to fall into false boundaries, leaking from weak edges, and poor anti-noise ability remain unsolved.
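A sketch of the adaptive sign coefficient of Equations (38) and (39), using an arctan-smoothed Heaviside and a toy two-phase image of our own choosing (the identity $$ H_{\varepsilon}(-\phi)=1-H_{\varepsilon}(\phi) $$ is used for the inside/outside weights):

```python
import numpy as np

def heaviside_eps(phi, eps=1.0):
    """Smoothed Heaviside; note H_eps(-phi) = 1 - H_eps(phi)."""
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))

def adaptive_sign(image, phi, alpha=1.0):
    """v(I, c3, c4) of Equation (38): the sign flips depending on whether
    I lies above or below the midpoint of the weighted mean intensities
    c3, c4 of Equation (39), taken on the phi < 0 and phi > 0 sides."""
    h = heaviside_eps(phi)
    c3 = np.sum(image * (1.0 - h)) / np.sum(1.0 - h)  # mean, phi < 0 side
    c4 = np.sum(image * h) / np.sum(h)                # mean, phi > 0 side
    return alpha * np.sign(image - 0.5 * (c3 + c4)), c3, c4

img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0   # bright target, dark background
phi = np.where(img > 0.5, 1.0, -1.0)              # contour on the target boundary
v, c3, c4 = adaptive_sign(img, phi)               # v is +1 inside, -1 outside
```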

3.2.4. Fuzzy c-means model

To realize bidirectional motion of the zero level set and accomplish adaptive segmentation, FCM model [41] links an optimized FCM algorithm that calculates local image intensity with optimized adaptive functions, which resolves the issues of leaking from weak boundaries and slow computation.

FCM energy function is constructed as

$$ \begin{equation} \begin{aligned} E^{FCM}(\phi)=& k_{1} R_{p_{w}}(\phi)+ k_{2} L_{g_{\beta_{1}}}(\phi)+\varphi\left(I_{\sigma_{1}}, C_{1}, C_{2}\right) A_{g_{\beta_{1}}}(\phi) \\ =& k_{1} \int_{\Omega} p_{w}(|\nabla \phi|) d x+ k_{2} \int_{\Omega} g_{\beta_{1}} \delta_{\varepsilon}(\phi)|\nabla \phi| d x +\varphi\left(I_{\sigma_{1}}, C_{1}, C_{2}\right) \int_{\Omega} g_{\beta_{1}} H_{\varepsilon}(-\phi) d x, \end{aligned} \end{equation} $$

where $$ k_{1} $$ and $$ k_{2} $$ are positive constants; $$ \phi $$ is the level set function; $$ I_{\sigma_{1}} $$ is the image vector after Gaussian filtering; $$ C_{1} $$ and $$ C_{2} $$ denote the two FCM clustering results; $$ H_{\varepsilon}(x) $$ and $$ \delta_{\varepsilon}(x) $$ are the regularized Heaviside and Dirac functions denoted in Equation (7) and Equation (8), respectively; the adaptive edge indicator function $$ g_{\beta_{1}}(I) $$ is described as

$$ \begin{equation} g_{\beta_{1}}(I)=\frac{1}{1+\left|\nabla I_{\sigma_{1}}\right|^{2} / \beta_{1}^{2}}, \end{equation} $$

with

$$ \begin{equation} \beta_{1}(I)=\frac{1+\sqrt{S\left(I_{\sigma_{1}}\right)}}{3}, \end{equation} $$

where $$ S $$ denotes the standard deviation value of the image after Gaussian filtering; the adaptive sign function $$ \varphi\left(I_{\sigma_{1}}, C_{1}, C_{2}\right) $$ in the area term is defined as

$$ \begin{equation} \varphi\left(I_{\sigma_{1}}, C_{1}, C_{2}\right)= \eta \arctan \left[\left(I_{\sigma_{1}}-\frac{C_{1}+C_{2}}{2}\right) / \tau\right]. \end{equation} $$

In Equation (43), $$ \eta $$ and $$ \tau $$ are positive constant values; $$ I_{\sigma_{1}}=G_\sigma * I $$ ; the two clustering results $$ C_{1}, C_{2} $$ are obtained through the cluster centroids $$ c_{j, 1} $$ and membership function $$ \mu_j\left(x_i\right) $$ , which are described respectively as follows:

$$ \begin{equation} \large c_{j, 1}=\frac{\sum\nolimits_{i=1}^{n \times(2 \omega+1)^2}\left[\mu_j\left(x_i\right)\right]^\alpha x_i}{\sum\nolimits_{i=1}^{n \times(2 \omega+1)^2}\left[\mu_j\left(x_i\right)\right]^\alpha}, \end{equation} $$

$$ \begin{equation} \large \mu_j\left(x_i\right)=\frac{\left|x_i-c_{j, 1}\right|^{\frac{-2}{\alpha-1}}}{\sum\nolimits_{s=1}^k\left|x_i-c_{s, 1}\right|^{\frac{-2}{\alpha-1}}}, \end{equation} $$

where the image size of $$ I(x) $$ is $$ m \times n $$ ; $$ x_i $$ signifies the ith pixel in the first row of the image region; the weighting exponent $$ \alpha $$ equals 2; $$ n \times(2 \omega+1)^2 $$ is the total number of elements in a particular sample; $$ (2 \omega+1) $$ signifies the width of the square frame. In FCM model, the cluster number $$ k $$ equals 2, with $$ C_{1}=c_{1, 1} $$ and $$ C_{2}=c_{2, 1} $$ .
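A compact sketch of the two-cluster updates of Equations (44) and (45) on a one-dimensional set of gray values; the initialization, iteration count, and toy samples are illustrative assumptions:

```python
import numpy as np

def fcm_two_clusters(samples, n_iter=50, alpha=2.0):
    """Alternate the membership update of Equation (45) and the centroid
    update of Equation (44) for k = 2 clusters (weighting exponent alpha)."""
    c = np.array([samples.min(), samples.max()], dtype=float)  # initial centroids
    eps = 1e-10                                                # avoid 0^(negative)
    for _ in range(n_iter):
        # Membership: mu_j(x) proportional to |x - c_j|^(-2/(alpha-1)).
        d = np.abs(samples[:, None] - c[None, :]) + eps
        inv = d ** (-2.0 / (alpha - 1.0))
        mu = inv / inv.sum(axis=1, keepdims=True)
        # Centroids: weighted mean with weights mu^alpha.
        w = mu ** alpha
        c = (w * samples[:, None]).sum(axis=0) / w.sum(axis=0)
    return c, mu

# Gray values drawn from two well-separated intensity groups.
samples = np.concatenate([np.full(50, 0.1), np.full(50, 0.9)])
centroids, memberships = fcm_two_clusters(samples)
```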

For the purpose of maintaining evolution stability, the potential function $$ p_{w}(s) $$ and its corresponding $$ p_{w}{ }^{\prime}(s) $$ are denoted respectively as follows:

$$ \begin{equation} p_{w}(s)=\frac{1}{2} s^{2}+\frac{w}{2} \exp \left[-\left(\frac{s-0.75}{w}\right)^{2}\right]+0.375 \sqrt{\pi} \operatorname{erf}\left(\frac{s-0.75}{w}\right), \end{equation} $$

$$ \begin{equation} p_{w}{ }^{\prime}(s)=\frac{4}{3} s\left\{0.75-\exp \left[-\frac{(s-0.75)^{2}}{w^{2}}\right]\right\}, \end{equation} $$

where $$ \text{erf}(\cdot) $$ denotes the Gaussian error function; $$ w $$ is equal to $$ 0.465 $$ ; the evolution speed function $$ d_{p_{w}}(s) $$ is denoted as

$$ \begin{equation} d_{p_{w}}(s)=\frac{p_{w}{ }^{\prime}(s)}{s}=\frac{4}{3}\left\{0.75-\exp \left[-\frac{(s-0.75)^{2}}{w^{2}}\right]\right\}. \end{equation} $$

The evolution speed function $$ d_{p_{w}}(s) $$ in Equation (48) is inspired by the evolution speed function $$ d_{p_{2}}(s) $$ in Equation (35). In DRLSE model [31], the evolution speed function $$ d_{p_{2}}(s) $$ in Equation (35) has a slow final convergence speed due to a small slope at the one well potential $$ |\nabla \phi|=1 $$ as shown in Figure 2. Note that the one well potential is defined at $$ |\nabla \phi|=1 $$ by convention [31]. The motivation of $$ d_{p_{w}}(s) $$ is to raise the slope at the one well potential $$ |\nabla \phi|=1 $$ to solve the issue of the slow convergence speed of $$ d_{p_{2}}(s) $$ in Equation (35), which also increases the sensitivity of the distance regularized term $$ k_{1} \int_{\Omega} p_{w}(|\nabla \phi|) d x $$ in Equation (40).
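Assuming $$ w=0.465 $$ as above, the claimed slope increase at the one well potential can be verified numerically with central differences; this is a sketch of our own, not part of the original model:

```python
import numpy as np

def d_p2(s):
    """Equation (35): DRLSE speed; sin(2*pi*s)/(2*pi*s) equals np.sinc(2*s)."""
    return np.where(s <= 1.0, np.sinc(2.0 * s), 1.0 - 1.0 / s)

def d_pw(s, w=0.465):
    """Equation (48): the FCM speed function with a steeper profile near s = 1."""
    return (4.0 / 3.0) * (0.75 - np.exp(-((s - 0.75)**2) / w**2))

# Numerical slopes at the well |grad phi| = 1 (central differences).
h = 1e-5
slope_p2 = (d_p2(1.0 + h) - d_p2(1.0 - h)) / (2 * h)
slope_pw = (d_pw(1.0 + h) - d_pw(1.0 - h)) / (2 * h)
```

Both branches of $$ d_{p_{2}}(s) $$ have slope 1 at $$ s=1 $$ , whereas $$ d_{p_{w}}(s) $$ is more than twice as steep there, consistent with the faster final convergence claimed above.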

Applying the gradient descent method to minimize the energy function in Equation (40) obtains the following gradient descent flow equation:

$$ \begin{equation} \frac{\partial \phi^{FCM}}{\partial t}=-\frac{\partial E^{FCM}(\phi)}{\partial \phi}=k_{1} \operatorname{div}\left(d_{p_{w}}(|\nabla \phi|) \nabla \phi\right) +k_{2} \delta_{\varepsilon}(\phi) \operatorname{div}\left(g_{\beta_{1}} \frac{\nabla \phi}{|\nabla \phi|}\right) +\varphi\left(I_{\sigma_{1}}, C_{1}, C_{2}\right) g_{\beta_{1}} \delta_{\varepsilon}(\phi). \end{equation} $$

In Equation (49), the gradient descent flow function is mainly made up of two components. The first component is the internal energy part that contains the distance regularized term ($$ k_{1} \operatorname{div}\left(d_{p_{w}}(|\nabla \phi|) \nabla \phi\right) $$ ), which offsets the deviation between the signed distance function (SDF) and the level set function to resolve the problem of repeated initialization during the process of evolution. The second component is the external energy part that consists of the length constraint part ($$ k_{2} \delta_{\varepsilon}(\phi) \operatorname{div}\left(g_{\beta_{1}} \frac{\nabla \phi}{|\nabla \phi|}\right) $$ ) and the area part ($$ \varphi\left(I_{\sigma_{1}}, C_{1}, C_{2}\right) g_{\beta_{1}} \delta_{\varepsilon}(\phi) $$ ). The length constraint term is utilized to guide the zero level set to evolve towards the target boundary as well as control the contour length due to the effect of the adaptive edge indicator function $$ g_{\beta_{1}} $$ . The area term is used to adjust the contour velocity through the effect of the adaptive sign function $$ \varphi\left(I_{\sigma_{1}}, C_{1}, C_{2}\right) $$ , which achieves bidirectional evolution with respect to image grayscale value.

To better understand the working mechanism of FCM model, the corresponding flow chart is illustrated in Figure 1. Note that the convergence criterion is set as $$ \left|\left(S_{i+5}-S_i\right) / S\right|<10^{-5} $$ , and $$ S $$ signifies the entire area of input image.


Figure 1. The flow chart of FCM model.

FCM model is characterized by pre-fitting the two fuzzy centroids inside and outside the contour line using the local area-based fuzzy C-means clustering principle before iteration to construct an adaptive edge indicator function, which solves the one-way motion problem of the edge-based level set model. However, the FCM algorithm applied in this model is unoptimized and complex, which results in a relatively long segmentation time. In addition, segmentation may fail in the form of falling into false boundaries if the FCM algorithm performs poorly.

3.3. Hybrid ACMs

3.3.1. Optimized local pre-fitting image model

To achieve better segmentation accuracy and reduce computation cost, OLPFI model [70] is proposed to associate region-based attributes and edge-based attributes through mean local pre-fitting functions, which is capable of effectively segmenting images with uneven intensity and noise disturbance.

OLPFI energy function is defined as

$$ \begin{equation} e^{\text {OLPFI }}(\phi(x))= \frac{A}{2} \int_{\Omega} \int_{\Omega} K_{\sigma}(y-x)\left|I(x)-f^{L P F I}(x, \phi(x))\right|^{2} g_{e}(x) dx d y. \end{equation} $$

Note that $$ A $$ is a positive variable used to manually adjust segmentation speed according to the target size; $$ K_{\sigma} $$ is the Gaussian kernel function with standard deviation $$ \sigma $$ ; the edge indicator function $$ g_{e}(\mathrm{x}) $$ is defined as

$$ \begin{equation} g_{e}(x)=1-\frac{2}{\pi} \arctan \left(\left|\nabla\left(I * K_{\sigma}\right)\right|^{2} / \tau\right), \end{equation} $$

where $$ \phi $$ is the level set function; $$ \tau=\operatorname{std} 2(I(x)) $$ is the standard deviation of the image in the matrix form.

There are $$ 4 $$ variables $$ A $$ , $$ \sigma $$ , $$ w $$ , $$ k $$ to be manually calibrated to achieve ideal segmentation results. Specifically, variable $$ A $$ is used to adjust the segmentation speed according to the target size to prevent issues of under-segmentation or over-segmentation; variable $$ \sigma $$ in the Gaussian kernel function $$ K_{\sigma} $$ is properly adjusted to collect image information locally with respect to object size for the same purpose; variable $$ k $$ is the magnitude of the average filter, which is appropriately calibrated to filter out irrelevant information and obtain a smoothed final contour; variable $$ w $$ is the size of the small local region, which should be increased or decreased properly to entirely cover targets in the input image. In addition, variable $$ w $$ should be increased to filter out noise and unrelated pixel information while segmenting images with strong noise disturbance.

In Equation (50), the local pre-fitted image (LPFI) function is defined as

$$ \begin{equation} f^{L P F I}(x, \phi(x))=L_{1}(x) H_{\varepsilon}(\phi(x))+L_{2}(x)\left(1-H_{\varepsilon}(\phi(x))\right), \end{equation} $$

with $$ L_{1}(x) $$ and $$ L_{2}(x) $$

$$ \begin{equation} \left\{\begin{array}{l} L_{1}(x)=\min \left[I(y) \mid y \in \Omega_{x}\right], \\ L_{2}(x)=\max \left[I(y) \mid y \in \Omega_{x}\right], \end{array}\right. \end{equation} $$

where $$ \Omega_x $$ denotes a small rectangular local area of size $$ (2 w+1)^2 $$ centered at point x; $$ I(y) $$ denotes the gray values of all points $$ y $$ in $$ \Omega_x $$ ; the two pre-fitting functions $$ L_{1}(x), L_{2}(x) $$ of OLPFI model are inspired by the local fitting functions $$ m_{1}(x), m_{2}(x) $$ of LIF model in Equation (21). Specifically, $$ m_{1}(x), m_{2}(x) $$ in Equation (21) have to be computed for curve evolution during each iteration, which means that n iterations will calculate $$ m_{1}(x), m_{2}(x) $$ n times. Therefore, LIF model has slow segmentation speed and heavy computation cost due to the unoptimized fitting functions $$ m_{1}(x), m_{2}(x) $$ . To address this issue, the pre-fitting functions $$ L_{1}(x), L_{2}(x) $$ in Equation (53) quickly pre-calculate the foreground and background of the input image ahead of the iteration process and are independent of it, which saves a great amount of computation time and confers much faster segmentation speed than LIF model. Utilizing the standard descent method to minimize the energy function in Equation (50) obtains the following gradient descent flow function:

$$ \begin{equation} \frac{\partial \phi}{\partial \mathrm{t}}=-\frac{\partial E^{O L P F I}}{\partial \phi}=-A \delta_{\varepsilon}(\phi) \cdot\left(L_{1}-L_{2}\right) \cdot g_{e}(x) \int_{\Omega} K_{\sigma}(y-x)\left(I-f^{L P F I}\right) d y. \end{equation} $$

Note that the global minimizer can be computed point by point by simply solving $$ \phi(x):=\underset{\psi \in \mathbb{R}}{\operatorname{argmin}}\left|I(x)-f^{L P F I}(x, \psi)\right| $$ . However, to follow the convention, the partial differential equation (PDE) method is applied to conduct the optimization process. The PDE approach also provides an opportunity to regularize the level set function at each time step as described in Equation (58), which improves the segmentation performance.
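Because $$ L_{1} $$ and $$ L_{2} $$ in Equation (53) are plain local minima and maxima, they can be pre-computed once with standard min/max filters before the iterations start; the window half-width and toy image below are illustrative choices:

```python
import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter

def olpfi_prefitting(image, w=4):
    """L1, L2 of Equation (53): local min / max of I over the (2w+1)^2
    window around each pixel, computed once before iteration begins."""
    size = 2 * w + 1
    L1 = minimum_filter(image, size=size)   # local foreground/background floor
    L2 = maximum_filter(image, size=size)   # local foreground/background ceiling
    return L1, L2

img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0   # bright square on dark background
L1, L2 = olpfi_prefitting(img)
```

Unlike the LIF fitting functions, these two arrays never change during curve evolution, which is the source of the speed-up discussed above.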

In order to improve segmentation performance, Equation (54) is rewritten as

$$ \begin{equation} \frac{\partial \phi^{OLPFI}}{\partial \mathrm{t}}=-A \delta_{\varepsilon}(\phi) \cdot \operatorname{esign}(\frac{h(x)}{\tau} \cdot \mathrm{g}_{e}(x)), \end{equation} $$

where $$ \operatorname{esign}{(\cdot)} $$ and $$ h(x) $$ are defined respectively as

$$ \begin{equation} \operatorname{esign}(x)=\operatorname{sign}(x)\left(1-e^{-x^{2}}\right), \end{equation} $$

$$ \begin{equation} \begin{aligned} h(x)=\left(L_{1}-L_{2}\right)\left[I\left(K_{\sigma} * \textbf{1}\right)-K_{\sigma} *\left(\left(L_{1}-L_{2}\right) H_{\varepsilon}(\phi)\right)-K_{\sigma} * L_{2}\right]. \end{aligned} \end{equation} $$

In order to effectively regularize the level set function and smooth evolution curve, a regularization function $$ \phi_{R} $$ and a length constraint function $$ \phi_{L} $$ are respectively defined as

$$ \begin{equation} \left\{\begin{array}{l} \begin{aligned} \phi_{R} &=\operatorname{esign}\left(8 \cdot \phi^{i+1}\right), \\ \phi_{L} &=\operatorname{mean}\left(\phi_{R}(y) \mid y \in \Omega_{\mathrm{x}}\right), \end{aligned} \end{array}\right. \end{equation} $$

where the regularization function $$ \phi_{R} $$ is used to regularize the level set function $$ \phi $$ to generate a more stable evolution environment; length constraint function $$ \phi_{L} $$ is utilized to get rid of unrelated curves as well as shorten and smooth evolution curves through an average filter with size $$ k \times k $$ .
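For concreteness, the esign-based regularization of Equations (56) and (58) can be sketched as follows; the $$ k \times k $$ average filter implements the length constraint $$ \phi_L $$, and the default window size k is an illustrative choice, not a value prescribed by the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def esign(x):
    """Smoothed sign function of Equation (56): sign(x) * (1 - exp(-x^2))."""
    return np.sign(x) * (1.0 - np.exp(-x ** 2))

def regularize_level_set(phi, k=5):
    """Regularization of Equation (58): phi_R = esign(8 * phi) squashes the
    level set function toward the binary values -1/+1, and the k x k average
    filter (length constraint phi_L) removes isolated contours and smooths
    the zero level set."""
    phi_R = esign(8.0 * phi)
    phi_L = uniform_filter(phi_R, size=k, mode="nearest")
    return phi_L
```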

In Equation (58), $$ \phi^{i+1} $$ is the level set function $$ \phi $$ at the (i+1)-th iteration, which follows the level set evolution function defined as

$$ \begin{equation} \phi^{i+1}=\phi^i+\Delta t \cdot \frac{\partial \phi^{OLPFI}}{\partial t}, \end{equation} $$

where $$ \Delta t $$ is the time interval; $$ \phi^{i} $$ is the level set function $$ \phi $$ at the i-th iteration; $$ \partial \phi^{OLPFI} / \partial t $$ is defined in Equation (55). The OLPFI model combines local pre-fitting image functions based on mean intensity with an edge indicator function to segment images with uneven intensity and noise interference, which achieves relatively high segmentation accuracy. The pre-fitting functions pre-compute local image intensity ahead of the iterations, which achieves fast image segmentation and greatly reduces the computation cost. However, under-segmentation may take place when segmenting images with noise interference due to the traditional Euclidean distance. In addition, boundary leaking may sometimes occur in the form of broken curves when segmenting images with large objects.
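The explicit update of Equation (59) is a plain forward Euler step. A generic sketch, with a caller-supplied velocity term standing in for the gradient descent flow of Equation (55), looks like:

```python
import numpy as np

def evolve_level_set(phi0, velocity, dt=0.1, n_iter=100):
    """Explicit (forward Euler) level set update of Equation (59):
    phi^{i+1} = phi^i + dt * d(phi)/dt, where `velocity(phi)` returns the
    gradient descent flow d(phi)/dt (Equation (55) for the OLPFI model)."""
    phi = phi0.copy()
    for _ in range(n_iter):
        phi = phi + dt * velocity(phi)
    return phi
```

With an explicit scheme, the time interval $$ \Delta t $$ must stay small enough for stable evolution, which is why the iteration numbers N reported later run into the hundreds.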

3.3.2. Pre-fitting energy model

To obtain better segmentation precision and decrease CPU elapsed time, the PFE model [47] combines median pre-fitting functions with optimized adaptive functions, which solves the issue of unidirectional motion of the evolution curve and greatly reduces the computation cost.

PFE energy function is constructed as

$$ \begin{equation} \begin{aligned} E^{PFE}(\phi)=& n_{1} R_{p_{3}}(\phi)+n_{2} L_{g_{m}}(\phi)+\varphi_{1}\left(I_{\sigma}, c_{l}, c_{s}\right) A_{g_{m}}(\phi) \\ =& n_{1} \int_{\Omega} p_{3}(|\nabla \phi|) d x+n_{2} \int_{\Omega} g_{m} \delta_{\varepsilon}(\phi)|\nabla \phi| d x +\varphi_{1}\left(I_{\sigma}, c_{l}, c_{s}\right) \int_{\Omega} g_{m} H_{\varepsilon}(-\phi) d x, \end{aligned} \end{equation} $$

where $$ n_{1} $$ , $$ n_{2} $$ , $$ n_{3} $$ are positive constants; $$ \phi $$ is the level set function; $$ I_{\sigma} $$ is the image after Gaussian filtering; $$ H_{\varepsilon}(\phi(x)) $$ and $$ \delta_{\varepsilon}(\phi(x)) $$ are the approximated Heaviside and Dirac functions defined in Equation (7) and Equation (8), respectively; $$ \varphi_{1}\left(I_{\sigma}, c_{l}, c_{s}\right) $$ denotes the adaptive sign function $$ \varphi\left(I_{\sigma}, c_{l}, c_{s}\right) $$, which is described as

$$ \begin{equation} \varphi\left(I_{\sigma}, c_{l}, c_{s}\right)=n_{3} \arctan \left[\left(I_{\sigma}-\frac{c_{l}+c_{s}}{2}\right) / \tau\right]. \end{equation} $$

Note that $$ \tau=\operatorname{std} 2(I(\mathrm{x})) $$ is the standard deviation of the image in the matrix form; the adaptive edge indicator function $$ g_{m}(I) $$ is

$$ \begin{equation} g_{m}(I)=1-\tanh \frac{\left|\nabla K_{\sigma} * I\right|^{2}}{m}, \end{equation} $$

$$ \begin{equation} m(I)=2 S\left(I_{\sigma}\right), \end{equation} $$

with $$ S $$ denoting the standard deviation operator; $$ I_\sigma=K_\sigma * I $$ is the image after Gaussian filtering, and $$ K_\sigma $$ is a Gaussian filtering template with standard deviation $$ \sigma $$ ; the two pre-fitting functions $$ c_{l} $$ , $$ c_{s} $$ are defined as

$$ \begin{equation} \left\{\begin{array}{l} f_{\text {median }}({\bf{x}})=\operatorname{median}\left(I({\bf{y}}) \mid {\bf{y}} \in \Omega_{{\bf{x}}}\right), \\ c_{l}({\bf{x}})=\operatorname{mean}\left(I({\bf{y}}) \mid {\bf{y}} \in \Omega_{l}\right), \\ c_{s}({\bf{x}})=\operatorname{mean}\left(I({\bf{y}}) \mid {\bf{y}} \in \Omega_{s}\right), \end{array}\right. \end{equation} $$

where $$ f_{\text {median}} $$ denotes the median intensity in a local area $$ \Omega_{{\bf{x}}} $$ centered at point $$ {\bf{x}} $$ with radius w; $$ c_{l} $$ and $$ c_{s} $$ are the average intensities in $$ \Omega_{l} $$ and $$ \Omega_{s} $$ , respectively; $$ I({\bf{y}}) $$ signifies a local area at center point $$ {\bf{y}} $$ ; parameter $$ {\bf{x}} $$ denotes the center point of the initial contour. As noted, the fitting functions of the RSF model in Equation (16) are unoptimized and complex, which results in huge computation cost and low segmentation efficiency. To overcome this drawback, the pre-fitting functions of the PFE model in Equation (64) quickly fit the foreground and background before the iteration process takes place and are independent of it, which dramatically saves computation cost and increases segmentation efficiency.

In Equation (64), $$ \Omega_{l} $$ and $$ \Omega_{s} $$ are respectively defined as follows:

$$ \begin{equation} \left\{\begin{array}{l} \Omega_{l}=\left\{{\bf{y}} \mid I({\bf{y}})>f_{\text {median }}({\bf{x}})\right\} \cap \Omega_{{\bf{x}}}, \\ \Omega_{s}=\left\{{\bf{y}} \mid I({\bf{y}})<f_{\text {median }}({\bf{x}})\right\} \cap \Omega_{{\bf{x}}}. \end{array}\right. \end{equation} $$

Note that $$ \Omega_{l} $$ is the local region inside $$ \Omega_{{\bf{x}}} $$ , where the image intensities are all bigger than $$ f_{\text {median}} $$ in $$ \Omega_{{\bf{x}}} $$ ; $$ \Omega_{s} $$ is the local region inside $$ \Omega_{{\bf{x}}} $$ , where the image intensities are all less than $$ f_{\text {median}} $$ in $$ \Omega_{{\bf{x}}} $$ .
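A direct, if naive, realization of Equations (64) and (65) can be written as below. The brute-force window loop is for clarity only (a real implementation would vectorize it), and the fallback to the local median in windows where $$ \Omega_{l} $$ or $$ \Omega_{s} $$ is empty is an assumption not spelled out in the paper.

```python
import numpy as np
from scipy.ndimage import median_filter

def median_prefit(img, w):
    """Median pre-fitting of Equations (64)-(65): f_median is the local
    median over Omega_x; c_l / c_s average the window pixels lying above /
    below that median.  All three maps are computed once, before the
    level set iteration starts."""
    size = 2 * w + 1
    f_med = median_filter(img, size=size, mode="nearest")
    c_l = np.empty_like(img, dtype=float)
    c_s = np.empty_like(img, dtype=float)
    pad = np.pad(img, w, mode="edge")
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            win = pad[i:i + size, j:j + size]
            above = win[win > f_med[i, j]]   # Omega_l: intensities above the median
            below = win[win < f_med[i, j]]   # Omega_s: intensities below the median
            c_l[i, j] = above.mean() if above.size else f_med[i, j]
            c_s[i, j] = below.mean() if below.size else f_med[i, j]
    return f_med, c_l, c_s
```

By construction $$ c_{l} \geq f_{\text{median}} \geq c_{s} $$ everywhere, which is what lets the adaptive sign function of Equation (61) switch polarity on either side of the local median.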

The single-well potential function $$ p_{3}(s) $$ and its corresponding evolution speed function $$ d_{p_{3}}(s) $$ are constructed as

$$ \begin{equation} p_{3}(s)=\left\{\begin{array}{ll} \frac{1}{3} s^{3}+\frac{1}{\pi^{3}} \sin (\pi(s-1))-\frac{s}{\pi^{2}} \cos (\pi(s-1))-\frac{1}{2} s^{2}+\frac{1}{6}+\frac{1}{\pi^{2}}, & s \in[0, 1], \\ \frac{1}{2} s^{2}-\arctan \left(s^{2}\right)-\frac{1}{2}+\frac{\pi}{4}, & s \in[1, +\infty), \end{array}\right. \end{equation} $$

$$ \begin{equation} d_{p_{3}}(s)=\frac{p_{3}^{\prime}(s)}{s}=\left\{\begin{array}{l} s+\frac{1}{\pi} \sin (\pi(s-1))-1, s \in[0, 1], \\ 1-\frac{2}{1+s^{4}}, s \in(1, \infty). \end{array}\right. \end{equation} $$

The evolution speed function $$ d_{p_{3}}(s) $$ in Equation (67) is inspired by the evolution speed functions $$ d_{p_{2}}(s) $$ of the DRLSE model in Equation (35) and $$ d_{p_{w}}(s) $$ of the FCM model in Equation (48). In order to visualize these evolution speed functions and explain the differences among them, all three functions $$ d_{p_{2}}(s) $$ , $$ d_{p_{w}}(s) $$ , $$ d_{p_{3}}(s) $$ are plotted in Figure 2. In this figure, $$ d_{p_{2}}(s) $$ and $$ d_{p_{w}}(s) $$ achieve the maximum value of $$ 1 $$ at the zero well potential $$ (|\nabla \phi|=0) $$ , which makes their evolution speed too fast, so the evolution curve may invade the target interior. On the contrary, the evolution speed function $$ d_{p_{3}}(s) $$ attains the minimum value of $$ -1 $$ there, which decelerates the evolution to achieve stable evolution and avoid wrong segmentation. In addition, $$ d_{p_{3}}(s) $$ has the steepest slope at the unit well potential $$ (|\nabla \phi|=1) $$ among the three functions, which means it drives convergence faster than $$ d_{p_{2}}(s) $$ and $$ d_{p_{w}}(s) $$ and makes the distance regularization term more sensitive.


Figure 2. Comparison of the evolution speed functions $$ d_{p_{2}}(s) $$ , $$ d_{p_{w}}(s) $$ , $$ d_{p_{3}}(s) $$ of the DRLSE, FCM, and PFE models.
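The two branches of Equation (67) can also be checked numerically: $$ d_{p_{3}}(s) $$ attains $$ -1 $$ at $$ s=0 $$ , vanishes at the unit well $$ s=1 $$ (both branches agree there, so the function is continuous), and tends to $$ 1 $$ for large s.

```python
import numpy as np

def d_p3(s):
    """Evolution speed function of Equation (67) (PFE model), piecewise in
    s = |grad(phi)| and continuous at s = 1."""
    s = np.asarray(s, dtype=float)
    low = s + np.sin(np.pi * (s - 1.0)) / np.pi - 1.0   # branch for s in [0, 1]
    high = 1.0 - 2.0 / (1.0 + s ** 4)                   # branch for s in (1, inf)
    return np.where(s <= 1.0, low, high)
```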

Applying the steepest descent approach to minimize the energy function in Equation (60) yields the following gradient descent flow:

$$ \begin{equation} \frac{\partial \phi}{\partial t}=-\frac{\partial E^{PFE}(\phi)}{\partial \phi}=n_{1} \operatorname{div}\left(d_{p_{3}}(|\nabla \phi|) \nabla \phi\right) +n_{2} \delta_{\varepsilon}(\phi) \operatorname{div}\left(g_{m} \frac{\nabla \phi}{|\nabla \phi|}\right) +\varphi_{1}\left(I_{\sigma}, c_{l}, c_{s}\right) g_{m} \delta_{\varepsilon}(\phi). \end{equation} $$

In Equation (68), the gradient descent flow function consists of three parts. The first part denotes the internal energy ($$ n_{1} \operatorname{div}\left(d_{p_{3}}(|\nabla \phi|) \nabla \phi\right) $$) based on the distance regularized term, which minimizes the difference between the level set function and the signed distance function to solve the issue of repeated re-initialization during the iteration process. The second part signifies the external energy based on the length constraint part ($$ n_{2} \delta_{\varepsilon}(\phi) \operatorname{div}\left(g_{m} \frac{\nabla \phi}{|\nabla \phi|}\right) $$) and the area part ($$ \varphi_{1}\left(I_{\sigma}, c_{l}, c_{s}\right) g_{m} \delta_{\varepsilon}(\phi) $$). Specifically, the length constraint part guides the contour line to evolve towards the target boundary through the adaptive edge indicator function and adjusts the length of the contour line. The area part controls the velocity of the contour line with respect to the image gray-scale information through the adaptive sign function.
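The two adaptive quantities driving the area part, the sign function of Equation (61) and the edge indicator of Equations (62) and (63), can be sketched as below. The default $$ \sigma $$ is an illustrative choice, and computing $$ \tau $$ as the standard deviation of the raw image follows the definition $$ \tau=\operatorname{std} 2(I(\mathrm{x})) $$ given above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def adaptive_edge_indicator(img, sigma=1.5):
    """Adaptive edge indicator of Equations (62)-(63):
    g_m = 1 - tanh(|grad(I_sigma)|^2 / m), with m = 2 * std(I_sigma).
    Close to 1 in flat regions, dropping toward 0 on strong edges."""
    I_sigma = gaussian_filter(img, sigma)
    gy, gx = np.gradient(I_sigma)
    m = 2.0 * I_sigma.std()
    return 1.0 - np.tanh((gx ** 2 + gy ** 2) / max(m, 1e-8))

def adaptive_sign(img, c_l, c_s, sigma=1.5, n3=1.0):
    """Adaptive sign function of Equation (61): positive where the smoothed
    intensity exceeds the midpoint (c_l + c_s)/2 and negative below it, which
    is what lets the contour move in both directions."""
    I_sigma = gaussian_filter(img, sigma)
    tau = img.std()  # tau = std2(I), standard deviation of the image
    return n3 * np.arctan((I_sigma - (c_l + c_s) / 2.0) / max(tau, 1e-8))
```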

The PFE model combines an energy function based on median pre-fitting functions with adaptive functions, which realizes and accelerates the bidirectional evolution of the contour line and reduces the probability of edge leakage. In addition, this model is able to effectively handle images with uneven intensity. However, the issues of falling into false boundaries and insufficient segmentation at the boundary edge may sometimes happen while segmenting images with a large target.

4. EXPERIMENTAL RESULTS

Different kinds of ACMs have been reviewed in the previous sections, and $$ 12 $$ representative models (BC [32], RSF [34], LIF [35], LPF [48], RSF&LoG [39], OLPFI [70], PBC [57], LPF & FCM [41], LGJD [58], ABC [56], PFE [47], APFJD [49]) are selected to conduct comparison experiments on various images, including synthetic images, medical images, and natural images, and to compare their segmentation results (the CPU running time $$ T $$ , iteration number $$ N $$ , and IOU). All the models were programmed in MATLAB 2021a and run on an AMD Ryzen7 5800H 3.2GHz CPU, 16G RAM, an NVIDIA GeForce RTX 3060 6G GPU, and the 64-bit Windows 11 operating system. To illustrate the time consumed by convolution operations: for the APFJD model on image (a) in Figure 4, the CPU running time $$ T $$ is $$ 1.528 $$ seconds, and the computation of convolutions accounts for $$ 70\% $$ of that time. The relevant codes are available at https://github.com/sdjswgr.

Common evaluation criteria for assessing different segmentation approaches are segmentation time and segmentation quality. Segmentation time is evaluated through the CPU running time $$ T $$ and iteration number $$ N $$ : the smaller their values, the less segmentation time and the better the segmentation efficiency. In addition, segmentation quality is measured through intersection over union (IOU), which is described as

$$ \begin{equation} \mathrm{IOU}=\frac{\left|A_{1} \cap A_{1}^{G}\right|}{\left|A_{1} \cup A_{1}^{G}\right|}. \end{equation} $$

Note that $$ A_{1} $$ signifies the foreground region of the segmented image, while $$ A_{1}^{G} $$ denotes the foreground region of the ground-truth image. The IOU value measures the similarity between the foreground regions of the segmented image and the ground-truth image to evaluate the segmentation quality. This coefficient is bounded in $$ [0, 1] $$ , and the closer it is to $$ 1 $$ , the better the segmentation quality.
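Equation (69) translates directly into a few lines operating on binary foreground masks:

```python
import numpy as np

def iou(seg, gt):
    """Intersection over union (Equation (69)) between the foreground masks
    of a segmented image and its ground truth (boolean arrays)."""
    seg = seg.astype(bool)
    gt = gt.astype(bool)
    union = np.logical_or(seg, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(seg, gt).sum() / union
```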

4.1. Dataset characteristic

All images used in this paper are downloaded from a public open-source image library, the Berkeley Segmentation Data Set and Benchmarks 500 (BSDS500), available at https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/. Specifically, for the medical images (a-h) in Figure 4, images (a-b) are blood capillaries, image (c) is a CT of bone, image (d) is a bacteria embryo, image (e) is a kidney, image (f) is the internal structure of the brain, and images (g-h) are B-ultrasounds of the uterus. For the natural images (a-h) in Figure 5, image (a) is a maple leaf, image (b) is a shell, image (c) is a starfish, image (d) is a polar bear, image (e) is a bradypod, image (f) is a stone bench, image (g) is a plane, and image (h) is an eagle.

4.2. Segmentation experiment of synthetic images

Intensity non-uniformity and noise interference often occur in image segmentation. The segmentation results of the $$ 12 $$ ACMs on synthetic images (a-h) are shown in Figure 3. The first $$ 4 $$ images (a-d) represent images with intensity non-uniformity, while the last $$ 4 $$ images (e-h) represent images with noise interference. The associated segmentation quality (IOU) and segmentation time (the CPU running time $$ T $$ and iteration number $$ N $$) are summarized in Table 1. From this table, for images with intensity non-uniformity (a-d), the ABC, LIF, and PFE models obtain better segmentation results than the other models. In addition, for images with noise interference (e-h), the PFE, ABC, and LPF models achieve the best segmentation results. In particular, the PFE model achieves the most stable segmentation results across all images with noise interference (e-h) in Figure 3. In fact, the PFE model takes advantage of novel pre-fitting functions to quickly approximate the background and foreground of the input image ahead of the iteration process, which improves the stability of segmenting noisy images and saves computation costs.


Figure 3. The segmentation results of the first comparative experiment to segment synthetic images. The 1st row represents initial contours, the 2nd to 12th rows denote segmentation results of BC [32], RSF [34], LIF [35], LPF [48], RSF&LoG [39], OLPFI [70], PBC [57], LPF & FCM [41], LGJD [58], ABC [56], PFE [47], and APFJD [49], respectively.

Table 1

Numerical analysis of segmentation results (The CPU running time $$ T $$ , iteration number $$ N $$ , and IOU) of the first comparative experiment in images a-h in Figure 3.

| Model | Image (a) (100 × 100) | Image (b) (132 × 103) | Image (c) (256 × 233) | Image (d) (136 × 132) | Image (e) (103 × 97) | Image (f) (214 × 209) | Image (g) (100 × 100) | Image (h) (127 × 107) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BC | 2.159/20/0.928 | 7.726/180/0.799 | 10.288/200/0.711 | 1.015/30/0.552 | 9.912/200/0.836 | 7.445/180/0.914 | 7.831/180/0.820 | 18.086/300/0.581 |
| RSF | 2.875/300/0.718 | 5.928/300/0.784 | 18.380/300/0.309 | 12.776/220/0.207 | 23.527/500/0.256 | 15.856/300/0.929 | 14.628/300/0.832 | 19.251/300/0.499 |
| LIF | 1.468/200/0.917 | 2.658/150/0.835 | 13.895/200/0.408 | 5.315/180/0.211 | 15.563/200/0.547 | 10.814/200/0.729 | 10.412/200/0.941 | 1.047/120/0.688 |
| LPF | 5.221/300/0.662 | 5.117/300/0.780 | 10.429/500/0.433 | 4.585/220/0.267 | 5.606/300/0.553 | 23.248/500/0.927 | 3.148/150/0.945 | 1.425/120/0.431 |
| RSF&LoG | 5.751/200/0.674 | 6.853/200/0.683 | 10.635/300/0.701 | 4.856/180/0.554 | 7.963/200/0.944 | 20.835/300/0.879 | 4.258/100/0.949 | 1.125/100/0.503 |
| OLPFI | 0.961/60/0.945 | 4.754/200/0.727 | 8.758/200/0.394 | 1.021/60/0.880 | 5.761/200/0.945 | 8.635/200/0.758 | 7.468/180/0.714 | 1.249/65/0.604 |
| PBC | 6.142/300/0.882 | 5.856/300/0.587 | 8.967/300/0.593 | 1.165/90/0.796 | 4.821/200/0.951 | 8.617/300/0.935 | 5.804/200/0.918 | 0.346/65/0.571 |
| LPF & FCM | 3.494/300/0.876 | 5.108/300/0.661 | 7.752/300/0.717 | 0.543/85/0.875 | 3.264/280/0.949 | 9.822/300/0.920/0.958 | 3.365/200/0.913/0.954 | 0.449/60/0.731 |
| LGJD | 3.952/200/0.725 | 4.585/200/0.515 | 5.856/250/0.874 | 3.658/200/0.230 | 8.635/300/0.611 | 7.423/300/0.894 | 10.528/300/0.939 | 1.437/120/0.474 |
| ABC | 0.641/20/0.958 | 6.589/300/0.415 | 15.254/300/0.576 | 0.196/20/0.855 | 8.111/300/0.931 | 3.856/150/0.941 | 5.964/250/0.934 | 7.215/280/0.575 |
| PFE | 2.964/150/0.803 | 3.589/180/0.834 | 5.545/200/0.879 | 0.248/90/0.891 | 3.132/180/0.954 | 12.826/600/0.902 | 3.792/180/0.926 | 0.237/90/0.812 |
| APFJD | 0.855/65/0.917 | 1.856/100/0.795 | 6.982/250/0.820 | 0.285/65/0.838 | 3.915/200/0.936 | 4.570/200/0.878 | 3.253/180/0.924 | 0.958/100/0.610 |

4.3. Segmentation experiment of medical images

ACMs are extensively applied to process medical images to locate lesions. Consequently, the $$ 12 $$ ACMs are utilized to segment $$ 8 $$ medical images (a-h) in Figure 4, and the associated segmentation quality (IOU) and segmentation time (the CPU running time $$ T $$ and iteration number $$ N $$) are described in Table 2. From this table, for image (a), OLPFI has the best performance. For image (b), the OLPFI, LPF & FCM, ABC, and PFE models have very similar segmentation results; however, the PFE model is ranked first in terms of the least CPU running time $$ T $$ and iteration number $$ N $$ and the largest IOU value. For image (c), the LPF, RSF & LoG, OLPFI, LGJD, ABC, and PFE models have similar segmentation results; nevertheless, OLPFI has the best performance by a small margin. For image (d), the performance of the PFE model is ranked first in terms of all evaluation criteria. For image (e), the IOU value of the ABC model is the biggest, while the CPU running time $$ T $$ and iteration number $$ N $$ of the APFJD model are the least. For image (f), PBC obtains the best segmentation results. For image (g), the ABC model obtains the best performance in terms of all evaluation criteria. For image (h), the OLPFI model has the best segmentation results in terms of all evaluation criteria. In particular, the ABC model obtains the most stable segmentation results across all images (a-h) in Figure 4. Actually, the ABC model utilizes a novel regularization function to normalize the energy range of the data-driven term, which enables it to effectively process medical images with intensity non-uniformity.


Figure 4. The segmentation results of the second comparative experiment to segment medical images. The 1st row represents initial contours, the 2nd to 12th rows denote segmentation results of BC [32], RSF [34], LIF [35], LPF [48], RSF&LoG [39], OLPFI [70], PBC [57], LPF & FCM [41], LGJD [58], ABC [56], PFE [47], and APFJD [49], respectively.

Table 2

Numerical analysis of segmentation results (The CPU running time $$ T $$ , iteration number $$ N $$ , and IOU) of the second comparative experiment in images a-h in Figure 4.

| Model | Image (a) $$ (111 \times 110) $$ | Image (b) $$ (103 \times 131) $$ | Image (c) $$ (112 \times 224) $$ | Image (d) $$ (152 \times 128) $$ | Image (e) $$ (124 \times 66) $$ | Image (f) $$ (119 \times 78) $$ | Image (g) $$ (200 \times 227) $$ | Image (h) $$ (95 \times 93) $$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BC | $$ 5.725/380/0.733 $$ | $$ 2.407/180/0.766 $$ | $$ 2.958/250/0.815 $$ | $$ 4.176/280/0.899 $$ | $$ 5.285/300/0.835 $$ | $$ 15.992/150/0.607 $$ | $$ 1.437/30/0.889 $$ | $$ 2.564/200/0.633 $$ |
| RSF | $$ 2.458/300/0.844 $$ | $$ 2.152/140/0.825 $$ | $$ 1.886/120/0.852 $$ | $$ 4.537/200/0.898 $$ | $$ 4.852/200/0.756 $$ | $$ 23.974/200/0.622 $$ | $$ 14.733/220/0.207 $$ | $$ 10.511/220/0.433 $$ |
| LIF | $$ 1.525/130/0.851 $$ | $$ 1.234/100/0.826 $$ | $$ 1.458/100/0.868 $$ | $$ 3.172/150/0.909 $$ | $$ 2.912/130/0.723 $$ | $$ 15.254/150/0.675 $$ | $$ 2.305/120/0.714 $$ | $$ 2.859/100/0.572 $$ |
| LPF | $$ 1.621/140/0.862 $$ | $$ 1.575/100/0.802 $$ | $$ \textbf{2.245/150/0.932} $$ | $$ 5.109/180/0.916 $$ | $$ 1.852/120/0.781 $$ | $$ 1.653/120/0.635 $$ | $$ 0.995/100/0.442 $$ | $$ 1.595/120/0.447 $$ |
| RSF&LoG | $$ 1.062/100/0.870 $$ | $$ 1.250/80/0.821 $$ | $$ \textbf{5.582/150/0.934} $$ | $$ 9.926/200/0.829 $$ | $$ 2.952/135/0.779 $$ | $$ 16.582/200/0.718 $$ | $$ 8.289/200/0.849 $$ | $$ 1.252/120/0.601 $$ |
| OLPFI | $$ \textbf{0.805/80/0.875} $$ | $$ \textbf{0.905/85/0.837} $$ | $$ \textbf{1.584/90/0.935} $$ | $$ 0.925/85/0.925 $$ | $$ 0.848/95/0.850 $$ | $$ 1.465/120/0.604 $$ | $$ 2.048/150/0.835 $$ | $$ \textbf{0.652/65/0.918} $$ |
| PBC | $$ 1.689/120/0.866 $$ | $$ 1.612/100/0.831 $$ | $$ 4.773/250/0.926 $$ | $$ 3.593/180/0.931 $$ | $$ 3.525/150/0.845 $$ | $$ \textbf{1.081/100/0.784} $$ | $$ 1.509/100/0.867 $$ | $$ 1.653/100/0.906 $$ |
| LPF & FCM | $$ 0.952/100/0.861 $$ | $$ \textbf{0.896/85/0.838} $$ | $$ 2.545/150/0.896 $$ | $$ 3.862/180/0.905 $$ | $$ 2.465/150/0.893 $$ | $$ 1.868/120/0.653 $$ | $$ 7.269/200/0.864 $$ | $$ 1.058/100/0.889 $$ |
| LGJD | $$ 3.759/280/0.796 $$ | $$ 2.225/150/0.815 $$ | $$ \textbf{2.587/100/0.932} $$ | $$ 2.726/200/0.910 $$ | $$ 3.582/220/0.724 $$ | $$ 3.341/280/0.689 $$ | $$ 2.758/180/0.204 $$ | $$ 3.582/200/0.513 $$ |
| ABC | $$ 1.259/100/0.885 $$ | $$ \textbf{1.036/95/0.838} $$ | $$ \textbf{2.848/120/0.930} $$ | $$ 2.033/85/0.930 $$ | $$ \textbf{0.629/35/0.928} $$ | $$ 3.258/120/0.692 $$ | $$ \textbf{0.821/85/0.919} $$ | $$ 1.275/95/0.898 $$ |
| PFE | $$ 1.028/100/0.869 $$ | $$ \textbf{0.855/80/0.841} $$ | $$ \textbf{2.257/110/0.933} $$ | $$ \textbf{0.846/90/0.942} $$ | $$ 0.911/90/0.902 $$ | $$ 2.586/200/0.573 $$ | $$ 0.986/100/0.898 $$ | $$ 0.856/95/0.912 $$ |
| APFJD | $$ 1.528/105/0.728 $$ | $$ 1.043/95/0.815 $$ | $$ 1.585/110/0.899 $$ | $$ 1.852/120/0.851 $$ | $$ \textbf{0.506/30/0.926} $$ | $$ 1.962/120/0.662 $$ | $$ 1.399/100/0.889 $$ | $$ 1.124/100/0.894 $$ |

4.4. Segmentation experiment of natural images

Abbreviations: Bias correction (BC), Region scalable fitting (RSF), Local image fitting (LIF), Local pre-fitting (LPF), Region scalable fitting and optimized Laplacian of Gaussian (RSF&LoG), Optimized local pre-fitting image (OLPFI), Pre-fitting bias field (PBC), Local pre-fitting and fuzzy c-means (LPF & FCM), Local and global Jeffreys divergence (LGJD), Additive bias correction (ABC), Pre-fitting energy (PFE), and Adaptive pre-fitting function and Jeffreys divergence (APFJD).

The $$ 12 $$ ACMs are applied to segment natural images (a-h) in Figure 5, and the associated segmentation quality (IOU) and segmentation time (the CPU running time $$ T $$ and iteration number $$ N $$ ) are described in Table 3.


Figure 5. The segmentation results of the third comparative experiment to segment natural images. The 1st row represents initial contours, the 2nd to 12th rows denote segmentation results of BC [32], RSF [34], LIF [35], LPF [48], RSF&LoG [39], OLPFI [70], PBC [57], LPF & FCM [41], LGJD [58], ABC [56], PFE [47], and APFJD [49], respectively.

Table 3

Numerical results of segmentation outcomes (The CPU running time $$ T $$ , iteration number $$ N $$ , and IOU) of the third comparative experiment in images a-h in Figure 5.

| Model | Image (a) $$ (300 \times 203) $$ | Image (b) $$ (300 \times 225) $$ | Image (c) $$ (481 \times 321) $$ | Image (d) $$ (481 \times 321) $$ | Image (e) $$ (481 \times 321) $$ | Image (f) $$ (481 \times 321) $$ | Image (g) $$ (481 \times 321) $$ | Image (h) $$ (481 \times 321) $$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BC | $$ 1.221/10/0.950 $$ | $$ 2.967/50/0.734 $$ | $$ 5.775/120/0.602 $$ | $$ 2.315/20/0.161 $$ | $$ 2.940/25/0.231 $$ | $$ 7.567/150/0.162 $$ | $$ 7.491/150/0.289 $$ | $$ 8.722/150/0.837 $$ |
| RSF | $$ 1.043/220/0.952 $$ | $$ 14.658/250/0.709 $$ | $$ 15.589/250/0.255 $$ | $$ \textbf{8.152/180/0.926} $$ | $$ 7.254/165/0.543 $$ | $$ 8.952/180/0.582 $$ | $$ 18.892/380/0.637 $$ | $$ 10.866/150/0.826 $$ |
| LIF | $$ 1.716/150/0.964 $$ | $$ 2.536/200/0.879 $$ | $$ 7.895/180/0.405 $$ | $$ 9.255/500/0.657 $$ | $$ 5.752/150/0.592 $$ | $$ 6.895/150/0.841 $$ | $$ 7.125/180/0.775 $$ | $$ 6.525/150/0.870 $$ |
| LPF | $$ \textbf{1.115/90/0.969} $$ | $$ 1.531/100/0.841 $$ | $$ 9.528/300/0.617 $$ | $$ \textbf{10.588/380/0.922} $$ | $$ 12.592/300/0.377 $$ | $$ 15.281/300/0.581 $$ | $$ 4.338/120/0.543 $$ | $$ 5.450/120/0.781 $$ |
| RSF&LoG | $$ 5.741/100/0.893 $$ | $$ 6.952/120/0.941 $$ | $$ 8.258/180/0.524 $$ | $$ 7.528/120/0.840 $$ | $$ 7.592/150/0.587 $$ | $$ 10.896/180/0.844 $$ | $$ 6.882/120/0.505 $$ | $$ 3.317/95/0.817 $$ |
| OLPFI | $$ \textbf{0.506/40/0.955} $$ | $$ \textbf{0.731/65/0.951} $$ | $$ \textbf{1.638/95/0.889} $$ | $$ \textbf{0.546/40/0.927} $$ | $$ \textbf{1.867/95/0.794} $$ | $$ \textbf{1.105/85/0.794} $$ | $$ 1.983/100/0.757 $$ | $$ 1.513/100/0.753 $$ |
| PBC | $$ 0.696/85/0.954 $$ | $$ 1.984/95/0.929 $$ | $$ 7.595/200/0.824 $$ | $$ \textbf{1.856/95/0.924} $$ | $$ 8.215/200/0.791 $$ | $$ 3.148/120/0.823 $$ | $$ \textbf{4.768/150/0.904} $$ | $$ 1.233/95/0.845 $$ |
| LPF & FCM | $$ 2.524/120/0.958 $$ | $$ 3.158/150/0.889 $$ | $$ 13.752/300/0.515 $$ | $$ 7.181/180/0.871 $$ | $$ 10.537/200/0.858 $$ | $$ 9.905/200/0.863 $$ | $$ 9.389/200/0.801 $$ | $$ 7.851/180/0.831 $$ |
| LGJD | $$ 0.715/95/0.886 $$ | $$ 1.985/100/0.919 $$ | $$ 5.589/180/0.563 $$ | $$ 2.755/120/0.135 $$ | $$ 11.762/300/0.791 $$ | $$ 3.556/120/0.815 $$ | $$ 2.789/120/0.876 $$ | $$ 2.511/120/0.873 $$ |
| ABC | $$ 0.785/80/0.951 $$ | $$ 1.259/95/0.954 $$ | $$ 9.785/200/0.507 $$ | $$ 1.895/100/0.903 $$ | $$ 7.892/150/0.847 $$ | $$ \textbf{2.048/100/0.886} $$ | $$ \textbf{2.638/100/0.902} $$ | $$ 2.032/100/0.836 $$ |
| PFE | $$ 0.748/65/0.937 $$ | $$ 2.468/120/0.638 $$ | $$ 2.685/150/0.531 $$ | $$ \textbf{3.522/150/0.925} $$ | $$ \textbf{4.896/180/0.870} $$ | $$ 8.896/200/0.834 $$ | $$ 9.541/230/0.816 $$ | $$ 3.522/120/0.877 $$ |
| APFJD | $$ 0.591/30/0.971 $$ | $$ \textbf{1.167/100/0.966} $$ | $$ \textbf{2.592/120/0.927} $$ | $$ 1.047/100/0.924 $$ | $$ \textbf{1.972/100/0.869} $$ | $$ \textbf{2.925/120/0.913} $$ | $$ \textbf{1.045/95/0.906} $$ | $$ \textbf{1.161/100/0.886} $$ |

In Table 3, for image (a), the IOU value of the LPF model is the largest, while the CPU running time $$ T $$ and iteration number $$ N $$ of the OLPFI model are the least. For image (b), the OLPFI model is ranked first in terms of the CPU running time $$ T $$ and iteration number $$ N $$ , and the APFJD model is ranked first with respect to the IOU value. For image (c), the OLPFI model has the best performance in terms of the CPU running time $$ T $$ and iteration number $$ N $$ , while the APFJD model has the best performance with respect to the IOU value. For image (d), the RSF, LPF, OLPFI, PBC, PFE, and APFJD models achieve similar segmentation results; however, the OLPFI model is ranked first with respect to all evaluation criteria. For image (e), the IOU value of the PFE model is the biggest, while the CPU running time $$ T $$ and iteration number $$ N $$ of the OLPFI model are the least. For image (f), the IOU value of the APFJD model is the biggest, while the CPU running time $$ T $$ and iteration number $$ N $$ of the OLPFI model are the least. For image (g), although the IOU values of the PBC, ABC, and APFJD models are similar, the APFJD model has the best segmentation results with the biggest IOU value and the lowest CPU running time $$ T $$ and iteration number $$ N $$ . For image (h), the APFJD model has the best segmentation results in terms of all evaluation criteria. On average, the APFJD model acquires the most stable segmentation results for all natural images (a-h) in Figure 5. In fact, the APFJD model employs an adaptive regularization function to normalize the ranges of the level set function and the data-driven term, which enables it to efficiently process natural images with complex background information.

4.5. Comparison experiments with Deep learning-based algorithms

To compare the segmentation results between ACMs and deep learning-based algorithms, the DeepLabv3+ [71] and Mask R-CNN [72] algorithms are selected to segment $$ 6 $$ images (a-f) in Figure 6. Note that the DeepLabv3+ and Mask R-CNN algorithms are capable of recognizing all pixels belonging to the target and painting the target in random colors to demonstrate the final segmentation result. For the pre-training stage of these two deep learning-based algorithms, the DeepLabv3+ algorithm uses the PASCAL Visual Object Classes 2007 (VOC2007) dataset, which can be found at http://host.robots.ox.ac.uk/pascal/VOC/voc2007/. The pre-training of the DeepLabv3+ neural network lasts roughly 9 hours under the PyTorch deep learning framework. In addition, the Mask R-CNN algorithm utilizes the Common Objects in Context (COCO) dataset for pre-training, which can be downloaded from https://cocodataset.org/. The pre-training of the Mask R-CNN neural network lasts roughly 18 hours under the PyTorch deep learning framework. Once the pre-trainings are completed, the trained DeepLabv3+ and Mask R-CNN neural networks are utilized to segment and visualize the $$ 6 $$ images (a-f) in Figure 6 and compute their associated IOUs, as illustrated in Figure 6. Meanwhile, the authors select $$ 3 $$ ACMs (RSF [34], LGJD [58], and APFJD [49]) to segment and visualize the same $$ 6 $$ images and calculate their corresponding IOUs. The numerical segmentation results (IOUs) of the experiments are all listed in Table 4.


Figure 6. The segmentation results between DeepLabv3+ algorithm [71], Mask R-CNN algorithm [72], RSF [34] model, LGJD [58] model and APFJD [49] model. The 1st column represents original images, the 2nd to 3rd columns signify segmentation results of DeepLabv3+ algorithm and Mask R-CNN algorithm, respectively, the 4th column denotes initial contours of ACMs, and the 5th to 7th columns represent segmentation results of RSF model, LGJD model and APFJD model, respectively.

Table 4

Numerical analysis of IOUs of the DeepLabv3+ algorithm, Mask R-CNN algorithm, RSF model, LGJD model, and APFJD model on images (a-f) in Figure 6.

| Image | DeepLabv3+ | Mask R-CNN | RSF | LGJD | APFJD |
| --- | --- | --- | --- | --- | --- |
| Image a (481 × 321) | $$ 0.935 $$ | $$ 0.807 $$ | $$ 0.681 $$ | $$ 0.853 $$ | $$ 0.920 $$ |
| Image b (481 × 321) | $$ 0.655 $$ | $$ 0.922 $$ | $$ 0.925 $$ | $$ 0.135 $$ | $$ 0.930 $$ |
| Image c (481 × 321) | $$ 0.875 $$ | $$ 0.883 $$ | $$ 0.826 $$ | $$ 0.873 $$ | $$ 0.886 $$ |
| Image d (481 × 321) | $$ 0.215 $$ | $$ 0.015 $$ | $$ 0.255 $$ | $$ 0.563 $$ | $$ 0.927 $$ |
| Image e (481 × 321) | $$ \textbf{0.911} $$ | $$ \textbf{0.895} $$ | $$ 0.328 $$ | $$ 0.321 $$ | $$ 0.468 $$ |
| Image f (321 × 481) | $$ \textbf{0.905} $$ | $$ \textbf{0.915} $$ | $$ 0.479 $$ | $$ 0.353 $$ | $$ 0.385 $$ |

In Table 4, for image (a), the DeepLabv3+ algorithm obtains the biggest IOU value owing to its excellent segmentation result. For image (b), the Mask R-CNN algorithm, RSF model, and APFJD model achieve similar IOU values. For image (c), the RSF model obtains the smallest IOU value due to the issue of edge leakage, and similar results are achieved by the remaining models. For image (d), the DeepLabv3+ and Mask R-CNN algorithms acquire very unsatisfactory IOU values due to failed segmentation; on the contrary, the APFJD model attains the largest IOU value because of clear and clean segmentation. For image (e), the DeepLabv3+ and Mask R-CNN algorithms demonstrate their advantage in segmenting multi-phase images, obtaining much bigger IOU values than the $$ 3 $$ ACMs (RSF, LGJD, and APFJD models); the DeepLabv3+ algorithm attains the largest IOU value due to a more fully segmented target. For image (f), the Mask R-CNN algorithm obtains the best segmentation result in terms of the biggest IOU value.

4.6. Summary

For images with unevenly distributed intensity, the areas inside and outside the evolution curves are not uniform in intensity, so the calculated intensity averages cannot represent the intensity distribution. Therefore, the BC model estimates a bias field to process images with unevenly distributed intensity, which works well on images (a-b) in Figure 3. However, common issues such as falling into false boundaries occur on images (c-d) in Figure 3. In addition, this model cannot effectively process images with noise interference, such as images (e, g, h) in Figure 3. Additionally, under-segmentation may take place, as image (h) in Figure 3 and images (c, g) in Figure 4 show. Moreover, the BC model leaks from the target boundary when it segments natural images, as the second row in Figure 5 indicates.

The RSF model is capable of segmenting images with uneven intensity, such as image (b) in Figure 3. However, the incorporated kernel function only computes the gray values of local image regions, which makes it easy to fall into a local minimum during energy minimization, as in images (a, c) in Figure 3; falling into a false area also remains unsolved, as in images (c, d, g) in Figure 3. In addition, the segmentation time is long because at least four convolution operations are required to update the fitting functions during each iteration. Moreover, this model has poor anti-noise ability and is vulnerable to noise interference, as images (e, g, h) illustrate. The issues of under-segmentation and leaking from weak edges still exist, as shown in images (c, e) and images (g, h) in Figure 4, respectively. Lastly, this model obtains poor segmentation results on natural images, as shown in the third row of Figure 5.

Compared with the RSF model, the LIF model only utilizes two convolution operations to update the fitting functions, which greatly reduces the CPU running time T and iteration number N according to Tables $$ 1 $$ , $$ 2 $$ , and $$ 3 $$ . The Gaussian kernel function is also used in this model, so common issues such as boundary leakage and falling into local minima also occur in some cases (as illustrated in images (a, d, e) in Figure 3). This model is still sensitive to noise interference, as shown in images (e, f, h) in Figure 3. The problem of under-segmentation has been alleviated to some degree, as shown in image (c) in Figure 4. However, this model produces very poor segmentation results on natural images, as indicated in images (b, c, e, g) in Figure 5.

The LPF model locally computes the average image intensity ahead of the iteration process, which reduces the computational cost to some degree. However, the Gaussian kernel function is also used to update the level set function, which results in falling into false boundaries (as described in image (a) in Figure 3) and edge leakage (as illustrated in images (b, c, d) in Figure 3). This model has improved anti-noise ability, as shown in images (f, g) in Figure 3, but the issue of boundary leakage still exists in images (e, h) in Figure 3, images (g, h) in Figure 4, and images (e, f, g, h) in Figure 5. Besides, the problem of trapping into false boundaries still occurs in images (b, c) in Figure 5.

The RSF&LoG model combines the RSF model with a Laplacian of Gaussian (LoG) energy to smooth homogeneous areas and enhance boundary characteristics simultaneously, which enables it to segment images with uneven intensity to some extent. However, it may suffer from common issues such as under-segmentation and falling into false boundaries, as in images (a, b, c) and images (d, h) in Figure 3, respectively. In addition, boundary leakage may occur in some cases (as shown in image (h) in Figure 4 and images (c, e) in Figure 5).

The OLPFI model calculates the mean intensity of the selected local regions before iteration starts, which dramatically decreases segmentation time. It puts region-based and edge-based attributes together to handle images with intensity non-uniformity, obtaining excellent results as shown in images (a, d) in Figure 3. However, under-segmentation often occurs in images (b, c, f, g, h) in Figure 3, image (e) in Figure 4, and images (a, g, h) in Figure 5. This model has relatively poor anti-noise ability, manifesting as under-segmentation, as indicated in images (f, g, h) in Figure 3. Lastly, it greatly reduces the possibility of boundary leakage and falling into false boundaries.

The PBC model utilizes the optimized FCM algorithm to estimate the bias field before the iteration process, which eliminates the time-consuming convolution operations during each iteration and greatly reduces segmentation time. In addition, this model can segment images with uneven intensity, as in images (a, d) in Figure 3. However, boundary leakage may occur in some cases (as indicated in images (b, c) in Figure 3). Moreover, this model is capable of effectively segmenting images with noise interference with high segmentation quality; nevertheless, under-segmentation may occur in some cases (as indicated in image (h) in Figure 3, image (e) in Figure 4, and images (a, e) in Figure 5). Lastly, falling into false boundaries may take place in some cases (as shown in images (b, c) in Figure 5).

The LPF & FCM model employs the FCM algorithm and an adaptive sign function to solve the issue of boundary leakage, achieving outstanding performance on images with intensity non-uniformity, as in images (a, d) in Figure 3. However, issues such as under-segmentation and falling into local minima may occur in some cases (as shown in images (b, c) in Figure 3 and images (b, c, g) in Figure 5, respectively). In addition, this model has strong robustness to noisy images (as indicated in images (e-h) in Figure 3). Lastly, it is capable of effectively segmenting the medical images (a-h) in Figure 4 with high accuracy.

The LGJD model utilizes changeable weights to control the local and global data fitting energies based on Jeffreys divergence (JD), which enables it to segment images with intensity non-uniformity to some degree. However, this model also suffers from common issues such as over-segmentation or under-segmentation in some cases (as shown in images (a, b, c) in Figure 3 and images (e, f, g) in Figure 5). In addition, boundary leakage may occur while segmenting images with noise interference (as illustrated in images (e, h) in Figure 3). Besides, the issue of trapping into false boundaries frequently takes place (as shown in image (d) in Figure 3, images (g, h) in Figure 4, and images (a-d) in Figure 5).

The ABC model applies the theory of bias fields to segment images with unevenly distributed intensity, obtaining excellent performance on images with intensity non-uniformity, as in images (a, d) in Figure 3. However, it may leak from weak boundaries in some cases (as illustrated in images (b, c) in Figure 3). In addition, this model can effectively handle images with noise interference thanks to the additive bias correction, as shown in images (e, f, g) in Figure 3; nevertheless, under-segmentation may happen in image (h) in Figure 4 and images (a, c) in Figure 5. Lastly, this model is also able to effectively segment the medical images (a-h) in Figure 4 with high precision.

The PFE model computes the median intensity of the chosen local areas ahead of the iteration process, which greatly reduces computational cost. According to the twelfth row of Figure 3, this model is able to deal with images with uneven intensity and has excellent noise resistance. However, common issues such as falling into false boundaries and boundary leakage may take place in some cases (as indicated in image (c) in Figure 3 and images (b, c, g) in Figure 5). Lastly, this model is also able to effectively segment the medical images (a-h) in Figure 4 with high efficiency.

The APFJD model computes the average intensity of selected areas before iteration takes place, which dramatically decreases segmentation time. This model can effectively segment images with uneven intensity and noise interference thanks to the effect of JD, as shown in images (a-b, d) and images (e-f) in Figure 3. However, the issue of trapping into false boundaries may happen, as indicated in image (c) in Figure 3. In addition, under-segmentation may occur in some cases (as indicated in image (e) in Figure 3 and images (c, f) in Figure 4). Lastly, this model segments the natural images (a-h) in Figure 5 with excellent accuracy.

To summarize the characteristics of the above ACMs: the calculation processes of the BC, RSF, LIF, LPF, RSF&LoG, and LGJD models are too complex to be implemented efficiently in practice; they have poor anti-noise capability and spend a huge amount of time on curve evolution. In contrast, the computation of the pre-fitting functions in the OLPFI, PBC, PFE, and APFJD models is optimized, which enables them to quickly segment different kinds of images within a short amount of time. The LPF & FCM model takes advantage of FCM clustering to divide the input image into background and foreground regions before iteration begins, which greatly reduces the computational overhead. Similarly, the ABC model implements K-means++ clustering to separate the input image into background and foreground regions before iteration starts, which also hugely decreases the computational expense. Nevertheless, the LPF & FCM and ABC models may generate unexpected segmentation outcomes, such as redundant curves, if the FCM and K-means++ clustering algorithms perform poorly.
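As a rough illustration of the pre-clustering idea used by the LPF & FCM and ABC models, the sketch below partitions pixel intensities into two clusters with a plain two-means loop. It is a simplified stand-in for FCM or K-means++, not either model's actual algorithm; the function name, the centroid initialization, and the iteration count are assumptions for illustration:

```python
import numpy as np

def two_means_partition(img, iters=20):
    """Partition pixel intensities into two clusters (background/foreground).
    A simplified stand-in for the FCM / K-means++ pre-clustering step."""
    flat = img.astype(float).ravel()
    # initialize the two centroids at the intensity extremes (illustrative choice)
    centroids = np.array([flat.min(), flat.max()])
    for _ in range(iters):
        # assign each pixel to its nearest centroid
        labels = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
        # recompute each centroid as the mean of its assigned pixels
        for k in range(2):
            if np.any(labels == k):
                centroids[k] = flat[labels == k].mean()
    return labels.reshape(img.shape), centroids
```

The resulting rough foreground mask can then serve to place the initial contour, so the level set evolution starts close to the target and converges in fewer iterations.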

Although ACMs can effectively segment double-phase images with fair segmentation results, the majority of existing ACMs are unable to segment multi-phase images, as indicated in images (e, f) in Figure 6. Note that a double-phase image means the target in an image contains either black pixels or white pixels, while a multi-phase image means the target contains black and white pixels at the same time, as shown in images (e, f) in Figure 6. According to Figure 6, the deep learning-based algorithms (DeepLabv3+ and Mask R-CNN) exhibit an advantage in segmenting multi-phase images. Specifically, the ACMs (RSF, LGJD, and APFJD models) can barely segment the multi-phase images (e, f) in Figure 6, while the DeepLabv3+ and Mask R-CNN algorithms obtain excellent segmentation results in terms of much higher IOU values. However, failed segmentation in the form of under-segmentation may occur when applying the DeepLabv3+ and Mask R-CNN algorithms to double-phase images, as illustrated in images (b, d) in Figure 6.

5. CHALLENGES AND PROMISING FUTURE DIRECTIONS

Nowadays, various common issues in the field of image segmentation still await solutions in practice. The review of the above ACMs points out some of these common issues and identifies some promising future directions. It is believed that this discussion will help later researchers in this field to design more advanced models.

5.1. The combination of deep learning models

Inspired by the general idea of ACMs, the works [73-75] incorporate region and length constraint terms into the cost or loss function of a convolutional neural network (CNN) based on a traditional Dense U-Net for image segmentation, combining geometric attributes such as edge length with region information to achieve better segmentation accuracy. In addition, compared with traditional ACMs, which require iterations to solve PDEs, the employment of CNNs greatly reduces the computational cost of image segmentation, although the training process is generally long. Later researchers have also embedded loss functions from deep learning [76-81] in region-based level set energy functions to improve segmentation efficiency and accuracy. Therefore, one can combine the energy functions of the diverse ACMs mentioned in this paper with other deep learning segmentation models to design new hybrid energy functions that further improve segmentation performance, which is recommended as a promising future research direction in the area of image segmentation.
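To illustrate embedding ACM terms into a network loss, the following sketch combines a total-variation length term with Chan-Vese-style region fitting terms on a soft mask. It is a simplified, framework-free sketch of the general idea in [73-75], not the authors' implementation; the weight `lam` and the epsilon constants are assumptions:

```python
import numpy as np

def active_contour_loss(pred, img, lam=1.0):
    """AC-inspired loss on a soft mask `pred` in [0, 1] for image `img`:
    a total-variation length term plus Chan-Vese-style region terms."""
    eps = 1e-8
    # length term: total variation of the predicted mask (contour perimeter)
    dy = np.diff(pred, axis=0)
    dx = np.diff(pred, axis=1)
    length = np.sqrt(dy ** 2 + eps).sum() + np.sqrt(dx ** 2 + eps).sum()
    # region terms: intensity fit to the inside/outside mean values
    c1 = (img * pred).sum() / (pred.sum() + eps)              # inside mean
    c2 = (img * (1 - pred)).sum() / ((1 - pred).sum() + eps)  # outside mean
    region = ((img - c1) ** 2 * pred).sum() + ((img - c2) ** 2 * (1 - pred)).sum()
    return length + lam * region
```

In practice such a term is added to (or substituted for) the usual cross-entropy loss and differentiated automatically by the training framework, so no PDE iterations are needed at inference time.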

5.2. The combination of edge-based and region-based ACMs

As discussed in this paper, active contour methods can be grouped into two types: region-based ACMs and edge-based ACMs. A region-based ACM utilizes a pre-defined region descriptor or a contour representation to recognize each region in the image, while an edge-based ACM uses the differential property or gradient information of boundary points to construct a contour representation. Region-based ACMs generally utilize regional information (pixel grayscale information) of the image to construct energy functions, which improves their robustness and effectiveness; however, they cannot deal with contours that do not evolve from region boundaries. Edge-based ACMs mainly use the gradient information of the target boundary points as the main driving force to guide the motion of the evolution curve, which is capable of handling topology changes adaptively. Nevertheless, edge-based ACMs generally need to reinitialize the level set function periodically during the evolution process, which affects the computational accuracy and reduces segmentation efficiency. In this case, the zero level set may not be able to move towards the target boundary, and how and when to initialize it remains unsolved.

Therefore, it is necessary to combine the strengths of the region-based and edge-based ACMs to obtain better segmentation outcomes. Recently, several hybrid ACMs [41,47,70,82,83] have been constructed to take advantage of both metrics of the region-based and edge-based ACMs to achieve higher segmentation efficiency and lower computational cost. It is recommended that future researchers design more hybrid ACMs on the basis of the hybrid ones mentioned above.

5.3. Fast and stable optimization algorithm

The general optimization process of ACMs is to minimize the associated energy function through the gradient or steepest descent method. However, it should be noted that it may be hard to find the global minimum if the energy function is non-convex [33,84-87], which may cause failed segmentation in the form of falling into a local minimum. Specifically, the traditional gradient or steepest descent approach starts from the initial level set function and then descends at each iteration; the descending direction is controlled by the slope, or derivative, of the evolution curve. It is possible to replace the traditional method with other gradient descent methods to design a brand-new series of ACMs capable of optimizing the evolution curve while avoiding local minima.
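As a concrete illustration of this descent process, the sketch below evolves a level set function with a Chan-Vese-style region force: at each iteration the function moves in the direction that decreases the region energy. This is a minimal sketch with the curvature and regularization terms omitted; the step size `dt` and the fixed iteration count are illustrative choices, not a prescription from any particular model:

```python
import numpy as np

def evolve_level_set(phi, img, dt=0.5, steps=100):
    """Minimal steepest-descent evolution of a level set `phi` under
    Chan-Vese-style region forces (curvature/regularization omitted)."""
    for _ in range(steps):
        inside = phi > 0
        c1 = img[inside].mean() if inside.any() else 0.0      # inside mean
        c2 = img[~inside].mean() if (~inside).any() else 0.0  # outside mean
        # descent direction of the region energy at each pixel
        force = -(img - c1) ** 2 + (img - c2) ** 2
        phi = phi + dt * force
    return phi
```

Because the update only follows the local gradient, a poor initialization of `phi` can still leave the evolution stuck in a local minimum, which is exactly the failure mode discussed above.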

6. CONCLUSIONS

This paper has presented an overview of different kinds of ACMs in image segmentation in Section $$ 1 $$ , which helps readers obtain a comprehensive understanding of them. Some fundamental theory of ACMs has been explained in Section $$ 2 $$ . Specifically, region-based ACMs, edge-based ACMs, and hybrid ACMs are reviewed in Section $$ 3 $$ . After that, three comparison experiments on $$ 12 $$ different ACMs in terms of several evaluation criteria (the CPU running time $$ T $$ , iteration number $$ N $$ , IOU, and DSC) have been conducted to compare their segmentation performance on different kinds of images (synthetic, medical, and natural images) in Section $$ 4 $$ . In addition, two deep learning-based algorithms (DeepLabv3+ and Mask R-CNN) have been implemented to segment double-phase and multi-phase images, and their segmentation results in terms of IOU values have been compared with several ACMs to demonstrate their advantages and disadvantages. According to the experimental results of these comparison experiments, the hybrid ACMs appear to be more suitable for large-scale image segmentation applications due to their higher segmentation efficiency and accuracy. In addition, the deep learning-based algorithms (DeepLabv3+ and Mask R-CNN) obtain far superior segmentation results to ACMs when segmenting multi-phase images. Lastly, some challenges and promising future research directions in the field of image segmentation have been discussed in Section $$ 5 $$ .

DECLARATIONS

Authors' contributions

Writing- Original draft preparation: Chen Y, Ge P

Writing-Reviewing and Editing: Wang G, Weng G

Conceptualization: Chen Y, Wang G, Chen H

Methodology: Chen Y, Wang G, Chen H

Project administration: Chen Y, Wang G

Resources: Chen Y, Chen H

Supervision: Weng G, Chen H

Data curation: Chen Y, Ge P

Software: Ge P, Weng G

Investigation: Ge P

Visualization: Ge P

Availability of data and materials

Not applicable.

Financial support and sponsorship

This research paper was supported in part by National Natural Science Foundation of China under Grant 62103293, in part by Natural Science Foundation of Jiangsu Province under Grant BK20210709, in part by Suzhou Municipal Science and Technology Bureau under Grant SYG202138, and in part by Entrepreneurship and Innovation Plan of Jiangsu Province under Grant JSSCBS20210641.

Conflicts of interest

All authors declared that there are no conflicts of interest.

Ethical approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Copyright

© The Author(s) 2023.

REFERENCES

1. Son LH, Tuan TM. Dental segmentation from X-ray images using semi-supervised fuzzy clustering with spatial constraints. Eng Appl of Art Int 2017;59:186-95.

2. Civit-Masot J, Luna-Perejón F, Corral JMR, et al. A study on the use of Edge TPUs for eye fundus image segmentation. Eng Appl Art Int 2021;104:104384.

3. Akbari Y, Hassen H, Al-Maadeed S, Zughaier SM. COVID-19 lesion segmentation using lung CT scan images: comparative study based on active contour models. Applied Sciences 2021;11:8039.

4. Guo Q, Wang L, Shen S. Multiple-channel local binary fitting model for medical image segmentation. Chin J Electron 2015;24:802-6.

5. Zhang D, Li J, Li X, Du Z, Xiong L, Ye M. Local-global attentive adaptation for object detection. Eng Appl Art Int 2021;100:104208.

6. Yang C, Wu L, Chen Y, Wang G, Weng G. An active contour model based on retinex and pre-Fitting reflectance for fast image segmentation. Symmetry 2022;14:2343.

7. Chen H, Liu Z, Alippi C, Huang B, Liu D. Explainable intelligent fault diagnosis for nonlinear dynamic systems: from unsupervised to supervised learning. IEEE Trans Neur Netw Lear Syst 2022;Early Access.

8. Ge P, Chen Y. An automatic detection approach for wearing safety helmets on construction site based on YOLOv5. In: 2022 IEEE 11th Data Driven Control and Learning Systems Conference (DDCLS). IEEE; 2022. pp. 140-45.

9. Cao Y, Wang G, Yan D, Zhao Z. Two algorithms for the detection and tracking of moving vehicle targets in aerial infrared image sequences. Remote Sensing 2016;8:28.

10. Wu S. A traffic motion object extraction algorithm. Int J Bifurcation Chaos 2015;25:1540039.

11. Paragios N, Deriche R. Geodesic active contours and level sets for the detection and tracking of moving objects. IEEE Trans Pattern Anal Machine Intell 2000;22:266-80.

12. Wu Z, Tian E, Chen H. Covert attack detection for LFC systems of electric vehicles: a dual time-varying coding method. IEEE/ASME Trans Mechatron 2022:1-11.

13. Cootes TF, Edwards GJ, Taylor CJ. Active appearance models. IEEE Trans Pattern Anal Machine Intell 2001;23:681-85.

14. Mille J. Narrow band region-based active contours and surfaces for 2D and 3D segmentation. Compu Vis Image Und 2009;113:946-65.

15. Chan TF, Vese LA. Active contours without edges. IEEE Trans Image Process 2001;10:266-77.

16. Tsai A, Yezzi A, Willsky AS. Curve evolution implementation of the Mumford-Shah functional for image segmentation, denoising, interpolation, and magnification. IEEE Trans Image Process 2001;10:1169-86.

17. Wang G, Zhang F, Chen Y, Weng G, Chen H. An active contour model based on local pre-piecewise fitting bias corrections for fast and accurate segmentation. IEEE Trans Instrum Meas 2023;72:1-13.

18. Xiang Y, Chung ACS, Ye J. An active contour model for image segmentation based on elastic interaction. J Comput Phys 2006;219:455-76.

19. Huang AA, Abugharbieh R, Tam R. A Hybrid Geometric—Statistical Deformable Model for Automated 3-D Segmentation in Brain MRI. IEEE Trans Biomed Eng 2009;56:1838-48.

20. Pluempitiwiriyawej C, Moura JMF, Wu YJL, Ho C. STACS: new active contour scheme for cardiac MR image segmentation. IEEE Trans Med Imaging 2005;24:593-603.

21. Bowden A, Sirakov NM. Active contour directed by the poisson gradient vector field and edge tracking. J Math Imaging Vis 2021;63:665-80.

22. Fahmi R, Jerebko A, Wolf M, Farag AA. Robust segmentation of tubular structures in medical images. In: Reinhardt JM, Pluim JPW, editors. SPIE Proceedings. SPIE; 2008. pp. 691443-1443-7.

23. Zhang H, Morrow P, McClean S, Saetzler K. Coupling edge and region-based information for boundary finding in biomedical imagery. Pattern Recogn 2012;45:672-84.

24. Wen J, Yan Z, Jiang J. Novel lattice Boltzmann method based on integrated edge and region information for medical image segmentation. Biomed Mater Eng 2014;24:1247-52.

25. Lv H, Zhang Y, Wang R. Active contour model based on local absolute difference energy and fractional-order penalty term. Appl Math Model 2022;107:207-32.

26. Mumford D, Shah J. Optimal approximations by piecewise smooth functions and associated variational problems. Comm Pure Appl Math 1989;42:577-685.

27. Caselles V, Catte F, Coll T, Dibos F. A geometric model for active contours in image processing. Numer Math 1993;66:1-31.

28. Caselles V, Kimmel R, Sapiro G. Geodesic active contours. Int J Compu Vis 1997;22:61-79.

29. Bresson X, Esedoglu S, Vandergheynst P, Thiran JP, Osher S. Fast global minimization of the active contour/snake model. Math Imaging Vis 2007;28:151-67.

30. Cohen LD, Kimmel R. Global minimum for active contour models: a minimal path approach. Int J Compu Vis 1997;24:57-78.

31. Li C, Xu C, Gui C, Fox MD. Distance regularized level set evolution and its application to image segmentation. IEEE Trans Image Process 2010;19:3243-54.

32. Li C, Huang R, Ding Z, et al. A level set method for image segmentation in the presence of intensity inhomogeneities with application to MRI. IEEE Trans Image Process 2011;20:2007-16.

33. Li C, Kao CY, Gore JC, Ding Z. Implicit active contours driven by local binary fitting energy. In: 2007 IEEE Conference on Computer Vision and Pattern Recognition. IEEE; 2007. pp. 1-7.

34. Li C, Kao CY, Gore JC, Ding Z. Minimization of region-scalable fitting energy for image segmentation. IEEE Trans Image Process 2008;17:1940-9.

35. Zhang K, Song H, Zhang L. Active contours driven by local image fitting energy. Pattern Recogn 2010;43:1199-206.

36. Chan TF, Esedoglu S, Nikolova M. Algorithms for finding global minimizers of image segmentation and denoising models. SIAM J Appl Math 2006;66:1632-48.

37. Chambolle A, Cremers D, Pock T. A convex approach to minimal partitions. SIAM J Imaging Sci 2012;5:1113-58.

38. Wang L, Li C, Sun Q, Xia D, Kao CY. Active contours driven by local and global intensity fitting energy with application to brain MR image segmentation. Comput Med Imaging Graph 2009;33:520-31.

39. Ding K, Xiao L, Weng G. Active contours driven by region-scalable fitting and optimized Laplacian of Gaussian energy for image segmentation. Signal Proce 2017;134:224-33.

40. Wang L, He L, Mishra A, Li C. Active contours driven by local Gaussian distribution fitting energy. Signal Proce 2009;89:2435-47.

41. Jin R, Weng G. Active contours driven by adaptive functions and fuzzy c-means energy for fast image segmentation. Signal Proce 2019;163:1-10.

42. Liu S, Peng Y. A local region-based Chan-Vese model for image segmentation. Pattern Recogn 2012;45:2769-79.

43. Liu W, Shang Y, Yang X. Active contour model driven by local histogram fitting energy. Pattern Recognit Lett 2013;34:655-62.

44. Wang H, Huang TZ, Xu Z, Wang Y. An active contour model and its algorithms with local and global Gaussian distribution fitting energies. Inform Sciences 2014;263:43-59.

45. Ji Z, Xia Y, Sun Q, Cao G, Chen Q. Active contours driven by local likelihood image fitting energy for image segmentation. Inform Sciences 2015;301:285-304.

46. Yang Y, Ren H, Hou X. Level set framework based on local scalable Gaussian distribution and adaptive-scale operator for accurate image segmentation and correction. Signal Processing: Image Communication 2022;104:116653.

47. Ge P, Chen Y, Wang G, Weng G. A hybrid active contour model based on pre-fitting energy and adaptive functions for fast image segmentation. Pattern Recogn Lett 2022;158:71-79.

48. Ding K, Xiao L, Weng G. Active contours driven by local pre-fitting energy for fast image segmentation. Pattern Recogn Lett 2018;104:29-36.

49. Ge P, Chen Y, Wang G, Weng G. An active contour model driven by adaptive local pre-fitting energy function based on Jeffreys divergence for image segmentation. Expert Syst Appl 2022;210:118493.

50. Liu G, Jiang Y, Chang B, Liu D. Superpixel-based active contour model via a local similarity factor and saliency. Measurement 2022;188:110442.

51. Chen H, Zhang H, Zhen X. A hybrid active contour image segmentation model with robust to initial contour position. Multimed Tools Appl 2022 Sep.

52. Yang Y, Hou X, Ren H. Efficient active contour model for medical image segmentation and correction based on edge and region information. Expert Syst Appl 2022;194:116436.

53. Zhang W, Wang X, You W, et al. RESLS: Region and edge synergetic level set framework for image segmentation. IEEE Trans Image Process 2020;29:57-71.

54. Dong B, Weng G, Jin R. Active contour model driven by Self Organizing Maps for image segmentation. Expert Syst Appl 2021;177:114948.

55. Fang J, Liu H, Liu J, et al. Fuzzy region-based active contour driven by global and local fitting energy for image segmentation. Applied Soft Comput 2021;100:106982.

56. Weng G, Dong B, Lei Y. A level set method based on additive bias correction for image segmentation. Expert Syst Appl 2021;185:115633.

57. Jin R, Weng G. A robust active contour model driven by pre-fitting bias correction and optimized fuzzy c-means algorithm for fast image segmentation. Neurocomputing 2019;359:408-19.

58. Han B, Wu Y. Active contour model for inhomogenous image segmentation based on Jeffreys divergence. Pattern Recogn 2020;107:107520.

59. Asim U, Iqbal E, Joshi A, Akram F, Choi KN. Active contour model for image segmentation with dilated convolution filter. IEEE Access 2021;9:168703-14.

60. Costea C, Gavrea B, Streza M, Belean B. Edge-based active contours for microarray spot segmentation. Proce Compu Sci 2021;192:369-75.

61. Fang J, Liu H, Zhang L, Liu J, Liu H. Region-edge-based active contours driven by hybrid and local fuzzy region-based energy for image segmentation. Inform Sciences 2021;546:397-419.

62. Yu H, He F, Pan Y. A novel segmentation model for medical images with intensity inhomogeneity based on adaptive perturbation. Multimed Tools Appl 2019;78:11779-98.

63. Sirakov NM. A new active convex hull model for image regions. J Math Imaging Vis 2006;26:309-25.

64. Osher S, Sethian JA. Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulations. J Compu Phys 1988;79:12-49.

65. Kass M, Witkin A, Terzopoulos D. Snakes: active contour models. Int J Comput Vision 1988;1:321-31.

66. Bresson X, Vandergheynst P, Thiran JP. A variational model for object segmentation using boundary information and shape prior driven by the mumford-shah functional. Int J Comput Vision 2006;68:145-62.

67. Deriche M, Amin A, Qureshi M. Color image segmentation by combining the convex active contour and the Chan Vese model. Pattern Anal Applic 2019;22:343-57.

68. Aubert G, Kornprobst P. Mathematical problems in Image Processing. New York: Springer; 2006.

69. Wang Y, He C. An adaptive level set evolution equation for contour extraction. Appl Math Comput 2013;219:11420-29.

70. Yan X, Weng G. Hybrid active contour model driven by optimized local pre-fitting image energy for fast image segmentation. Appl Math Model 2022;101:586-99.

71. Chen LC, Zhu Y, Papandreou G, Schroff F, Adam H. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. European Conference on Computer Vision 2018 Feb.

72. He K, Gkioxari G, Dollar P, Girshick R. Mask R-CNN. arXiv 2017 Mar.

73. Chen X, Williams BM, Vallabhaneni SR, et al. Learning active contour models for medical image segmentation. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE; 2019. pp. 11624-32.

74. Ma J, He J, Yang X. Learning geodesic active contours for embedding object global information in segmentation CNNs. IEEE Trans Med Imaging 2021;40:93-104.

75. Gu J, Fang Z, Gao Y, Tian F. Segmentation of coronary arteries images using global feature embedded network with active contour loss. Comput Med Imaging Graph 2020;86:101799.

76. Gur S, Wolf L, Golgher L, Blinder P. Unsupervised microvascular image segmentation using an active contours mimicking neural network. In: 2019 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE; 2019. pp. 10721-30.

77. Kim B, Ye JC. Mumford-Shah loss functional for image segmentation with deep learning. IEEE Trans Image Process 2020;29:1856-66.

78. Tao H, Qiu J, Chen Y, Stojanovic V, Cheng L. Unsupervised cross-domain rolling bearing fault diagnosis based on time-frequency information fusion. J Franklin Ins 2023;360:1454-77.

79. Chen H, Li L, Shang C, Huang B. Fault detection for nonlinear dynamic systems With consideration of modeling errors: a data-Driven approach. IEEE Trans Cybern 2022:1-11.

80. Qu F, Tian E, Zhao X. Chance-Constrained $$ H_\infty$$ State Estimation for Recursive Neural Networks Under Deception Attacks and Energy Constraints: The Finite-Horizon Case. IEEE Trans Neural Netw Learn Syst 2022:1-12.

81. Chen Y, Jiang W, Charalambous T. Machine learning based iterative learning control for non-repetitive time-varying systems. Int J Robust Nonlinear 2022;Early View.

82. Han B, Wu Y. A hybrid active contour model driven by novel global and local fitting energies for image segmentation. Multimed Tools Appl 2018;77:29193-208.

83. Yang X, Jiang X, Zhou L, Wang Y, Zhang Y. Active contours driven by Local and Global Region-Based Information for Image Segmentation. IEEE Access 2020;8:6460-70.

84. Chen Y, Zhou Y. Machine learning based decision making for time varying systems: Parameter estimation and performance optimization. Knowledge-Based Systems 2020;190:105479.

85. Chen Y, Zhou Y, Zhang Y. Machine Learning-Based Model Predictive Control for Collaborative Production Planning Problem with Unknown Information. Electronics 2021;10:1818.

86. Chen H, Chai Z, Dogru O, Jiang B, Huang B. Data-Driven Designs of Fault Detection Systems via Neural Network-Aided Learning. IEEE Trans Neur Net Lear Syst 2021:1-12.

87. Jiang W, Chen Y, Chen H, Schutter BD. A Unified Framework for Multi-Agent Formation with a Non-repetitive Leader Trajectory: Adaptive Control and Iterative Learning Control. TechRxiv 2023 jan.

How to Cite

Chen Y, Ge P, Wang G, Weng G, Chen H. An overview of intelligent image segmentation using active contour models. Intell Robot 2023;3:23-55. http://dx.doi.org/10.20517/ir.2023.02


About This Article

© The Author(s) 2023. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, sharing, adaptation, distribution and reproduction in any medium or format, for any purpose, even commercially, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Intelligence & Robotics
ISSN 2770-3541 (Online)

Portico

All published articles are preserved here permanently:

https://www.portico.org/publishers/oae/
