\section{The detection method}
\label{sec:method}

\subsection{Workflow of the detection process}

The workflow of the blurred segment detection process is summarized
in the following figure.
\begin{figure}[h]
\center
  \input{Fig_method/workflow}
  \caption{Main workflow of the detection process.}
  \label{fig:workflow}
\end{figure}

The initial detection consists in building and extending a blurred
segment $\mathcal{B}$ based on the highest gradient points found in
each scan of a static directional scanner aligned on an input segment
$AB$. Validity tests are then applied to decide whether the detection
should be pursued. They aim at rejecting a too short or too sparse
blurred segment, or a blurred segment whose orientation is too close
to that of the input segment $AB$. In case of positive response, the
position $C$ and direction $\vec{D}$ of this initial blurred segment
are extracted.

The fine tracking step consists in building and extending a blurred
segment $\mathcal{B}'$ based on points that correspond to local maxima
of the image gradient, ranked by decreasing magnitude, and with a
gradient direction close to a reference gradient direction taken at
the first point of the segment. At this refinement step, a control of
the assigned width is applied, and an adaptive directional scanner
based on the found position $C$ and direction $\vec{D}$ is used in
order to extend the segment in the appropriate direction. These two
improvements are described in the following sections. The output
segment of the fine tracking is finally filtered to remove artifacts
and outliers, and a final blurred segment $\mathcal{B}''$ is provided.
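These validity tests can be illustrated by the following minimal
sketch in Python, assuming the blurred segment is given as a point
list; the thresholds \texttt{min\_count}, \texttt{min\_density} and
\texttt{min\_angle\_deg} are illustrative placeholders, not the
values used by the actual implementation.

\begin{verbatim}
import numpy as np

def is_valid(points, A, B, min_count=9, min_density=0.5,
             min_angle_deg=10.0):
    """Validity tests on an initial blurred segment (sketch).

    Rejects a segment that is too short, too sparse, or whose
    orientation is too close to that of the input segment AB.
    All thresholds are illustrative assumptions.
    """
    pts = np.asarray(points, dtype=float)
    if len(pts) < min_count:                        # too short
        return False
    d = pts[-1] - pts[0]                            # rough direction
    length = np.linalg.norm(d)
    if length == 0 or len(pts) / length < min_density:  # too sparse
        return False
    ab = np.asarray(B, dtype=float) - np.asarray(A, dtype=float)
    cosine = abs(d @ ab) / (length * np.linalg.norm(ab))
    angle = np.degrees(np.arccos(np.clip(cosine, 0.0, 1.0)))
    return angle >= min_angle_deg       # reject if aligned with AB
\end{verbatim}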
\subsection{Adaptive directional scan}

The blurred segment is searched within a directional scan whose
position and orientation are approximately provided by the user, or
blindly defined in unsupervised mode. Most of the time, the detection
stops where the segment escapes sideways from the scan strip
(\RefFig{fig:escape} a). A second search is then run using another
directional scan aligned on the detected segment
(\RefFig{fig:escape} b). In the given example, an outlier added to
the initial segment leads to a wrong orientation value. But even in
case of a correct detection, the estimated orientation is subject to
the digitization rounding, and the longer the real segment to detect,
the higher the probability that the blurred segment escapes again
from the directional scan.

To overcome this issue, in the former work, an additional refinement
step is run using the better orientation estimated from the longer
detected segment. It is enough to completely detect most of the
tested edges, but certainly not all of them, especially if larger
images with much longer edges are processed. This operation could be
iterated as long as the blurred segment escapes from the directional
scanner, using as many fine detection steps as necessary. But at each
iteration, already tested points are processed again, thus producing
a useless computational cost.

Here the proposed solution is to dynamically align the scan direction
on that of the blurred segment all along the expansion stage. At each
iteration $i$ of the expansion, the scan strip is aligned on the
direction of the blurred segment $\mathcal{B}_{i-1}$ computed at the
previous iteration $i-1$. More generally, an adaptive directional
scan $ADS$ is defined by:
\begin{equation}
ADS = \left\{
S_i = \mathcal{D}_i \cap \mathcal{N}_i \cap \mathcal{I}
\left|
\begin{array}{l}
\delta(\mathcal{N}_i) = - \delta^{-1}(\mathcal{D}_0) \\
\wedge~ h_0(\mathcal{N}_i) = h_0(\mathcal{N}_{i-1}) + p(\mathcal{D}) \\
\wedge~ \mathcal{D}_{i} = \mathcal{D} (C_{i-1}, \vec{D}_{i-1}, w_{i-1}),
i > 1
\end{array}
\right. \right\}
\end{equation}
where $C_{i-1}$, $\vec{D}_{i-1}$ and $w_{i-1}$ are a position, a
direction vector and a width observed at iteration $i-1$. In the
scope of the present detector, $C_{i-1}$ is the intersection of the
input selection and the medial axis of $\mathcal{B}_{i-1}$,
$\vec{D}_{i-1}$ the support vector of the narrowest digital straight
line that contains $\mathcal{B}_{i-1}$, and $w_{i-1}$ a value
slightly greater than the minimal width of $\mathcal{B}_{i-1}$. The
last clause thus expresses the update of the scan bounds at iteration
$i$. Compared to static directional scans, the scan strip moves while
the scan lines remain fixed. This behavior ensures a complete
detection of the blurred segment even when the orientation is badly
estimated (\RefFig{fig:escape} c).

\begin{figure}[h]
\center
  \begin{tabular}{c@{\hspace{0.2cm}}c}
    \includegraphics[width=0.48\textwidth]{Fig_notions/escapeFirst_zoom.png}
    & \includegraphics[width=0.48\textwidth]{Fig_notions/escapeSecond_zoom.png} \\
    \multicolumn{2}{c}{
      \includegraphics[width=0.72\textwidth]{Fig_notions/escapeThird_zoom.png}}
  \begin{picture}(1,1)(0,0)
  {\color{dwhite}{
    \put(-260,100.5){\circle*{8}}
    \put(-86,100.5){\circle*{8}}
    \put(-172,7.5){\circle*{8}}
  }}
  \put(-263,98){a}
  \put(-89,98){b}
  \put(-175,5){c}
  \end{picture}
  \end{tabular}
  \caption{Aborted detections on side escapes of static directional
           scans and successful detection using an adaptive
           directional scan.
           The last points added to the left of the blurred segment
           during the initial detection (a) lead to a bad estimation
           of its orientation, and thus to an incomplete fine
           detection with a classical directional scanner (b). This
           scanner is advantageously replaced by an adaptive
           directional scanner able to continue the segment expansion
           as far as necessary (c). The input selection is drawn in
           red, the scan strip bounds in blue and the detected
           blurred segment in green.}
  \label{fig:escape}
\end{figure}
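As a minimal sketch of the scan bound update expressed by the last
clause, each fixed scan line $\mathcal{N}_i$ may simply be filtered
against the strip $\mathcal{D}(C_{i-1}, \vec{D}_{i-1}, w_{i-1})$
re-estimated from the previous iteration; the names and the point
representation are illustrative assumptions.

\begin{verbatim}
import numpy as np

def in_strip(P, C, D, w):
    """True if point P lies in the strip of axis (C, D) and width w."""
    P, C, D = (np.asarray(v, dtype=float) for v in (P, C, D))
    v = P - C
    # Distance from P to the strip axis (2D cross product magnitude).
    dist = abs(D[0] * v[1] - D[1] * v[0]) / np.linalg.norm(D)
    return dist <= w / 2.0

def next_scan(fixed_line_points, C, D, w):
    """Scan S_i: points of the fixed scan line N_i lying inside the
    strip D(C, D, w) re-estimated from the segment of iteration i-1."""
    return [P for P in fixed_line_points if in_strip(P, C, D, w)]
\end{verbatim}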
\subsection{Control of the assigned width}

The width $\varepsilon$ assigned to the blurred segment recognition
algorithm is initially set to a large value $\varepsilon_0$ in order
to allow the detection of large blurred segments. Then, when the
minimal width no longer increases after $\lambda$ iterations
($\mu_{i+\lambda} = \mu_i$), the assigned width is set to a much
stricter value that circumscribes the possible interpretations of the
segment, while taking the digitization margins into account:
\begin{equation}
\varepsilon = \mu_{i+\lambda} + \frac{1}{2}
\end{equation}
This strategy aims at preventing the incorporation of spurious
outliers in further parts of the segment. Setting the observation
distance to a constant value $\lambda = 20$ proved appropriate in
most of the experimented situations.
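A minimal sketch of this control, assuming the minimal widths $\mu_i$
observed so far are recorded in a list (the names are illustrative):

\begin{verbatim}
def assigned_width(mu_history, eps0, lam=20):
    """Width control sketch: start from a large eps0, then tighten to
    mu + 1/2 once the minimal width mu has not grown for lam
    iterations (mu_history holds the successive mu_i values)."""
    if len(mu_history) > lam and mu_history[-1] == mu_history[-1 - lam]:
        return mu_history[-1] + 0.5   # digitization margin
    return eps0
\end{verbatim}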
\subsection{Supervised blurred segment detection}

In supervised context, the user draws an input stroke across the
specific edge to extract from the image. The detection method
described above is continuously run during mouse dragging and the
output blurred segment is displayed on the fly. The method is quite
sensitive to the local conditions of the initial detection, so that
the output blurred segment may be quite unstable. In order to temper
this undesirable behavior for interactive applications, the initial
detection can optionally be run twice, the second fast scan being
aligned on the output of the first one. This strategy provides a
first quick analysis of the local context before extracting the
segment and notably contributes to stabilize the overall process.

When selecting candidates for the fine detection stage, an option,
called {\it edge selection mode}, is left to also filter the points
according to their gradient direction. In {\it main edge selection
mode}, only the points with a gradient vector in the same direction
as the gradient vector of the start point are added to the blurred
segment. In {\it opposite edge selection mode}, only the points with
an opposite gradient vector direction are kept. In {\it line
selection mode}, this direction-based filter is not applied, and all
the candidate points are aggregated into the same blurred segment,
whatever the direction of their gradient vector. As illustrated in
\RefFig{fig:edgeDir}, this mode allows the detection of the two
opposite edges of a thin straight object. A sketch of this filter is
given after the figure.

\begin{figure}[h]
\center
  \begin{tabular}{c@{\hspace{0.2cm}}c}
    \includegraphics[width=0.48\textwidth]{Fig_method/briques1_zoom.png} &
    \includegraphics[width=0.48\textwidth]{Fig_method/briques2_zoom.png}
  \end{tabular}
  \begin{picture}(1,1)(0,0)
  {\color{dwhite}{
    \put(-260,-17.5){\circle*{8}}
    \put(-86,-17.5){\circle*{8}}
  }}
  \put(-262.5,-20){a}
  \put(-89,-20){b}
  \end{picture}
  \caption{Blurred segments obtained in \textit{line} or \textit{edge
           selection mode} as a result of the gradient direction
           filtering when adding points. In \textit{line selection
           mode} (a), a thick blurred segment is built and extended
           all along the brick join. In \textit{edge selection mode}
           (b), a thin blurred segment is built along one of the two
           join edges. Both join edges are detected with the
           \textit{multi-selection} option. On this highly textured
           image, they are much shorter than the whole join detected
           in line selection mode. Blurred segment points are drawn
           in black, and the enclosing straight segment of minimal
           width in blue.}
  \label{fig:edgeDir}
\end{figure}
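The direction-based filter of these selection modes may be sketched
as follows, assuming the gradient vectors are available as 2D arrays;
the \texttt{mode} encoding is an illustrative choice.

\begin{verbatim}
import numpy as np

def keep_point(g_point, g_start, mode):
    """Gradient direction filter (sketch).

    'main'     : keep points whose gradient points the same way as
                 the start point gradient,
    'opposite' : keep points with an opposite gradient direction,
    'line'     : keep all candidate points.
    """
    if mode == 'line':
        return True
    dot = float(np.dot(g_point, g_start))
    return dot > 0 if mode == 'main' else dot < 0
\end{verbatim}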
\subsection{Multiple blurred segments detection}

Another option, called {\it multi-detection} (Algorithm 1), allows
the detection of all the segments crossed by the input stroke $AB$.
In order to avoid multiple detections of the same edge, an occupancy
mask, initially empty, collects the dilated points of all the blurred
segments, so that these points cannot be added to another segment.

\input{Fig_method/algoMulti}

First the positions $M_j$ of the prominent local maxima of the
gradient magnitude found under the stroke are sorted from the highest
to the lowest. For each of them, the main detection process is run
with three modifications:
\begin{enumerate}
\item the initial detection takes $M_j$ and the direction
$\vec{AB}_\perp$ orthogonal to the stroke as input to build a static
scan of fixed width $2~\varepsilon_{ini}$, and $M_j$ is used as start
point of the blurred segment;
\item the occupancy mask is filled in with the points of the dilated
blurred segments $\mathcal{B}_j''$ at the end of each successful
detection (a 21-pixel disk is used to dilate the segment);
\item points marked as occupied are rejected when selecting
candidates for the blurred segment extension in the fine tracking
step.
\end{enumerate}
In edge selection mode (\RefFig{fig:edgeDir} b), the multi-detection
algorithm is executed twice, first in main edge selection mode, then
in opposite edge selection mode.
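The loop of Algorithm 1 may be sketched as follows, assuming a
\texttt{detect} routine that implements the single-segment detection
process and a \texttt{dilate} routine that returns the dilated
segment points; both are placeholders for the actual implementation.

\begin{verbatim}
import numpy as np

def multi_detect(image, maxima, ab_ortho, detect, dilate):
    """Multi-detection sketch: maxima are assumed sorted by
    decreasing gradient magnitude; detect and dilate are assumed
    to be provided by the caller."""
    occupied = np.zeros(image.shape[:2], dtype=bool)  # occupancy mask
    segments = []
    for Mj in maxima:
        # Initial detection from Mj, orthogonally to the stroke AB;
        # points already marked as occupied are rejected.
        bs = detect(image, Mj, ab_ortho, occupied)
        if bs is not None:
            segments.append(bs)
            for (x, y) in dilate(bs):     # dilated segment points
                occupied[y, x] = True
    return segments
\end{verbatim}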
\subsection{Automatic blurred segment detection}

An unsupervised mode is also proposed to automatically detect all the
straight edges in the image. The principle of this automatic
detection is described in Algorithm 2. A stroke that crosses the
whole image is swept in both directions, vertical then horizontal,
from the center to the borders. At each position, the multi-detection
algorithm is run to collect all the segments found under the stroke.
In the present work, the stroke sweeping step $\delta$ is set to 10
pixels.

\input{Fig_method/algoAuto}

The automatic detection of blurred segments in a whole image can be
tested in an online demonstration at the following address: \\
\href{http://ipol-geometry.loria.fr/~kerautre/ipol_demo/AdaptDirBS_IPOLDemo}{
\small{\url{
http://ipol-geometry.loria.fr/~kerautre/ipol_demo/AdaptDirBS_IPOLDemo}}}
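Finally, the stroke sweep of Algorithm 2 may be sketched as follows,
assuming a \texttt{multi\_detect(image, A, B)} wrapper, returning a
list of segments, around the multi-detection of the previous section;
the sweep order and the helper names are illustrative.

\begin{verbatim}
def centered_steps(size, delta=10):
    """Stroke positions from the image center to both borders."""
    c = size // 2
    for k in range(c // delta + 1):
        yield c - k * delta
        if k > 0 and c + k * delta < size:
            yield c + k * delta

def auto_detect(image, multi_detect, delta=10):
    """Automatic detection sketch: full-image strokes are swept
    vertically then horizontally, running the multi-detection at
    each position."""
    h, w = image.shape[:2]
    segments = []
    for x in centered_steps(w, delta):    # vertical strokes
        segments += multi_detect(image, (x, 0), (x, h - 1))
    for y in centered_steps(h, delta):    # horizontal strokes
        segments += multi_detect(image, (0, y), (w - 1, y))
    return segments
\end{verbatim}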