\section{The detection method}

\label{sec:method}

\subsection{Workflow of the detection process}

The workflow of the blurred segment detection process is summarized
in \RefFig{fig:workflow}.

\begin{figure}[h]
\center
  \begin{picture}(340,34)(0,-4)
    %\put(0,-2.5){\framebox(340,35)}
    \put(-2,18){\scriptsize $(A,B)$}
    \put(-2,15){\vector(1,0){24}}
    \put(24,0){\framebox(56,30)}
    \put(24,16){\makebox(56,10){Initial}}
    \put(24,4){\makebox(56,10){detection}}
    \put(86,18){\scriptsize $\mathcal{B}$}
    \put(80,15){\vector(1,0){22}}
    %\put(102,0){\framebox(56,30)}
    \multiput(102,15)(28,9){2}{\line(3,-1){28}}
    \multiput(102,15)(28,-9){2}{\line(3,1){28}}
    \put(100,0){\makebox(60,30){Valid ?}}
    \put(133,-2){\scriptsize $\emptyset$}
    \put(130,6){\vector(0,-1){10}}
    \put(159,18){\scriptsize $(C,\vec{D})$}
    \put(158,15){\vector(1,0){28}}
    \put(186,0){\framebox(56,30)}
    \put(186,16){\makebox(56,10){Fine}}
    \put(186,4){\makebox(60,10){tracking}}
    \put(250,18){\scriptsize $\mathcal{B}'$}
    \put(242,15){\vector(1,0){24}}
    \put(266,0){\framebox(56,30){Filtering}}
    \put(330,18){\scriptsize $\mathcal{B}''$}
    \put(322,15){\vector(1,0){22}}
  \end{picture}
  \caption{The detection process main workflow.}
  \label{fig:workflow}
\end{figure}

The initial detection consists of building and extending a blurred segment
$\mathcal{B}$ based on the highest gradient points found in each scan
of a static directional scanner defined by an input segment $AB$.

Validity tests are then applied to decide whether the detection should
be pursued.
They aim at rejecting blurred segments that are too short or too sparse,
or whose orientation is too close to that of the input segment $AB$.
In case of positive response, the position $C$ and direction $\vec{D}$
of this initial blurred segment are extracted.

The fine tracking step consists of building and extending a blurred segment
$\mathcal{B}'$ based on points that correspond to local maxima of the
image gradient, ranked by decreasing magnitude, and whose gradient direction
is close to the reference gradient direction at the segment start point.
At this refinement step, a control of the assigned width is applied,
and an adaptive directional scanner based on the found position $C$ and
direction $\vec{D}$ is used in order to extend the segment in the
appropriate direction. These two improvements are described in the
following sections.

The output segment of the fine tracking step is finally filtered to remove
artifacts and outliers, and a final blurred segment $\mathcal{B}''$ is provided.
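The four-stage workflow can be sketched as follows on a toy data model.
All function bodies, names and thresholds here are illustrative placeholders
for the stages described above, not the detector's actual implementation,
which operates on directional scans of an image gradient map.

```python
def initial_detection(scans):
    # keep the highest-gradient point of each scan line
    return [max(scan, key=lambda p: p[1]) for scan in scans if scan]

def is_valid(segment, min_length=3):
    # reject too short segments (sparsity and orientation tests omitted)
    return len(segment) >= min_length

def fine_tracking(segment):
    # placeholder refinement: keep points at or above the mean magnitude
    mean = sum(g for _, g in segment) / len(segment)
    return [p for p in segment if p[1] >= mean]

def filter_outliers(segment):
    # placeholder for the final filtering stage
    return list(segment)

def detect(scans):
    B = initial_detection(scans)    # initial blurred segment B
    if not is_valid(B):
        return None                 # empty output: detection aborted
    B2 = fine_tracking(B)           # refined segment B'
    return filter_outliers(B2)      # final segment B''

# toy input: four scan lines of (position, gradient magnitude) points
scans = [[(0, 2.0), (0, 5.0)], [(1, 4.0)], [(2, 6.0)], [(3, 1.0)]]
result = detect(scans)
```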

\subsection{Adaptive directional scan}

The blurred segment is searched for within a directional scan whose position
and orientation are approximately provided by the user, or blindly defined
in unsupervised mode.
Most of the time, the detection stops where the segment escapes sideways
from the scan strip (\RefFig{fig:escape} a).
A second search is then run using another directional scan aligned
on the detected segment (\RefFig{fig:escape} b).
However, even in case of a correct detection, the estimated orientation
of the segment is subject to digitization rounding,
and the longer the real segment to detect, the higher the probability that
the blurred segment escapes again from the directional scan.

%Even in ideal situation where the detected segment is a perfect line,
%its width is never null as a result of the discretization process.
%The estimated direction accuracy is mostly constrained by the length of
%the detected segment.
%To avoid these side escapes, the scan should not be a linear strip but
%rather a conic shape to take into account the blurred segment preimage.
%This side shift is amplified in situations where the blurred segment is
%left free to get thicker in order to capture possible noisy features.
%The assigned width is then still greater than the detected minimal width,
%so that the segment can move within the directional scan.
%Knowing the detected blurred segment shape and the image size, it is
%possible to define a conic scan area, but this solution is computationaly
%expensive because it leads to useless exploration of large image areas.
%
%\begin{figure}[h]
%\center
%  %\begin{picture}(300,40)
%  %\end{picture}
%  \input{Fig_notions/bscone}
%  \caption{Possible extension area based
%           on the detected blurred segment preimage.}
%  \label{fig:cone}
%\end{figure}

To overcome this issue, in the former work, an additional refinement step
was run using the better orientation estimated from the longer detected
segment.
This is enough to completely detect most of the tested edges, but certainly
not all, especially when larger images with much longer edges are processed.
%The solution implemented in the former work was to let some arbitrary
%margin between the scan strip width and the assigned width to the detection,
%and to perform two fine detection steps, using for each of them the direction
%found at the former step.
As a solution, this operation could be iterated as long as the blurred
segment escapes from the directional scanner, using as many fine detection
steps as necessary.
But at each iteration, already tested points are processed again,
thus producing a useless computational cost.

Here the proposed solution is to dynamically align the scan direction on
that of the blurred segment all along the expansion stage.
At each iteration $i$, the scan strip is aligned on the direction of the
blurred segment $\mathcal{B}_{i-1}$ computed at previous iteration $i-1$.
More generally, an adaptive directional scan $ADS$ is defined by:
\begin{equation}
%S_i = \mathcal{D}_{i-1} \cap \mathcal{N}_i
ADS = \left\{
S_i = \mathcal{D}_i \cap \mathcal{N}_i \cap \mathcal{I}
\left| \begin{array}{l}
\delta(\mathcal{N}_i) = - \delta^{-1}(\mathcal{D}_0) \\
\wedge~ h_0(\mathcal{N}_i) = h_0(\mathcal{N}_{i-1}) + p(\mathcal{D}) \\
\wedge~ \mathcal{D}_{i} = \mathcal{D} (C_{i-1}, \vec{D}_{i-1}, w_{i-1}), i > 1
%\wedge~ \mathcal{D}_{i} = D (\mathcal{B}_{i-1},\varepsilon + k), i > 1
\end{array} \right. \right\}
\end{equation}
%where $D (\mathcal{B}_i,w)$ is the scan strip aligned to the
%detected segment at iteration $i$ with width $w$.
%In practice, the scan width is set a little greater than the assigned
%width $\varepsilon$ ($k$ is a constant arbitrarily set to 4).
where $C_{i-1}$, $\vec{D}_{i-1}$ and $w_{i-1}$ are respectively a position,
a direction vector and a width observed at iteration $i-1$.
In the scope of the present detector, $C_{i-1}$ is the intersection of
the input selection and the medial axis of $\mathcal{B}_{i-1}$,
$\vec{D}_{i-1}$ the support vector of the narrowest digital straight line 
that contains $\mathcal{B}_{i-1}$,
and $w_{i-1}$ a value slightly greater than the minimal width of
$\mathcal{B}_{i-1}$.
So the last clause expresses the update of the scan bounds at iteration $i$.
Compared to static directional scans, the scan strip moves while
scan lines remain fixed.
This behavior ensures a complete detection of the blurred segment even
when the orientation is badly estimated (\RefFig{fig:escape} c).
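The per-iteration update can be sketched in Python on a simplified 2D point
model. The least-squares estimate, the margin value and all names are
illustrative assumptions standing in for the detector's actual geometry,
which relies on digital straight line recognition rather than regression.

```python
def segment_estimate(points):
    """Crude position / direction / width estimate of a point set,
    via least squares on y = a x + b; width is the residual spread."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    a = sxy / sxx if sxx else 0.0
    residuals = [y - (my + a * (x - mx)) for x, y in points]
    width = max(residuals) - min(residuals)
    return (mx, my), (1.0, a), width          # C, vec D, minimal width

def next_scan_strip(points, margin=1.0):
    """Strip for iteration i, aligned on the segment of iteration i-1;
    its width is taken slightly greater than the minimal width."""
    C, D, w = segment_estimate(points)
    return C, D, w + margin

# collinear toy points on y = x / 2: the strip aligns on that direction
C, D, w = next_scan_strip([(0, 0.0), (2, 1.0), (4, 2.0)])
```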

\begin{figure}[h]
\center
  \begin{tabular}{c@{\hspace{0.2cm}}c}
    \includegraphics[width=0.48\textwidth]{Fig_notions/escapeFirst_zoom.png} &
    \includegraphics[width=0.48\textwidth]{Fig_notions/escapeSecond_zoom.png} \\
    \multicolumn{2}{c}{
    \includegraphics[width=0.78\textwidth]{Fig_notions/escapeThird_zoom.png}}
    \begin{picture}(1,1)(0,0)
      {\color{dwhite}{
        \put(-260,108.5){\circle*{8}}
        \put(-86,108.5){\circle*{8}}
        \put(-172,7.5){\circle*{8}}
      }}
      \put(-263,106){a}
      \put(-89,106){b}
      \put(-175,5){c}
    \end{picture}
  \end{tabular}
  \caption{Aborted detections on side escapes of static directional scans
           and successful detection using an adaptive directional scan.
           The last points added to the left of the blurred segment during
           the initial detection (a) lead to a bad estimation of its
           orientation, and thus to an incomplete fine detection with
           a classical directional scanner (b). This scanner is
           advantageously replaced by an adaptive directional scanner
           able to continue the segment expansion as far as necessary (c).
The input selection is drawn in red, the scan strip bounds
in blue and the detected blurred segment in green.}
  \label{fig:escape}
\end{figure}

%\begin{figure}[h]
%\center
%  \begin{tabular}{c@{\hspace{0.2cm}}c}
%    \includegraphics[width=0.49\textwidth]{Fig_notions/adaptionBounds_zoom.png}
%    & \includegraphics[width=0.49\textwidth]{Fig_notions/adaptionLines_zoom.png}
%  \end{tabular}
%  \caption{Example of blurred segment detection
%           using an adaptive directional scan.
%           On the right picture, the scan bounds are displayed in red, the
%           detected blurred segment in blue, and its bounding lines in green.
%           The left picture displays the successive scans.
%           Here the adaption is visible at the crossing of the tile joins.}
%  \label{fig:adaption}
%\end{figure}

\subsection{Control of the assigned width}

The assigned width $\varepsilon$ provided to the blurred segment recognition
algorithm is initially set to a large value $\varepsilon_0$ in order to allow
the detection of large blurred segments.
Then, when no more growth of the minimal width is observed after
$\lambda$ iterations ($\mu_{i+\lambda} = \mu_i$), it is set to a much
stricter value able to circumscribe the possible interpretations of the
segment, taking into account the digitization margins:
\begin{equation}
\varepsilon = \mu_{i+\lambda} + \frac{\textstyle 1}{\textstyle 2}
\end{equation}
This strategy aims at preventing the incorporation of spurious outliers in
further parts of the segment.
Setting the observation distance to a constant value $\lambda = 20$ proved
appropriate in most of the experimented situations.
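This control rule can be sketched as a small function of the minimal-width
history. The stability test and the stateless formulation are simplifying
assumptions; in the detector the clamp is applied once during the
incremental recognition.

```python
def assigned_width(mu_history, eps0, lam=20):
    """Assigned width given the history of minimal widths mu_i.
    Starts at the large value eps0; once mu has been stable over
    lam iterations, clamps to mu_{i+lam} + 1/2 (digitization margin)."""
    i = len(mu_history) - 1 - lam
    if i >= 0 and mu_history[i + lam] == mu_history[i]:
        return mu_history[i + lam] + 0.5   # eps = mu_{i+lam} + 1/2
    return eps0                            # still in the wide initial phase
```

For instance, with a minimal width stable at 1.5 over 20 iterations, the
assigned width drops from $\varepsilon_0$ to 2.0.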

\subsection{Supervised blurred segment detection}

In supervised context, the user draws an input stroke across the specific
edge to be extracted from the image.
The detection method previously described is continuously run during mouse
dragging and the output blurred segment is displayed on-the-fly.

The method is quite sensitive to the local conditions of the initial
detection, so that the output blurred segment may be quite unstable.
In order to temper this undesirable behavior for interactive applications,
the initial detection can optionally be run twice, the second fast scan
being aligned on the output of the first detection.
This strategy provides a first quick analysis of the local context before
extracting the segment and notably contributes to stabilizing the overall
process.

When selecting candidates for the fine detection stage, an option is left
to also reject points with a gradient vector in an opposite direction to
the gradient vector of the blurred segment start point.
In that case, called {\it edge selection mode}, all the blurred segment
points have gradient vectors in the same direction.
If they are not rejected, points with opposite gradients are aggregated
into a same blurred segment, allowing the detection of the two opposite
edges of a thin straight object. This is called {\it line selection mode}.
This distinction is illustrated on \RefFig{fig:edgeDir}.
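The candidate filter distinguishing the two modes can be sketched with a
dot-product test on 2D gradient vectors. The zero tolerance on the angle
(sign of the dot product) is an illustrative simplification of the
direction-proximity test described above.

```python
def keep_candidate(g, g_ref, edge_mode):
    """Accept a candidate point with gradient g, given the reference
    gradient g_ref at the segment start point."""
    dot = g[0] * g_ref[0] + g[1] * g_ref[1]
    if edge_mode:
        return dot > 0      # same direction only: follow a single edge
    return abs(dot) > 0     # same or opposite: both edges of a thin line
```

In line selection mode the opposite gradient is kept, so the two edges of a
thin object are aggregated into the same blurred segment.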

\begin{figure}[h]
\center
  \begin{tabular}{c@{\hspace{0.2cm}}c}
    \includegraphics[width=0.48\textwidth]{Fig_method/briques1_zoom.png} &
    \includegraphics[width=0.48\textwidth]{Fig_method/briques2_zoom.png}
  \end{tabular}
  \begin{picture}(1,1)(0,0)
    {\color{dwhite}{
      \put(-260,-17.5){\circle*{8}}
      \put(-86,-17.5){\circle*{8}}
    }}
    \put(-262.5,-20){a}
    \put(-89,-20){b}
  \end{picture}
  \caption{Blurred segments obtained in line or edge selection mode
           as a result of the gradient direction filtering when adding points.
           In line selection mode (a), a thick blurred segment is built and
           extended all along the brick join.
           In edge selection mode (b), a thin blurred segment is built along
           one of the two join edges.
           Both join edges are detected with the multi-selection option.
           On that very textured image, they are much shorter than the whole
           join detected in line selection mode.
           Blurred segment points are drawn in black color, and the enclosing
           straight segment with minimal width in blue.}
  \label{fig:edgeDir}
\end{figure}

Another option, called {\it multi-detection}, allows the detection of all
the segments crossed by the input stroke $AB$.
The multi-detection algorithm (Algorithm 1) is displayed below.

\input{Fig_method/algoMulti}

First the positions $M_j$ of the local maxima of the gradient magnitude found
under the stroke are sorted from the highest to the lowest.
For each of them the main detection process is run with three modifications:
i) the initial detection takes $M_j$ and the orthogonal direction $AB_\perp$
to the stroke as input to build a static scan of fixed width
$\varepsilon_{ini}$, and $M_j$ is used as start point of the blurred segment;
ii) an occupancy mask, initially empty, is filled in with the points of the
detected blurred segments $\mathcal{B}_j''$ at the end of each successful
detection;
iii) points marked as occupied are rejected when selecting candidates for the
blurred segment extension in the fine tracking step.
Multiple detections of the same edge are thus avoided.
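The loop over the sorted maxima and the occupancy mask (modifications ii and
iii) can be sketched as follows. Here `detect_from` stands in for the whole
per-seed detection of Algorithm 1; the set-based mask is an illustrative
substitute for the image-sized occupancy mask.

```python
def multi_detection(maxima, detect_from):
    """maxima: list of (point, gradient magnitude) pairs found under the
    stroke; detect_from(p, occupied) returns the point set of the blurred
    segment detected from seed p, occupied points being rejected."""
    occupied = set()                        # occupancy mask, initially empty
    segments = []
    # process the gradient maxima from the highest to the lowest
    for p, _ in sorted(maxima, key=lambda m: -m[1]):
        if p in occupied:
            continue                        # seed already part of a detection
        seg = detect_from(p, occupied)
        if seg:
            segments.append(seg)
            occupied |= seg                 # mark the detected points
    return segments

# stub detection: each seed yields itself and its right neighbour
maxima = [(0, 5.0), (1, 4.0), (3, 3.0)]
segs = multi_detection(maxima, lambda p, occ: {p, p + 1} - occ)
```

The seed at position 1 is skipped because it was absorbed by the first
detection, so the same edge is not reported twice.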

In edge selection mode (\RefFig{fig:edgeDir} b), the multi-detection
algorithm is executed twice.
In the second run, the start point is rejected and only candidate points
with opposite gradient direction are considered to extend the blurred
segment.

%Beyond the possible detection of a large set of edges at once, the
%multi-detection allows the detection of some unaccessible edges in
%classical single detection mode. This is particularly the case of edges
%that are quite close to a more salient edge with a higher gradient,
%as illustrated in \RefFig{fig:voisins}.
%The multi-detection detects both edges and the user may then select
%the awaited one.
%
%\begin{figure}[h]
%\center
%  \begin{tabular}{c@{\hspace{0.2cm}}c@{\hspace{0.2cm}}c@{\hspace{0.2cm}}c}
%    \includegraphics[width=0.22\textwidth]{Fig_method/voisinImage_zoom.png} &
%    \includegraphics[width=0.22\textwidth]{Fig_method/voisinGrad_zoom.png} &
%    \includegraphics[width=0.22\textwidth]{Fig_method/voisinSingle_zoom.png} &
%    \includegraphics[width=0.22\textwidth]{Fig_method/voisinMulti_zoom.png} \\
%    \parbox{0.22\textwidth}{\centering{\scriptsize{a)}}} &
%    \parbox{0.22\textwidth}{\centering{\scriptsize{b)}}} &
%    \parbox{0.22\textwidth}{\centering{\scriptsize{c)}}} &
%    \parbox{0.22\textwidth}{\centering{\scriptsize{d)}}}
%  \end{tabular}
%  \caption{Detection of close edges with different sharpness:
%    a) input selection across the edges,
%    b) gradient map,
%    c) in single mode, detection of only the edge with the higher gradient,
%    d) in multi-detection mode, detection of both edges. }
%  \label{fig:voisins}
%\end{figure}

%This detection procedure can be used to detect as well straight edges
%as thin straight objects. In the first case, the gradient vectors of all
%edge points are assumed to be oriented in the same direction. But if the
%sign of the gradient direction is not considered, points with gradient in
%opposite directions are merged to build the same blurred segment, allowing
%the detection of both edges of a thin linear structure, like for instance
%the tile joins of \RefFig{fig:edgeDir}.

%On that example, when a straight feature detection is run
%(\RefFig{fig:edgeDir} a),
%a thick blurred segment which extends up to four tiles is provided.
%When a straight edge detection is run, a very thin blurred segment is
%built to follow only one join edge.
%The multi-detection can also be applied to both thin object or edge detection.
%In the latter case, the detection algorithm is run twice using opposite
%directions, so that in the exemple of figure (\RefFig{fig:edgeDir} b),
%both edges (in different colors) are highlighted.
%These two thin blurred segments are much shorter, probably because the
%tiles are not perfectly aligned.
%This example illustrates the versatility of the new detector.

\subsection{Automatic blurred segment detection}

An unsupervised mode is also proposed to automatically detect all the
straight edges in the image. The principle of this automatic detection
is described in Algorithm 2. A stroke that crosses the whole image is
swept in both directions, vertical then horizontal, from the center to
the borders. At each position, the multi-detection algorithm is run
to collect all the segments found under the stroke.
\input{Fig_method/algoAuto}
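The center-to-borders sweep of Algorithm 2 can be sketched as follows;
the sweep step, the image bounds and the `multi_detect` callback are
illustrative placeholders.

```python
def sweep_positions(size, step):
    """Positions from the center towards both borders: c, c-step, c+step, ..."""
    c = size // 2
    out = [c]
    d = step
    while c - d >= 0 or c + d < size:
        if c - d >= 0:
            out.append(c - d)
        if c + d < size:
            out.append(c + d)
        d += step
    return out

def auto_detect(width, height, step, multi_detect):
    """Run the multi-detection on full-image strokes, swept vertically
    (horizontal strokes) then horizontally (vertical strokes)."""
    segments = []
    for y in sweep_positions(height, step):
        segments += multi_detect(((0, y), (width - 1, y)))
    for x in sweep_positions(width, step):
        segments += multi_detect(((x, 0), (x, height - 1)))
    return segments
```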

The behavior of the unsupervised detection is depicted through the two
examples of \RefFig{fig:auto}.
The example on the left shows the detection of thin straight objects on a
circle with variable width.
On the left half of the circumference, the distance between both edges
exceeds the initial assigned width, and a thick blurred segment is built
for each of them. Of course, on a curve, a continuous thickening is
observed until the blurred segment minimal width reaches the initial
assigned width.
On the right half, both edges are encompassed in a common blurred segment,
and at the extreme right part of the circle, the few distant residual points
are grouped to form a thick segment.

The example on the right shows the limits of the edge detector on a picture
with a quite dense gradient distribution.
All the salient edges are well detected, but they are surrounded by many
false detections, due to the presence of many local maxima of
the gradient magnitude with similar orientations.

\begin{figure}[h]
\center
  \begin{tabular}{c@{\hspace{0.2cm}}c}
    \includegraphics[width=0.37\textwidth]{Fig_method/vcercleAuto.png} &
    \includegraphics[width=0.58\textwidth]{Fig_method/plafondAuto.png}
  \end{tabular}
  \caption{Automatic detection of blurred segments.}
  \label{fig:auto}
\end{figure}