The workflow of the detection process is summarized in the following figure.
The initial detection consists in building and extending a blurred segment
$\mathcal{B}$ based on the highest gradient points found in each scan
of a static directional scan defined by the input stroke $AB$.
Validity tests are then applied to decide whether the detection should
continue. They aim at rejecting blurred segments that are too short or too
sparse, or whose orientation is too close to that of the input stroke $AB$.
In case of a positive response, the position $C$ and direction $\vec{D}$
of this initial blurred segment are extracted.
In the fine tracking step, another blurred segment $\mathcal{B}'$ is built
and extended with points that correspond to local maxima of the
image gradient, ranked by magnitude, and whose gradient direction is
close to a reference gradient direction at the first point of the segment.
At this refinement step, a control of the assigned width is applied,
and an adaptive directional scan based on the found position $C$ and
direction $\vec{D}$ is used to extend the segment in the
appropriate direction. These two improvements are described in the
following sections.
The output segment $\mathcal{B}'$ is finally tested according to the
application needs. Too short, too sparse or too fragmented segments
can be rejected. Length, sparsity or fragmentation thresholds are
intuitive parameters left at the end user's disposal.
%None of these tests are activated for the experimental stage in order
%to put forward achievable performance.
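As an illustration, the validity tests of the initial detection can be sketched as follows. This is a minimal sketch: the thresholds, the point representation and the function name are assumptions made for the illustration, not the detector's actual interface.

```python
import math

# Sketch of the validity tests applied to the initial blurred segment:
# reject segments that are too short, too sparse, or nearly parallel to
# the input stroke AB. Thresholds are illustrative assumptions; sparsity
# is measured here along the x-extent of the segment for simplicity.

def is_valid_initial_segment(points, stroke_dir, seg_dir,
                             min_size=8, min_density=0.5,
                             min_angle_deg=10.0):
    if len(points) < min_size:
        return False                    # too short
    span = max(p[0] for p in points) - min(p[0] for p in points) + 1
    if len(points) / max(span, 1) < min_density:
        return False                    # too sparse
    # angle between the segment direction and the stroke AB
    cross = stroke_dir[0] * seg_dir[1] - stroke_dir[1] * seg_dir[0]
    dot = stroke_dir[0] * seg_dir[0] + stroke_dir[1] * seg_dir[1]
    angle = abs(math.degrees(math.atan2(cross, dot)))
    if min(angle, 180.0 - angle) < min_angle_deg:
        return False                    # nearly parallel to the stroke
    return True
```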
\subsection{Adaptive directional scan}
The blurred segment is searched within a directional scan with a position
and an orientation approximately provided by the user, or blindly defined
in unsupervised mode.
Most of the time, the detection stops where the segment escapes sideways
from the scan strip (\RefFig{fig:escape} a).
A second search is then run using another directional scan aligned
on the detected segment (\RefFig{fig:escape} b).
In the given example, an outlier added to the initial segment leads to a
wrong orientation value.
But even in case of a correct detection, this estimated orientation is
subject to digitization rounding, and the longer the real segment is,
the higher the probability of failing again on an escape from the scan strip.
\begin{figure}[h]
\center
\begin{tabular}{c@{\hspace{0.2cm}}c}
\includegraphics[width=0.48\textwidth]{Fig_notions/escapeLightFirst_zoom.png} &
\includegraphics[width=0.48\textwidth]{Fig_notions/escapeLightSecond_zoom.png} \\
\multicolumn{2}{c}{
\includegraphics[width=0.62\textwidth]{Fig_notions/escapeLightThird_zoom.png}}
\end{tabular}
\begin{picture}(1,1)(0,0)
{\color{dwhite}{
\put(-260,78.5){\circle*{8}}
\put(-86,78.5){\circle*{8}}
\put(-172,4.5){\circle*{8}}
}}
\put(-262.5,76){a}
\put(-89,75.5){b}
\put(-174.5,2){c}
\end{picture}
\caption{Aborted detections on side escapes of static directional scans
         and successful detection using an adaptive directional scan.
         The last points added to the left of the blurred segment during
         the initial detection (a) lead to a bad estimation of its
         orientation, and thus to an incomplete fine tracking with a
         classical directional scan (b). An adaptive directional scan in
         place of the static one makes it possible to continue the segment
         expansion as far as necessary (c).
         On the pictures, the input selection is drawn in red,
         the scan strip bounds
         in blue and the detected blurred segment in green.}
\label{fig:escape}
\end{figure}
To overcome this issue, in the former work, an additional refinement step was
run in the direction estimated from this longer segment.
This is enough to completely detect most of the tested edges, but certainly
not all of them, especially when large images with much longer edges are
processed.
As a solution, this operation could be iterated as long as the blurred
segment escapes from the directional scan, using as many fine detection
steps as necessary.
But at each iteration, already tested points are processed again,
thus producing a useless computational cost.
Here the proposed solution is to dynamically align the scan direction on
the detected segment: at each iteration $i$ of the expansion, the scan
strip is aligned on the direction of the blurred segment $\mathcal{B}_{i-1}$
computed at the previous iteration $i-1$.
More generally, an adaptive directional scan $ADS$ is defined by:
\begin{equation}
ADS = \left\{
S_i = \mathcal{D}_i \cap \mathcal{N}_i \cap \mathcal{I}
\left| \begin{array}{l}
\delta(\mathcal{N}_i) = - \delta^{-1}(\mathcal{D}_0) \\
\wedge~ h(\mathcal{N}_i) = h(\mathcal{N}_{i-1}) + p(\mathcal{D}_0) \\
\wedge~ \mathcal{D}_{i} = \mathcal{D} (C_{i-1}, \vec{D}_{i-1}, w_{i-1}),
~ i > \lambda
\end{array} \right. \right\}
\end{equation}
where $C_{i}$, $\vec{D}_{i}$ and $w_{i}$ are respectively a position,
a director vector and a width observed at iteration $i$.
In the scope of the present detector, $C_{i-1}$ is the intersection of
the input selection and the medial axis of $\mathcal{B}_{i-1}$,
$\vec{D}_{i-1}$ the support vector of the enclosing digital segment
$E(\mathcal{B}_{i-1})$, and $w_{i-1}$ a value slightly greater than the
minimal width of $\mathcal{B}_{i-1}$.
So the last clause expresses the update of the scan bounds at iteration $i$.
Compared to static directional scans, where the scan strip remains fixed to
the initial line $\mathcal{D}_0$, here the scan strip moves while tracking
the detected blurred segment.
This behavior ensures a complete detection of the blurred segment even when
the orientation of $\mathcal{D}_0$ is badly estimated (\RefFig{fig:escape} c).
In practice, this realignment is started after $\lambda = 20$ iterations,
when the observed direction becomes more stable.
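Under the assumption that the per-iteration estimates $(C_i, \vec{D}_i, w_i)$ are available, the realignment rule can be sketched as follows; all names in this sketch are illustrative, not the detector's actual code.

```python
# Sketch of the adaptive realignment rule: the scan strip stays fixed
# to the initial line D0 for the first LAMBDA iterations, then is
# re-aligned at each iteration i on the estimate (C, D, w) of the
# blurred segment B_{i-1} computed at the previous iteration.

LAMBDA = 20  # iterations before the realignment starts

def scan_lines(estimates, d0):
    """estimates[i] = (C_i, D_i, w_i); returns the line used at each i."""
    lines = []
    for i in range(len(estimates)):
        if i <= LAMBDA:
            lines.append(d0)                # static phase: strip on D0
        else:
            lines.append(estimates[i - 1])  # align on B_{i-1}
    return lines
```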
\subsection{Control of the assigned width}
The width $\varepsilon$ assigned to the blurred segment recognition algorithm
is initially set to a large value $\varepsilon_0$ in order to allow the
detection of large blurred segments.
Then, when no more growth of the minimal width is observed after
$\tau$ iterations ($\mu_{i+\tau} = \mu_i$), the assigned width is set to a
much stricter value, able to circumscribe the possible interpretations of
the segment, that takes the digitization margins into account:
\begin{equation}
\varepsilon = \mu_{i+\tau} + \frac{\textstyle 1}{\textstyle 2}
\end{equation}
This strategy aims at preventing the incorporation of spurious outliers in
further parts of the segment.
Setting the observation distance to a constant value $\tau = 20$ seems
appropriate in practice.
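This width-control rule can be sketched as follows; the value of $\varepsilon_0$ and the scheduling details are assumptions made for the sketch.

```python
# Sketch of the assigned-width control: the recognition starts with a
# large width EPSILON_0; once the minimal width mu has not grown over
# TAU iterations, the assigned width is clamped to mu + 1/2 to account
# for the digitization margins.

TAU = 20          # observation distance
EPSILON_0 = 6.0   # illustrative initial assigned width

def assigned_width(mu_history):
    """mu_history[i] is the minimal width observed at iteration i."""
    i = len(mu_history) - 1
    if i >= TAU and mu_history[i] == mu_history[i - TAU]:
        return mu_history[i] + 0.5      # epsilon = mu_{i+tau} + 1/2
    return EPSILON_0
```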
\subsection{Supervised blurred segment detection}
In the supervised context, the user draws an input stroke across the
specific edge to be extracted from the image.
The detection method previously described is continuously run during mouse
dragging and the output blurred segment is displayed on the fly.
The method is quite sensitive to the local conditions of the initial
detection, so that the output blurred segment may be quite unstable.
In order to temper this undesirable behavior for interactive applications,
the initial detection can optionally be run twice, the second fast scan being
aligned on the first detection output.
This strategy provides a first quick analysis of the local context before
extracting the segment and notably stabilizes the overall process.
When selecting candidates for the fine detection stage, an option, called
{\it edge selection mode}, allows the points to be additionally filtered
according to their gradient direction.
In {\it main edge selection mode}, only the points with a gradient vector
in the same direction as the start point gradient vector are added to the
blurred segment.
In {\it opposite edge selection mode}, only the points with an opposite
gradient vector direction are kept.
In {\it line selection mode}, this direction-based filter is not applied,
and all the candidate points are aggregated into the same blurred segment,
whatever the direction of their gradient vector.
As illustrated on \RefFig{fig:edgeDir}, this mode allows the detection of
the two opposite edges of a thin straight object.
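The direction-based filter behind these modes can be sketched as follows; the mode names mirror the text, but the exact acceptance test is an assumption of the sketch.

```python
# Sketch of the gradient-direction filter of the selection modes: a
# candidate point is kept or rejected from the sign of the scalar
# product of its gradient with the reference gradient at the segment
# start point.

def keep_candidate(grad, ref_grad, mode):
    """mode is 'line', 'main' or 'opposite'."""
    if mode == 'line':
        return True                     # no gradient-direction filter
    dot = grad[0] * ref_grad[0] + grad[1] * ref_grad[1]
    if mode == 'main':
        return dot > 0                  # same direction as the reference
    if mode == 'opposite':
        return dot < 0                  # opposite direction
    raise ValueError(mode)
```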
\begin{figure}[h]
\center
\begin{tabular}{c@{\hspace{0.2cm}}c}
\includegraphics[width=0.4\textwidth]{Fig_method/selectLine_zoom.png} &
\includegraphics[width=0.4\textwidth]{Fig_method/selectEdges_zoom.png}
\end{tabular}
\begin{picture}(1,1)(0,0)
{\color{dwhite}{
\put(-220,-14.5){\circle*{8}}
\put(-74,-14.5){\circle*{8}}
}}
\put(-222.5,-17){a}
\put(-76.5,-17){b}
\end{picture}
\caption{Blurred segments obtained in \textit{line} or \textit{edge
         selection mode} as a result of the gradient direction filtering
         when adding points.
         In \textit{line selection mode} (a), a thick blurred segment is
         built and extended all along the brick join.
         In \textit{edge selection mode} (b), a thin blurred segment is
         built along one of the two join edges.
         Both join edges are detected with the \textit{multi-selection}
         option.
         On this highly textured image, they are much shorter than the
         whole join detected in line selection mode.
         Blurred segment points are drawn in black, and the enclosing
         digital segment bounds in blue.}
\label{fig:edgeDir}
\end{figure}
\subsection{Multiple blurred segments detection}
Another option, called {\it multi-detection} (Algorithm 1), allows the
detection of all the segments crossed by the input stroke $AB$.
In order to avoid multiple detections of the same edge, an occupancy mask,
initially empty, collects the dilated points of all the blurred segments,
so that these points cannot be added to another segment.
First the positions $M_j$ of the prominent local maxima of the gradient
magnitude found under the stroke are sorted from the highest to the lowest.
For each of them, the main detection process is run with three modifications:
\begin{enumerate}
\item the initial detection takes $M_j$ and the direction $\vec{AB}_\perp$
orthogonal to the stroke as input to build a static scan of fixed width
$2~\varepsilon_{ini}$, and $M_j$ is used as start point of the blurred
segment;
\item the occupancy mask is filled in with the points of the dilated blurred
segments $\mathcal{B}_j''$ at the end of each successful detection;
\item points marked as occupied are rejected when selecting candidates for
the fine tracking step.
\end{enumerate}
In edge selection mode (\RefFig{fig:edgeDir} b), the multi-detection
algorithm is executed twice, first in main edge selection mode, then
in opposite edge selection mode.
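The multi-detection loop can be sketched as follows; `detect_from` and `dilate` stand in for the detector's internals and are assumptions of the sketch.

```python
# Sketch of the multi-detection loop (Algorithm 1): gradient maxima
# found under the stroke are processed by decreasing magnitude, and an
# occupancy mask prevents detecting the same edge twice.

def multi_detection(maxima, detect_from, dilate):
    """maxima: (magnitude, position) pairs of gradient maxima."""
    occupied = set()                     # occupancy mask, initially empty
    segments = []
    for _, m in sorted(maxima, reverse=True):  # highest gradient first
        if m in occupied:
            continue                     # this edge is already detected
        segment = detect_from(m, occupied)     # rejects occupied points
        if segment:
            segments.append(segment)
            occupied |= dilate(segment)  # mark the dilated points
    return segments
```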
An unsupervised mode is also proposed to automatically detect all the
straight edges in the image. The principle of this automatic detection
is described in Algorithm 2. A stroke that crosses the whole image is
swept in both directions, vertical then horizontal, from the center to
the borders. At each position, the multi-detection algorithm is run
to collect all the segments found under the stroke.
In the present work, the stroke sweeping step $\delta$ is set to 10 pixels.
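The sweep can be sketched as follows; `run_multi_detection` is a placeholder for the multi-detection described above, and the enumeration details are assumptions of the sketch.

```python
# Sketch of the unsupervised sweep (Algorithm 2): stroke positions are
# enumerated from the image center towards the borders with step DELTA,
# in both sweep directions, and the multi-detection is run at each one.

DELTA = 10  # stroke sweeping step, in pixels

def sweep_positions(size):
    """Center-to-borders sequence of stroke positions along one axis."""
    center = size // 2
    positions = [center]
    k = DELTA
    while center - k >= 0 or center + k < size:
        if center + k < size:
            positions.append(center + k)
        if center - k >= 0:
            positions.append(center - k)
        k += DELTA
    return positions

def unsupervised_detection(width, height, run_multi_detection):
    segments = []
    for x in sweep_positions(width):    # vertical strokes first
        segments += run_multi_detection('vertical', x)
    for y in sweep_positions(height):   # then horizontal strokes
        segments += run_multi_detection('horizontal', y)
    return segments
```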
The automatic detection of blurred segments in a whole image can be tested
through an online demonstration and \textit{GitHub} source code at the
following address: \\
\href{http://ipol-geometry.loria.fr/~kerautre/ipol_demo/AdaptDirBS_IPOLDemo}{
\small{\url{http://ipol-geometry.loria.fr/~kerautre/ipol_demo/AdaptDirBS_IPOLDemo}}}
%The behavior of the unsupervised detection is depicted through the two
%examples of \RefFig{fig:auto}.
%The example on the left shows the detection of thin straight objects on a
%circle with variable width.
%On the left half of the circumference, the distance between both edges
%exceeds the initial assigned width and a thick blurred segment is build
%for each of them. Of course, on a curve, a continuous thickenning is
%observed untill the blurred segment minimal width reaches the initial
%assigned width.
%On the right half, both edges are encompassed in a common blurred segment,
%and at the extreme right part of the circle, the few distant residual points
%are grouped to form a thick segment.
%
%The example on the right shows the limits of the edge detector on a picture
%with quite dense repartition of gradient.
%All the salient edges are well detected but they are surrounded by a lot
%of false detections, that rely on the presence of many local maxima of
%the gradient magnitude with similar orientations.
%
%\begin{figure}[h]
%\center
% \begin{tabular}{c@{\hspace{0.2cm}}c}
% \includegraphics[width=0.37\textwidth]{Fig_method/vcercleAuto.png} &
% \includegraphics[width=0.58\textwidth]{Fig_method/plafondAuto.png}
% \end{tabular}
% \caption{Automatic detection of blurred segments.}
% \label{fig:auto}
%\end{figure}