The workflow of the blurred segment detection process is summarized in \RefFig{fig:workflow}.
\begin{figure}[h]
\center
\begin{picture}(340,34)(0,-4)
%\put(0,-2.5){\framebox(340,35)}
\put(-2,18){\scriptsize $(A,B)$}
\put(-2,15){\vector(1,0){24}}
\put(24,0){\framebox(56,30)}
\put(24,16){\makebox(56,10){Initial}}
\put(24,4){\makebox(56,10){detection}}
\put(80,15){\vector(1,0){22}}
%\put(102,0){\framebox(56,30)}
\multiput(102,15)(28,9){2}{\line(3,-1){28}}
\multiput(102,15)(28,-9){2}{\line(3,1){28}}
\put(100,0){\makebox(60,30){Valid ?}}
\put(130,6){\vector(0,-1){10}}
\put(159,18){\scriptsize $(C,\vec{D})$}
\put(158,15){\vector(1,0){28}}
\put(186,0){\framebox(56,30)}
\put(186,16){\makebox(56,10){Fine}}
\put(186,4){\makebox(60,10){tracking}}
\put(242,15){\vector(1,0){24}}
\put(266,0){\framebox(56,30){Filtering}}
\end{picture}
\caption{Workflow of the blurred segment detection process.}
\label{fig:workflow}
\end{figure}
The initial detection consists in building and extending a blurred segment
$\mathcal{B}_1$ based on the highest gradient points found in each scan
of a static directional scanner aligned on an input segment $AB$.
Validity tests based on the length or sparsity of $\mathcal{B}_1$ are
then applied to decide whether the detection should be pursued.
In case of a positive response, the position $C$ and direction $\vec{D}$
of this initial blurred segment are extracted.
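As an illustration only, the following Python sketch mimics this step with a
naive directional scanner and a simplified width test standing in for the
actual incremental blurred segment recognition:
\begin{verbatim}
import numpy as np

def width_of(points):
    # Width of the point set across its principal direction: a simplified
    # stand-in for the true blurred segment minimal width.
    if len(points) < 3:
        return 0.0
    pts = np.asarray(points, dtype=float)
    centred = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    offsets = centred @ vt[1]        # coordinates across the fitted direction
    return offsets.max() - offsets.min()

def initial_detection(grad_mag, A, B, assigned_width=3.0):
    # For each scan of a (naive) directional scanner aligned on the input
    # stroke AB, keep the pixel with the highest gradient magnitude, as long
    # as the collected points stay within the assigned width.
    A, B = np.asarray(A, dtype=float), np.asarray(B, dtype=float)
    u = (B - A) / np.linalg.norm(B - A)          # stroke direction
    n = np.array([-u[1], u[0]])                  # scan (orthogonal) direction
    points = []
    for t in np.arange(0.0, np.linalg.norm(B - A), 1.0):
        scan = [np.round(A + t * u + s * n).astype(int)
                for s in np.arange(-assigned_width, assigned_width + 1.0)]
        scan = [p for p in scan if 0 <= p[0] < grad_mag.shape[1]
                and 0 <= p[1] < grad_mag.shape[0]]
        if not scan:
            continue
        best = max(scan, key=lambda p: grad_mag[p[1], p[0]])
        if width_of(points + [best]) <= assigned_width:
            points.append(best)
    return points
\end{verbatim}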
The fine tracking step consists in building and extending a blurred segment
$\mathcal{B}_2$ based on points that correspond to local maxima of the
image gradient, ranked by magnitude, and whose gradient direction is
close to a reference gradient direction taken at the segment first point.
This step uses an adaptive directional scanner based on the found
position $C$ and direction $\vec{D}$ in order to extend the segment in the
appropriate direction.
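A minimal sketch of this candidate selection (assuming precomputed gradient
component maps and an illustrative angular tolerance; the adaptive scanner
itself is not reproduced here) could be:
\begin{verbatim}
import numpy as np

def select_candidates(scan, grad_x, grad_y, ref_dir, angle_tol=0.5):
    # Local maxima of the gradient magnitude along one scan, ranked by
    # decreasing magnitude, and kept only when their gradient direction is
    # close to the reference direction (gradient at the segment first point).
    mags = np.array([np.hypot(grad_x[y, x], grad_y[y, x]) for x, y in scan])
    ref = np.asarray(ref_dir, dtype=float)
    ref = ref / np.linalg.norm(ref)
    candidates = []
    for i in range(1, len(scan) - 1):
        if mags[i] >= mags[i - 1] and mags[i] >= mags[i + 1]:  # local maximum
            x, y = scan[i]
            g = np.array([grad_x[y, x], grad_y[y, x]])
            norm = np.linalg.norm(g)
            if norm > 0 and np.arccos(
                    np.clip(g @ ref / norm, -1.0, 1.0)) < angle_tol:
                candidates.append((mags[i], (x, y)))
    candidates.sort(key=lambda c: -c[0])    # strongest candidates tried first
    return [p for _, p in candidates]
\end{verbatim}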
After $N$ points have been added without any increase of the segment minimal
width, this width becomes the new assigned width, so that the segment
cannot thicken any further. This procedure allows controlling the blurred
segment width based on the observation of its evolution in the vicinity
of the input stroke.
Setting $N=20$ gives a good behaviour on the tested images.
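The width control rule may be sketched as a small state holder; the names and
structure below are illustrative, not those of an actual implementation:
\begin{verbatim}
class WidthController:
    # After N points added without any increase of the segment minimal width,
    # that width becomes the new assigned width: the segment cannot thicken
    # any further.
    def __init__(self, initial_assigned_width, N=20):
        self.assigned_width = initial_assigned_width
        self.N = N
        self.count = 0          # points added since the last width increase
        self.last_width = 0.0
        self.frozen = False

    def on_point_added(self, minimal_width):
        if self.frozen:
            return
        if minimal_width > self.last_width:
            self.last_width = minimal_width
            self.count = 0      # the segment thickened: restart the count
        else:
            self.count += 1
            if self.count >= self.N:
                self.assigned_width = minimal_width   # freeze the width
                self.frozen = True
\end{verbatim}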
The output segment of the fine tracking step is finally filtered to remove
artifacts and outliers, and a solution blurred segment $\mathcal{B}_3$ is
provided.
\subsection{Supervised blurred segment detection}
In the supervised context, the user draws an input stroke across the specific
edge to be extracted from the image.
The detection method described above is continuously run during mouse
dragging and the output blurred segment is displayed on the fly.
The method is very sensitive to the local conditions of the initial detection,
so that the output blurred segment may be quite unstable.
In order to temper this undesirable behaviour for particular applications,
the initial detection can optionally be run twice, the second fast scan being
aligned on the output of the first detection.
This strategy provides a first quick analysis of the local context before
extracting the segment and notably stabilizes the overall process.
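A possible sketch of this double initial detection, reusing the helpers of the
previous sketches and simplifying the re-alignment to a principal direction
fit of the first result, is given below:
\begin{verbatim}
import numpy as np

def stabilized_initial_detection(grad_mag, A, B, assigned_width=3.0):
    # First fast scan on the user stroke AB, then a second scan re-aligned
    # on the position and direction of the first result.
    first = initial_detection(grad_mag, A, B, assigned_width)
    if len(first) < 3:
        return first
    pts = np.asarray(first, dtype=float)
    centre = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centre, full_matrices=False)
    direction = vt[0]                  # main direction of the first result
    half_len = np.linalg.norm(np.asarray(B, float)
                              - np.asarray(A, float)) / 2.0
    return initial_detection(grad_mag, centre - half_len * direction,
                             centre + half_len * direction, assigned_width)
\end{verbatim}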
Another option, called multi-detection, allows the detection of all the
segments crossed by the input stroke $AB$.
The multi-detection algorithm is displayed below.
First the positions $M_j$ of the local maxima of the gradient magnitude found
under the stroke are sorted from the highest to the lowest.
For each of them, the main detection process is run with three modifications:
i) the initial detection takes $M_j$ and the direction $AB_\perp$ orthogonal
to the stroke as input to build a static scan of fixed width
$\varepsilon_{ini}$, and $M_j$ is used as the start point of the blurred
segment;
ii) an occupancy mask, initially empty, is filled with the points of the
detected blurred segments $\mathcal{B}_{j3}$ at the end of each successful
detection;
iii) points marked as occupied are rejected when selecting candidates for the
blurred segment extension in the fine tracking step.
\input{Fig_method/algoMulti}
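A simplified sketch of this loop is given below; the detection routine it
calls stands for one run of the full pipeline (initial detection, fine
tracking, filtering) and is only assumed here:
\begin{verbatim}
import numpy as np

def multi_detection(grad_mag, stroke_points, detect_segment):
    # stroke_points: the pixels (x, y) under the user stroke AB, in order.
    # detect_segment(seed, scan_direction, occupancy): one run of the full
    # detection pipeline, returning the points of the detected blurred
    # segment (assumed, not implemented here).
    mags = np.array([grad_mag[y, x] for x, y in stroke_points])
    # i) local maxima of the gradient magnitude under the stroke,
    #    sorted from the highest to the lowest
    maxima = [stroke_points[i] for i in range(1, len(stroke_points) - 1)
              if mags[i] >= mags[i - 1] and mags[i] >= mags[i + 1]]
    maxima.sort(key=lambda p: -grad_mag[p[1], p[0]])
    occupancy = np.zeros(grad_mag.shape, dtype=bool)   # ii) occupancy mask
    A = np.asarray(stroke_points[0], dtype=float)
    B = np.asarray(stroke_points[-1], dtype=float)
    u = (B - A) / np.linalg.norm(B - A)
    scan_direction = np.array([-u[1], u[0]])           # orthogonal to AB
    segments = []
    for m in maxima:
        if occupancy[m[1], m[0]]:          # iii) occupied seeds are rejected
            continue
        segment = detect_segment(m, scan_direction, occupancy)
        if segment:
            segments.append(segment)
            for x, y in segment:           # ii) mark the detected points
                occupancy[y, x] = True
    return segments
\end{verbatim}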
Beyond the possible detection of a large set of edges at once, the
multi-detection gives access to some edges that are inaccessible in the
classical single detection mode. This is particularly the case of edges
that lie quite close to a more salient edge with a higher gradient.
The multi-detection detects both edges, and the user may then select
the expected one.
\begin{figure}[h]
\center
\begin{tabular}{c@{\hspace{0.2cm}}c@{\hspace{0.2cm}}c@{\hspace{0.2cm}}c}
\includegraphics[width=0.22\textwidth]{Fig_method/voisinImage_zoom.png} &
\includegraphics[width=0.22\textwidth]{Fig_method/voisinGrad_zoom.png} &
\includegraphics[width=0.22\textwidth]{Fig_method/voisinSingle_zoom.png} &
\includegraphics[width=0.22\textwidth]{Fig_method/voisinMulti_zoom.png} \\
\parbox{0.22\textwidth}{\centering{\scriptsize{a)}}} &
\parbox{0.22\textwidth}{\centering{\scriptsize{b)}}} &
\parbox{0.22\textwidth}{\centering{\scriptsize{c)}}} &
\parbox{0.22\textwidth}{\centering{\scriptsize{d)}}}
\end{tabular}
\caption{Example of an edge detected only in multi-detection mode:
a) the stroke on the intensity image,
b) the gradient map,
c) the result of the classical single mode detection,
d) the result of the multi-detection. }
\label{fig:voisins}
\end{figure}
This detection procedure can be used to detect straight edges as well as
thin straight objects. In the first case, the gradient vectors of all
edge points are assumed to be oriented in the same direction. But if the
sign of the gradient direction is not considered, points with gradients in
opposite directions are merged into the same blurred segment, allowing
the detection of both edges of a thin linear structure, like for instance
the tile joins of \RefFig{fig:edgeDir}.
\begin{figure}[h]
\center
\begin{tabular}{c@{\hspace{0.2cm}}c}
\includegraphics[width=0.46\textwidth]{Fig_method/multiStroke_zoom.png} &
\includegraphics[width=0.46\textwidth]{Fig_method/multiEdge_zoom.png} \\
\parbox{0.46\textwidth}{\centering{\scriptsize{a)
Detection of straight lines}}} &
\parbox{0.46\textwidth}{\centering{\scriptsize{b)
Detection of straight edges}}}
\end{tabular}
\caption{Testing the gradient direction to detect edges as well as
linear structures from the same input stroke.}
\label{fig:edgeDir}
\end{figure}
In that example, when a detection of straight features is run,
a thick blurred segment which extends over up to four tiles is provided.
When a detection of straight edges is run, a very thin blurred segment is
built which follows only one edge of a join.
The multi-detection can also be applied to both thin object and edge detection.
In the latter case, the detection algorithm is run twice using opposite
directions, so that in the example of \RefFig{fig:edgeDir}~b),
both edges (in different colours) are highlighted.
These two thin blurred segments are much shorter, probably because the
tiles are not perfectly aligned.
This example illustrates the versatility of the new detector.
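The gradient direction test that distinguishes both behaviours may be sketched
as follows, with an illustrative angular tolerance:
\begin{verbatim}
import numpy as np

def direction_accepted(g, ref, angle_tol=0.5, thin_object_mode=False):
    # Straight edge detection: the candidate gradient g must point roughly
    # in the same direction as the reference gradient ref.
    # Thin object mode: the sign is ignored, so points on both edges of a
    # thin linear structure can feed the same blurred segment.
    g = np.asarray(g, dtype=float)
    ref = np.asarray(ref, dtype=float)
    c = g @ ref / (np.linalg.norm(g) * np.linalg.norm(ref))
    if thin_object_mode:
        c = abs(c)
    return np.arccos(np.clip(c, -1.0, 1.0)) < angle_tol
\end{verbatim}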
An unsupervised mode is also proposed to automatically detect all the
straight edges in the image. A stroke that crosses the whole image is
swept in both directions, vertical then horizontal, from the center to
the borders. At each position, the multi-detection algorithm is run
to collect all the segments found under the stroke.
\input{Fig_method/algoAuto}
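A sketch of this sweep is given below; the stroke spacing and the exact sweep
order are illustrative, and the multi-detection routine it calls is the one
described above, only assumed here:
\begin{verbatim}
def centre_out(extent, step):
    # Positions from the centre of [0, extent) towards both borders.
    centre = extent // 2
    yield centre
    for off in range(step, centre + 1, step):
        yield centre - off
        if centre + off < extent:
            yield centre + off

def unsupervised_detection(image_shape, run_multi_detection, step=8):
    # Strokes crossing the whole image are swept vertically then
    # horizontally, from the centre to the borders; the multi-detection
    # collects all the segments found under each stroke.
    h, w = image_shape
    segments = []
    for x in centre_out(w, step):          # vertical strokes
        segments += run_multi_detection((x, 0), (x, h - 1))
    for y in centre_out(h, step):          # horizontal strokes
        segments += run_multi_detection((0, y), (w - 1, y))
    return segments
\end{verbatim}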
The behaviour of the unsupervised detection is depicted through the two
examples of \RefFig{fig:auto}.
The example on the left shows the detection of thin straight objects on a
circle with variable width.
On the left half of the circumference, the distance between both edges
exceeds the initial assigned width and a thick blurred segment is built
for each of them. Of course, on a curve, a continuous thickening is
observed until the blurred segment minimal width reaches the initial
assigned width.
On the right half, both edges are encompassed in a common blurred segment,
and at the extreme right part of the circle, the few distant residual points
are grouped to form a thick segment.
The example on the right shows the limits of the edge detector on a picture
with a quite dense distribution of gradients.
All the salient edges are well detected, but they are surrounded by a lot
of false detections, caused by the presence of many local maxima of
the gradient magnitude with similar orientations.
\begin{figure}[h]
\center
\begin{tabular}{c@{\hspace{0.2cm}}c}
\includegraphics[width=0.37\textwidth]{Fig_method/vcercleAuto.png} &
\includegraphics[width=0.58\textwidth]{Fig_method/plafondAuto.png}
\end{tabular}
\caption{Automatic detection of blurred segments.}
\label{fig:auto}
\end{figure}