@article{GuptaMazumdar13,
title = {Sobel edge detection algorithm},
author = {Gupta, Samta and Mazumdar, Susmita Ghosh},
journal = {Int. Journal of Computer Science and Management Research},
volume = {2},
number = {2},
month = {February},
year = {2013},
pages = {1578--1583}
}
\section{Introduction}
\label{sec:intro}
Straight lines are commonly used visual features in many image analysis
processes.
For instance, in computer vision, they are used to estimate the vanishing
points associated with the main directions of the 3D world, thus allowing
the camera orientation to be recovered. They are also used to detect
structured features that help a 3D reconstruction process.
Therefore, straight line detection remains an active research topic,
centered on the quest for ever faster, more accurate or more noise-robust
methods \cite{AkinlarTopal12,GioiAl10,LuAl15,MatasAl00}.
Most of the time, these methods rely on the extraction of an edge map from
the gradient magnitude. The gradient orientation is often used to
discriminate candidates and thus improve efficiency.
However, they seldom provide an exploitable measure of the output line
quality, based on intrinsic properties such as sharpness, connectivity
or scattering.
This information could be useful to derive a confidence level and help
classify these features for further exploitation.
In computer vision applications, it could also serve as a basis for
uncertainty propagation within 3D interpretation tools, providing measures
complementary to reprojection errors for local accuracy evaluation.
%Some information may sometimes be drawn from their specific context,
%for example through an analysis of the peak in a Hough transform accumulator.
In digital geometry, new mathematical definitions of classical
geometric objects, such as lines or circles, have been developed
to better fit the discrete nature of most data processed today.
In particular, the notion of blurred segment \cite{Buzer07,DebledAl05} was
introduced to cope with image noise and other real-world imperfections
by means of a width parameter.
Efficient algorithms have already been designed to recognize
these digital objects in binary images \cite{DebledAl06}.
Blurred segments seem well suited to reflect the required line quality
information.
%Its preimage,
%i.e. the space of geometric entities which numerization matches this
%blurred segment, may be used to compute some confidence level in the delivered
%3D interpretations, as a promising extension of former works
%on discrete epipolar geometry \cite{NatsumiAl08}.
The present work aims at designing a flexible tool to detect blurred segments
with optimal width and orientation in gray-level images, in both supervised
and unsupervised contexts.
User-friendly solutions are sought, with ideally no parameter to set,
or at least only a few values with an intuitive meaning.
A first attempt was already made in a previous work \cite{KerautretEven09},
but the segment width was initially fixed by the user rather than estimated,
leading to erroneous orientations of the detected lines.
In the present work, the limitations of this first detector are solved
by the introduction of two new concepts:
(i) an {\bf adaptive directional scan} designed to cope with the
unpredictable orientation of the segment;
(ii) a {\bf control of the assigned width} of the blurred segment
recognition algorithm, intended to derive more reliable information on the
line orientation and quality.
As a side effect, these two major evolutions also led to a noticeable
improvement of the time performance of the detector.
They are also put forward within a global line extraction algorithm
which can be evaluated through an online demonstration at:
\href{http://ipol-geometry.loria.fr/~kerautre/ipol_demo/FBSD_IPOLDemo}{
\small{\url{http://ipol-geometry.loria.fr/~kerautre/ipol_demo/FBSD_IPOLDemo}}}.
In the next section, the main theoretical notions used in this work are
introduced.
The new detector workflow, the adaptive directional scan, the control
of the assigned width and their integration into both supervised and
unsupervised contexts are then presented in \RefSec{sec:method}.
Experiments conducted to assess the performance of this new detector
are described in \RefSec{sec:expe}.
Finally, \RefSec{sec:conclusion} gives a short conclusion
followed by some open perspectives for future work.
\section{The detection method}
\label{sec:method}
In this line detection method, only the gradient information is processed,
as it provides good information about the image dynamics, and hence about
the presence of edges.
Attempts were also made to use the intensity signal through costly
correlation techniques, but they were mostly successful for detecting shapes
with a stable appearance, such as metallic tubular objects \cite{AubryAl17}.
Contrary to most detectors, no edge map is built here; instead, gradient
magnitude and orientation are examined in privileged directions to track
edge traces.
Therefore we use a Sobel operator with a $5\times 5$ pixel mask
\cite{GuptaMazumdar13} to obtain high-quality gradient information.
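As an illustration, the gradient map computation can be sketched as follows; the exact kernel coefficients below are an assumption (a common extended-Sobel variant), not necessarily the ones used in the detector.

```python
import numpy as np

def sobel5_gradient(image):
    """Gradient magnitude and orientation with a 5x5 extended-Sobel mask.
    The kernel coefficients are an assumed common variant."""
    kx = np.array([[-1,  -2, 0,  2, 1],
                   [-4,  -8, 0,  8, 4],
                   [-6, -12, 0, 12, 6],
                   [-4,  -8, 0,  8, 4],
                   [-1,  -2, 0,  2, 1]], dtype=float)
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    pad = np.pad(img, 2, mode='edge')
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    # cross-correlation with kx (horizontal derivative) and its transpose
    for i in range(5):
        for j in range(5):
            win = pad[i:i + h, j:j + w]
            gx += kx[i, j] * win
            gy += kx[j, i] * win
    return np.hypot(gx, gy), np.arctan2(gy, gx)
```

A vertical intensity step then yields a horizontal gradient of maximal magnitude along the edge, which is the information examined in the privileged scan directions.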
\subsection{Previous work}
In a former paper \cite{KerautretEven09}, an efficient tool to detect
blurred segments of fixed width in gray-level images was introduced.
It was based on a first rough detection in a local image area
defined by the user. At that stage, the goal was only to disclose the
presence of a straight edge, so a test as simple as the maximal gradient
value was performed.
In case of success, refinement steps were then run through an exploration of
the image in the direction of the detected edge.
In order to withstand local disturbances, such as the presence of a sharper
edge nearby, all the local gradient maxima were successively tested
until a candidate with an acceptable gradient orientation was found.
Despite the good performance achieved, several drawbacks remained.
First, the blurred segment width was not measured but initially set by the
user according to the application requirements. The produced information
on the edge quality was rather poor; in particular, when the edge is thin,
the risk of incorporating outlier points was quite high, producing a
biased estimation of the edge orientation.
Then, two refinement steps were systematically performed.
On the one hand, this is useless when the first detection is successful.
On the other hand, there is no guarantee that this approach is able to
process larger images.
The search direction relies on the support vector of the blurred segment
detected at the former step.
Because the numerization rounding limits the accuracy of this estimated
orientation, more steps are inevitably necessary to process larger images.
\subsection{Workflow of the detection process}
The workflow of the detection process is summarized in \RefFig{fig:workflow}.
\begin{figure}[h]
\center
\input{Fig_method/workflow}
\caption{The detection process main workflow.}
\label{fig:workflow}
\end{figure}
The initial detection consists in building and extending a blurred segment
$\mathcal{B}$ based on the points with the highest gradient norm found in each
scan of a static directional scan defined by an input segment $AB$.
Validity tests are then applied to decide whether to pursue the detection.
They aim at rejecting blurred segments that are too short or too sparse, or
whose orientation is too close to that of $AB$.
In case of positive response, the position $C$ and direction $\vec{D}$
of this initial blurred segment are extracted.
In the fine tracking step, another blurred segment $\mathcal{B}'$ is built
and extended with points that correspond to local maxima of the
image gradient, ranked in magnitude order, and with a gradient direction
close to that of the start point.
At this refinement step, a control of the assigned width is applied,
and an adaptive directional scan based on the found position $C$ and
direction $\vec{D}$ is used in order to extend the segment in the
appropriate direction. These two improvements are described in the
following sections.
The output segment $\mathcal{B}'$ is finally tested according to the
application needs. Segments that are too short, too sparse or too fragmented
can be rejected. The length, sparsity and fragmentation thresholds are
intuitive parameters left at the end user's disposal.
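This workflow can be summed up as a short driver; every helper passed in below (`initial_detection`, `valid`, `fine_track`, `accept`) is a hypothetical stand-in for the corresponding stage of the actual detector.

```python
def detect_blurred_segment(A, B, initial_detection, valid, fine_track, accept):
    """Driver sketch of the detection workflow; the helpers are
    hypothetical stand-ins for the stages described in the text."""
    # 1. rough detection in a static directional scan defined by stroke AB
    segment = initial_detection(A, B)
    # 2. validity tests: reject short, sparse or AB-aligned segments
    if segment is None or not valid(segment):
        return None
    # 3. fine tracking from the extracted position C and direction D
    C, D = segment["position"], segment["direction"]
    refined = fine_track(C, D)
    # 4. final filter according to application needs (length, sparsity, ...)
    return refined if refined is not None and accept(refined) else None
```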
%None of these tests are activated for the experimental stage in order
%to put forward achievable performance.
\subsection{Adaptive directional scan}
The blurred segment is searched within a directional scan with a position
and an orientation approximately provided by the user, or blindly defined
in unsupervised mode.
Most of the time, the detection stops where the segment escapes sideways
from the scan strip (\RefFig{fig:escape} a).
A second search is then run using another directional scan aligned
on the detected segment (\RefFig{fig:escape} b).
In the given example, an outlier added to the initial segment leads to a
wrong orientation value.
But even in case of a correct detection, this estimated orientation is
subject to the numerization rounding, and the longer the real segment is,
the higher the probability of failing again on an escape from the scan strip.
\begin{figure}[h]
\center
\begin{tabular}{c@{\hspace{0.2cm}}c@{\hspace{0.2cm}}c}
\includegraphics[width=0.24\textwidth]{Fig_notions/escapeLightFirst_half.png} &
\includegraphics[width=0.24\textwidth]{Fig_notions/escapeLightSecond_half.png} &
\includegraphics[width=0.48\textwidth]{Fig_notions/escapeLightThird_zoom.png}
\begin{picture}(1,1)(0,0)
{\color{dwhite}{
\put(-307,4.5){\circle*{8}}
\put(-216,4.5){\circle*{8}}
\put(-127,4.5){\circle*{8}}
}}
\put(-309.5,2){a}
\put(-219,1.5){b}
\put(-129.5,2){c}
\end{picture}
\end{tabular}
\caption{Aborted detections on side escapes of static directional scans
         and successful detection using an adaptive directional scan.
         The last points added to the left of the blurred segment during
         the initial detection (a) lead to a bad estimation of its
         orientation, and thus to an incomplete fine tracking with a
         classical directional scan (b). An adaptive directional scan
         used instead of the static one allows the segment expansion
         to continue as far as necessary (c).
         The input selection is drawn in red, the scan strip bounds
         in blue and the detected blurred segments in green.}
\label{fig:escape}
\end{figure}
To overcome this issue, in the former work, an additional refinement step was
run in the direction estimated from this longer segment.
It is enough to completely detect most of the tested edges, but certainly
not all of them, especially when big images with much longer edges are
processed.
As a solution, this operation could be iterated as long as the blurred
segment escapes from the directional scan, using as many fine detection
steps as necessary.
But at each iteration, already tested points are processed again,
producing a useless computational cost.
Here the proposed solution is to dynamically align the scan direction on
the blurred segment all along the expansion stage.
At each iteration $i$ of the expansion, the scan strip is aligned on the
direction of the blurred segment $\mathcal{B}_{i-1}$ computed at previous
iteration $i-1$.
More generally, an adaptive directional scan $ADS$ is defined by:
\begin{equation}
ADS = \left\{
S_i = \mathcal{D}_i \cap \mathcal{N}_i \cap \mathcal{I}
\left| \begin{array}{l}
\vec{V}(\mathcal{N}_i) \cdot \vec{V}(\mathcal{D}_0) = 0 \\
\wedge~ h(\mathcal{N}_i) = h(\mathcal{N}_{i-1}) + p(\mathcal{D}_0) \\
\wedge~ \mathcal{D}_{i} = \mathcal{D} (C_{i-1}, \vec{D}_{i-1}, w_{i-1}),
i > \lambda
\end{array} \right. \right\}
\end{equation}
where $C_{i}$, $\vec{D}_{i}$ and $w_{i}$ are respectively a position,
a direction vector and a width observed at iteration $i$.
In the scope of the present detector, $C_{i-1}$ is the intersection of
the input selection and the central line of $\mathcal{B}_{i-1}$,
$\vec{D}_{i-1}$ the support vector of the enclosing digital segment
$E(\mathcal{B}_{i-1})$, and $w_{i-1}$ a value slightly greater than the
minimal width of $\mathcal{B}_{i-1}$.
So the last clause expresses the update of the scan bounds at iteration $i$.
Compared to static directional scans where the scan strip remains fixed to
the initial line $\mathcal{D}_0$, here the scan strip moves while
scan lines remain fixed.
This behavior ensures a complete detection of the blurred segment even when
the orientation of $\mathcal{D}_0$ is badly estimated (\RefFig{fig:escape} c).
In practice, the realignment is started after $\lambda = 20$ iterations,
once the observed direction has become more stable.
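A minimal sketch of this strip update rule follows; the class and method names, and the point-in-strip test, are illustrative assumptions, not the detector's actual implementation.

```python
import numpy as np

class AdaptiveStrip:
    """Sketch of the adaptive directional scan strip (names assumed).
    The strip is realigned on the segment found so far, but only after
    lam iterations, once the direction estimate has stabilized."""

    def __init__(self, center, direction, width, lam=20):
        self.lam = lam
        self.iteration = 0
        self._set(center, direction, width)

    def _set(self, center, direction, width):
        self.center = np.asarray(center, dtype=float)
        d = np.asarray(direction, dtype=float)
        self.direction = d / np.linalg.norm(d)
        self.width = float(width)

    def contains(self, point):
        # distance from the point to the strip's central line
        delta = np.asarray(point, dtype=float) - self.center
        normal = np.array([-self.direction[1], self.direction[0]])
        return abs(delta @ normal) <= self.width / 2

    def update(self, center, direction, width):
        # realignment on the segment B_{i-1}, enabled only for i > lam
        self.iteration += 1
        if self.iteration > self.lam:
            self._set(center, direction, width)
```

The key design point, as in the text, is that the strip follows the segment direction during the expansion instead of staying fixed to the initial line $\mathcal{D}_0$.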
\subsection{Control of the assigned width}
The width $\varepsilon$ assigned to the blurred segment recognition algorithm
is initially set to a large value $\varepsilon_0$ in order to allow the
detection of large blurred segments.
Then, when no further growth of the minimal width is observed over
$\tau$ iterations ($\mu_{i+\tau} = \mu_i$), the assigned width is set to a
much stricter value able to circumscribe the possible interpretations of the
segment, taking the digitization margins into account:
\begin{equation}
\varepsilon = \mu_{i+\tau} + \frac{\textstyle 1}{\textstyle 2}
\end{equation}
This strategy aims at preventing the incorporation of spurious outliers in
further parts of the segment.
Setting the observation distance to a constant value $\tau = 20$ seems
appropriate in most of the situations tested.
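This control rule can be sketched as follows; the function and variable names are illustrative, with `min_widths` holding the minimal widths $\mu_i$ observed at each iteration.

```python
def assigned_widths(min_widths, eps0, tau=20):
    """Sketch of the assigned-width control (illustrative names).
    min_widths: minimal widths mu_i observed at each iteration
    (non-decreasing). Returns the assigned width after each iteration:
    eps0 until mu has not grown for tau iterations, then mu + 1/2."""
    eps = eps0
    out = []
    for i, mu in enumerate(min_widths):
        # tighten once mu_{i} == mu_{i-tau}: no augmentation over tau steps
        if eps == eps0 and i >= tau and min_widths[i - tau] == mu:
            eps = mu + 0.5
        out.append(eps)
    return out
```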
\subsection{Supervised blurred segment detection}
In the supervised context, the user draws an input stroke across the
specific edge to be extracted from the image.
The detection method previously described is continuously run during mouse
dragging and the output blurred segment is displayed on-the-fly.
%The method is quite sensitive to the local conditions of the initial detection
%so that the output blurred segment may be quite unstable.
%In order to temper this undesirable behavior for interactive applications,
%the initial detection can be optionally run twice, the second fast scan being
%aligned on the first detection output.
%This strategy provides a first quick analysis of the local context before
%extracting the segment and contributes to notably stabilize the overall
%process.
%
%When selecting candidates for the fine detection stage, an option, called
%{\it edge selection mode}, is left to also filter the points according to
%their gradient direction.
%In {\it main edge selection mode}, only the points with a gradient vector
%in the same direction as the start point gradient vector are added to the
%blurred segment.
%In {\it opposite edge selection mode}, only the points with an opposite
%gradient vector direction are kept.
%In {\it line selection mode} this direction-based filter is not applied,
%and all the candidate points are aggregated into a same blurred segment,
%whatever the direction of their gradient vector.
%As illustrated on \RefFig{fig:edgeDir}, this mode allows the detection of
%the two opposite edges of a thin straight object.
%
%\begin{figure}[h]
%\center
% \begin{tabular}{c@{\hspace{0.2cm}}c}
% \includegraphics[width=0.4\textwidth]{Fig_method/selectLine_zoom.png} &
% \includegraphics[width=0.4\textwidth]{Fig_method/selectEdges_zoom.png}
% \end{tabular}
% \begin{picture}(1,1)(0,0)
% {\color{dwhite}{
% \put(-220,-14.5){\circle*{8}}
% \put(-74,-14.5){\circle*{8}}
% }}
% \put(-222.5,-17){a}
% \put(-76.5,-17){b}
% \end{picture}
% \caption{Blurred segments obtained in \textit{line} or \textit{edge
% selection mode} as a result of the gradient direction filtering
% when adding points.
% In \textit{line selection mode} (a), a thick blurred segment is
% built and extended all along the brick join.
% In \textit{edge selection mode} (b), a thin blurred segment is
% built along one of the two join edges.
% Both join edges are detected with the \textit{multi-selection}
% option.
% On that very textured image, they are much shorter than the whole
% join detected in line selection mode.
% Blurred segment points are drawn in black color, and the enclosing
% straight segments in blue.}
% \label{fig:edgeDir}
%\end{figure}
%\subsection{Multiple blurred segments detection}
An option, called {\it multi-detection} (Algorithm 1), allows the
detection of all the segments crossed by the input stroke $AB$.
In order to avoid multiple detections of the same edge, an occupancy mask,
initially empty, collects the dilated points of all the blurred segments,
so that these points cannot be added to another segment.
\input{Fig_method/algoMulti}
First, the positions $M_j$ of the prominent local maxima of the gradient
magnitude found under the stroke are sorted from the highest to the lowest.
For each of them, the main detection process is run with three modifications:
\begin{enumerate}
\item the initial detection takes $M_j$ and the direction $\vec{AB}_\perp$
orthogonal to the stroke as input to build a static scan of fixed width
$2~\varepsilon_{ini}$, and $M_j$ is used as the start point of the blurred
segment;
\item the occupancy mask is filled in with the points of the dilated blurred
segments $\mathcal{B}_j'$ at the end of each successful detection
(a 21-pixel neighborhood is used);
\item points marked as occupied are rejected when selecting candidates for the
blurred segment extension in the fine tracking step.
\end{enumerate}
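The occupancy-mask loop can be sketched as follows; here `detect` stands for the main detection process, and the small dilation radius is only a stand-in for the 21-pixel neighborhood of the actual algorithm.

```python
def multi_detect(maxima, detect, radius=2):
    """Sketch of the multi-detection loop (Algorithm 1), names assumed.
    maxima: (gradient_magnitude, point) pairs found under the stroke.
    detect(point, occupied): stand-in for the main detection process;
    returns the points of the detected blurred segment, or None."""
    occupied = set()      # occupancy mask, initially empty
    segments = []
    # sort the prominent maxima from highest to lowest magnitude
    for _, m in sorted(maxima, key=lambda t: t[0], reverse=True):
        if m in occupied:
            continue
        segment = detect(m, occupied)
        if segment is not None:
            segments.append(segment)
            # dilate the segment points into the mask so that they
            # cannot be added to another segment
            for (x, y) in segment:
                for dx in range(-radius, radius + 1):
                    for dy in range(-radius, radius + 1):
                        occupied.add((x + dx, y + dy))
    return segments
```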
%In edge selection mode (\RefFig{fig:edgeDir} b), the multi-detection
%algorithm is executed twice, first in main edge selection mode, then
%in opposite edge selection mode.
\subsection{Automatic blurred segment detection}
An unsupervised mode is also proposed to automatically detect all the
straight edges in the image. The principle of this automatic detection
is described in Algorithm 2. A stroke that crosses the whole image is
swept in both directions, vertical then horizontal, from the center to
the borders. At each position, the multi-detection algorithm is run
to collect all the segments found under the stroke.
In the present work, the stroke sweeping step $\delta$ is set to 10 pixels.
Then small blurred segments are rejected in order to avoid the formation
of small misaligned segments when the sweeping stroke crosses an image edge
near one of its ends. In such a situation, any nearby disturbing gradient is
likely to deviate the blurred segment direction, so that the expansion is
quickly stopped. A length threshold of 30 pixels was set experimentally.
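The sweep itself can be sketched as follows; `multi_detect` stands for the multi-detection of Algorithm 1, and segments are reduced to endpoint pairs for brevity.

```python
from math import hypot

def sweep_positions(size, delta=10):
    """Stroke positions from the image center towards both borders."""
    center, positions, k = size // 2, [size // 2], 1
    while center + k * delta < size or center - k * delta >= 0:
        if center + k * delta < size:
            positions.append(center + k * delta)
        if center - k * delta >= 0:
            positions.append(center - k * delta)
        k += 1
    return positions

def auto_detect(width, height, multi_detect, delta=10, min_length=30):
    """Sketch of the unsupervised detection (Algorithm 2), names assumed.
    multi_detect(p1, p2) returns segments as ((x1, y1), (x2, y2)) pairs."""
    segments = []
    for x in sweep_positions(width, delta):      # vertical strokes
        segments += multi_detect((x, 0), (x, height - 1))
    for y in sweep_positions(height, delta):     # horizontal strokes
        segments += multi_detect((0, y), (width - 1, y))
    # reject small segments, likely deviated near an image edge end
    return [(p, q) for (p, q) in segments
            if hypot(q[0] - p[0], q[1] - p[1]) >= min_length]
```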
The automatic detection of blurred segments in a whole image is available
for testing from an online demonstration
and from a \textit{GitHub} source code repository: \\
\href{https://github.com/evenp/FBSD}{
\small{\url{https://github.com/evenp/FBSD}}}
\input{Fig_method/algoAuto}
%The behavior of the unsupervised detection is depicted through the two
%examples of \RefFig{fig:auto}.
%The example on the left shows the detection of thin straight objects on a
%circle with variable width.
%On the left half of the circumference, the distance between both edges
%exceeds the initial assigned width and a thick blurred segment is build
%for each of them. Of course, on a curve, a continuous thickenning is
%observed untill the blurred segment minimal width reaches the initial
%assigned width.
%On the right half, both edges are encompassed in a common blurred segment,
%and at the extreme right part of the circle, the few distant residual points
%are grouped to form a thick segment.
%
%The example on the right shows the limits of the edge detector on a picture
%with quite dense repartition of gradient.
%All the salient edges are well detected but they are surrounded by a lot
%of false detections, that rely on the presence of many local maxima of
%the gradient magnitude with similar orientations.
%
%\begin{figure}[h]
%\center
% \begin{tabular}{c@{\hspace{0.2cm}}c}
% \includegraphics[width=0.37\textwidth]{Fig_method/vcercleAuto.png} &
% \includegraphics[width=0.58\textwidth]{Fig_method/plafondAuto.png}
% \end{tabular}
% \caption{Automatic detection of blurred segments.}
% \label{fig:auto}
%\end{figure}