Our work relies on the notion of digital straight line as classically
defined in the digital geometry literature \cite{KletteRosenfeld04}.
Only the 2-dimensional case is considered here.
\begin{definition}
A digital line $\mathcal{L}(a,b,c,\nu)$, with $(a,b,c,\nu) \in \mathbb{Z}^4$,
is the set of points $P(x,y)$ of $\mathbb{Z}^2$ that satisfy:
$0 \leq ax + by - c < \nu$.
$b/a$ is the slope of $\mathcal{L}$, $c$ its intercept and $\nu$
its arithmetic width.
When $\nu = max (|a|, |b|)$, $\mathcal{L}$ is the narrowest 8-connected
digital line, and is called a naive line.
\end{definition}
\begin{definition}
A blurred segment $\mathcal{B}$ of assigned width $\varepsilon$ is a set
of points in $\mathbb{Z}^2$ that all belong to a digital line of
arithmetical width $\varepsilon$.
\end{definition}
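For illustration, these definitions translate directly into code. The
following Python sketch (the function names are ours, not taken from any
existing library) tests the membership of a point in a digital line and
enumerates the points of a digital line inside a bounding box:
\begin{verbatim}
def in_digital_line(x, y, a, b, c, nu):
    """Point (x, y) belongs to L(a, b, c, nu) iff 0 <= a*x + b*y - c < nu."""
    return 0 <= a * x + b * y - c < nu

def digital_line_points(a, b, c, nu, xmax, ymax):
    """All points of L(a, b, c, nu) inside the box [0, xmax] x [0, ymax]."""
    return [(x, y) for x in range(xmax + 1) for y in range(ymax + 1)
            if in_digital_line(x, y, a, b, c, nu)]

# Example: L(2, -5, 0, 5) is a naive line (nu = max(|2|, |-5|) = 5).
print(digital_line_points(2, -5, 0, 5, 9, 3))
\end{verbatim}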
Linear-time algorithms have been developed to recognize a blurred segment
of assigned width $\varepsilon$ \cite{DebledAl05,Buzer07}.
They rely on an incremental growth of the convex hull of the blurred
segment as each new point is added.
The minimal width $\mu$ of the blurred segment $\mathcal{B}$ is the
arithmetical width of the narrowest digital straight line that contains
$\mathcal{B}$.
It is also the minimal width of the convex hull of $\mathcal{B}$, which
can be computed incrementally using Melkman's algorithm \cite{Melkman87}.
The extension of the blurred segment $\mathcal{B}_i$ of assigned width
$\varepsilon$ and minimal width $\mu_i$ at step $i$ with a new input point
$P_{i+1}$ is thus controlled by the recognition test $\mu_{i+1} < \varepsilon$.
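As a rough illustration of this control, the following sketch recomputes the
convex hull from scratch at each extension and measures its Euclidean minimal
width, whereas the cited algorithms maintain the hull incrementally with
Melkman's algorithm and work with the arithmetical width; it is only meant
to show how the test $\mu_{i+1} < \varepsilon$ drives the growth.
\begin{verbatim}
from math import hypot

def convex_hull(points):
    """Andrew's monotone chain: hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def minimal_width(points):
    """Euclidean minimal width of the convex hull: smallest, over all hull
    edges, of the distance to the farthest hull vertex."""
    hull = convex_hull(points)
    if len(hull) < 3:
        return 0.0
    width = float('inf')
    for i in range(len(hull)):
        (x1, y1), (x2, y2) = hull[i], hull[(i + 1) % len(hull)]
        norm = hypot(x2 - x1, y2 - y1)
        far = max(abs((x2 - x1) * (y1 - y) - (y2 - y1) * (x1 - x)) / norm
                  for (x, y) in hull)
        width = min(width, far)
    return width

def try_extend(segment, p, eps):
    """Accept the new point p only if the recognition test mu < eps holds."""
    if minimal_width(segment + [p]) < eps:
        segment.append(p)
        return True
    return False
\end{verbatim}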
\begin{figure}[h]
\center
\caption{A growing blurred segment $\mathcal{B}_i$:
when adding the new point $P_{i+1}$, the minimal width of the blurred segment
augments from $\mu_i$ to $\mu_{i+1}$; if the new width $\mu_{i+1}$ exceeds
the assigned width $\varepsilon$, then the new input point is rejected.}
\end{figure}
At the beginning, a large width $\varepsilon_{ini}$ is assigned to the
recognition problem to allow the detection of large blurred segments.
Then, when extending the blurred segment, this assigned width is
gradually tightened down to the minimal width of the detected blurred segment.
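The precise tightening schedule is a design choice; as a purely illustrative
possibility (not necessarily the rule used in this work), the assigned width
could be moved halfway toward the current minimal width at each step, while
keeping a small safety margin:
\begin{verbatim}
def tighten_width(eps, mu, margin=0.5):
    """Illustrative schedule: bring the assigned width eps closer to the
    detected minimal width mu, never below mu + margin."""
    return min(eps, max(mu + margin, 0.5 * (eps + mu)))

# eps_ini = 8 and detected minimal width mu = 1.2 give the successive
# assigned widths: 8 -> 4.6 -> 2.9 -> 2.05 -> 1.7, then stable at 1.7.
\end{verbatim}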
A directional scan is a partition of a digital straight line $\mathcal{D}$,
called the {\it scan strip}, into scans $S_i$, each of them being a
segment of a naive line $\mathcal{S}_i$, called a {\it scan line},
orthogonal to $\mathcal{D}$:
\[ S_i = \mathcal{D} \cap \mathcal{S}_i,
\quad \mathcal{S}_i \perp \mathcal{D} \]
The scans $S_i$ are developed on each side of a central scan $S_0$,
and labelled with their Manhattan distance ($d_1 = |d_x| + |d_y|$) to
the central line $\mathcal{S}_0$, with a positive (resp. negative) sign
if they lie on the left (resp. right) of $\mathcal{S}_0$.
\begin{figure}[h]
\center
%\begin{picture}(300,40)
%\end{picture}
\input{Fig_notions/fig}
\caption{Example of directional scan.}
\label{fig:ds}
\end{figure}
The directional scan can be defined by its central scan $S_0$.
If $A(x_A,y_A)$ and $B(x_B,y_B)$ are the end points of $S_0$, the scan strip is:
\[ \mathcal{D}(A,B)
= \mathcal{L}(\delta_x, \delta_y, min (c_1,c_2), 1 + |c_1-c_2|) \]
where $\delta_x = x_B - x_A$, $\delta_y = y_B - y_A$,
$c_1 = \delta_x\cdot x_A + \delta_y\cdot y_A$ and
$c_2 = \delta_x\cdot x_B + \delta_y\cdot y_B$.
The scan line $\mathcal{S}_i$ is then defined by:
\[ \mathcal{S}_i(A,B) = \mathcal{L}(\delta_y, -\delta_x,
\delta_y\cdot x_A - \delta_x\cdot y_A + i\cdot \nu_{AB}, \nu_{AB}) \]
where $\nu_{AB} = max (|\delta_x|, |\delta_y|)$.
The length of the scan lines is either $d_\infty(AB)$ or $d_\infty(AB)-1$,
where $d_\infty$ is the chessboard distance ($d_\infty = max (|d_x|,|d_y|)$).
In practice, this length difference between scan lines is not a drawback,
as the image bounds must be handled anyway.
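These formulas can be checked with a direct transcription in code. The sketch
below (our naming, assuming integer coordinates) builds the strip
$\mathcal{D}(A,B)$ and the scan line $\mathcal{S}_i(A,B)$ as parameter tuples
$(a,b,c,\nu)$, and extracts a scan as their intersection over a point set:
\begin{verbatim}
def scan_strip(A, B):
    """D(A,B) = L(dx, dy, min(c1, c2), 1 + |c1 - c2|)."""
    (xA, yA), (xB, yB) = A, B
    dx, dy = xB - xA, yB - yA
    c1 = dx * xA + dy * yA
    c2 = dx * xB + dy * yB
    return (dx, dy, min(c1, c2), 1 + abs(c1 - c2))

def scan_line(A, B, i):
    """S_i(A,B) = L(dy, -dx, dy*xA - dx*yA + i*nu, nu), nu = max(|dx|, |dy|)."""
    (xA, yA), (xB, yB) = A, B
    dx, dy = xB - xA, yB - yA
    nu = max(abs(dx), abs(dy))
    return (dy, -dx, dy * xA - dx * yA + i * nu, nu)

def in_line(p, line):
    """Membership of point p in a digital line given as (a, b, c, nu)."""
    a, b, c, nu = line
    return 0 <= a * p[0] + b * p[1] - c < nu

def scan(A, B, i, points):
    """Scan S_i: the points lying both in the strip and in the scan line."""
    strip, line = scan_strip(A, B), scan_line(A, B, i)
    return [p for p in points if in_line(p, strip) and in_line(p, line)]
\end{verbatim}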
The directional scan can also be defined by its central point $C(x_C,y_C)$,
its direction $\vec{D}(x_D,y_D)$ and its width $w$. The scan strip is:
\[ \mathcal{D}(C,\vec{D},w)
= \mathcal{L}(y_D, -x_D, x_C\cdot y_D - y_C\cdot x_D - w / 2, w) \]
and the scan line $\mathcal{S}_i(C,\vec{D},w)$ by:
\[ \mathcal{S}_i(C,\vec{D},w)
= \mathcal{L}(x_D, y_D,
x_D\cdot x_C + y_D\cdot y_C - \nu_{D} / 2 + i\cdot \nu_{D},
\nu_{D}) \]
where $\nu_{D} = max (|x_D|,|y_D|)$.
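The point-based parameterization can be transcribed the same way; the helper
below (again our naming) only builds the strip $\mathcal{D}(C,\vec{D},w)$,
using integer division for the $w/2$ offset:
\begin{verbatim}
def scan_strip_from_point(C, D, w):
    """D(C, D, w) = L(yD, -xD, xC*yD - yC*xD - w/2, w)."""
    (xC, yC), (xD, yD) = C, D
    return (yD, -xD, xC * yD - yC * xD - w // 2, w)

# Example: scan_strip_from_point((10, 5), (3, 1), 7) returns (1, -3, -8, 7).
\end{verbatim}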
We try to detect a blurred segment inside a directional scan whose position
and orientation are provided by the user, or set to an arbitrary direction in
unsupervised mode.
Most of the time, the detection stops where the segment escapes sideways
from the scan strip.
Therefore a second search is run using another directional scan aligned
on the detected segment.
But only an estimate of the blurred segment direction is available, and the
longer the real segment, the higher the probability that the blurred segment
escapes again from the directional scan.
\begin{figure}[h]
\center
\begin{picture}(300,40)
\end{picture}
\caption{Example of early detection failures
on side escapes from the directional scan.}
\label{fig:sideEscapes}
\end{figure}
Even in the ideal situation where the detected segment is a perfect line,
its width is never null as a result of the discretization process.
The accuracy of the estimated direction is mostly constrained by the length of
the detected segment.
To avoid these side escapes, the scan should not be a straight strip but
rather a cone-shaped area that takes the blurred segment preimage into account.
This side shift is amplified when the blurred segment is allowed to get
thicker in order to capture possibly noisy features.
The assigned width then remains greater than the detected minimal width,
so that the segment can drift within the directional scan.
Knowing the detected blurred segment shape and the image size, it is
possible to define a cone-shaped scan area, but this solution is computationally
expensive because it leads to a useless exploration of large image areas.
\begin{figure}[h]
\center
%\begin{picture}(300,40)
%\end{picture}
\input{Fig_notions/bscone}
\caption{Possible extension area based
on the detected blurred segment preimage.}
\label{fig:cone}
\end{figure}
The solution implemented in the former work was to leave an arbitrary
margin between the scan strip width and the width assigned to the detection,
and to perform two fine detection steps, each of them using the direction
found at the previous step.
This process could be iterated as long as the blurred segment escapes from
the directional scan, using as many detection steps as necessary.
But it entails a useless computational cost, due to the margin left,
and to the repeated detection of the same segment start points.
\begin{figure}[h]
\center
\begin{picture}(300,40)
\end{picture}
\caption{Example of blurred segment detection
using an adaptive directional scan.}
\label{fig:adaptiveScan}
\end{figure}
The solution we propose here is to dynamically adapt the scan direction
to the detection result. At each position $i$, the scan strip is updated
using the direction and minimal width of the blurred segment computed
at the former position $i-1$, that is:
\[ S_i = \mathcal{D}_{i-1} \cap \mathcal{S}_i \]
where $\mathcal{D}_{i-1}$ is the scan strip aligned on the blurred segment
$\mathcal{B}_{i-1}$.
Compared to static directional scans, the scan strip varies while the
scan lines remain fixed.
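To make this process concrete, here is a high-level sketch of such an adaptive
tracking loop, reusing in_line and minimal_width from the sketches above. It
is only a schematic view under simplifying assumptions: the strip is realigned
from the two end points of the current segment and a fixed strip width,
whereas the actual method exploits the direction and minimal width of the
detected blurred segment.
\begin{verbatim}
def realigned_strip(segment, width):
    """Strip aligned on the current segment; its direction is crudely taken
    from the segment end points (illustration only)."""
    (x0, y0), (x1, y1) = segment[0], segment[-1]
    dx, dy = x1 - x0, y1 - y0
    values = [dy * x - dx * y for (x, y) in segment]
    c = min(values) - (width - (max(values) - min(values))) // 2
    return (dy, -dx, c, width)

def adaptive_track(seed, scan_lines, eps, strip_width):
    """Adaptive scan sketch: the scan lines S_i are fixed (given as lists of
    candidate points), while the strip is realigned on the segment detected
    so far before each scan is processed.  The seed must contain at least
    two distinct points (those accepted in the central scan S_0)."""
    segment = list(seed)
    for line_points in scan_lines:                       # successive S_i
        strip = realigned_strip(segment, strip_width)    # role of D_{i-1}
        for p in line_points:
            if in_line(p, strip) and minimal_width(segment + [p]) < eps:
                segment.append(p)                        # p extends the segment
    return segment
\end{verbatim}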