Unverified Commit 3f02806b authored by Kerautret Bertrand

merge from master

parents 70cc5a60 98e75cff
Showing with 205 additions and 138 deletions
Article/Fig_expe/buroNew.png

554 KiB → 274 KiB
Article/Fig_expe/buroOld.png

275 KiB

@@ -6,7 +6,7 @@
\SetKwData{lm}{LocMax}
\SetKwData{nullset}{$\emptyset$}
\SetKwData{ortho}{$\vec{AB}_\perp$}
-\SetKwData{eps}{$\varepsilon_{ini}$}
+\SetKwData{eps}{$2~\varepsilon_{ini}$}
\SetKwData{pta}{$A$}
\SetKwData{ptb}{$B$}
\SetKwData{Result}{Result}
Article/Fig_method/briques.gif

54.5 KiB

Article/Fig_method/briques1_full.png

77.6 KiB

Article/Fig_method/briques1_zoom.png

9.4 KiB

Article/Fig_method/briques2_full.png

77.5 KiB

Article/Fig_method/briques2_zoom.png

9.28 KiB

Images obtained by the test testEdgeDir.txt
followed by the multiple-detection command: Ctrl-M
with two modalities driven by Ctrl-E
In GIMP, selection of the area (34,166)-(444,314).
Images obtained by the test testBriques.txt on the image briques.gif,
with a final thickness setting of 8 pixels (key x).
The segments are extracted in single-detection mode
and the two edges are saved separately and displayed.
Too many spurious detections in multi-detection.
Contrast attenuation at blevel = 50.
In GIMP, selection of the area (174,160)-(-10,104).
Article/Fig_method/parpaings.png

261 KiB

Article/Fig_method/parpaings2.png

379 KiB

Article/Fig_method/parpaings3.png

393 KiB

105 320
118 283
@@ -11,7 +11,7 @@
title = {Blurred segments in gray level images for
interactive line extraction},
author = {Kerautret, Bertrand and Even, Philippe},
-booktitle = {Proc. of Int. Workshop on Computer Image Analysis}},
+booktitle = {Proc. of Int. Workshop on Combinatorial Image Analysis},
series = {LNCS},
volume = {5852},
optpublisher = {Springer},
@@ -131,7 +131,7 @@
@inproceedings{LuAl15,
-title = {CannyLines: a parameter-free line segment detector},
+title = {Canny{L}ines: a parameter-free line segment detector},
author = {Lu, Xiaohu and Yao, Jian and Li, Kai and Li, Li},
booktitle = {Int. Conf. on Image Processing},
publisher = {IEEE},
@@ -16,25 +16,27 @@ in \RefTab{tab:cmpOldNew}.
\begin{figure}[h]
\center
\begin{tabular}{c@{\hspace{0.2cm}}c}
-\includegraphics[width=0.49\textwidth]{Fig_expe/buro.png} &
+\includegraphics[width=0.49\textwidth]{Fig_expe/buroOld.png} &
\includegraphics[width=0.49\textwidth]{Fig_expe/buroNew.png}
\begin{picture}(1,1)
\put(-158,46){\circle{8}}
\put(-162,42){\makebox(8,8){\scriptsize 0}}
\put(-18,30){\circle{8}}
\put(-22,26){\makebox(8,8){\scriptsize 1}}
\put(-57,92){\circle{8}}
\put(-61,88){\makebox(8,8){\scriptsize 2}}
\put(-53,104){\circle{8}}
\put(-57,100){\makebox(8,8){\scriptsize 3}}
-\put(-89,49){\circle{8}}
-\put(-93,45){\makebox(8,8){\scriptsize 4}}
+\put(-90,71){\circle{8}}
+\put(-94,67){\makebox(8,8){\scriptsize 4}}
\put(-92,23){\circle{8}}
\put(-96,19){\makebox(8,8){\scriptsize 5}}
\put(-134,9){\circle{8}}
\put(-138,5){\makebox(8,8){\scriptsize 6}}
\put(-156,27){\circle{8}}
\put(-160,23){\makebox(8,8){\scriptsize 7}}
-\put(-157,82){\circle{8}}
-\put(-161,78){\makebox(8,8){\scriptsize 8}}
+\put(-150,84){\circle{8}}
+\put(-154,80){\makebox(8,8){\scriptsize 8}}
\put(-39,110){\circle{8}}
\put(-43,106){\makebox(8,8){\scriptsize 9}}
\end{picture}
@@ -28,7 +28,7 @@ The present work aims at designing a flexible tool to detect blurred segments
with optimal width and orientation in gray-level images for as well
supervised as unsupervised contexts.
User-friendly solutions are sought, with ideally no parameter to set,
-or at least quite few values with intuitive meaning to an end user.
+or at least very few values with an intuitive meaning.
\subsection{Previous work}
@@ -37,7 +37,7 @@ blurred segments of fixed width in gray-level images was already introduced.
It is based on a first rough detection in a local area
of the image either defined by the user in supervised context or blindly
explored in automatic mode. The goal is to disclose the presence of an edge.
-Therefore, a simple test as the gradient maximal value is performed.
+Therefore a simple test such as the maximal gradient value is performed.
In case of success, refinement steps are run through an exploration of
the image in the direction of the detected edge.
@@ -48,25 +48,23 @@ until a correct candidate with an acceptable gradient orientation is found.
Only the gradient information is processed as it provides a good information
on the image dynamics, and hence the presence of edges.
Trials to also use the intensity signal were made through costly correlation
-techniques, but they were mostly successful for detecting objects with
-stable appearance such as metallic pipes \cite{AubryAl17}.
+techniques, but they were mostly successful for detecting shapes with a
+stable appearance such as metallic tubular objects \cite{AubryAl17}.
-Despite of good performances obtained compared to other methods from the
-literature, several drawbacks remain.
-First, the blurred segment width is not measured, but initially set by the
-user to meet the application requirements, so that no quality information
-can be derived from the computed segment.
-Moreover, the blurred segment hull is left free to shift sidewards, or worst,
-to rotate around a thin edge in the image, and the produced orientation
-value can be largely biased.
+Despite the good performance achieved, several drawbacks remain.
+First, the blurred segment width is not measured but initially set by the
+user according to the application requirements. The produced information
+on the edge quality is rather poor, and especially when the edge is thin,
+the risk of incorporating outlier points is quite high, thus producing a
+biased estimation of the edge orientation.
-Then, two refinement steps are systematically run to cope with most of the
-tested data, although this is useless when the first detection is successfull.
-Beyond, there is no guarantee that this could treat all kinds of data.
-The search direction is fixed by the support vector of the blurred segment
-detected at the former step, and because the set of vectors in a bounded
-discrete space is finite, there is necessarily a limit on this direction
-accuracy.
+Then, two refinement steps are systematically run.
+On one hand, this is useless when the first detection is successful.
+On the other hand, there is no guarantee that this approach is able to
+process larger images.
+The search direction relies on the support vector of the blurred segment
+detected at the former step, and the digitization rounding fixes a limit
+on the accuracy of this estimated orientation.
It results that more steps would inevitably be necessary to process higher
resolution images.
@@ -83,10 +81,10 @@ As a side effect, these two major evolutions also led to a noticeable
improvement of the time performance of the detector.
In the next section, the main theoretical notions this work relies on are
-introduced, with a specific focus on the new concept of adaptive directional
-scanner.
-Then the new detector workflow and its integration into both supervised and
-unsupervised contexts are presented and discussed in \RefSec{sec:method}.
+introduced.
+Then the new detector workflow, the adaptive directional scanner, the control
+of the assigned width and their integration into both supervised
+and unsupervised contexts are presented and discussed in \RefSec{sec:method}.
Experiments carried out to assess the expected increase of performance are described
in \RefSec{sec:expe}.
Finally, the achieved results are summarized in \RefSec{sec:conclusion},
@@ -114,11 +114,11 @@ necessary.
But at each iteration, already tested points are processed again,
thus producing a useless computational cost.
-Here the proposed solution is to dynamically align the scan direction to
+Here the proposed solution is to dynamically align the scan direction on
the blurred segment one all along the expansion stage.
-At each iteration $i$, the scan strip is updated using the direction
-of the blurred segment computed at previous iteration $i-1$.
-The adaptive directional scan $ADS$ is then defined by :
+At each iteration $i$, the scan strip is aligned on the direction of the
+blurred segment $\mathcal{B}_{i-1}$ computed at the previous iteration $i-1$.
+More generally, an adaptive directional scan $ADS$ is defined by:
\begin{equation}
%S_i = \mathcal{D}_{i-1} \cap \mathcal{N}_i
ADS = \left\{
@@ -126,14 +126,23 @@ S_i = \mathcal{D}_i \cap \mathcal{N}_i \cap \mathcal{I}
\left| \begin{array}{l}
\delta(\mathcal{N}_i) = - \delta^{-1}(\mathcal{D}_0) \\
\wedge~ h_0(\mathcal{N}_i) = h_0(\mathcal{N}_{i-1}) + p(\mathcal{D}) \\
-\wedge~ \mathcal{D}_{i} = D (\mathcal{B}_{i-1},\varepsilon + k), i > 1
+\wedge~ \mathcal{D}_{i} = \mathcal{D} (C_{i-1}, \vec{D}_{i-1}, w_{i-1}), i > 1
+%\wedge~ \mathcal{D}_{i} = D (\mathcal{B}_{i-1},\varepsilon + k), i > 1
\end{array} \right. \right\}
\end{equation}
-where $D (\mathcal{B}_i,w)$ is the scan strip aligned to the
-detected segment at iteration $i$ with width $w$.
-In practice, the scan width is set a little greater than the assigned
-width $\varepsilon$ ($k$ is a constant arbitrarily set to 4).
-The last clause expresses the update of the scan bounds at iteration $i$.
+%where $D (\mathcal{B}_i,w)$ is the scan strip aligned to the
+%detected segment at iteration $i$ with width $w$.
+%In practice, the scan width is set a little greater than the assigned
+%width $\varepsilon$ ($k$ is a constant arbitrarily set to 4).
+where $C_{i-1}$, $\vec{D}_{i-1}$ and $w_{i-1}$ are a position, a director
+vector and a width observed at iteration $i-1$.
+In the scope of the present detector, $C_{i-1}$ is the intersection of
+the input selection and the medial axis of $\mathcal{B}_{i-1}$,
+$\vec{D}_{i-1}$ the support vector of the narrowest digital straight line
+that contains $\mathcal{B}_{i-1}$,
+and $w_{i-1}$ a value slightly greater than the minimal width of
+$\mathcal{B}_{i-1}$.
+So the last clause expresses the update of the scan bounds at iteration $i$.
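For illustration, the strip update in the last clause can be sketched in a few lines of C++ (a minimal sketch with hypothetical names `Strip` and `stripFrom`; $\mathcal{L}(a,b,c,\nu)$ is taken here as the digital line $c \le ax + by < c + \nu$):

```cpp
// Digital line L(a, b, c, nu): the pixels (x, y) with c <= a*x + b*y < c + nu.
struct Strip {
  int a, b, c, nu;
  bool contains(int x, int y) const {
    int v = a * x + b * y;
    return v >= c && v < c + nu;
  }
};

// Scan strip D(C, D, w) rebuilt at each iteration from the position C,
// the director vector D and the width w observed on the segment B_{i-1}.
Strip stripFrom(int xc, int yc, int xd, int yd, int w) {
  return {yd, -xd, xc * yd - yc * xd - w / 2, w};
}
```

For instance, with a rough initial direction $(1,0)$, `stripFrom(0, 0, 1, 0, 5)` misses the point $(30,10)$ lying on an edge of slope $1/3$, whereas the strip realigned on the direction $(3,1)$ estimated from $\mathcal{B}_{i-1}$ contains it; meanwhile the scan lines themselves stay fixed.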
Compared to static directional scans, the scan strip moves while
scan lines remain fixed.
This behavior ensures a complete detection of the blurred segment even
@@ -145,15 +154,15 @@ when the orientation is badly estimated (\RefFig{fig:escape} c).
\includegraphics[width=0.48\textwidth]{Fig_notions/escapeFirst_zoom.png} &
\includegraphics[width=0.48\textwidth]{Fig_notions/escapeSecond_zoom.png} \\
\multicolumn{2}{c}{
-\includegraphics[width=0.98\textwidth]{Fig_notions/escapeThird_zoom.png}}
+\includegraphics[width=0.72\textwidth]{Fig_notions/escapeThird_zoom.png}}
\begin{picture}(1,1)(0,0)
{\color{dwhite}{
-\put(-260,134.5){\circle*{8}}
-\put(-86,134.5){\circle*{8}}
+\put(-260,100.5){\circle*{8}}
+\put(-86,100.5){\circle*{8}}
\put(-172,7.5){\circle*{8}}
}}
-\put(-263,132){a}
-\put(-89,132){b}
+\put(-263,98){a}
+\put(-89,98){b}
\put(-175,5){c}
\end{picture}
\end{tabular}
@@ -195,7 +204,7 @@ $\lambda$ iterations ($\mu_{i+\lambda} = \mu_i$), it is set to a much
stricter value able to circumscribe the possible interpretations of the
segment, that take into account the digitization margins:
\begin{equation}
-\varepsilon = \mu_{i+\lambda} + 1/2
+\varepsilon = \mu_{i+\lambda} + \frac{\textstyle 1}{\textstyle 2}
\end{equation}
This strategy aims at preventing the incorporation of spurious outliers in
further parts of the segment.
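This two-stage width control can be sketched as follows (an illustrative helper, not the actual implementation; `mu[i]` stands for the minimal width $\mu_i$ observed at iteration $i$):

```cpp
#include <vector>

// Returns the assigned width: it keeps its initial value epsIni until the
// observed minimal width mu_i stays unchanged over lambda consecutive
// iterations; it is then pinned to mu + 1/2 (digitization margin), so that
// spurious outliers can no longer be incorporated.
double controlledWidth(const std::vector<double>& mu, double epsIni, int lambda) {
  double eps = epsIni;
  int stable = 0;  // number of consecutive iterations with unchanged mu
  for (std::size_t i = 1; i < mu.size(); ++i) {
    stable = (mu[i] == mu[i - 1]) ? stable + 1 : 0;
    if (stable >= lambda) { eps = mu[i] + 0.5; break; }
  }
  return eps;
}
```

With $\lambda = 3$ and observed widths $3, 2.5, 2, 2, 2, 2$, the assigned width drops from its initial value to $2.5$ at the fourth stable observation.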
@@ -218,21 +227,26 @@ This strategy provides a first quick analysis of the local context before
extracting the segment and contributes to notably stabilize the overall
process.
-When selecting candidates for the fine detection stage, an option is left
-to also reject points with a gradient vector in an opposite direction to
-the gradient vector of the blurred segment start point.
-In that case, called {\it edge selection mode}, all the blurred segment
-points have the same direction.
-If they are not rejected, points with opposite gradients are aggregated
-into a same blurred segment, allowing the detection of the two opposite
-edges of a thin straight object. This is called {\it line selection mode}.
+When selecting candidates for the fine detection stage, an option, called
+{\it edge selection mode}, is left to filter the points according to their
+gradient direction.
+In {\it main edge selection mode}, only the points with a gradient vector
+in the same direction as the start point gradient vector are added to the
+blurred segment.
+In {\it opposite edge selection mode}, only the points with an opposite
+gradient vector direction are kept.
+In {\it line selection mode} this filter is not applied, and all the
+candidate points are aggregated into the same blurred segment, whatever the
+direction of their gradient vector.
+This mode allows the detection of the two opposite edges of a thin straight
+object.
This distinction is illustrated on \RefFig{fig:edgeDir}.
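The three modes reduce to a sign test on the scalar product between a candidate's gradient and the start point's gradient, which can be sketched as follows (illustrative names, not the detector's actual API):

```cpp
enum class SelectionMode { MainEdge, OppositeEdge, Line };

// Gradient-direction filter applied when adding a candidate point:
// (gx, gy) is the candidate's gradient, (g0x, g0y) the start point's one.
bool keepCandidate(SelectionMode mode, int gx, int gy, int g0x, int g0y) {
  long dot = static_cast<long>(gx) * g0x + static_cast<long>(gy) * g0y;
  switch (mode) {
    case SelectionMode::MainEdge:     return dot > 0;  // same direction only
    case SelectionMode::OppositeEdge: return dot < 0;  // opposite direction only
    case SelectionMode::Line:         return true;     // no filtering
  }
  return false;
}
```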
\begin{figure}[h]
\center
\begin{tabular}{c@{\hspace{0.2cm}}c}
-\includegraphics[width=0.48\textwidth]{Fig_method/multiStroke_zoom.png} &
-\includegraphics[width=0.48\textwidth]{Fig_method/multiEdge_zoom.png}
+\includegraphics[width=0.48\textwidth]{Fig_method/briques1_zoom.png} &
+\includegraphics[width=0.48\textwidth]{Fig_method/briques2_zoom.png}
\end{tabular}
\begin{picture}(1,1)(0,0)
{\color{dwhite}{
@@ -242,43 +256,47 @@ This distinction is illustrated on \RefFig{fig:edgeDir}.
\put(-262.5,-20){a}
\put(-89,-20){b}
\end{picture}
-\caption{Blurred segment obtained in line selection mode (a) and in
-edge selection mode (b) as a result of the test of the gradient
-direction of added points.
-In line selection mode, a thick blurred segment is built and
-extended up to four tiles.
-In edge selection mode, a thin blurred segment is built along
-one of the two tile join edges.
-Both join edges, drawn with distinct colors, are detected with
-the multi-selection option.
-They are much shorter than the whole join detected in line
-selection mode, because the tiles are not perfectly aligned.}
+\caption{Blurred segments obtained in line or edge selection mode
+as a result of the gradient direction filtering when adding points.
+In line selection mode (a), a thick blurred segment is built and
+extended all along the brick join.
+In edge selection mode (b), a thin blurred segment is built along
+one of the two join edges.
+Both join edges are detected with the multi-selection option.
+On that very textured image, they are much shorter than the whole
+join detected in line selection mode.
+Blurred segment points are drawn in black, and the enclosing
+straight segment with minimal width in blue.}
\label{fig:edgeDir}
\end{figure}
-Another option, called multi-detection allows the detection of all the
-segments crossed by the input stroke $AB$.
-The multi-detection algorithm is displayed below.
+\subsection{Multiple blurred segments detection}
+
+\input{Fig_method/algoMulti}
+Another option, called {\it multi-detection} (Algorithm 1), allows the
+detection of all the segments crossed by the input stroke $AB$.
+In order to avoid multiple detections of the same edge, an occupancy mask,
+initially empty, collects the dilated points of all the blurred segments,
+so that these points cannot be added to another segment.
-First the positions $M_j$ of the local maxima of the gradient magnitude found
-under the stroke are sorted from the highest to the lowest.
+First the positions $M_j$ of the prominent local maxima of the gradient
+magnitude found under the stroke are sorted from the highest to the lowest.
For each of them the main detection process is run with three modifications:
-i) the initial detection takes $M_j$ and the orthogonal direction $AB_\perp$
-to the stroke as input to build a static scan of fixed width
-$\varepsilon_{ini}$, and $M_j$ is used as start point of the blurred segment;
-ii) an occupancy mask, initially empty, is filled in with the points of the
-detected blurred segments $\mathcal{B}_j''$ at the end of each successful
-detection;
-iii) points marked as occupied are rejected when selecting candidates for the
+\begin{enumerate}
+\item the initial detection takes $M_j$ and the orthogonal direction
+$\vec{AB}_\perp$ to the stroke as input to build a static scan of fixed width
+$2~\varepsilon_{ini}$, and $M_j$ is used as the start point of the blurred
+segment;
+\item the occupancy mask is filled in with the points of the detected blurred
+segments $\mathcal{B}_j''$ at the end of each successful detection;
+\item points marked as occupied are rejected when selecting candidates for the
blurred segment extension in the fine tracking step.
+\end{enumerate}
\input{Fig_method/algoMulti}
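The three modifications above fit in a short loop. The following is a sketch only (hypothetical names; `detect` stands for the main detection process and the mask is a plain point set rather than the dilated image mask of the actual detector):

```cpp
#include <algorithm>
#include <set>
#include <utility>
#include <vector>

using Pt = std::pair<int, int>;
using Segment = std::vector<Pt>;

// Multi-detection sketch: the gradient maxima M_j found under the stroke,
// sorted by decreasing magnitude, each seed a detection; an occupancy mask
// prevents the same edge from being detected twice.
std::vector<Segment> multiDetect(std::vector<std::pair<double, Pt>> maxima,
                                 Segment (*detect)(Pt, const std::set<Pt>&)) {
  std::sort(maxima.begin(), maxima.end(),
            [](const auto& l, const auto& r) { return l.first > r.first; });
  std::set<Pt> occupied;                     // occupancy mask, initially empty
  std::vector<Segment> segments;
  for (const auto& gm : maxima) {
    if (occupied.count(gm.second)) continue; // seed already explained
    Segment bs = detect(gm.second, occupied);
    if (bs.empty()) continue;                // failed detection
    occupied.insert(bs.begin(), bs.end());   // points of B_j'' fill the mask
    segments.push_back(bs);
  }
  return segments;
}
```

In the real detector the mask would also receive the dilation of each segment, and `detect` would reject occupied candidates during the fine tracking step.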
In edge selection mode (\RefFig{fig:edgeDir} b), the multi-detection
-algorithm is executed twice.
-In the second run, the start point is rejected and only candidate points
-with opposite gradient direction are considered to extend the blurred
-segment.
+algorithm is executed twice, first in main edge selection mode, then
+in opposite edge selection mode.
%Beyond the possible detection of a large set of edges at once, the
%multi-detection allows the detection of some unaccessible edges in
@@ -332,38 +350,72 @@ segment.
\subsection{Automatic blurred segment detection}
An unsupervised mode is also proposed to automatically detect all the
-straight edges in the image. A stroke that crosses the whole image, is
+straight edges in the image. The principle of this automatic detection
+is described in Algorithm 2. A stroke that crosses the whole image is
swept in both directions, vertical then horizontal, from the center to
the borders. At each position, the multi-detection algorithm is run
to collect all the segments found under the stroke.
\input{Fig_method/algoAuto}
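The center-to-borders sweep can be sketched as follows (a simplified illustration with assumed names `Stroke` and `sweepStrokes`; the multi-detection of the previous section would then be run on each returned stroke):

```cpp
#include <initializer_list>
#include <vector>

struct Stroke { int xa, ya, xb, yb; };

// Builds the strokes of the unsupervised sweep: full-height vertical
// strokes from the central column towards the left and right borders,
// then full-width horizontal strokes from the central row towards the
// top and bottom borders, spaced by `step` pixels.
std::vector<Stroke> sweepStrokes(int width, int height, int step) {
  std::vector<Stroke> strokes;
  for (int d = 0; d * step < width / 2; ++d)
    for (int s : {1, -1}) {
      int x = width / 2 + s * d * step;
      if (x >= 0 && x < width && (d > 0 || s == 1))  // emit center only once
        strokes.push_back({x, 0, x, height - 1});
    }
  for (int d = 0; d * step < height / 2; ++d)
    for (int s : {1, -1}) {
      int y = height / 2 + s * d * step;
      if (y >= 0 && y < height && (d > 0 || s == 1))
        strokes.push_back({0, y, width - 1, y});
    }
  return strokes;
}
```

On a 100x60 image with a 20-pixel step, this yields five vertical and three horizontal strokes, starting with the central column.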
-The behavior of the unsupervised detection is depicted through the two
-examples of \RefFig{fig:auto}.
-The example on the left shows the detection of thin straight objects on a
-circle with variable width.
-On the left half of the circumference, the distance between both edges
-exceeds the initial assigned width and a thick blurred segment is build
-for each of them. Of course, on a curve, a continuous thickenning is
-observed untill the blurred segment minimal width reaches the initial
-assigned width.
-On the right half, both edges are encompassed in a common blurred segment,
-and at the extreme right part of the circle, the few distant residual points
-are grouped to form a thick segment.
-The example on the right shows the limits of the edge detector on a picture
-with quite dense repartition of gradient.
-All the salient edges are well detected but they are surrounded by a lot
-of false detections, that rely on the presence of many local maxima of
-the gradient magnitude with similar orientations.
+\RefFig{fig:evalAuto}b gives an idea of the automatic detection performance.
+In the example of \RefFig{fig:noisy}, hardly perceptible edges are detected
+despite a quite textured context.
+Unsurprisingly, the length of the detected edges is linked to the initial
+value of the assigned width, but a large value also augments the rate
+of insertion of interfering outliers.
\begin{figure}[h]
\center
-\begin{tabular}{c@{\hspace{0.2cm}}c}
-\includegraphics[width=0.37\textwidth]{Fig_method/vcercleAuto.png} &
-\includegraphics[width=0.58\textwidth]{Fig_method/plafondAuto.png}
+\begin{tabular}{c@{\hspace{0.1cm}}c@{\hspace{0.1cm}}c}
+\includegraphics[width=0.32\textwidth]{Fig_method/parpaings.png} &
+\includegraphics[width=0.32\textwidth]{Fig_method/parpaings2.png} &
+\includegraphics[width=0.32\textwidth]{Fig_method/parpaings3.png}
\end{tabular}
-\caption{Automatic detection of blurred segments.}
-\label{fig:auto}
+\begin{picture}(1,1)(0,0)
+{\color{dwhite}{
+\put(-286,-25.5){\circle*{8}}
+\put(-171,-25.5){\circle*{8}}
+\put(-58,-25.5){\circle*{8}}
+}}
+\put(-288.5,-28){a}
+\put(-173.5,-28){b}
+\put(-60.5,-28){c}
+\end{picture}
+\caption{Automatic detection of blurred segments on a textured image.
+a) the input image,
+b) automatic detection result with initial assigned width set
+to 3 pixels,
+c) automatic detection result with initial assigned width set
+to 8 pixels.}
+\label{fig:noisy}
\end{figure}
%The behavior of the unsupervised detection is depicted through the two
%examples of \RefFig{fig:auto}.
%The example on the left shows the detection of thin straight objects on a
%circle with variable width.
%On the left half of the circumference, the distance between both edges
%exceeds the initial assigned width and a thick blurred segment is build
%for each of them. Of course, on a curve, a continuous thickenning is
%observed untill the blurred segment minimal width reaches the initial
%assigned width.
%On the right half, both edges are encompassed in a common blurred segment,
%and at the extreme right part of the circle, the few distant residual points
%are grouped to form a thick segment.
%
%The example on the right shows the limits of the edge detector on a picture
%with quite dense repartition of gradient.
%All the salient edges are well detected but they are surrounded by a lot
%of false detections, that rely on the presence of many local maxima of
%the gradient magnitude with similar orientations.
%
%\begin{figure}[h]
%\center
% \begin{tabular}{c@{\hspace{0.2cm}}c}
% \includegraphics[width=0.37\textwidth]{Fig_method/vcercleAuto.png} &
% \includegraphics[width=0.58\textwidth]{Fig_method/plafondAuto.png}
% \end{tabular}
% \caption{Automatic detection of blurred segments.}
% \label{fig:auto}
%\end{figure}
@@ -67,7 +67,7 @@ DS = \left\{ S_i = \mathcal{D} \cap \mathcal{N}_i \cap \mathcal{I}
\end{array} \right. \right\}
%S_i = \mathcal{D} \cap \mathcal{N}_i, \mathcal{N}_i \perp \mathcal{D}
\end{equation}
-In this expression, the clause
+In this definition, the clause
$\delta(\mathcal{N}_i) = - \delta^{-1}(\mathcal{D})$
expresses the orthogonality constraint between the scan lines $\mathcal{N}_i$
and the scan strip $\mathcal{D}$.
@@ -79,7 +79,7 @@ The scans $S_i$ are developed on each side of a start scan $S_0$,
and ordered by their distance to the start line $\mathcal{N}_0$ with
a positive (resp. negative) sign if they are on the left (resp. right)
side of $\mathcal{N}_0$ (\RefFig{fig:ds}).
-The directional scan is iterately processed from the start scan to both ends.
+The directional scan is iteratively parsed from the start scan to both ends.
At each iteration $i$, the scans $S_i$ and $S_{-i}$ are successively processed.
\begin{figure}[h]
@@ -111,29 +111,30 @@ At each iteration $i$, the scans $S_i$ and $S_{-i}$ are successively processed.
\put(-60,30){$\mathcal{N}_8$}
\put(-169,8){$\mathcal{N}_{-5}$}
\end{picture}
-\caption{A directional scan: the start scan $S_0$ in blue, odd scans in
-green, even scans in red, scan lines bounds $\mathcal{N}_i$ in
-plain lines and scan strip bounds $\mathcal{D}$ in dotted lines.}
+\caption{A directional scan.
+The start scan $S_0$ is drawn in blue, odd scans in green,
+even scans in red, the bounds of scan lines $\mathcal{N}_i$
+with plain lines and the bounds of scan strip $\mathcal{D}$
+with dotted lines.}
\label{fig:ds}
\end{figure}
A directional scan can be defined by its start scan $S_0$.
If $A(x_A,y_A)$ and $B(x_B,y_B)$ are the end points of $S_0$,
-the scan strip is defined by :
+and if we note $\delta_x = x_B - x_A$, $\delta_y = y_B - y_A$,
+$c_1 = \delta_x\cdot x_A + \delta_y\cdot y_A$,
+$c_2 = \delta_x\cdot x_B + \delta_y\cdot y_B$ and
+$\nu_{AB} = max (|\delta_x|, |\delta_y|)$, it is then defined by
+the following scan strip $\mathcal{D}^{A,B}$ and scan lines
+$\mathcal{N}_i^{A,B}$:
\begin{equation}
-\mathcal{D}(A,B) =
-\mathcal{L}(\delta_x,~ \delta_y,~ min (c1,c2),~ 1 + |c_1-c_2|)
-\end{equation}
-\noindent
-where $\delta_x = x_B - x_A$, $\delta_y = y_B - y_A$,
-$c_1 = \delta_x\cdot x_A + \delta_y\cdot y_A$ and
-$c_2 = \delta_x\cdot x_B + \delta_y\cdot y_B$.
-The scan line $\mathcal{N}_i$ is then defined by :
-\begin{equation}
-\mathcal{N}_i(A,B) = \mathcal{L}(\delta_y,~ -\delta_x,~
+\left\{ \begin{array}{l}
+\mathcal{D}^{A,B} =
+\mathcal{L}(\delta_x,~ \delta_y,~ min (c_1,c_2),~ 1 + |c_1-c_2|) \\
+\mathcal{N}_i^{A,B} = \mathcal{L}(\delta_y,~ -\delta_x,~
\delta_y\cdot x_A - \delta_x\cdot y_A + i\cdot \nu_{AB},~ \nu_{AB})
+\end{array} \right.
\end{equation}
-where $\nu_{AB} = max (|\delta_x|, |\delta_y|)$
%The scan lines length is $d_\infty(AB)$ or $d_\infty(AB)-1$, where $d_\infty$
%is the chessboard distance ($d_\infty = max (|d_x|,|d_y|)$).
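These formulas translate directly into code. The following is a small sketch (assumed names; $\mathcal{L}(a,b,c,\nu)$ is taken as the digital line $c \le ax + by < c + \nu$):

```cpp
#include <algorithm>
#include <cstdlib>

// Digital line L(a, b, c, nu): the pixels (x, y) with c <= a*x + b*y < c + nu.
struct DigitalLine {
  int a, b, c, nu;
  bool contains(int x, int y) const {
    int v = a * x + b * y;
    return v >= c && v < c + nu;
  }
};

// Scan strip D^{A,B} from the end points A(xa, ya) and B(xb, yb) of S_0.
DigitalLine scanStrip(int xa, int ya, int xb, int yb) {
  int dx = xb - xa, dy = yb - ya;
  int c1 = dx * xa + dy * ya;
  int c2 = dx * xb + dy * yb;
  return {dx, dy, std::min(c1, c2), 1 + std::abs(c1 - c2)};
}

// Scan line N_i^{A,B}, orthogonal to the strip, at rank i.
DigitalLine scanLine(int xa, int ya, int xb, int yb, int i) {
  int dx = xb - xa, dy = yb - ya;
  int nu = std::max(std::abs(dx), std::abs(dy));
  return {dy, -dx, dy * xa - dx * ya + i * nu, nu};
}
```

By construction both $A$ and $B$ lie in the strip, and $S_0 = \mathcal{D}^{A,B} \cap \mathcal{N}_0^{A,B}$ contains them.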
@@ -141,15 +142,16 @@ where $\nu_{AB} = max (|\delta_x|, |\delta_y|)$
%as the image bounds should also be processed anyway.
A directional scan can also be defined by its central point $C(x_C,y_C)$,
-its direction $\vec{D}(X_D,Y_D)$ and its width $w$. The scan strip is :
-\begin{equation}
-\mathcal{D}(C,\vec{D},w)
-= \mathcal{L}(Y_D,~ -X_D,~ x_C\cdot Y_D - y_C\cdot X_D - w / 2,~ w)
-\end{equation}
-\noindent
-and the scan line $\mathcal{N}_i(C,\vec{D},w)$ :
+its direction $\vec{D}(X_D,Y_D)$ and its width $w$. If we note
+$c_3 = x_C\cdot Y_D - y_C\cdot X_D$ and
+$c_4 = X_D\cdot x_C + Y_D\cdot y_C$, it is then defined by
+the following scan strip $\mathcal{D}^{C,\vec{D},w}$ and scan lines
+$\mathcal{N}_i^{C,\vec{D},w}$:
\begin{equation}
-\mathcal{N}_i(C,\vec{D},w) = \mathcal{L}(X_D,~ Y_D,~
-X_D\cdot x_C + Y_D\cdot y_C - w / 2 + i\cdot w,~ max (|X_D|,|Y_D|)
+\left\{ \begin{array}{l}
+\mathcal{D}^{C,\vec{D},w}
+= \mathcal{L}(Y_D,~ -X_D,~ c_3 - w / 2,~ w) \\
+\mathcal{N}_i^{C,\vec{D},w} = \mathcal{L}(X_D,~ Y_D,~
+c_4 - w / 2 + i\cdot w,~ max (|X_D|,|Y_D|))
+\end{array} \right.
\end{array} \right.
\end{equation}
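This centered parameterization can be sketched the same way (assumed names; $\mathcal{L}(a,b,c,\nu)$ is again taken as the digital line $c \le ax + by < c + \nu$):

```cpp
#include <algorithm>
#include <cstdlib>

// Digital line L(a, b, c, nu): the pixels (x, y) with c <= a*x + b*y < c + nu.
struct DigitalLine {
  int a, b, c, nu;
  bool contains(int x, int y) const {
    int v = a * x + b * y;
    return v >= c && v < c + nu;
  }
};

// Scan strip D^{C,D,w} from the central point C, direction D and width w.
DigitalLine centeredScanStrip(int xc, int yc, int xd, int yd, int w) {
  int c3 = xc * yd - yc * xd;
  return {yd, -xd, c3 - w / 2, w};
}

// Scan line N_i^{C,D,w} at rank i, orthogonal to the strip.
DigitalLine centeredScanLine(int xc, int yc, int xd, int yd, int w, int i) {
  int c4 = xd * xc + yd * yc;
  return {xd, yd, c4 - w / 2 + i * w, std::max(std::abs(xd), std::abs(yd))};
}
```

This is the form used by the adaptive directional scan, which rebuilds the strip at each iteration from the position, director vector and width observed on the current blurred segment.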
@@ -96,7 +96,11 @@ int main (int argc, char *argv[])
cout << "size i: " << pts.size() << endl;
}
fout << "# Line detection generated from " << argv[0] << " with format: X1 Y1 X2 Y2 on each line" << std::endl;
// Blurred segment detection
vector<BlurredSegment *> bss;
BSDetector detector;