Commit bc818b0b authored by even

Article: remarks BK updated

parent 7757664e
Showing 178 additions and 145 deletions
Article/Expe_auto/buroDetail.png

26 KiB

\begin{tabular}{|l||r|r|r|r|}
\hline
Measure $M$ & \multicolumn{1}{c|}{$T$ (ms)} & \multicolumn{1}{c|}{$N$}
& \multicolumn{1}{c|}{$L$ (pixels)} & \multicolumn{1}{c|}{$W$ (pixels)} \\
\hline
$M_{old}$ on image of \RefFig{fig:auto} & 29.51 & 306 & 38.58 & 2.47 \\
$M_{new}$ on image of \RefFig{fig:auto} & 25.85 & 352 & 33.25 & 2.17 \\
\hline \hline
$M_{new}/M_{old}$ (\%) & & & & \\
\hspace{0.4cm} on image of \RefFig{fig:auto}
& \multicolumn{1}{l|}{87.60} & \multicolumn{1}{l|}{115.03}
& \multicolumn{1}{l|}{86.18} & \multicolumn{1}{l|}{87.85} \\
\hline
$M_{new}/M_{old}$ (\%) & & & & \\
\hspace{0.4cm} on the set of test images & & & & \\
\hspace{0.4cm} on CannyLines images
& 86.02 $\pm$ 2.44 & 110.15 $\pm$ 6.51 & 89.23 $\pm$ 5.11 & 84.70 $\pm$ 2.98 \\
\hline
\end{tabular}
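For reference, the ratio rows are simply the new-to-old quotients of the values listed above; a quick check on the \RefFig{fig:auto} row, using only the tabulated figures:

% Worked check of the ratio row (values taken from the table above):
% T: 25.85 / 29.51 ~ 87.60 %   N: 352 / 306 ~ 115.03 %
% L: 33.25 / 38.58 ~ 86.18 %   W: 2.17 / 2.47 ~ 87.85 %
\[
  \frac{M_{new}}{M_{old}}(T) = \frac{25.85}{29.51} \approx 0.8760 ,
  \qquad
  \frac{M_{new}}{M_{old}}(N) = \frac{352}{306} \approx 1.1503 .
\]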
Article/Expe_hard/hardDetailNew.png

27.6 KiB

Article/Expe_hard/hardDetailOld.png

27.4 KiB

......@@ -7,3 +7,7 @@ m : automatic detection with the new detector
p : saves the image capture.png -> ismm/Expe_hard/hardNew.png
6 : automatic detection with the old detector
p : saves the image capture.png -> ismm/Expe_hard/hardOld.png
For the images hardDetailNew.png and hardDetailOld.png,
select the top left corner of the image from the
point (346, 140).
\begin{picture}(220,60)
\multiput(0,6)(2,-6){2}{\line(3,1){150}}
\multiput(0,8)(30,6){8}{\color{blue}{\line(5,1){10}}}
\multiput(0,18)(30,6){8}{\color{blue}{\line(5,1){10}}}
\put(45,21){\circle*{3}}
\put(55,21){\circle*{3}}
\put(65,21){\circle*{3}}
......@@ -15,8 +15,8 @@
\put(140,36){\vector(0,1){10}}
\put(140,62.66){\vector(0,-1){10}}
\put(120,58){$\mu_{i-1}$}
\put(160,30){\color{blue}{\vector(0,1){10}}}
\put(160,60){\color{blue}{\vector(0,-1){10}}}
\put(164,36){\color{blue}{$\mu_i$}}
\put(180,60){\color{blue}{$\mathcal{B}_{i}$}}
\end{picture}
Article/Fig_notions/escapeLightFirst_full.png

34.3 KiB

Article/Fig_notions/escapeLightFirst_zoom.png

12.6 KiB

Article/Fig_notions/escapeLightSecond_full.png

32 KiB

Article/Fig_notions/escapeLightSecond_zoom.png

11.8 KiB

Article/Fig_notions/escapeLightThird_full.png

33.3 KiB

Article/Fig_notions/escapeLightThird_zoom.png

12.6 KiB

Seg_adaption is the code that produced adaption*_*.png
from test_adapt.txt
0 to run the test
1 to open the structure view
+ to zoom once
position the view at the bottom right with the arrow keys
i - i to display the scan bounds
p for the screenshot -> structure.png
I to display the scans
p for the screenshot -> structure.png
With gimp, selection of the area (516,70) (174,250)
The same example was used to produce a first escape figure
on the segment P1 (181, 226), P2 (178, 205)
With gimp, selection of the area (159,115) (355,203)
Seg_escape is the code that produced escape*_*.png
from test_escape.txt
In the main view:
Ctrl-q to toggle between static and adaptive scans
In the analysis view (key 1):
i to select the scan views
Ctrl-j to display the user selection
With gimp, selection of the area (333,227)(344,208)
Production of the lighter images:
6 y to set the darkness level to 30
With gimp, selection of the area (58,127)(520,275)
to get 454x148 images in the end.
1000 maps with 1000 black segments on a white background
re-estimation of the width by removing a bias of 1.4
RESULTS FOR THE OLD DETECTOR
69.719 (pm 20.3805) segments searches (local min) / image
25.23 (pm 6.65638) provided segments / image
10.766 (pm 1.96395) provided long segments / image
2.298 (pm 2.82935) undetected segments per image
88.3997 (pm 4.31352) % of points found
2.27666 (pm 1.50916) % of points found more than once (redetections)
32.2193 (pm 13.8132) % false points produced
Precision : 0.732884 (pm 0.0844506)
Biased width : 4.57922 (0.38789) per matched segment
Width : 3.37696 (0.350989) per matched segment
Width difference : 0.64275 (0.319964) per matched segment
Absolute width difference : 0.874094 (0.288227) per matched segment
Angle difference : 0.0497484 (0.756378) degrees per matched segment
Absolute angle difference : 1.2933 (0.92141) per matched segment
Absolute long edge angle difference : 0.44657 (0.55775) per matched segment
66.841 (pm 23.0353) segments searches (local min) / image
25.351 (pm 7.16695) provided segments / image
10.41 (pm 1.84318) provided long segments / image
2.527 (pm 2.54081) undetected segments per image
Recall : 89.0911 (pm 3.94178) % of points found
2.40181 (pm 1.52098) % of points found more than once (redetections)
34.8326 (pm 16.4621) % false points produced
Precision : 0.718919 (pm 0.0973014)
Statistical precision : 0.727566 (pm 0.0969161)
Statistical recall : 0.891995 (pm 0.0394028)
Statistical F-measure : 0.798543 (pm 0.0677593)
Biased width : 4.64042 (0.340364) per matched segment
Width : 3.42451 (0.334641) per matched segment
Width difference : 0.717033 (0.34932) per matched segment
Absolute width difference : 0.920479 (0.309156) per matched segment
Angle difference : -0.246099 (1.33358) degrees per matched segment
Absolute angle difference : 1.48162 (1.41907) per matched segment
Absolute long edge angle difference : 0.542687 (1.08617) per matched segment
RESULTS FOR THE NEW DETECTOR
63.6 (pm 17.4179) segments searches (local min) / image
23.722 (pm 4.90028) provided segments / image
11.619 (pm 2.08619) provided long segments / image
0.475 (pm 0.756272) undetected segments per image
89.813 (pm 3.17276) % of points found
1.83739 (pm 1.3875) % of points found more than once (redetections)
22.5376 (pm 10.2339) % false points produced
Precision : 0.799399 (pm 0.0701648)
Biased width : 4.49693 (0.320961) per matched segment
Width : 3.26288 (0.297711) per matched segment
Width difference : 0.408348 (0.27149) per matched segment
Absolute width difference : 0.741648 (0.24851) per matched segment
Angle difference : 0.0354951 (0.641921) degrees per matched segment
Absolute angle difference : 0.982936 (0.696344) per matched segment
Long edge absolute angle difference : 0.480284 (0.568648) per matched segment
62.393 (pm 17.1718) segments searches (local min) / image
24.351 (pm 5.02624) provided segments / image
11.145 (pm 1.91824) provided long segments / image
0.637 (pm 0.909978) undetected segments per image
Recall : 90.0194 (pm 2.77084) % of points found
1.78012 (pm 1.21784) % of points found more than once (redetections)
23.7471 (pm 8.87023) % false points produced
Precision : 0.791264 (pm 0.0630463)
Statistical precision : 0.791896 (pm 0.0630431)
Statistical recall : 0.900763 (pm 0.0277026)
Statistical F-measure : 0.84169 (pm 0.0416617)
Biased width : 4.5098 (0.252717) per matched segment
Width : 3.28158 (0.239712) per matched segment
Width difference : 0.462984 (0.255638) per matched segment
Absolute width difference : 0.764102 (0.231734) per matched segment
Angle difference : -0.141146 (0.721153) degrees per matched segment
Absolute angle difference : 1.05426 (0.801184) per matched segment
Long edge absolute angle difference : 0.529121 (0.684013) per matched segment
......@@ -3,18 +3,20 @@
Detector : & \multicolumn{3}{c|}{old} & \multicolumn{3}{c|}{new} \\
\hline
Detected blurred segments per image
& 25.35 & $\pm$ & 7.17 & 24.35 & $\pm$ & 5.03 \\
Detected long (> 40 pixels) blurred segments per image
& 10.41 & $\pm$ & 1.84 & 11.14 & $\pm$ & 1.92 \\
Undetected input segments per image
& 2.53 & $\pm$ & 2.54 & 0.64 & $\pm$ & 0.91 \\
Precision (\%) : $P = \#(D\cap S)/\#D$
& 72.76 & $\pm$ & 9.69 & 79.19 & $\pm$ & 6.30 \\
Recall (ratio of true detection) (\%) : $R = \#(D\cap S)/\#S$
& 89.20 & $\pm$ & 3.94 & 90.08 & $\pm$ & 2.77 \\
F-measure (harmonic mean) (\%) : $F = 2\times P\times R/(P+R)$
& 79.85 & $\pm$ & 6.78 & 84.17 & $\pm$ & 4.17 \\
Width difference (in pixels) to matched input segment
& 0.92 & $\pm$ & 0.31 & 0.76 & $\pm$ & 0.23 \\
Angle difference (in degrees) to matched input segment
& 1.48 & $\pm$ & 1.42 & 1.05 & $\pm$ & 0.80 \\
\hline
\end{tabular}
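As a cross-check of the last rows, the F-measure can be recomputed from the tabulated precision and recall of the new detector:

% Cross-check with the tabulated values of the new detector:
\[
  F = \frac{2 \times P \times R}{P + R}
    = \frac{2 \times 79.19 \times 90.08}{79.19 + 90.08}
    \approx 84.3 \,\% .
\]
% The table reports 84.17 %, i.e. the per-image statistical F-measure
% (0.84169 in the raw results), which is slightly lower than the harmonic
% mean of the averaged P and R.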
......@@ -32,7 +32,7 @@
@inproceedings{DebledAl05,
title = {Blurred segments decomposition in linear time},
author = {Debled-Rennesson, Isabelle and Feschet, Fabien and
Rouyer-Degli, Jocelyne},
booktitle = {Proc. of Int. Conf. on DGCI},
......@@ -43,7 +43,7 @@
pages = {371-382},
optaddress = {Poitiers, France},
optmonth = {April},
optpublisher = {Springer}
}
......
......@@ -3,36 +3,39 @@
\label{sec:conclusion}
This paper introduced a new straight edge detector based on a local analysis of
the image gradient and on the use of blurred segments to embed an
estimation of the edge thickness.
It relies on directional scans of the image around maximal values of the
gradient magnitude, which were previously presented in
\cite{KerautretEven09}.
%Despite of good performances achieved, the former approach suffers of two
%major drawbacks: the inaccurate estimation of the blurred segment width
%and orientation, and the lack of guarantee that it is completely detected.
%These limitations were solved through the integration of two new concepts:
%adaptive directional scans that continuously adjust the scan strip
%to the detected blurred segment direction;
The main limitations of the former approach were solved through the integration
of two new concepts:
adaptive directional scans that continuously adjust the scan strip
to the detected edge direction;
the control of the assigned width based on the observation of the
blurred segment growth.
Expected gains in execution time, linked to the suppression of a useless
repetition of the fine tracking stage, were confirmed by the experiments
in both supervised and unsupervised contexts.
A residual weakness of the approach is the sensitivity to the initial
conditions.
In a supervised context, the user can select a favourable area where
the expected edge is dominant.
This task is made easier by the stabilization produced by
the duplication of the initial detection.
But in an unsupervised context, gradient perturbations in the early stage of
the edge expansion, mostly due to the presence of close edges, can deeply
affect the result.
In future works, we intend to provide solutions to this drawback
by scoring the detection result on the basis of a characterization of the
local context.
%
Then an experimental validation of the consistency of the estimated width and
orientation values in real situations is planned in different application
fields.
......
......@@ -11,6 +11,12 @@ For a fair comparison, the process flow of the former method (the initial
detection followed by two refinement steps) is integrated as an option
into the code of the new detector, so that both methods rely on the same
optimized basic routines.
During all these experiments, only the blurred segment size and its
orientation relative to the initial stroke were tested at the end of
the initial detection, and only the segment size was tested at the end
of the fine tracking stage.
All the other tests (sparsity or fragmentation) were disabled.
The minimal segment size was set to 5 pixels, unless otherwise specified.
The first test (\RefFig{fig:synth}) compares the performance of both
detectors on a set of 1000 synthesized images containing 10 randomly
......@@ -18,20 +24,14 @@ placed input segments with random width between 2 and 5 pixels.
For each found segment, the absolute difference to its matched input
segment is measured.
On such perfect images, the numerical error on the gradient extraction
biases the line width measures. This bias was first estimated using 1000
images containing only one input segment (thus with no possible interaction),
and the found value (1.4 pixels) was taken into account in the test.
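A compact restatement of this bias estimation (the averaging over the single-segment images is assumed here, and $w$ denotes the measured and true widths):

% Bias estimation on the 1000 single-segment images (assumed averaging):
\[
  \beta \;=\; \overline{\,w_{\mathrm{measured}} - w_{\mathrm{true}}\,}
        \;\approx\; 1.4~\mathrm{pixels},
\]
% this offset is then taken into account when comparing the widths found
% on the full synthetic test to the input widths.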
The results of \RefTab{tab:synth} show
slightly better width and angle measurements for the new detector.
The new detector proves more precise, with fewer false detections,
and succeeds in finding most of the input segments.
......@@ -65,29 +65,7 @@ $S$ is the set of all the input segments,
$D$ the set of all the detected blurred segments.}
\label{tab:synth}
\end{table}
\input{expeAuto}
\input{expeHard}
The next experiments aim at evaluating the performance of the new
detector with respect to the previous one on a test set composed of a
selection of 20 real images.
One of them is displayed in \RefFig{fig:auto}.
Compared measures $M$ are the execution time $T$, the number $N$ of detected
blurred segments, their mean length $L$ and their mean width $W$.
For the sake of objectivity, these results are also compared to the same
measurements made on the 20 images data base used for the CannyLine line
segment detector \cite{LuAl15}.
\RefTab{tab:auto} gives the achieved results.
\begin{figure}[h]
%\center
\begin{tabular}{
......@@ -20,7 +17,8 @@ data base.
% \includegraphics[width=0.32\textwidth]{Expe_auto/coloredNew.png} \\
\includegraphics[width=0.32\textwidth]{Expe_auto/bsOld.png} &
\includegraphics[width=0.32\textwidth]{Expe_auto/bsNew.png} \\
\includegraphics[width=0.22\textwidth]{Expe_auto/buroDetail.png} &
\includegraphics[width=0.22\textwidth]{Expe_auto/dssDetailOld.png} &
\includegraphics[width=0.22\textwidth]{Expe_auto/dssDetailNew.png}
\begin{picture}(1,1)
{\color{red}{
......@@ -28,25 +26,30 @@ data base.
\put(-5.5,31){\vector(-2,-1){20}}
\put(-133.5,31){\framebox(28,9)}
\put(-117.5,31){\vector(-2,-1){20}}
\put(-247.5,31){\framebox(28,9)}
\put(-231.5,31){\vector(-2,-1){20}}
}}
{\color{dwhite}{
\put(-291,32.5){\circle*{8}}
\put(-177,32.5){\circle*{8}}
\put(-63,32.5){\circle*{8}}
\put(-302,4.5){\circle*{8}}
\put(-188,4.5){\circle*{8}}
\put(-75,4.5){\circle*{8}}
}}
\put(-293.5,30){a}
\put(-179.5,30){b}
\put(-65.5,30){c}
\put(-305,2){d}
\put(-191,2){e}
\put(-77.5,2){f}
\end{picture}
\end{tabular}
\caption{Automatic detection on real images:
an input image (a), the segments found by the old detector (b)
and those found by the new detector (c), and a detail of the input
image (d) and of the enclosing digital segments for both old (e)
and new (f) detectors.}
\label{fig:auto}
\end{figure}
\begin{table}
......@@ -69,7 +72,7 @@ Found edges are thus more fragmented.
The relevance of this behavior depends strongly on application requirements.
Therefore the control of the assigned width is left as an option that the user
can enable or disable.
In both cases, it could be useful to combine the detector with a tool
to merge aligned segments.
%Although these observations in unsupervised context should be reproduced
......
The last test visually compares the results of both detectors on quite textured
images, which are also difficult to process for other detectors from the
literature.
The minimal size parameter was raised to 12 pixels to reject small segments
considered as outliers.
On the example of \RefFig{fig:hard}, the new detector provides fewer residual
outliers and misaligned segments, and globally more relevant information
to infer the structure of the brick wall.
\begin{figure}
\center
\begin{tabular}{
c@{\hspace{0.1cm}}c@{\hspace{0.1cm}}c@{\hspace{0.1cm}}}
\includegraphics[width=0.205\textwidth]{Fig_method/parpaings.png} &
\includegraphics[width=0.32\textwidth]{Expe_hard/hardDetailOld.png} &
\includegraphics[width=0.32\textwidth]{Expe_hard/hardDetailNew.png}
\begin{picture}(1,1)
{\color{red}{
\put(-302,34){\framebox(29,11.5)}
}}
{\color{dwhite}{
\put(-266,4.5){\circle*{8}}
\put(-171,4.5){\circle*{8}}
\put(-58,4.5){\circle*{8}}
}}
\put(-268.5,2){a}
\put(-173.5,2){b}
\put(-60.5,2){c}
\end{picture}
\end{tabular}
\caption{Results on quite textured images: test image (a),
a detail (top left corner) of the segments found by the old
detector (b) and of those found by the new detector (c).}
\label{fig:hard}
\end{figure}